Some AI safety ideas delegate key decisions to our idealised selves. This is sometimes phrased as "allowing versions of yourself to think for ten thousand years", or similar sentiments.
Occasionally, when I've objected to these ideas, it's been pointed out that any attempt to construct a safe AI design would involve a lot of thinking, so there can't be anything wrong with delegating this thinking to an algorithm or an algorithmic version of myself.
But there is a tension between "more thinking" in the sense of "solve specific problems" and in the sense of "change your own values".
An unrestricted "do whatever a copy of Stuart Armstrong would have done after he thought about morality for ten thousand years" seems to positively beg for value drift (worsened by the difficulty in defining what we mean by "a copy of Stuart Armstrong [...] thought [...] for ten thousand years").
A more narrow "have ten copies of Stuart think about these ten theorems for a subjective week each and give me a proof or counter-example" seems much safer.
In between those two extremes, how do we assess the degree of value drift and its potential importance to the question being asked? Ideally, we'd have a theory of human values to help distinguish the cases. Even without that, we can use some common sense on issues like length of thought, nature of problem, bandwidth of output, and so on.
Why do you think this problem needs to be solved now? Couldn't the idealized version of yourself spend the first few years figuring out how best to protect against value drift during the rest of the available time? It seems to me that a more urgent problem is: given that a person thinking alone for even a few years would likely go crazy, how do we set up the initial social dynamics for a group of virtual humans?
Because I've already found problems with these systems in the past few years, problems that other people did not expect to exist. If one of them had been put into such a setup back then, I expect it would have failed. Sure, if current me were put in the system, maybe I could find a few more problems and patch them, because I expect to find them.
But I wouldn't trust many others, and I barely trust myself. There is a large gap between what the setup will be in practice and what current research looks like in practice. The more we can solve these issues ahead of time, the more we can delegate.
I don't know which problems/systems you're referring to. Maybe you could cite these in the post to give more motivation?
What are the most important differences that you foresee?
The main one is when I realised the problems with CEV: https://www.lesswrong.com/posts/vgFvnr7FefZ3s3tHp/mahatma-armstrong-ceved-to-death
The others are mainly from conversations: people come up with plans that involve simulating humans for long periods of time, I do the equivalent of asking "have you considered value drift?", and (often) their reaction reveals that no, they had not considered value drift.
The most important differences I foresee are the unforeseen ones :-) I mean that seriously, because anything that is easy to foresee will probably be patched before implementation.
But if we look at how research happens nowadays, it has a variety of different approaches and institutional cultures, certain levels of feedback both from within the AI safety community and the surrounding world, grounding our morality and keeping us connected to the flow of culture (such as it is).
Most of the simulation ideas do away with that. If someone suggested that the best idea for AI safety would be to lock up AI safety researchers in an isolated, internet-free house for ten years and see what they came up with, we'd be all over the flaws in that plan (and not just the opportunity costs). But replace that physical, grounded idea with a similar one that involves "simulation", and suddenly people flip into far mode and are more willing to accept it. In practice, a simulation is likely to be far more alien and alienating than just locking people up in a house. We would have certain levels of control in a simulation that we wouldn't have in reality, but even that could hurt: I'm not sure how I would react if I knew my mind, my emotions, and my state of tiredness were open to manipulation.
So what I'm mainly trying to say is that using simulations (or predictions about simulations) to do safety work is a difficult and subtle project, and needs to be thoroughly planned out with, at minimum, a lot of psychologists and some anthropologists. I think it can be done, but not glibly and not easily.
Ah, value drift has been on my mind for so long that it's surprising to me that people could be thinking about simulating humans for long periods of time without thinking about value drift. Thanks for the update!
I guess my perspective here is that pretty soon we'll be forced to live in a real environment that is itself quite alien and drift-inducing, so it may not be so hard to construct a virtual environment that compares favourably. The risk-minimizing thing to do would then be to put yourself in such an environment as soon as possible and work on further risk reduction from there. (See this recent news as another sign pointing to that coming soon.)
Yeah I agree that getting the social aspect right is probably the hardest part, and we might need more than a small group of virtual humans to do that.
I think this framing makes sense.
Also, on a more minor note: if I try to preserve myself from value drift using only the resources I have in the simulation, I expect to fail. Social dynamics might work, though, so we do need to think about those.
People will have to do a lot of maths and philosophy to get an AI system that works at all.
Suppose you have a lead of one week over any unfriendly-AI projects, and you have your AI system to the point where it can predict what you would do in a box. (Actually, we can say the AI has developed mind-uploading tech plus lots of compute.) The human team needs, say, five years of thinking to come up with better metaethics, defenses against value drift, or whatever. You want to simulate the humans in some reasonably human-friendly environment for a few years to work this out. You pick a nice town and ask the AI to create a virtual copy of it. (More specifically, you randomly sample from the AI's probability distribution, after conditioning on enough data that the town will be townlike.) The virtual town is created with no people in it except the research team. All the services are set to work without any maintenance (water in virtual pipes, food in virtual shops, working virtual internet). The team of people uploaded into this town is at least 30, ideally a few hundred, including plenty of friends and family.
This "virtual me in a box" seems likely to be useful and unlikely to be dangerous. I agree that any virtual box trick that involves people thinking for a long time compared to current lifespans is dangerous. A single person trapped in low res polygon land would likely go crazy from the sensory deprivation.
You need an environment with a realistic level of socializing and leisure activities to support psychologically healthy humans. Any well-done "virtual me in a box" is going to look more like a virtual AI safety camp or research department than one person in a blank white room containing only a keyboard.
Unfortunately, all those details would be hard to hard-code manually. You seem to need an AI that can be trusted to follow reasonably clear and specific goals without adversarial optimization. You want a virtual park; manually creating it would be a lot of hard work, as current video games show. You need an AI that can fill in thousands of little details in a manner not optimized to mess with humans. This is not an especially high bar.