Edouard Harris

Independent researcher.

Comments

This is really interesting. It's hard to speak too definitively about theories of human values, but for what it's worth these ideas do pass my intuitive smell test.

One intriguing aspect is that, assuming I've followed correctly, this theory aims to unify different cognitive concepts in a way that might be testable:

  • On the one hand, it seems to suggest a path to generalizing circuits-type work to the model-based RL paradigm. (With shards, which bid for outcomes on a contextually activated basis, being analogous to circuits, which contribute to prediction probabilities on a contextually activated basis.)
  • On the other hand, it also seems to generalize the psychological concept of classical conditioning (Pavlov's salivating dog, etc.), which has tended to be studied over the short term for practical reasons, to arbitrarily (?) longer planning horizons. The discussion of learning in babies also puts one in mind of the unfortunate Little Albert Experiment, done in the 1920s:

For the experiment proper, by which point Albert was 11 months old, he was put on a mattress on a table in the middle of a room. A white laboratory rat was placed near Albert and he was allowed to play with it. At this point, Watson and Rayner made a loud sound behind Albert's back by striking a suspended steel bar with a hammer each time the baby touched the rat. Albert responded to the noise by crying and showing fear. After several such pairings of the two stimuli, Albert was presented with only the rat. Upon seeing the rat, Albert became very distressed, crying and crawling away.

[...]

In further experiments, Little Albert seemed to generalize his response to the white rat. He became distressed at the sight of several other furry objects, such as a rabbit, a furry dog, and a seal-skin coat, and even a Santa Claus mask with white cotton balls in the beard.

A couple more random thoughts on stories one could tell through the lens of shard theory:

  • As we age, if all goes well, we develop shards with longer planning horizons. Planning over longer horizons requires more cognitive capacity (all else equal), and long-horizon shards do seem to have some ability to either reinforce or dampen the influence of shorter-horizon shards. This is part of the continuing process of "internally aligning" a human mind.
  • Introspectively, I think there is also an energy cost involved in switching between "active" shards. Software developers understand this as context-switching, actively dislike it, and evolve strategies to minimize it in their daily work. I suspect a lot of the biases you might categorize under "resistance to change" (projection bias, sunk cost fallacy and so on) have this as a factor.

I do have a question about your claim that shards are not full subagents. I understand that in general different shards will share parameters over their world-model, so in that sense they aren't fully distinct — is this all you mean? Or are you arguing that even a very complicated shard with a long planning horizon (e.g., "earn money in the stock market" or some such) isn't agentic by some definition?

Anyway, great post. Looking forward to more.

Nice. Congrats on the launch! This is an extremely necessary line of effort.

Interesting. The specific idea you're proposing here may or may not be workable, but it's an intriguing example of a more general strategy that I've previously tried to articulate in another context. The idea is that it may be viable to use an AI to create a "platform" that accelerates human progress in an area of interest to existential safety, as opposed to using an AI to directly solve the problem or perform the action.

Essentially:

  1. A "platform" for work in domain X is something that removes key constraints that would otherwise have consumed human time and effort when working in X. This allows humans to explore solutions in X they wouldn't have previously — whether because they'd considered and rejected those solution paths, or because they'd subconsciously trained themselves not to look in places where the initial effort barrier was too high. Thus, developing an excellent platform for X allows humans to accelerate progress in domain X relative to other domains, ceteris paribus. (Every successful platform company does this. e.g., Shopify, Amazon, etc., make valuable businesses possible that wouldn't otherwise exist.)
  2. For certain carefully selected domains X, a platform for X may plausibly be relatively easier to secure & validate than an agent that's targeted at some specific task x ∈ X would be. (Not easy; easier.) It's less risky to validate the outputs of a platform and leave the really dangerous last-mile stuff to humans, than it would be to give an end-to-end trained AI agent a pivotal command in the real world (i.e., "melt all GPUs") that necessarily takes the whole system far outside its training distribution. Fundamentally, the bet is that if humans are the ones doing the out-of-distribution part of the work, then the output that comes out the other end is less likely to have been adversarially selected against us.

(Note that platforms are tools, and tools want to be agents, so a strategy like this is unlikely to arise along the "natural" path of capabilities progress other than transiently.)

There are some obvious problems with this strategy. One is that point 1 above is no help if you can't tell which of the solutions the humans come up with are good, and which are bad. So the approach can only work on problems that humans would otherwise have been smart enough to solve eventually, given enough time to do so (as you already pointed out in your example). If AI alignment is such a problem, then it could be a viable candidate for such an approach. Ditto for a pivotal act.

Another obvious problem is that capabilities research might benefit from the same kinds of platforms that alignment research would. So actually implementing this in the real world might just accelerate the timeline for everything, leaving us worse off. (Absent an intervention at some higher level of coordination.)

A third concern is that point 2 above could be flat-out wrong in practice. Asking an AI to build a platform means asking for generalization, even if it is just "generalization within X", and that's playing a lethally dangerous game. In fact, it might well be lethal for any useful X, though that isn't currently obvious to me. e.g., AlphaFold2 is a primitive example of a platform that's useful and non-dangerous, though it's not useful enough for this.

On top of all that, there are all the steganographic considerations — AI embedding dangerous things in the tool itself, etc. — that you pointed out in your example.

But this strategy still seems like it could bring us closer to the Pareto frontier for critical domains (the alignment problem, a pivotal act) than directly training an AI to take the dangerous action would.

Yep, I'd say I intuitively agree with all of that, though I'd add that if you want to specify the set of "outcomes" differently from the set of "goals", then that must mean you're implicitly defining a mapping from outcomes to goals. One analogy could be that an outcome is like a thermodynamic microstate (in the sense that it's a complete description of all the features of the universe) while a goal is like a thermodynamic macrostate (in the sense that it's a complete description of the features of the universe that the system can perceive).

This mapping from outcomes to goals won't be injective for any real embedded system. But in the unrealistic limit where your system is so capable that it has a "perfect ontology" — i.e., its perception apparatus can resolve every outcome / microstate from any other — this mapping converges to the identity function, and the system's set of possible goals converges to its set of possible outcomes. (This is the dualistic case, e.g., AIXI and such. But plausibly, we should also expect a self-improving system to improve its own perception apparatus such that its effective goal-set becomes finer and finer with each improvement cycle. So even this partition over goals can't be treated as constant in the general case.)
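To make the analogy slightly more concrete, here is a minimal formalization in my own notation (none of these symbols appear in the original discussion): treat the outcome-to-goal mapping as a coarse-graining induced by what the system's perception can distinguish.

```latex
% Sketch only: \Omega = set of outcomes (microstates), G = set of goals (macrostates).
% The system's perception induces an equivalence relation ~ on outcomes,
% and the outcome-to-goal map is the quotient map onto equivalence classes.
\[
  \pi : \Omega \to G, \qquad
  \pi(\omega) = [\omega]_{\sim}, \qquad
  \omega \sim \omega' \iff \text{the system's perception cannot distinguish } \omega \text{ from } \omega'.
\]
% "Perfect ontology" limit: ~ collapses to equality, so the map is a bijection
% and the goal-set coincides with the outcome-set.
\[
  \sim \;=\; \text{equality} \;\Longrightarrow\; \pi \text{ is a bijection, and } G \cong \Omega .
\]
```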

Gotcha. I definitely agree with what you're saying about the effectiveness of incentive structures. And to be clear, I also agree that some of the affordances in the quote reasonably fall under "alignment": e.g., if you explicitly set a specific mission statement, that's a good tactic for aligning your organization around that specific mission statement.

But some of the other affordances aren't as clearly goal-dependent. For example, iterating quickly is an instrumentally effective strategy across a pretty broad set of goals a company might have. That (in my view) makes it closer to a capability technique than to an alignment technique. i.e., you could imagine a scenario where I succeeded in building a company that iterated quickly, but I failed to also align it around the mission statement I wanted it to have. In this scenario, my company was capable, but it wasn't aligned with the goal I wanted.

Of course, this is a spectrum. Even setting a specific mission statement is an instrumentally effective strategy across all the goals that are plausible interpretations of that mission statement. And most real mission statements don't admit a unique interpretation. So you could also argue that setting a mission statement increases the company's capability to accomplish goals that are consistent with any interpretation of it. But as a heuristic, I tend to think of a capability as something that lowers the cost to the system of accomplishing any goal (averaged across the system's goal-space with a reasonable prior). Whereas I tend to think of alignment as something that increases the relative cost to the system of accomplishing classes of goals that the operator doesn't want.
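One rough way to write down that heuristic (my notation, not anything from the post: C(g) is the system's cost to accomplish goal g, p a prior over its goal-space, and B the class of goals the operator doesn't want):

```latex
% Capability intervention: lowers expected cost across the whole goal-space.
\[
  \mathbb{E}_{g \sim p}\!\left[\, C_{\text{after}}(g) \,\right]
  \;<\;
  \mathbb{E}_{g \sim p}\!\left[\, C_{\text{before}}(g) \,\right]
\]
% Alignment intervention: raises the cost of unwanted goals relative to wanted ones.
\[
  \frac{\mathbb{E}_{g \sim p}\!\left[\, C_{\text{after}}(g) \mid g \in B \,\right]}
       {\mathbb{E}_{g \sim p}\!\left[\, C_{\text{after}}(g) \mid g \notin B \,\right]}
  \;>\;
  \frac{\mathbb{E}_{g \sim p}\!\left[\, C_{\text{before}}(g) \mid g \in B \,\right]}
       {\mathbb{E}_{g \sim p}\!\left[\, C_{\text{before}}(g) \mid g \notin B \,\right]}
\]
```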

I'd be interested to hear whether you have a different mental model of the difference, and if so, what it is. It's definitely possible I've missed something here, since I'm really just describing an intuition.

Thanks, great post.

These include formulating and repeating a clear mission statement, setting up a system for promotions that rewards well-calibrated risk taking, and iterating quickly at the beginning of the company in order to habituate a rhythm of quick iteration cycles.

I may be misunderstanding, but wouldn't these techniques fall more under the heading of capabilities rather than under alignment? These are tactics that should increase a company's effectiveness in general, for most reasonable mission statements or products the company could have.

This is fantastic. Really appreciate both the detailed deep-dive in the document, and the summary here. This is also timely, given that teams working on superscale models with concerning capabilities haven't generally been too forthcoming with compute estimates. (There are exceptions.)

As you and Alex point out in the sibling thread, the biggest remaining fudge factors seem to be:

  1. Mixture models (or any kind of parameter-sharing, really) for the first method, which will cause you to systematically overestimate the "Operations per forward pass" factor; and
  2. Variable effective utilization rates of custom hardware for the second method, which will cause an unknown distribution of errors in the "utilization rate" factor.

Nonetheless, my flying guess would be that your method is pretty much guaranteed to be right within an OOM, and probably within a factor of 2 or less. That seems pretty good! It's certainly an improvement over anything I've seen previously along these lines. Congrats!
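For concreteness, here is a minimal sketch of where the two fudge factors enter, assuming the standard ≈6 FLOPs per parameter per training token approximation for dense transformers in the first method; every specific number below is a hypothetical placeholder, not an estimate of any real model.

```python
def compute_from_arch(n_params: float, n_tokens: float,
                      flops_per_param_per_token: float = 6.0) -> float:
    """Method 1: count operations from the architecture.

    For a dense transformer, forward + backward is roughly 6 FLOPs per
    parameter per token. Parameter sharing or mixture-of-experts models
    break this assumption and push the estimate too high.
    """
    return flops_per_param_per_token * n_params * n_tokens


def compute_from_hardware(n_chips: int, days: float,
                          peak_flops_per_chip: float, utilization: float) -> float:
    """Method 2: chip count x training time x peak throughput x utilization.

    The utilization rate is the big unknown, especially on custom hardware;
    errors here can go in either direction.
    """
    return n_chips * days * 86_400 * peak_flops_per_chip * utilization


# Hypothetical example: a 100B-parameter model trained on 300B tokens...
arch_estimate = compute_from_arch(n_params=1e11, n_tokens=3e11)
# ...versus 1,000 chips at 100 peak TFLOP/s for 30 days at 40% utilization.
hw_estimate = compute_from_hardware(n_chips=1_000, days=30,
                                    peak_flops_per_chip=1e14, utilization=0.40)

print(f"Method 1 (operation count): {arch_estimate:.2e} FLOPs")
print(f"Method 2 (hardware):        {hw_estimate:.2e} FLOPs")
```

With these made-up inputs the two methods land within a factor of ~2 of each other (≈1.8e23 vs ≈1.0e23 FLOPs), which is the kind of agreement the post is aiming for.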

It's simply because we each (myself more than her) have an inclination to apply a fair amount of adjustment in a conservative direction, for generic "burden of proof" reasons, rather than go with the timelines that seem most reasonable based on the report in a vacuum.

While one can sympathize with the view that the burden of proof ought to lie with advocates of shorter timelines when it comes to the pure inference problem ("When will AGI occur?"), it's worth observing that in the decision problem ("What should we do about it?") this situation is reversed. The burden of proof in the decision problem probably ought instead to lie with advocates of non-action: when one's timelines are >1 generation, it is a bit too easy to kick the can down the road in various ways — leaving one unprepared if the future turns out to move faster than expected. Conversely, someone whose timelines are relatively short may take actions today that will leave us in a better position in the future, even if that future arrives more slowly than they originally believed.

(I don't think OpenPhil is confusing these two, just that in a conversation like this it is particularly worth emphasizing the difference.)

This is an excellent point and it's indeed one of the fundamental limitations of a public tracking approach. Extrapolating trends in an information environment like this can quickly degenerate into pure fantasy. All one can really be sure of is that the public numbers are merely lower bounds — and plausibly, very weak ones.
