orthonormal

Comments

With the sudden simultaneous exits of Mira Murati, Barrett Zoph, and Bob McGrew, I thought I'd update my tally of the departures from OpenAI, collated with how quickly the ex-employee had signed the loyalty letter to Sam Altman last November.

The letter was leaked at 505 signatures, 667 signatures, and finally 702 signatures; in the end, it was reported that 737 of 770 employees signed. Since then, I've been able to verify 56 departures of people who were full-time employees (as far as I can tell, contractors were not allowed to sign, but all FTEs were).

I still think I'm missing some, so these are lower bounds (modulo any mistakes I've made).

Headline numbers (with a quick sanity-check calculation after the list):

  • Attrition for the 505 OpenAI employees who signed before the letter was first leaked: at least 24/505 = 4.8%
  • Attrition for the next 197 to sign (it was leaked again at 667 signatures, and one last time at 702): at least 13/197 = 6.6%
  • Attrition for the (reported) 68 who had not signed by the last leak: at least 19/68 = 27.9%.
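
For anyone who wants to recheck or update these figures as more departures turn up, here's a minimal Python sketch of the arithmetic, assuming the cohort sizes implied by the leaked signature counts (505, 702, 770) and using my own lower-bound departure tallies:

```python
# Attrition rates by signing cohort. Cohort sizes come from the leaked
# signature counts; departure counts are my lower-bound tallies, not
# official figures.
cohorts = {
    "signed by the 505 leak": (24, 505),
    "signed between 506 and 702": (13, 702 - 505),      # 197 later signers
    "not signed as of the 702 leak": (19, 770 - 702),   # 68 reported holdouts
}

for label, (departed, total) in cohorts.items():
    print(f"{label}: {departed}/{total} = {departed / total:.1%}")

# signed by the 505 leak: 24/505 = 4.8%
# signed between 506 and 702: 13/197 = 6.6%
# not signed as of the 702 leak: 19/68 = 27.9%
```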

As noted above, 737 of the 770 reportedly signed in the end; many of the Superalignment team chose not to sign at all.

Below are my current tallies of some notable subsets. Please comment with any corrections!

People from the Superalignment team (plus some policy/governance people who seem to have been closely connected) who had not signed as of the 702 leak and are now gone:

  • Carroll Wainwright
  • Collin Burns
  • Cullen O'Keefe
  • Daniel Kokotajlo
  • Jan Leike (though he did separately Tweet that the board should resign)
  • Jeffrey Wu
  • Jonathan Uesato
  • Leopold Aschenbrenner
  • Mati Roy
  • William Saunders
  • Yuri Burda

People from the Superalignment team (and close collaborators) who did sign before the final leak but are now gone:

  • Jan Hendrik Kirchner (signed between 668 and 702)
  • Steven Bills (signed between 668 and 702)
  • John Schulman (signed between 506 and 667)
  • Sherry Lachman (signed between 506 and 667)
  • Ilya Sutskever (signed by 505)
  • Pavel Izmailov (signed by 505)
  • Ryan Lowe (signed by 505)
  • Todor Markov (signed by 505)

Others who hadn't signed as of the 702 leak (some of whom may have just been AFK for the wrong weekend, though I doubt that was true of Karpathy) and are now gone:

  • Andrei Alexandru (Research Engineer)
  • Andrej Karpathy (Co-Founder)
  • Austin Wiseman (Finance/Accounting)
  • Girish Sastry (Policy)
  • Jay Joshi (Recruiting)
  • Katarina Slama (Member of Technical Staff)
  • Lucas Negritto (Member of Technical Staff, then Developer Community Ambassador)
  • Zarina Stanik (Marketing)

Notable other ex-employees:

  • Barrett Zoph (VP of Research, Post-Training; signed by 505)
  • Bob McGrew (Chief Research Officer; signed by 505)
  • Chris Clark (Head of Nonprofit and Strategic Initiatives; signed by 505)
  • Diane Yoon (VP of People; signed by 505)
  • Gretchen Krueger (Policy; signed by 505; posted a significant Twitter thread at the time she left)
  • Mira Murati (CTO; signed by 505)

I have to further compliment my past self: this section aged extremely well, prefiguring the Shoggoth-with-a-smiley-face analogies several years in advance.

GPT-3 is trained simply to predict continuations of text. So what would it actually optimize for, if it had a pretty good model of the world including itself and the ability to make plans in that world?

One might hope that, because it's learning to imitate humans in an unsupervised way, it would end up fairly human, or at least act that way. I very much doubt this, for the following reason:

  • Two humans are fairly similar to each other, because they have very similar architectures and are learning to succeed in the same environment.
  • Two convergently evolved species will be similar in some ways but not others, because they have different architectures but the same environmental pressures.
  • A mimic species will be similar in some ways but not others to the species it mimics, because even if they share recent ancestry, the environmental pressures on the poisonous species being mimicked are different from the environmental pressures on the mimic.

What we have with the GPTs is the first deep learning architecture we've found that scales this well in the domain (so, probably not that much like our particular architecture), learning to mimic humans rather than growing in an environment with similar pressures. Why should we expect it to be anything but very alien under the hood, or to continue acting human once its actions take us outside of the training distribution?

Moreover, there may be much more going on under the hood than we realize; it may take much more general cognitive power to learn and imitate the patterns of humans than it takes us merely to execute those patterns.

The chess example is meant to make specific points about RL*F concealing a capability that remains (or is even amplified); I'm not trying to claim that the "put up a good fight but lose" criterion is analogous to current RL*F criteria. (Though it does rhyme qualitatively with "be helpful and harmless".)

I agree that "helpful-only" RL*F would result in a model that scores higher on capabilities evals than the base model, possibly much higher. I'm frankly a bit worried about even training that model.

Thank you! I'd forgotten about that.

I agree with "When you say 'there's a good chance AGI is near', the general public will hear 'AGI is near'".

However, the general public isn't everyone, and the people who can distinguish between the two claims are the most important to reach (per capita, and possibly in sum).

So we'll do better by saying what we actually believe, while taking into account that some audiences will round probabilities off (and seeking ways to be rounded closer to the truth while still communicating accurately to anyone who does understand probabilistic claims). The marginal gain from rounding ourselves off at the start isn't worth the marginal loss from looking transparently overconfident to those who can tell the difference.

I reached this via Joachim pointing it out as an example of someone urging epistemic defection around AI alignment, and I have to agree with him there. I think the higher difficulty posed by communicating "we think there's a substantial probability that AGI happens in the next 10 years" vs "AGI is near" is worth it even from a PR perspective, because pretending you know the day and the hour smells like bullshit to the most important people who need convincing that AI alignment is nontrivial.

I can imagine this coming from the equivalent of "adapt someone else's StackOverflow code" level capability, which is still pretty impressive. 

In my opinion, the scariest thing I've seen so far is coding Game of Life Pong, which doesn't seem to resemble any code GPT-4 would have had in its training data. Stitching those things together means coding for real for real.

Kudos for talking about learning empathy in a way that seems meaningfully different and less immediately broken than adjacent proposals.

I think what you should expect from this approach, should it in fact succeed, is not nothing, but still something more alien than the way we empathize with lower animals, let alone higher animals. Consider the empathy we have towards cats... and the way it is complicated by their desire to be a predator, and specifically to enjoy causing fear/suffering. Our empathy with cats doesn't lead us to abandon our empathy for their prey, and so we are inclined to make compromises with that empathy.

Given better technology, we could make non-sentient artificial mice that the cats can't distinguish from real prey (though the cats' extrapolated volition would, to some degree, feel deceived and betrayed by this), or we could just ensure that cats no longer seek to cause fear/suffering.

I hope that humans' extrapolated volitions aren't cruel (though maybe they are when judged by Superhappy standards). Regardless, an AI that's guaranteed to have empathy for us is not guaranteed, and in general quite unlikely, to have no other conflicts with our volitions; and the compromises it analogously makes will probably be larger and stranger than in the cat example.

Better than paperclips, but perhaps missing many dimensions we care about.

Very cool! How does this affect your quest for bounded analogues of Löbian reasoning?
