Raymond Arnold

I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Raymond Arnold's Comments

Possible takeaways from the coronavirus pandemic for slow AI takeoff

Curated. 

I personally agree with the OP, and have found the US's response to Covid-19, at least, fairly important for modeling how it might respond to AI. I also found it particularly interesting that the post focused on the "Slow Takeoff" scenario. I wouldn't have thought to make that specific comparison, and found it surprisingly apt.

I also think that, regardless of whether one agrees with the OP, "how humanity collectively responded to Covid-19" is important evidence in some form about how we can expect it to handle other catastrophes, and is worth paying attention to, and perhaps debating.

Possible takeaways from the coronavirus pandemic for slow AI takeoff

Are you saying you think that wasn't a fair characterization of the FDA, or that the hypothetical AI Governance bodies would be different from the FDA?

(The statement was certainly not very fair to the FDA, and I do expect there was more going on under the hood than that motivation. But I do broadly think governing bodies do what they are incentivized to do, which includes justifying themselves, especially after being around for a couple of decades and gradually being infiltrated by careerists.)

Possible takeaways from the coronavirus pandemic for slow AI takeoff

I do definitely expect different institutional failures in the case of Soft Takeoff. But it sort of depends on what level of abstraction you're looking at the institutional failure through. Like, the FDA won't be involved. But there's a decent chance that some other regulatory body will be involved, following the underlying FDA impulse of "wield the one hammer we know how to wield to justify our jobs." (In a large company, it's possible that regulatory body could be a department inside the org rather than a government agency.)

In reasonably good outcomes, the decisions are mostly being made by tech companies full of specialists who understand the problem well. In that case the institutional failures will look more like "in what ways do tech companies normally screw up due to internal politics?"

There's a decent chance the military or someone will try to commandeer the project, in which case more typical government institutional failures will become more relevant.

One thing that seems significant is that two years prior to The Big Transition, you'll have multiple companies with similar-ish tech. Some of them will be appropriately cautious (like New Zealand and Singapore), and others will not have the political wherewithal to slow down, think carefully, figure out what inconvenient things they need to do, and do them (like many other countries during covid).

Possible takeaways from the coronavirus pandemic for slow AI takeoff

Ah, okay. I think I need to at least think a bit harder to figure out if I still disagree in that case. 

Possible takeaways from the coronavirus pandemic for slow AI takeoff

I think given that we didn't suppress COVID, mitigating its damage probably involved new problems that we didn't have solutions for before.

Hmm. This just doesn't seem like what was going on to me at all. I think I disagree a lot about this, and it seems less about "how things will shake out in Slow AI Takeoff" and more about "how badly, how obviously-in-advance, and how easily-preventably we screwed up our covid response."

(I expect we also disagree about how Slow Takeoff would look, but I don't think that's the cruxy bit for me here). 

I'm sort of hesitant to jump into the "why covid obviously looks like mass institutional failure, given a very straightforward, well-understood scenario" argument, because I feel like it's been hashed out a lot in the past 3 months and I'm not sure where to go with it – I'm assuming you've read the relevant arguments and didn't find them convincing.

The sort of things I have in mind include:

  • The FDA actively hampers efforts to scale up testing
  • Hospitals don't start re-using PPE, even when it's clear they'll have to start doing so within a month
  • Everyone delays 3 weeks before declaring lockdowns, at a time when the simple math clearly indicated we needed to lock down promptly if we wanted a chance at squashing the virus (see the arithmetic after this list)
  • Media actively downplays the risk and attributes concern about it to racism
  • The CDC and WHO make actively misleading statements

These problems all seemed fairly straightforward and well understood. There might also be novel problems going on, but we don't need to hypothesize them to explain the above types of failure.
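(To spell out the "simple math" behind the lockdown point – a rough illustration, assuming unchecked cases double every three days or so, a stylized early-epidemic figure rather than a number from this thread: with doubling time $d$, cases grow as

$$N(t) = N_0 \cdot 2^{t/d}, \qquad \text{so } \frac{N(21)}{N_0} = 2^{21/3} = 2^7 = 128 \text{ for } d = 3 \text{ days.}$$

Under that assumption, a three-week delay means starting suppression from roughly a hundredfold more cases.)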

Possible takeaways from the coronavirus pandemic for slow AI takeoff

1. Many new problems arose during this pandemic for which we did not have historical experience, e.g. in supply chains. (Perhaps we had historical precedent in the Spanish flu, but that was sufficiently long ago that I don’t expect those lessons to generalize, or for us to remember those lessons.) In contrast, I expect that with AI alignment the problems will not change much as the AI systems become more powerful. Certainly the effects of misaligned powerful AI systems will change dramatically and be harder to mitigate, but I expect the underlying causes of misalignment will not change much, and that’s what we need to gain consensus about and find solutions for.

Wait... you think there will be fewer novel problems arising during AI takeoff (a completely unprecedented phenomenon) than arose during Covid? Even in my most relaxed, responsible slow-takeoff scenarios, that seems like an extremely surprising claim.

I'm also somewhat confused about what facts you think we didn't know about covid that prevented us from preparing – I don't currently have examples of such facts in mind. (The fact that some countries seem to be doing just fine makes it look to me like it was totally doable to have solved covid given the information we had at the time, or at least to have responded dramatically more adequately than many countries did.)

AGIs as populations

(serious question; I'm not sure what the right process is here)

What do you think should happen instead of "read through and object to Wei_Dai's existing blogposts"? Is there a different process that would work better? Or do you think this generally isn't worth the time? Or do you think Wei Dai should write a blogpost that more clearly passes your "sniff test" of "probably compelling enough to be worth more of my attention"?

Demons in Imperfect Search

Pedagogical note: something that feels like it's missing from the fable is a "realistic" sense of how demons get created and how they can manipulate the hill. 

Fortunately your subsequent real-world examples all have this, and, like, I did know what you meant. But it felt sort of arbitrary to have this combo of "Well, there's a very concrete, visceral example of the ball rolling downhill – I know what that means. But then there are some entities that can arbitrarily shape the hill. Why are the demons weak at the beginning and stronger the more you fold into demon space? What are the mechanics there?"

It's not the worst thing, and I don't have any ideas to tighten it. Overall I do think the post did a good job of communicating the idea it was aiming at.

Realism about rationality

Which you could round off to "biologists don't need to know about evolution", in the sense that it is not the best use of their time.

The most obvious thing is understanding why overuse of antibiotics might weaken their effectiveness: resistance evolves.

Realism about rationality

I guess the main thing I want is an actual tally of "how many people definitively found this post to represent their crux" vs. "how many people think this represented other people's cruxes".
