Matthew Barnett

Just someone who wants to learn about the world. I think about AI risk sometimes, but I still have a lot to learn.

I also change my views often. Anything I wrote that's more than 10 days old should be treated as potentially outdated.

Matthew Barnett's Comments

[AN #80]: Why AI risk might be solved without additional intervention from longtermists
see above about trying to conform with the way terms are used, rather than defining terms and trying to drag everyone else along.

This seems odd given your objection to "soft/slow" takeoff usage and your advocacy of "continuous takeoff" ;)

[AN #80]: Why AI risk might be solved without additional intervention from longtermists
Does this make sense to you?

Yeah, that makes sense. Your points about "bio" not being short for "biological" were valid, but the fact that I, as a listener, didn't know that implies that it's really easy to mess up language usage here. I'm starting to think that the real fight should be against using terms that aren't self-explanatory.

Have you actually observed it being used in ways that you fear (and which would be prevented if we were to redefine it more narrowly)?

I'm not sure whether it would have been prevented by using the term more narrowly, but in my experience the most common reaction people outside of EA/LW (and sometimes even within) have to hearing about AI risk is to assume that it's neither technical nor about accidents. In that sense, I have already been exposed to quite a bit of this.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

I agree that this is troubling, though I think it's similar to how I wouldn't want the term biorisk to be expanded to include biodiversity loss (a risk, but not the right type), regular human terrorism (humans are biological, but it's a totally different issue), zombie uprisings (they are biological, but the scenario is totally ridiculous), alien invasions, etc.

Not to say that's what you are doing with AI risk. I'm worried about what others will do with it if the term gets expanded.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

I appreciate the arguments, and I think you've mostly convinced me, largely because of the historical argument.

I do still have some remaining apprehension about using AI risk to describe every type of risk arising from AI.

I want to include philosophical failures, as long as the consequences of the failures flow through AI, because (aside from historical usage) technical problems and philosophical problems blend into each other, and I don't see a point in drawing an arbitrary and potentially contentious border between them.

That is true. The way I see it, UDT is definitely on the technical side, even though it incorporates a large amount of philosophical background. When I say "technical," I mostly mean "specific, uses math, and has a clear meaning within the language of computer science," rather than the narrower meaning of "related to machine learning" or something similar.

My issue with arguing for philosophical failure is that, as I'm sure you're aware, there's a well-known failure mode of worrying about vague philosophical problems rather than more concrete ones. Within academic philosophy, the majority of discussion surrounding AI centers on consciousness, intentionality, whether it's even possible to construct a human-like machine, whether such machines should have rights, etc.

There's a unique thread of philosophy that arose from LessWrong, including work on decision theory, that doesn't focus on these thorny, low-priority questions. While I'm comfortable with you arguing that philosophical failure is important, my impression is that the overly philosophical approach many people have taken has done more harm than good for the field in the past, and continues to do so.

It is therefore sometimes nice to be able to tell people that the problems worked on here are concrete and specific, and don't require doing a ton of abstract philosophy or political advocacy.

I don't think this is a good argument, because even within "accidental technical AI risk" there are different problems that aren't equally worthwhile to solve, so why aren't you already worried about outsiders thinking all those problems are equally worthwhile?

This is true, but my impression is that when you tell people a problem is "technical," it generally makes them refrain from forming strong opinions before understanding a lot about it. "Accidental" also reframes the discussion in a way that reduces the risk of polarized reactions. This is a common theme in many fields:

  • Physicists sometimes get frustrated with people arguing about "the philosophy of the interpretation of quantum mechanics," because a large subset of people think that since the question is philosophical, they don't need any subject-level expertise to talk about it.
  • Economists try to emphasize that they use models and empirical data, because a lot of people think their field of study is more or less just high-status opinion plus math. Emphasizing that there are real, specific models that they study helps to reduce this impression. The same goes for political science.
  • A large fraction of tech workers are frustrated with the use of Machine Learning as a buzzword right now, partly because people started saying Machine Learning = AI rather than Machine Learning = Statistics, and so a lot of people concluded that even if they don't understand statistics, they can understand AI, since that's "like philosophy and stuff."

Scott Aaronson has said:

But I’ve drawn much closer to the community over the last few years, because of a combination of factors: [...] The AI-risk folks started publishing some research papers that I found interesting—some with relatively approachable problems that I could see myself trying to think about if quantum computing ever got boring. This shift seems to have happened at roughly around the same time my former student, Paul Christiano, “defected” from quantum computing to AI-risk research.

My guess is that this shift in his thinking occurred because a lot of people started talking about technical risks from AI, rather than framing it as a philosophy problem or a problem of eliminating bad actors. Eliezer has shared this viewpoint for years, writing in the CEV document:

Warning: Beware of things that are fun to argue.

a warning that reflects how easily discussions of technical, accidental risks can be derailed by questions that are fun to argue about.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

"AI risk" is just shorthand for "accidental technical AI risk." To the extent that people are confused, I agree it's probably worth clarifying the type of risk by adding "accidental" and "technical" whenever we can.

However, I disagree with the idea that we should expand the word AI risk to include philosophical failures and intentional risks. If you open the term up, these outcomes might start to happen:

  • It becomes unclear in conversation what people mean when they say "AI risk."
  • Like "the Singularity," it becomes a buzzword.
  • Journalists start projecting Terminator scenarios onto the term, and now have justification for doing so, because even the researchers say that AI risk can mean a lot of different things.
  • It puts many types of risk into one basket, suggesting to outsiders that all attempts to reduce "AI risk" might be equally worthwhile.
  • ML researchers start to distrust AI risk researchers, because people who are worried about the Terminator use the same words as the AI risk researchers and therefore get associated with them.

This can all be avoided by having a community norm: when we say AI risk, we mean technical accidental risk, and when we're talking about other types of risk, we use more precise terminology.

Soft takeoff can still lead to decisive strategic advantage
The concern with AI is that an initially tiny entity might take over the world.

This is a concern with AI, but why is it the concern? If, e.g., the United States could take over the world because of AI-enabled growth, why would that not be a big deal? I'm imagining you saying, "It's not unique to AI," but why does it need to be unique? If AI is the root cause of something on the order of Britain colonizing the world in the 19th century, that still seems concerning if no good governing principles were established beforehand.
