This is a special post for quick takes by AprilSR. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


11 comments

wait i just got the pun

I think a position some AI safety people have is: “Powerful AI is necessary to perform a pivotal act.”

I can buy that it is impossible to safely have an AI make extremely advanced progress in, e.g., nanotechnology. But it seems somewhat surprising to me if you need a general AI to stop anyone else from making a general AI.

Political solutions, for example, certainly seem very hard, but… all solutions seem very hard? The reasons AI-based solutions are hard don’t seem obviously weaker to me than the reasons political solutions are hard.

(The described position is probably a strawman; I’m mostly posting this to further my own thinking rather than as a criticism of anyone in particular.)

Does anyone know if there is a PDF version of the Sequence Highlights anywhere? (Or any ebook format would probably be fine.)

Humans are a proof by example that you can have a general intelligence which is not a very good utility-function maximizer; arguably they even suggest that this is what you get by default.

I mostly buy that eventually you get a “spark of consequentialism” in some sense, but that might actually happen at a substantially superhuman level.

I’m not sure this actually extends timelines very much if it is true, but I’m updating towards expecting the foom to happen a little later into the AI improvement curve.

...to the extent that evolved intelligence is similar to AI, at least.

That's fair; my brain might be doing great at maximizing something that isn't especially correlated with what I actually want or with what actually makes me happy.

I think we should have a community norm that threatening libel suits (or actually suing) is incredibly unacceptable in almost all cases—I'm not sure what the exact exceptions should be, but maybe it should require "they were knowingly making false claims."

I feel unsure whether it would be good to enforce such a norm regarding the current Nonlinear situation because there wasn't common knowledge beforehand and because I feel too strongly about this norm to not be afraid that I'm biased (and because hearing them out is the principled thing to do). But I think building common knowledge of such a norm would be good.

Under this community norm, how does Alice respond when Bob lies about her in public in a way that hurts her commercial business?

I'm more confident that we should generally have norms against using threats of legal action to prevent the exchange of information than I am about the exact form those norms should take. But to give my immediate thoughts:

I think the best thing for Alice to do if Bob is lying about her is to just refute the lies. In an ideal world, this is sufficient. In practice, I guess maybe it's insufficient, or maybe refuting the lies would require sharing private information, so if necessary I would next escalate to informing forum moderators, presenting evidence privately, and requesting a ban.

Only once those avenues are exhausted might I consider threatening a libel suit acceptable.

I do notice now that the Nonlinear situation in particular is complicated by Ben Pace being a LessWrong admin: if refuting the claims (step 1) doesn't work, appealing to the moderators (step 2) might have issues, so escalating to a libel threat (step 3) might be acceptable sooner than usual.

Concerns have been raised that there might be some sort of large first-mover advantage. I'm not sure I buy this; my instinct is that the Nonlinear cofounders are just bad-faith actors making whatever arguments seem advantageous to them (though on principle I'm trying to withhold final judgement). That said, I could definitely imagine deciding in the future that this is a large enough concern to justify weaker norms against rapid escalation.