Lukas Finnveden

Comments

Prizes for ELK proposals
I agree that i does slightly worse than t on consistency-checks, but i also does better on other regularizers you're (maybe implicitly) using like speed/simplicity, so as long as i doesn't do too much worse it'll still beat out the direct translator.

Any articulable reason for why i only does slightly worse than t? Why would a 2N-node model fix a large majority of the discrepancies between an N-node model and a 1e12*N-node model? I'd expect it to fix just a small fraction of them.

I think this rapidly runs into other issues with consistency checks, like the fact that the human is going to be systematically wrong about some correlations, so i potentially is more consistent than t.

Yeah, if you can get better-looking consistency than the direct translator in some cases, I agree that a sufficiently high consistency penalty will just push towards exploiting that (even if the intermediate model needs to be almost as large as the full predictor to exploit it properly).

I'm curious whether you think this is the main obstacle. If we had a version of the correlation-consistency approach that always gave the direct translator minimal expected consistency loss, do we as-of-yet lack a counterexample for it?

Prizes for ELK proposals

Hypothesis: Maybe you're actually not considering a reporter i that always uses an intermediate model, but instead a reporter i' that does translation on hard questions and just uses the intermediate model on questions where it's confident that the intermediate model understands everything relevant. I see three different possible issues with that idea:

1. To do this, i' needs an efficient way (i.e., one that doesn't scale with the size of the predictor) to be highly confident, on at least some inputs, that the intermediate model understands everything relevant about the situation. I think this is a reasonable "worst-case" assumption, but I'm not sure. If you're using it, I'd be curious to know.

2. Even when the reporter gets inputs that the intermediate model fully understands, it seems like it will run into issues if its sampling process (for generating distributions) produces some inputs that the intermediate model doesn't fully understand. (I.e., if i' gets an input for which X1 is true, and then simulates many more random inputs for which X1 is true, and by chance X2 is true for one of them, then the reporter will have to do translation on that sample.) Which makes it seem likely that i' would have to do translation at least once per input.

3. If the model is confident that the intermediate model understands everything relevant about the situation, it seems more efficient to return the intermediate model's answer to the question at hand than to loop over it many times, trying to fix correlations. So really we should get a reporter i'' that does translation on the predictor on hard questions and returns an intermediate model's latent knowledge on easy questions. That seems like an ok reporter to get.
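
For concreteness, here's a minimal sketch of the routing that such an i'' would implement. The function names, signatures, and the confidence test are hypothetical placeholders, not anything from the report:

```python
from typing import Callable

# Hypothetical sketch of the i'' reporter from point 3. The names and
# signatures are made up for illustration; nothing here is from the report.

def make_reporter_i_double_prime(
    intermediate_is_confident: Callable[[dict], bool],  # does the intermediate model fully understand this input?
    intermediate_answer: Callable[[str, dict], str],    # the intermediate model's own answer
    direct_translation: Callable[[str, dict], str],     # expensive translation from the full predictor
) -> Callable[[str, dict], str]:
    def reporter(question: str, observation: dict) -> str:
        if intermediate_is_confident(observation):
            # Easy input: just return the intermediate model's latent knowledge,
            # rather than looping over it many times to patch up correlations.
            return intermediate_answer(question, observation)
        # Hard input: fall back to direct translation on the predictor.
        return direct_translation(question, observation)
    return reporter
```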

Prizes for ELK proposals

I don't understand your counterexample in the appendix "Details for penalizing inconsistencies across different inputs". You present a cheating strategy that requires the reporter to run and interpret the predictor a bunch of times, which seems plausibly slower than doing honest translation. And then you say you fix this issue with:

But this dependence could be avoided if there was an intermediate model between the predictor’s Bayes net (which we are assuming is very large) and the human’s Bayes net. Errors identified by the intermediate model are likely to be “real” errors in the human’s model. But we can identify those errors with an amount of computation that depends only on the intermediate model, rather than needing to actually run the full predictor. For example, if the human’s Bayes net has size N, the intermediate model has size 2N, and the predictor’s Bayes net has size 1e12 x N, then the cost of using the intermediate model many times can still be small relative to the cost of direct translation.

Roughly speaking, I don't see how the reporter with an intermediate model systematically wins out over the translator given that the intermediate model will miss many things that the predictor understands.

Taking it more slowly, using the example from the report:

Let's say we have X1='do rowhammer' (which the intermediate model does understand) and X2='do eldritch hacks that the intermediate model isn't good enough to understand'.

Let's say we have reporters:

  • s, which does naive human simulation,
  • i, which has an intermediate model that it always uses to look for and fix strange correlations, and
  • t, a direct translator.

Let's say the reporter is asked "Is the diamond safe?" on an input that includes X1, and the human runs consistency checks across many actions containing X1 (and none containing X2). Here, it seems like i and t both pass the consistency checks, and i is faster than t, so i wins.

But now let's say that the reporter is asked "Is the diamond safe?" on an input that includes X2, and the human runs consistency checks across many actions containing X2. In this case, the reporter i will fail the consistency checks (since the intermediate model's predictions won't match the predictor's predictions, and so the reporter will fail to adjust for the revealing correlations), so t will come out ahead.

So if these reporters are the only competitors, it seems like we should be able to tune the regularization to make t win.
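
To make that concrete, here's a toy numerical sketch of the trade-off. The consistency losses, compute costs, and input mix are made-up illustrative numbers, not anything derived from the report:

```python
# Toy numbers, made up for illustration: (consistency loss on X1-style inputs,
# consistency loss on X2-style inputs, compute cost) for each reporter.
reporters = {
    "s (human simulator)":    (1.0, 1.0, 1),
    "i (intermediate model)": (0.0, 1.0, 2),
    "t (direct translator)":  (0.0, 0.0, 1000),
}

def total_losses(consistency_weight, speed_weight, frac_x2_inputs=0.1):
    losses = {}
    for name, (c_x1, c_x2, compute) in reporters.items():
        consistency = (1 - frac_x2_inputs) * c_x1 + frac_x2_inputs * c_x2
        losses[name] = consistency_weight * consistency + speed_weight * compute
    return losses

print(total_losses(consistency_weight=10.0, speed_weight=1.0))  # i wins here
print(total_losses(consistency_weight=1e6, speed_weight=1.0))   # t wins here
```

The only point is that, because t's consistency loss is strictly lower on the X2-style inputs, some weighting of the consistency penalty relative to the speed penalty makes t come out ahead of both s and i.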

Biology-Inspired AGI Timelines: The Trick That Never Works

It's very easy to construct probability distributions that have earlier timelines, that look intuitively less confident, and that have higher entropy than the bio-anchors forecast. You can just take some of the probability mass from the peak around 2050 and redistribute it among earlier years, especially years very close to the present, where bio-anchors is reasonably confident that AGI is unlikely.
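
As a quick sanity check of that claim, here's a minimal sketch with a made-up stand-in distribution (not the actual bio-anchors numbers):

```python
import numpy as np
from scipy.stats import entropy

years = np.arange(2025, 2101)

# Made-up stand-in for a bio-anchors-like forecast: a peak around 2050.
baseline = np.exp(-0.5 * ((years - 2050) / 8.0) ** 2)
baseline /= baseline.sum()

# Move 15% of the mass near the peak into the earliest years.
shifted = baseline.copy()
peak = (years >= 2045) & (years <= 2055)
early = years <= 2035
moved = 0.15 * shifted[peak].sum()
shifted[peak] *= 0.85
shifted[early] += moved / early.sum()

print("entropy:", entropy(baseline), "->", entropy(shifted))                   # entropy goes up
print("mean year:", (years * baseline).sum(), "->", (years * shifted).sum())  # mean gets earlier
```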

Yudkowsky and Christiano discuss "Takeoff Speeds"
Oh, come on. That is straight-up not how simple continuous toy models of RSI work. Between a neutron multiplication factor of 0.999 and 1.001 there is a very huge gap in output behavior.

Nitpick: I think that particular analogy isn't great.

For nuclear stuff, we have two state variables: the amount of fissile material and the current number of neutrons flying around. The amount of fissile material determines the "neutron multiplication factor", but it's the number of neutrons that goes crazy, not the amount of fissile material. And the current number of neutrons doesn't matter for whether the pile will eventually go crazy or not.

But in the simplest toy models of RSI, we just have one variable: intelligence. There's no separate "intelligence multiplication factor" we can change; there's just intelligence figuring out how to build more intelligence.
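
To make the disanalogy concrete, here's a minimal sketch contrasting the two kinds of toy model. The dynamics are illustrative choices, not anyone's actual model of RSI or fission:

```python
# Illustrative toy dynamics only.

def neutron_model(k: float, n0: float = 1.0, steps: int = 10_000) -> float:
    """Two quantities: a fixed multiplication factor k (set by the amount of
    fissile material) and a neutron count n. Whether n blows up depends only
    on whether k > 1, not on how many neutrons you start with."""
    n = n0
    for _ in range(steps):
        n *= k
    return n

def rsi_model(i0: float, steps: int = 50) -> float:
    """One quantity: intelligence i. There's no separate dial for the growth
    factor; each step's growth is produced by i itself (here, crudely, growth
    proportional to current intelligence)."""
    i = i0
    for _ in range(steps):
        i += 0.1 * i
    return i

print(neutron_model(0.999), neutron_model(1.001))  # sub- vs super-critical
print(rsi_model(1.0), rsi_model(100.0))            # same relative growth regardless of i0
```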

Maybe an exothermic chemical reaction, like fire, is a better analogy. Either you have enough heat to create a self-sustaining reaction, or you don't.

Christiano, Cotra, and Yudkowsky on AI progress

While GPT-4 wouldn't be a lot bigger than GPT-3, Sam Altman did indicate that it'd use a lot more compute. That's consistent with Stack More Layers still working; they might just have found an even better use for compute.

(The increased compute-usage also makes me think that a Paul-esque view would allow for GPT-4 to be a lot more impressive than GPT-3, beyond just modest algorithmic improvements.)

Christiano, Cotra, and Yudkowsky on AI progress
and some of my sense here is that if Paul offered a portfolio bet of this kind, I might not take it myself, but EAs who were better at noticing their own surprise might say, "Wait, that's how unpredictable Paul thinks the world is?"

If Eliezer endorses this on reflection, that would seem to suggest that Paul actually has good models of how often trend breaks happen, and that the problem-by-Eliezer's-lights is relatively more about either:

  • that Paul's long-term predictions don't adequately take into account his good sense of short-term trend breaks, or
  • that Paul's long-term predictions are actually fine and good, but that his communication about them is somehow misleading to EAs.

That would be a very different kind of disagreement than I thought this was about. (Though actually kind-of consistent with the way that Eliezer previously didn't quite diss Paul's track-record, but instead dissed "the sort of person who is taken in by this essay [is the same sort of person who gets taken in by Hanson's arguments in 2008 and gets caught flatfooted by AlphaGo and GPT-3 and AlphaFold 2]"?)

Also, none of this erases the value of putting forward the predictions mentioned in the original quote, since that would then be a good method of communicating Paul's (supposedly miscommunicated) views.

Yudkowsky and Christiano discuss "Takeoff Speeds"

No, the form says that 1=Paul. It's just the first sentence under the spoiler that's wrong.

Yudkowsky and Christiano discuss "Takeoff Speeds"

Presumably you're referring to this graph. The y-axis looks like the kind of score that ranges between 0 and 1, in which case this looks sort-of like a sigmoid to me, which accelerates as it gets closer to ~50% performance (and decelerates as it gets closer to 100% performance).

If so, we might want to ask whether these tasks are chosen ~randomly (among tasks that are indicative of how useful AI is) or whether they're selected for difficulty in some way. In particular, assume that most tasks look sort-of like sigmoids as models are scaled up (accelerating around 50%, improving more slowly when they're closer to 0% or 100%). Then you might think that the most exciting tasks to submit to BIG-bench would be the ones that can't be handled by small models but that large models rapidly improve upon (as opposed to tasks that are basically solved already at 10^10 parameters). In which case the aggregate of all these tasks could be expected to look sort-of like this, improving faster after 10^10 than before.

...is one story I can tell, but idk if I would have predicted that beforehand, and fast acceleration after 10^10 is certainly consistent with many people's qualitative impressions of GPT-3. So maybe there is some real acceleration going on.
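
Here's a minimal sketch of that selection story, assuming each task's score is a sigmoid in log-parameter-count with a task-specific difficulty threshold, and that submitted tasks are biased towards thresholds above what ~10^10-parameter models can handle. All the numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
log_params = np.linspace(8, 13, 6)  # 10^8 .. 10^13 parameters

def task_curve(threshold, width=0.5):
    """Toy per-task score: a sigmoid in log10(parameter count)."""
    return 1 / (1 + np.exp(-(log_params - threshold) / width))

# Unselected tasks: difficulty thresholds spread over the whole range.
random_tasks = [task_curve(t) for t in rng.uniform(8, 13, size=200)]

# 'Submitted' tasks: biased towards things that ~10^10-parameter models can't
# do yet but larger models might, i.e. thresholds mostly above 10.
selected_tasks = [task_curve(t) for t in rng.uniform(10, 13, size=200)]

random_agg = np.mean(random_tasks, axis=0)
selected_agg = np.mean(selected_tasks, axis=0)

# The selected aggregate stays flat up to ~10^10 and then climbs quickly,
# even though every individual task curve is an ordinary sigmoid.
for x, r, s in zip(log_params, random_agg, selected_agg):
    print(f"10^{x:.0f} params: random {r:.2f}, selected {s:.2f}")
```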

(Also, see this post for similar curves, but for the benchmarks that OpenAI tested GPT-3 on. There's no real acceleration visible there, other than for arithmetic.)

Discussion with Eliezer Yudkowsky on AGI interventions
95% of all ML researchers don't think it's a problem, or think it's something we'll solve easily

The 2016 survey of AI researchers asked about the alignment problem as described by Stuart Russell: 39% said it was an important problem, and 33% that it's a harder problem than most other problems in the field.
