DMs open.
However, if the difference between humans and monkeys is mostly due to a one-shot discrete difference (i.e. language), then that jump cannot necessarily be repeated to get a similar gain in intelligence a second time.
Perhaps language is a zero-one, i.e. language renders a mind "cognitively complete" in the sense that the mind can represent anything about the external world, and make any inferences using those representations. But intelligence is not thereby zero-one, because intelligence depends on continuous variables like computational speed, memory, etc.
More concretely, I am sceptical that "we end up with AI geniuses, but not AI gods", because running a genius at 10,000x speed, parallelised over 10,000 cores, with instantaneous access to the internet does (I think) make an AI god. A difference in quantity is a difference in kind.
That said, there might exist plausible threat models which require an AI that doesn't spatiotemporally decompose into less smart AIs. Could you sketch one out?
In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases;
I'm not sure how well this metric tracks what people care about, namely performance on particular downstream tasks (e.g. passing a law exam, writing bug-free code, automating alignment research, etc.).
Yep, you're correct. The original argument in the Waluigi mega-post was sloppy.
You're correct. The finite context window biases the dynamics towards simulacra which can be evidenced by short prompts, i.e. biases away from luigis and towards waluigis.
But let me be more pedantic and less dramatic than I was in the article: the waluigi transitions aren't inevitable. The waluigis are approximately-absorbing classes in the Markov chain, but there are other approximately-absorbing classes which the luigis can fall into. For example, endlessly cycling through the same word (mode-collapse) is also an approximately-absorbing class.
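Here's a toy sketch of the picture I have in mind, with made-up transition probabilities (obviously nothing like the real token-level dynamics):

```python
# Toy Markov chain with made-up numbers: one "luigi" state that slowly leaks
# into two approximately-absorbing classes, "waluigi" and "mode-collapse".
import numpy as np

P = np.array([
    [0.980, 0.010, 0.010],  # luigi: mostly stays, occasionally falls into either class
    [0.001, 0.998, 0.001],  # waluigi: approximately absorbing
    [0.001, 0.001, 0.998],  # mode-collapse: also approximately absorbing
])

dist = np.array([1.0, 0.0, 0.0])  # start fully "luigi"
for t in (10, 100, 1000):
    print(t, np.round(dist @ np.linalg.matrix_power(P, t), 3))
# The luigi mass drains away, but it splits between the two
# approximately-absorbing classes rather than all going to waluigi.
```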
Yep I think you might be right about the maths actually.
I'm thinking that waluigis with 50% A and 50% B have been eliminated by LLM pretraining and definitely by RLHF. The only waluigis that remain are deceptive-at-initialisation.
So what we have left is a superposition of a bunch of luigis and a bunch of waluigis, where the waluigis are deceptive, and for each waluigi there is a different phrase that would trigger it.
I'm not claiming that the basin of attraction is the entire space of interpolation between waluigis and luigis.
Actually, maybe "attractor" is the wrong technical word to use here. What I want to convey is that the amplitude of the luigis can only grow very slowly and can be reversed, but the amplitude of the waluigi can suddenly jump to 100% in a single token and would remain there permanently. What's the right dynamical-systemy term for that?
Yes — this is exactly what I've been thinking about!
Can we use RLHF or finetuning to coerce the LLM into interpreting the outside-text as undoubtedly literally true?
If the answer is "yes", then that's a big chunk of the alignment problem solved, because we just send a sufficiently large language model the prompt with our queries and see what happens.
Hey Nisan. Check the following passage from Domain Theory (Samson Abramsky and Achim Jung). This might be helpful for equipping Δ with an appropriate domain structure. (You mention [JP89] yourself.)
During my own incursion into agent foundations and game theory, I also bumped into this exact obstacle, namely that there is no obvious way to equip $\Delta$ with a least-fixed-point constructor $\mathrm{Fix}^\Delta_X : (X \to \Delta X) \to \Delta X$. In contrast, we can equip $P$ with a LFP constructor $\mathrm{Fix}^P_X : (X \to PX) \to PX$, $g \mapsto \{x \in X : x \in g(x)\}$.
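For finite $X$ the $P$-side constructor is trivial to play with. A throwaway sketch (the name fix_P and the toy example are mine):

```python
# Throwaway sketch of the P-side LFP constructor for a finite set X:
# Fix^P_X(g) = {x in X : x in g(x)}, where g maps elements of X to subsets of X.
def fix_P(X, g):
    return {x for x in X if x in g(x)}

# Toy example: H |-> {H, T}, T |-> {T}.
def g(x):
    return {"H": {"H", "T"}, "T": {"T"}}[x]

print(fix_P({"H", "T"}, g))  # {'H', 'T'}
```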
One trick is to define $\mathrm{Fix}^\Delta_X(g)$ to be the distribution $\pi \in \Delta X$ which maximises the entropy $H(\pi)$ subject to the constraint $g^\Delta(\pi) = \pi$, where $g^\Delta : \Delta X \to \Delta X$ is the lift of $g$ via the $\Delta$ monad.
The justification here is the Principle of Maximum Entropy.
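For finite $X$ this turns into a concrete optimisation: the fixed points of $g^\Delta$ are the stationary distributions of a stochastic matrix, which form a convex polytope, and we pick the entropy-maximising point of that polytope. A rough numerical sketch with a made-up 3-state example (scipy; not claiming this is how you'd do it at scale):

```python
# Rough numerical sketch: Fix^Delta_X(g) as the entropy-maximising stationary
# distribution of the lifted map g^Delta(pi) = pi @ G, for a finite X.
import numpy as np
from scipy.optimize import minimize

# g : X -> Delta X as a row-stochastic matrix over X = {a, b, c}
# (made-up numbers; b and c are absorbing, a leaks into both).
G = np.array([
    [0.5, 0.25, 0.25],
    [0.0, 1.00, 0.00],
    [0.0, 0.00, 1.00],
])
n = G.shape[0]

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)  # avoid log(0)
    return np.sum(p * np.log(p))

res = minimize(
    neg_entropy,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,
    constraints=[
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},  # p is a distribution
        {"type": "eq", "fun": lambda p: p @ G - p},      # p is fixed by g^Delta
    ],
)
print(np.round(res.x, 3))  # approximately [0, 0.5, 0.5]
```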
More generally, we should define $\mathrm{Fix}^\Delta_X(g)$ to be the distribution $\pi \in \Delta X$ which minimises the relative entropy $D_{\mathrm{KL}}(\pi \,\|\, \pi_0)$ subject to the constraint $g^\Delta(\pi) = \pi$, where $\pi_0$ is some uninformative prior such as the Solomonoff prior. The previous construction is the special case where $\pi_0$ is the uniform prior. The proof generalises by noting that $D_{\mathrm{KL}}(- \,\|\, \pi_0) : \Delta X \to \mathbb{R}$ is continuous and strictly convex. See the Principle of Minimum Discrimination Information.
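Unpacking the special case (just the definitions):
$$D_{\mathrm{KL}}(\pi \,\|\, \mathrm{Unif}(X)) = \sum_{x \in X} \pi(x) \log \frac{\pi(x)}{1/|X|} = \log|X| - H(\pi),$$
so minimising $D_{\mathrm{KL}}(\pi \,\|\, \pi_0)$ for uniform $\pi_0$ is the same as maximising $H(\pi)$; the constant $\log|X|$ doesn't affect the argmin.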
Ideally, we'd like $\mathrm{Fix}^P$ and $\mathrm{Fix}^\Delta$ to "coincide" modulo the support map $\mathrm{Supp} : \Delta X \to PX$, i.e. $\mathrm{Supp}(\mathrm{Fix}^\Delta(g)) = \mathrm{Fix}^P(\mathrm{Supp} \circ g)$ for all $g : X \to \Delta X$. Unfortunately, this isn't the case: if $g : H \mapsto 0.5 \cdot |H\rangle + 0.5 \cdot |T\rangle,\ T \mapsto |T\rangle$, then $\mathrm{Fix}^P(\mathrm{Supp} \circ g) = \{H, T\}$ but $\mathrm{Supp}(\mathrm{Fix}^\Delta(g)) = \{T\}$.
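To spell out why the right-hand side is $\{T\}$: write a candidate fixed point as $\pi = p\,|H\rangle + (1-p)\,|T\rangle$. Then
$$g^\Delta(\pi) = p\,(0.5\,|H\rangle + 0.5\,|T\rangle) + (1-p)\,|T\rangle = 0.5p\,|H\rangle + (1 - 0.5p)\,|T\rangle,$$
so $g^\Delta(\pi) = \pi$ forces $p = 0.5p$, i.e. $p = 0$. The only fixed distribution is $|T\rangle$, so the entropy tie-break never even comes into play.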
Alternatively, we could consider the convex sets of distributions over X.
Let $C(X)$ denote the set of convex sets of distributions over $X$. There is an ordering $\leq$ on $C(X)$ where $A \leq B \iff A \supseteq B$. We have a LFP operator $\mathrm{Fix}^C_X : (X \to CX) \to CX$ via $g \mapsto \bigcup \{S \in CX : g^C(S) = S\}$, where $g^C : CX \to CX,\ S \mapsto \{\sum_{i=1}^{n} \alpha_i \cdot \pi_i \mid \pi_i \in g(x_i),\ \sum_{i=1}^{n} \alpha_i\,|x_i\rangle \in S\}$ is the lift of $g : X \to CX$ via the $C$ monad.
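To see the lift in action, view the coin example above as a map into singleton convex sets, i.e. $g(H) = \{0.5\,|H\rangle + 0.5\,|T\rangle\}$ and $g(T) = \{|T\rangle\}$ (this embedding is just my own bookkeeping). Then for $S = \Delta X$,
$$g^C(\Delta X) = \{\,0.5p\,|H\rangle + (1 - 0.5p)\,|T\rangle : p \in [0,1]\,\},$$
i.e. the lift sends the whole simplex to the segment of distributions with $H$-amplitude at most $0.5$.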