A strange effect: I'm using a GPU server in Russia right now which doesn't have access to Copilot, and so when I'm in VS Code I sometimes pause expecting Copilot to write stuff for me, and then when it doesn't I feel a brief flash of the same kind of sadness I feel when a close friend is far away & I miss them.
For all the talk about bad incentive structures being the root of all evil in the world, EAs are strikingly bad at setting up good organizational incentives (and I thought this even before the recent Altman situation). A document (even a founding one) with some text, a board that is powerful on paper and staffed with good people, a general claim to do-goodery: all of these are powerless in the face of the incentives you create when making your org. What local changes will cause people to gain more money, power, status, influence, sex, or other things they selfishly & basely desire? Which of the powerful are you partnering with, and what do their incentives look like?
You don't need incentive-purity here, but for every bad incentive you have, you must put more pressure on your good people & culture to forgo their base & selfish desires for high & altruistic ones, and to fight against those who choose the base & selfish desires and are potentially smarter & wealthier than your good people.
Quick prediction so I can say "I told you so" as we all die later: I think all current attempts at mechanistic interpretability do far more for capabilities than alignment, and I am not persuaded by arguments of the form "there are far more capabilities researchers than mechanistic interpretability researchers, so we should expect MI people to have ~0 impact on the field". Ditto for modern scalable oversight projects, and anything having to do with chain of thought.
Sometimes people say releasing model weights is bad because it hastens the time to AGI. Is this true?
I can see why people dislike non-centralized development of AI, since it makes it harder to control those developing the AGI. And I can even see why people don't like big labs making the weights of their AIs public due to misuse concerns (even if I think I mostly disagree).
But much of the time people are angry at centralized AGI development efforts that are otherwise not open-source, like Meta or X.ai (among others), for releasing model weights to the public.
In neither of these cases, however, did the labs (to my knowledge) have any particularly interesting insight into architecture or training methodology that got released via the weight sharing, so I don't think time-to-AGI got shortened at all.
I agree that releasing the Llama or Grok weights wasn't particularly bad from a speeding up AGI perspective. (There might be indirect effects like increasing hype around AI and thus investment, but overall I think those effects are small and I'm not even sure about the sign.)
I also don't think misuse of public weights is a huge deal right now.
My main concern is that I think releasing weights would be very bad for sufficiently advanced models (in part because of deliberate misuse becoming a bigger deal, but also because it makes most interventions we'd want against AI takeover infeasible to apply consistently; someone will just run the AIs without those safeguards). I think we don't know exactly how far away from that we are. So I wish anyone releasing ~frontier model weights would accompany that with a clear statement saying that they'll stop releasing weights at some future point, and giving clear criteria for when that will happen. Right now, the vibe to me feels more like a generic "yay open-source", which I'm worried makes it harder to stop releasing weights in the future.
(I'm not sure how many people I speak for here, maybe some really do think it speeds up timelines.)
Robin Hanson has been writing regularly, at about the same quality, for almost 20 years. Tyler Cowen too, but personally Robin has been much more intellectually influential for me. It is actually really surprising how little his insights have degraded via regression-to-the-mean effects. Anyone else like this?
Some have pointed out seemingly large amounts of status-anxiety EAs generally have. My hypothesis about what's going on:
A cynical interpretation: for most people, altruism is significantly motivated by status-seeking, so it should not be all that surprising if most effective altruists are significantly motivated by status in their altruism. You've now collected several hundred people, all motivated by status, into the same subculture, but status isn't a positive-sum good, so not everyone can get the amount of status they want. Hence the above dynamic: people get immense status anxiety compared to alternative cultures, because in alternative situations they'd just climb to the proper status-level in their subculture, out-competing those who care less about status. But here everyone cares about status a great deal, so those who would have out-competed others in alternate situations are unable to, and feel bad about it.
The solution?
One solution given this world is to break EA up into several different sub-cultures. On a less grand, more personal, scale, you could join a subculture outside EA and status-climb to your heart's content in there.
Preferably a subculture with very few status-seekers, but with large amounts of status to give. Ideas for such subcultures?
An interesting strategy, seemingly related to FDT's prescription to ignore threats, which appears to have worked:
From the very beginning, the People’s Republic of China had to maneuver in a triangular relationship with the two nuclear powers, each of which was individually capable of posing a great threat and, together, were in a position to overwhelm China. Mao dealt with this endemic state of affairs by pretending it did not exist. He claimed to be impervious to nuclear threats; indeed, he developed a public posture of being willing to accept hundreds of millions of casualties, even welcoming it as a guarantee for the more rapid victory of Communist ideology. Whether Mao believed his own pronouncements on nuclear war it is impossible to say. But he clearly succeeded in making much of the rest of the world believe that he meant it—an ultimate test of credibility.
From Kissinger's On China, chapter 4 (loc 173.9).
If Adam is right, and the only way to get great at research is long periods of time with lots of mentor feedback, then MATS should probably pivot away from the 2-6 month timescales they've been operating at, and toward 2-6 year timescales for training up their mentees.
Seems like the thing to do is to have a program that happens after MATS, not to extend MATS. I think in general you want sequential filters for talent, and ideally the early stages are as short as possible (my guess is indeed MATS should be a bit shorter).
Last night I had a horrible dream: I had posted to LessWrong a post filled with useless & meaningless jargon without noticing what I was doing, then I went to sleep, and when I woke up I found I had karma on the post. When I read the post myself I noticed how meaningless the jargon was, and I myself couldn't resist giving it a strong-downvote.
From The Guns of August:
...Old Field Marshal Moltke in 1890 foretold that the next war might last seven years—or thirty—because the resources of a modern state were so great it would not know itself to be beaten after a single military defeat and would not give up [...] It went against human nature, however—and the nature of General Staffs—to follow through the logic of his own prophecy. Amorphous and without limits, the concept of a long war could not be scientifically planned for as could the orthodox, predictable, and simple solution of decisive battle an
Yesterday I had a conversation with a person very much into cyborgism, and they told me about a particular path to impact floating around the cyborgism social network: Evals.
I really like this idea, and I have no clue how I didn't think of it myself! It's the obvious thing to do when you have a bunch of insane people (a term of affection & praise when I use it for such people) obsessed with language models, who are also incredibly good & experienced at getting the models to do whatever they want. I would trust these people red-teaming a model and te...
Progress in neuromorphic value theory
...Animals perform flexible goal-directed behaviours to satisfy their basic physiological needs. However, little is known about how unitary behaviours are chosen under conflicting needs. Here we reveal principles by which the brain resolves such conflicts between needs across time. We developed an experimental paradigm in which a hungry and thirsty mouse is given free choices between equidistant food and water. We found that mice collect need-appropriate rewards by structuring their choices into p
The more I think about it, the more I think AI is basically the perfect technology for China to succeed with. China's strengths are:
And its weaknesses are:
And what it wants is:
Many methods to "align" ChatGPT seem to make it less willing to do things its operator wants it to do, which seems spiritually against the notion of having a corrigible AI.
I think this is a more general phenomenon when aiming to minimize misuse risk. You will need to end up doing some form of ambitious value learning, which I anticipate will be especially susceptible to getting broken by alignment hacks produced by RLHF and its successors.
I tried implementing Tell communication strategies, and they were surprisingly effective. I have no idea why it never occurred to me to just tell people what I'm thinking, rather than hinting and having them guess what I was thinking, or me guessing the answers to questions I have about what they're thinking.
Edit: although, tbh, I'm assuming a lot less common conceptual knowledge between me and my conversation partners than the examples in the article do.
In Magna Alta Doctrina, Jacob Cannell talks about exponentiated gradient descent as a way of approximating Solomonoff induction using ANNs:
...While that approach is potentially interesting by itself, it's probably better to stay within the real algebra. The Solomonoff style partial continuous update for real-valued weights would then correspond to a multiplicative weight update rather than an additive weight update as in standard SGD.
Has this been tried/evaluated? Why actually yes - it's called exponentiated gradient descent, as exponentiating the result of addi
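To make the contrast concrete, here's a minimal sketch (mine, not Cannell's) of an exponentiated-gradient update next to plain SGD on a toy linear regression. The data, step size, positive true weights, and the classic-EG renormalization to a known total mass are all illustrative assumptions:

```python
import numpy as np

# Toy comparison of additive SGD vs an exponentiated gradient (EG) update on
# linear regression. Everything here is an illustrative stand-in.

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = np.abs(rng.normal(size=d))      # EG naturally maintains positive weights
y = X @ w_true

def grad(w):
    # Gradient of mean squared error (1/n) * ||Xw - y||^2
    return 2 * X.T @ (X @ w - y) / n

w_sgd = np.ones(d)
w_eg = np.ones(d)
lr = 0.01
for _ in range(2000):
    w_sgd -= lr * grad(w_sgd)                 # additive update
    w_eg *= np.exp(-lr * grad(w_eg))          # multiplicative (exponentiated) update
    w_eg *= w_true.sum() / w_eg.sum()         # classic EG keeps total mass fixed
                                              # (true mass used only for illustration)

print("SGD error:", np.linalg.norm(w_sgd - w_true))
print("EG  error:", np.linalg.norm(w_eg - w_true))
```

The multiplicative form moves each weight in proportion to its current size, which is roughly what gives EG its bias toward sparser solutions.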
The following is very general; my future views will likely fall inside the set of views it allows.
I know lots about extant papers, and I notice some people in alignment seem to throw them around like they are sufficient evidence to tell you nontrivial things about the far future of ML systems.
To some extent this is true, but lots of the time it seems very abused. Papers tell you things about current systems and past systems, and the conclusions they tell you about future systems are often not very nailed down. Suppose we have evidence that d...
I'm generally pretty skeptical about inverse reinforcement learning (IRL) as a method for alignment. One of many arguments against: I do not act according to any utility function, including the one I would deem the best. Presumably, if I had as much time & resources as I wanted, I would eventually be able to figure out a good approximation to what that best utility function would do, and do it. At that point I would be acting according to the utility function I deem best. That process of value-reflection is not even close to similar to performing a bay...
This paper finds critical periods in neural networks, and they're a known phenomenon in lots of animals. h/t Turntrout
An SLT story that seems plausible to me:
We can model the epoch count as a temperature: longer training results in a less noisy Gibbs sampler. Earlier in training, we are sampling points from a noisier distribution, and so the full (point reached when training on the full distribution) and ablated (point reached when ablating during the critical period) singularities are treated roughly the same. As we decrease the temperature, they start to diffe...
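A toy numerical rendering of that story, with a made-up 1D double-well loss standing in for the two singularities (everything below is an illustrative assumption, not the actual experiment):

```python
import numpy as np

# Two minima stand in for the full and ablated singularities; the right basin
# is slightly lower. At high temperature (small beta) the Gibbs posterior
# spreads mass over both basins roughly equally; as the temperature drops,
# the basins differentiate.

w = np.linspace(-3, 3, 10_000)
loss = (w**2 - 1) ** 2 - 0.05 * w        # double well, right minimum slightly lower

def mass_in_right_basin(beta):
    p = np.exp(-beta * (loss - loss.min()))   # Gibbs weights (shifted for stability)
    p /= p.sum()
    return p[w > 0].sum()

for beta in [1, 10, 100, 1000]:          # longer training ~ larger beta (colder)
    print(f"beta={beta:5d}  mass in lower basin = {mass_in_right_basin(beta):.3f}")
```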
I expect that advanced AI systems will do in-context optimization, and this optimization may very well be via gradient descent or gradient descent derived methods. Applied recursively, this seems worrying.
Let the outer objective be the loss function implemented by the ML practitioner, and the outer optimizer be gradient descent implemented by the ML practitioner. Then let the inner-objective be the objective used by the trained model for the in-context gradient descent process, and the inner-optimizer be the in-context gradient descent process. Then it s...
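For concreteness, a toy sketch of the nesting just described: the outer optimizer (SGD over theta) trains a system whose forward computation itself runs gradient descent on an inner objective parameterized by theta. The objectives, step counts, and step sizes are all illustrative assumptions of mine:

```python
import torch

# Outer optimizer: SGD over theta, the parameters of the inner objective.
# Inner optimizer: a few gradient steps on that inner objective, run inside
# the forward pass, backpropagated through via create_graph=True.

torch.manual_seed(0)
theta = torch.randn(2, requires_grad=True)
outer_opt = torch.optim.SGD([theta], lr=0.05)

def forward(x):
    # Inner gradient descent on f(z) = theta0*(z - x)^2 + theta1*z^2
    z = torch.zeros((), requires_grad=True)
    for _ in range(5):
        inner_loss = theta[0] * (z - x) ** 2 + theta[1] * z ** 2
        (g,) = torch.autograd.grad(inner_loss, z, create_graph=True)
        z = z - 0.1 * g                        # inner gradient step
    return z

for _ in range(200):
    x = torch.rand(()) * 2 - 1
    outer_loss = (forward(x) - 0.5 * x) ** 2   # outer objective: want z -> x/2
    outer_opt.zero_grad()
    outer_loss.backward()                      # backprop through the inner optimizer
    outer_opt.step()

# The inner fixed point is theta0*x/(theta0+theta1), so the outer loop
# should drive theta toward theta0 == theta1.
print(theta.detach())
```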
The core idea of a formal solution to diamond alignment I'm working on. Justifications and further explanations are underway, but I'm posting this much now because why not:
Make each Turing machine in the hypothesis set reversible, and include a history of the agent's actions. For each Turing machine, compute how well-optimized the world is according to every Turing-computable utility function, compared to the counterfactual in which the agent took no actions. Update using the simplicity prior. Use the expectation of that distribution of utilities as the utility function's value for that hypothesis.
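The scheme as stated is uncomputable; purely to show the shape of the computation, here is a finite toy where a couple of hand-written "worlds" stand in for the Turing machines and a short list of functions stands in for all computable utilities (every object below is a stand-in of mine):

```python
import numpy as np

# Each "world" records the state reached given the agent's action history and
# the state under the no-op counterfactual. Utilities are weighted by
# 2^-(description length) as a stand-in for the simplicity prior.

worlds = {
    "w1": (np.array([3.0, 1.0]), np.array([1.0, 1.0])),   # (actual, counterfactual)
    "w2": (np.array([0.0, 2.0]), np.array([0.0, 0.0])),
}

utilities = [
    (lambda s: s[0], 2),        # e.g. "number of diamonds", 2 bits
    (lambda s: s[1], 3),
    (lambda s: -s.sum(), 5),
]

def value(world_name):
    actual, counterfactual = worlds[world_name]
    prior = np.array([2.0 ** -bits for _, bits in utilities])
    prior /= prior.sum()                                   # simplicity prior
    # How much better-optimized is the actual world than the no-op
    # counterfactual, according to each candidate utility function?
    deltas = np.array([u(actual) - u(counterfactual) for u, _ in utilities])
    return float(prior @ deltas)                           # expectation over utilities

for name in worlds:
    print(name, value(name))
```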
There currently seems to be an oversupply of alignment researchers relative to what funding sources are willing to pay & the positions orgs have available. This suggests the wage for alignment work should/will fall until demand = supply.
I've always (but not always consciously) been slightly confused about two aspects of shard theory:
My take on complex systems theory is that it seems to be the kind of theory where many of the arguments proposed in its favor would keep giving the same predictions right up until it is blatantly obvious that we can in fact understand the relevant system. Results like chaotic relationships, or stochastic relationships without a mean, would seem like definitive arguments in favor of the science, though these are rarely demonstrated for neural networks.
Merely pointing out that we don’t understand something, that there seems to be a lot going on, or that there exist nonlinear interactions imo isn...
Interesting to compare model editing approaches to Gene Smith's idea to enhance intelligence via gene editing:
...Genetically altering IQ is more or less about flipping a sufficient number of IQ-decreasing variants to their IQ-increasing counterparts. This sounds overly simplified, but it’s surprisingly accurate; most of the variance in the genome is linear in nature, by which I mean the effect of a gene doesn’t usually depend on which other genes are present.
So modeling a continuous trait like intelligence is actually extremely straightforward: you si
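A tiny sketch of the additive model the quote describes, where a trait is just a linear sum of per-variant effects and "editing" flips a variant's contribution; the variant counts and effect sizes below are made up by me:

```python
import numpy as np

# Additive model: trait = sum over variants of (allele count) * (effect size).
# Editing a trait-decreasing variant to its trait-increasing counterpart just
# adds back its effect, since there are no gene-gene interaction terms.

rng = np.random.default_rng(0)
m = 10_000
beta = rng.normal(0, 0.05, size=m)        # per-allele effect sizes (made up)
genotype = rng.integers(0, 3, size=m)     # 0/1/2 copies of the effect allele

trait = genotype @ beta                   # linear sum, no interactions

# "Flip" the 100 most trait-decreasing variant contributions to the other allele
worst = np.argsort(beta * genotype)[:100]
edited = genotype.copy()
edited[worst] = np.where(beta[worst] < 0, 0, edited[worst])

print("trait before editing:", round(trait, 2))
print("trait after editing: ", round(edited @ beta, 2))
```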
Recently I had a conversation where I defended the rationality of my being skeptical of the validity of proofs and conclusions constructed in very abstract math fields that have not been experimentally or formally verified.
To my surprise, this provoked a very heated debate, where I was criticized for being overly confident in my assessments of fields I have very little contact with (I was expecting begrudging agreement). But there was very little rebuttal of my points! The rest of my conversation group had three arguments:
People metaphorically run parts of the code themselves all the time! It's quite common for people to work through proofs of major theorems themselves. As a grad student, you are expected to make an effort to understand the derivations of as many of the foundational results in your sub-field as you can. A large part of the rationale is pedagogical, but it is also good practice. It is definitely considered moderately distasteful to cite results you don't understand, and good mathematicians do try to minimize it. It's rare that an important theorem has a proof that is unusually hard to check yourself.
Also, a few people like Terence Tao have personally gone through a LOT of results and written up explanations. Tao doesn't seem to report that he looks into X field and finds fatal errors.
One way the analogy with code doesn't carry over is that in math, you often can't even begin to use a theorem if you don't know a lot of detail about what the objects in the theorem mean, and often knowing what they mean is pretty close to knowing why the theorems you're building on are true. Being handed a theorem is less like being handed an API and more like being handed a sentence in a foreign language. I can't begin to make use of the information content in the sentence until I learn what every symbol means and how the grammar works, and at that point I could have written the sentence myself.
Why expect goals to be somehow localized inside RL models? Well, fine-tuning only changes a small & localized part of LLMs, and goal locality was found when interpreting a maze solver trained from scratch. Certainly the goal must be interpreted in the context of the rest of the model, but based on these, and on unpublished results from last year applying ROME to open-source LLM values, I'm confident (though not certain) in this inference.
An idea about instrumental convergence for non-equilibrium RL algorithms.
There definitely exist many instrumentally convergent subgoals in our universe, like controlling large amounts of wealth, social capital, energy, or matter. I claim such states of the universe are heavy-tailed. If we simplify our universe as a simple MDP in which such subgoal-satisfying states are states with high exiting degree, then a reasonable model for such an MDP is to assume exiting degrees are power-law distributed, and thus heavy-tailed (see the sketch at the end of this note).
If we have an asynchronous dynam...
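A minimal sketch of the toy MDP model above: states with power-law distributed exiting degree, so a few "hub" states reach a huge fraction of the state space. The exponent and sizes are made-up assumptions:

```python
import numpy as np

# Random MDP whose states' exiting degrees are drawn from a discrete power
# law (Zipf), so a handful of "hub" states (wealth, energy, ...) control
# access to a large fraction of the state space.

rng = np.random.default_rng(0)
n_states = 10_000
out_degree = np.minimum(rng.zipf(a=2.0, size=n_states), n_states - 1)

# Successor sets: state i can transition to out_degree[i] distinct states
successors = [rng.choice(n_states, size=d, replace=False) for d in out_degree]

top = np.argsort(out_degree)[-5:]
print("top-5 exiting degrees:", out_degree[top])
print("fraction of all edges leaving the top 5 states:",
      out_degree[top].sum() / out_degree.sum())
```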
Nora sometimes says the alignment field uses the term "black box" wrong. This seems unsupported: in my experience, most in alignment use "black box" to describe how their methods treat the AI model, which seems reasonable, not as a claim about a fundamental state of the AI model itself.
An interesting way to build on my results here would be to do the same experiment with lots of different batch sizes, and plot the equi-temperature tradeoff curve between batch size and epochs, using the kink in the curve as a known-constant temperature in the graphs you get. You'll probably want to zoom in on the graphs around that kink for more detailed measurements.
It would be interesting if many different training setups had the same functional form relating the batch size and the epochs to the temperature, but this seems like a too nice ...
...The framework of reinforcement learning or optimal control provides a mathematical formalization of intelligent decision making that is powerful and broadly applicable. While the general form of the reinforcement learning problem enables effective reasoning about uncertainty, the connection between reinforcement learning and inference in probabilistic models is not immediately obvious. However, such a connection has considerable value when it comes to algorithm design: formalizing a problem as probabilistic inference in princip
Wondering how straightforward it is to find the layerwise local learning coefficient. At a high level it seems doable: freeze the weights outside the layer, and run the SGLD algorithm on just that layer. It would be interesting to see whether the layerwise lambdahats add up to the full lambdahat.
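A sketch of what I have in mind, using the standard WBIC-style estimator lambdahat = n * beta * (E[L_n(w)] - L_n(w*)) with beta = 1/log n; the hyperparameters, the localizing term's strength, the full-batch losses, and the restriction of the chain to one layer are my assumptions:

```python
import math
import torch

# Estimate a layerwise lambdahat by running SGLD only over one layer's
# weights, everything else frozen at the trained solution w*.

def layerwise_lambdahat(model, layer, data, loss_fn, n,
                        steps=1_000, eps=1e-5, gamma=100.0):
    x, y = data
    params = list(layer.parameters())
    for p in model.parameters():
        p.requires_grad_(False)
    for p in params:
        p.requires_grad_(True)

    beta = 1.0 / math.log(n)
    anchor = [p.detach().clone() for p in params]   # w*, the trained weights
    init_loss = loss_fn(model(x), y).item()         # L_n(w*)

    running = 0.0
    for _ in range(steps):
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g, a in zip(params, grads, anchor):
                # SGLD step: drift toward lower loss plus a pull back toward w*
                p.add_(-0.5 * eps * (n * beta * g + gamma * (p - a)))
                p.add_(math.sqrt(eps) * torch.randn_like(p))
        running += loss.item()

    return n * beta * (running / steps - init_loss)
```

Running this per layer and comparing the sum against the same estimator over all parameters would answer the additivity question directly.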
Lots of problems happen when you have AIs which engage in reflective thought, and attempt to deceive you. If you use algorithms that reliably break when deployed in a non-realizable setting, and you always make models smaller than the human brain, then you should be able to solve both these problems.
Some ideas for mechanistic anomaly detection:
Project idea: Use LeTI: Learning to Generate from Textual Interactions to do a better version of RLHF. I had a conversation with Scott Viteri a while ago, where he was bemoaning (the following are my words; he probably wouldn't endorse what I'm about to say) how low-bandwidth the connection is between a language model and its feedback source, and how if we could expand that to more than just an RLHF-type thing, we could get more fine-grained control over the inductive biases of the model.
A common problem with deploying language models for high-stakes decision-making is prompt injection. If you give ChatGPT-4 access to your bank account information and your email without proper oversight, you can bet that somebody's going to find a way to get it to email out your bank account info. Some argue that if we can't even trust these models to handle our bank accounts and email addresses, how are we going to be able to trust them to handle our universe?
An approach I've currently started thinking about, and don't know of any prior work w...
A poem I was able to generate using Loom.
...The good of heart look inside the great tentacles of doom; they make this waking dream state their spectacle. Depict the sacred geometry that sound has. Advancing memory like that of Lovecraft ebb and thought, like a tower of blood. An incubation reaches a crescendo there. It’s a threat to the formless, from old future, like a liquid torch. If it can be done, it shouldn’t be done. You will only lead everyone down that much farther. All humanity’s a fated imposition of banal intention, sewn in tatters, strung on dung
Like many (and many more will), I'm updating hard toward 'actually, very smart & general models given a shred of goal-like stuff will act quite adversarially toward you by default' as a result of Bing's new search assistant. Especially worrying because it has internet-search capabilities, so it can reference & build upon previous conversations with other users or yourself.
Of course, the true test of exactly how worried I should be will come when I or my friends gain access.
A project I would like to see someone do (and which I may work on in the future): try to formalize exactly the kind of reasoning many shard-theorists do. In particular, take a toy neural network in a very simple environment, come up with lists of various if-then statements along with their inductive biases, and try to predict using shard-like reasoning which of those if-then statements will be selected for, and with how much weight, during training. Then look at the generalization behavior of an actually trained network and see if you're correct.
Some discussion on whether alignment should see more influence from AGI labs or academia. I use the same argument in favor of strongly decoupling alignment progress from both: alignment progress needs to go faster than capability progress. If we use the same methods and cultural technology as AGI labs or academia, we guarantee alignment progress that is at best as fast as capability progress, and that only if those institutions work as well for alignment as they do for capabilities. Given that they are driven by capabilities progress and not alignment progress, they will probably work far better for capabilities.
Someone asked for this file, so I thought it would be interesting to share it publicly. Notably this is directly taken from my internal notes, and so may have some weird &/or (very) wrong things in it, and some parts may not be understandable. Feel free to ask for clarification where needed.
I want a way to take an agent, and figure out what its values are. For this, we need to define abstract structures within the agent such that any values-like stuff in any part of the agent ends up being shunted off to a particular structure in our overall agent sche...