I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.
My current research is related to applications of algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and a personal favorite of my posts here) is the Sherlockian abduction master list, which is a crowdsourced project seeking to make "Sherlock Holmes" style inference feasible by compiling observational cues. Give it a read and see if you can contribute!
See my personal website colewyeth.com for an overview of my interests and work.
I do ~two types of writing: academic publications and (LessWrong) posts. With the former, I try to be careful enough that I can stand by ~all (strong/central) claims in 10 years, usually by presenting theorems with rigorous proofs alongside only more conservative intuitive speculation. With the latter, I try to learn enough by writing that I have changed my mind by the time I'm finished - and though I usually include an "epistemic status" to suggest my (final) degree of confidence before posting, the ensuing discussion often changes my mind again. As of mid-2025, I think the chance of AGI in the next few years is high enough (though still <50%) that it’s best to focus on disseminating safety-relevant research as rapidly as possible, so I’m focusing less on long-term goals like academic success and the associated incentives. That means most of my work will appear online in an unpolished form long before it is published.
I think 4 is basically right, though human values aren’t just fuzzy; they’re also quite complex, perhaps on the order of the complexity of the human’s mind, meaning you pretty much have to execute the human’s mind to evaluate their values exactly.
Some people, like very hardcore preference utilitarians, have values dominated by a term much simpler than their minds. However, even those people usually have somewhat self-referential preferences, in that they care at least a bit extra about themselves and those close to them, and this kind of self-reference drastically increases the complexity of values if you want to include it.
For instance, I value my current mind being able to do certain things in the future (learn stuff, prove theorems, seed planets with life) somewhat more than I would value that for a typical human’s mind (though I am fairly altruistic). I suppose that a pointer to me is probably a lot simpler than a description/model of me, but that pointer is very difficult to construct, whereas I can see how to construct a model using imitation learning (obviously this is a “practical” consideration). Also, the model of me is then the thing that becomes powerful, which satisfies my values much more than my values can be satisfied by an external alien thing rising to power (unless it just uploads me right away I suppose).
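One way to phrase that intuition in algorithmic-information terms (my own rough gloss, not a precise claim): writing $H$ for the human's mind and $V_H$ for their values,

$$K(V_H) \approx K(H), \qquad K(V_H \mid H) \ll K(V_H), \qquad K(\text{pointer to } H \mid \text{world}) \ll K(H),$$

i.e. the values are cheap to specify given the mind, expensive to specify from scratch, and a pointer is only cheap relative to a world you already have access to - which is why the pointer is hard to construct in practice.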
I’m not sure that even an individual’s values always settle down into a unique equilibrium; I would guess this depends on their environment.
Unrelatedly, I am still not convinced we live in a mathematical multiverse, or even necessarily a mathematical universe. (Finding out we lived in a mathematical universe would make a mathematical multiverse seem very likely, for the ensemble reasons we have discussed before.)
It seems like the additional points make the exponential trendline look more plausible relative to the super-exponential one?
After some thought, I believe I understand everything except the proof of theorem 1, which seems unclear.
The existence proof for the oracles doesn't seem to be rigorous, which is probably the first thing to take care of before further consideration, but I do expect it to go through.
Also, it would be nice to have some upper bound on the power of the class...
I usually don't think of "building a safer LLM agent" as a viable route to aligned AI
I agree that building a safer LLM agent is an incredibly fraught path that probably doesn't work. My comment is in the context of Abram's first approach, developing safer AI tech that companies might (apparently voluntarily) switch to, and specifically the route of scaling up IB to compete with LLM agents. Note that Abram also seems to be discussing the AI 2027 report, which, if taken seriously, requires all of this to be done in about 2 years. Conditioning on this route, I suggest that most realistic paths look like what I described, but I am pretty pessimistic that this route will actually work. The reason is that I don't see explicitly Bayesian glass-box methods competing with massive black-box models at tasks like natural language prediction any time soon. But who knows; perhaps with the "true" (IB?) theory of agency in hand, much more is possible.
More importantly, I believe that we need to complete the theory of agents first, before we can have strong confidence about which approaches are more promising.
I'm not sure it's possible to "complete" the theory of agents, and I am particularly skeptical that we can do it any time soon. However, I think we agree locally / directionally, because it also seems to me that a more rigorous theory of agency is necessary for alignment.
As to heuristic implementations of infra-Bayesianism, this is something I don't want to speculate about in public, it seems exfohazardous.
Fair enough, but in that case, it seems impossible for this conversation to meaningfully progress here.
It seems to me that an "implementation" of something like Infra-Bayesianism which could realistically compete with modern LLMs would ultimately look a lot like a semi-theoretically-justified modification to the loss function or optimizer of agentic fine-tuning / RL, or possibly to its scaffolding, to encourage it to generalize conservatively. This intuition comes in two parts:
1: The pre-training phase is already finding a mesa-optimizer that does induction in context. I usually think of this as something like Solomonoff induction with a good inductive bias, but probably you would expect something more like logical induction. I expect the answer to be somewhere in between. I'll try to test this empirically at ARENA this May (a rough sketch of one such probe appears after this list). The point is that I struggle to see how IB applies here, on the level of pure prediction, in practice. It's possible that this is just a result of my ignorance or lack of creativity.
2: I'm pessimistic about learning results for MDPs or environments "without traps" having anything to do with building a safe LLM agent.
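Here is a rough sketch of the kind of probe I have in mind for point 1 (purely illustrative: the model choice, sequence families, and scoring are placeholder assumptions, not a worked-out experiment). It compares the average per-token loss a pretrained causal LM assigns, in context, to continuations of an algorithmically simple sequence versus a pseudorandom one of the same length.

```python
# Illustrative probe (assumptions: gpt2 as a stand-in model, digit sequences
# as the pattern family): does a pretrained causal LM's in-context prediction
# track a simplicity prior? Compare average NLL on continuations of a
# low-complexity pattern vs. a pseudorandom sequence.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # small stand-in; any causal LM with a HF interface works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_nll(prompt: str, continuation: str) -> float:
    """Average negative log-likelihood of `continuation` given `prompt` as in-context history."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    cont_ids = tok(continuation, return_tensors="pt").input_ids
    full_ids = torch.cat([prompt_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(full_ids).logits
    n = cont_ids.shape[1]
    # logits at position i predict the token at position i + 1
    preds = logits[0, -n - 1:-1]
    targets = full_ids[0, -n:]
    return torch.nn.functional.cross_entropy(preds, targets).item()

def digits(gen, k):
    return " ".join(str(gen(i)) for i in range(k))

random.seed(0)
simple_prompt = digits(lambda i: i % 4, 64)                 # low-complexity pattern
simple_cont = " " + digits(lambda i: (i + 64) % 4, 16)
random_prompt = digits(lambda i: random.randint(0, 3), 64)  # high-complexity pattern
random_cont = " " + digits(lambda i: random.randint(0, 3), 16)

print("simple pattern NLL:", mean_nll(simple_prompt, simple_cont))
print("random pattern NLL:", mean_nll(random_prompt, random_cont))
```

A strongly simplicity-biased in-context inductor should show a large gap between the two; how that gap scales with the complexity of the generating program is the interesting part.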
If IB is only used in this heuristic way, we might expect fewer of the mathematical results to transfer, and instead just port over some sort of pessimism about uncertainty. In fact, Michael Cohen's work follows pretty much exactly this approach at times (I've read him mention IB about once, apparently as a source of intuition but not technical results).
None of this is really a criticism of IB; rather, I think it's important to keep in mind when considering which aspects of IB or IB-like theories are most worth developing.
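To be concrete about what "pessimism about uncertainty" means here: the underlying decision rule being approximated is maximin over a set of hypotheses rather than expectation under a single learned model,

$$\pi^* \in \arg\max_{\pi} \; \min_{M \in \mathcal{M}} \; \mathbb{E}_{M}\!\left[\,U \mid \pi\,\right],$$

where in a heuristic implementation the credal set $\mathcal{M}$ might just be an ensemble or a perturbation class around the learned model.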
This is called the Hurwicz decision rule / criterion (your t is usually written alpha).
I think the content of this argument is not that maxmin is fundamental, but rather that simplicity priors "look like" or justify Hurwicz-like decision rules. Simple versions of this are easy to prove but (as far as I know) do not appear in the literature.
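For reference, the Hurwicz criterion with optimism parameter $\alpha \in [0,1]$ scores an act by blending its best-case and worst-case payoffs,

$$H(a) = \alpha \, \max_{s} u(a,s) + (1-\alpha) \, \min_{s} u(a,s),$$

so $\alpha = 0$ recovers maximin and $\alpha = 1$ maximax; the claim is that a simplicity prior tends to justify something of this blended form rather than pure maximin.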
It’s wild to me that you’ve concentrated a full 50% of your measure in the next <3 years. What if there are some aspects of intelligence which we don’t know we don’t know about yet? It’s been over ~40 years of progress since the perceptron; how do you know we’re in the last ~10% today?
There is a specific type of thinking, which I tried to gesture at in my original post, that LLMs seem to be literally incapable of. It’s possible to unpack the phrase “scientific insight” in more than one way, and some interpretations fall on either side of the line.
I think the argument you’re making is that since LLMs can make ε > 0 progress, they can repeat it N times to make unbounded progress. But this is not the structure of conceptual insight as a general rule. Concretely, it fails for the architectural reasons I explained in the original post.
My plan isn’t dependent on that meta-ethical assumption. It may be that there is a correct way to complete your values but not everyone is capable of it; as long as some uploads can figure out their value completion, those uploads can prosper. Or, if they can only figure out how to build an AGI that works out how to complete their values, they will have plenty of time to do that after this acute period of risk ends. And it seems that if no one can figure out their values, or eventually figure out how to build an AGI to complete their values, the situation would be rather intractable.
I don’t understand your thinking here. I’m suggesting a plan to prevent extinction from AGI. Why is it a breaking issue if some uploads don’t work out exactly what they “should” want? This is already true for many people. At worst, it just requires that the initial few batches of uploads be carefully selected for philosophical competence (pre-upload) so that potential misconceptions are not locked in. But I don’t see a reason that my plan runs a particular risk of locking in misconceptions.
Yes, generalization in deep learning is hard, but it’s rapidly becoming more effective in practice and better understood through AIT and (mostly?) SLT.
I think this is tractable. Insofar as it’s not tractable, I think it can be made equally intractable for capabilities and alignment (possibly at some alignment tax). I have more detailed ideas about this, many of which are expressed in the post (and many of which are not). But I think that’s the high level reason for optimism.