Oliver Habryka

Coding day in and day out on LessWrong 2.0. You can reach me at habryka@lesswrong.com

Comments

Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress.

I also found these very valuable! I wonder whether a better title might help more people see how great these are, but I'm not sure.

Measuring hardware overhang

Replaced the image in the post with this image.

Alex Turner's Research, Comprehensive Information Gathering

Minor meta feedback: if you plan to have many of these, I think it's better to put the "Comprehensive Information Gathering" part of the title at the end. That makes it much easier to see the differences between titles when skimming a list of them.

[AN #152]: How we’ve overestimated few-shot learning capabilities

The newsletter is back! I missed these; glad to have them back.

Rogue AGI Embodies Valuable Intellectual Property

Promoted to curated: I've had a number of disagreements with a perspective on AI that generates arguments like the above, one that takes something like "ownership of material resources" as a really fundamental unit of analysis. This post has helped me get a better grasp on that paradigm of thinking, and has also given me a somewhat better sense of what feels off to me about it. I have a feeling this post will be useful in bridging that gap eventually.

AMA: Paul Christiano, alignment researcher

When I begin a comment with a quotation, I don't know how to insert new un-quoted text at the top (other than by cutting the quotation, adding some blank lines, then pasting the quotation back). That would be great.

You can do this by pressing Enter in an empty paragraph of a quoted block. That should remove the block at that point. See this gif: 

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

This is great, thank you! 

Minor formatting note: the italics font on both the AI Alignment Forum and LessWrong isn't well suited to large blocks of text, so I took the liberty of unitalicizing a number of the large blockquotes (which should be sufficiently distinguishable as blockquotes without the italics). I'm happy to reverse this if you prefer the previous formatting. 

Utility Maximization = Description Length Minimization

Promoted to curated: As Adele says, this feels related to a bunch of the Jeffrey-Bolker rotation ideas, which I've referenced many, many times since then, but in a way that feels somewhat independent. That makes me more excited that there is some deeper underlying structure here.

I've also had something like this in mind for a while but haven't gotten around to formalizing it, and I think I've seen other people make similar arguments in the past. That makes this a valuable clarification and synthesis that I expect to get referenced a bunch.

Deducing Impact

So secret that even a spoiler tag wasn't good enough.

Commentary on AGI Safety from First Principles

Promoted to curated: This is a long and dense post, but I really liked it, and I find this kind of commentary from a large variety of thinkers in the AI Alignment space quite useful. It really helped me think through the implications of many of the topics discussed in the main sequence in more detail and in a more robust way, and I have come back to this post multiple times since it was published. 

Also, of course, the whole original sequence is great, and I think it is currently the best short introduction to AI risk out there.
