Oliver Habryka

Coding day in and out on LessWrong 2.0

Comments

Draft report on AI timelines

I am organizing a reading group for this report next Tuesday in case you (or anyone else) wants to show up: 

https://www.lesswrong.com/posts/mMGzkX3Acb5WdFEaY/event-ajeya-s-timeline-report-reading-group-1-nov-17-6-30pm

Dutch-Booking CDT: Revised Argument

I... think this post was affected by a bug in the LW API that GreaterWrong ran into, which made it invisible on the frontpage when it was published. It nevertheless appears to have gotten some engagement, but maybe that was all from direct links?

Given the substantial chance that a number of people have never seen this post, I reposted it. Its original publication date was the 11th of June.

AGI safety from first principles: Introduction

Promoted to curated: I really enjoyed reading through this sequence. I have some disagreements with it, but overall it's one of the best plain language introductions to AI safety that I've seen, and I expect I will link to this as a good introduction many times in the future. I was also particularly happy with how the sequence bridged and synthesized a number of different perspectives that usually feel in conflict with each other.

My computational framework for the brain

Promoted to curated: This kind of thinking seems both very important and extremely difficult. I do think that trying to understand the underlying computational structure of the brain is quite useful both for thinking about Rationality and for thinking about AI and AI Alignment, though it's also plausible to me that getting things right in this space is hard enough that, in the end, it's very hard to extract useful lessons from it.

Despite the difficulties I expect in this space, this post does strike me as overall pretty decent, and at the very least it opens up a number of interesting questions one could ask to further deconfuse oneself on this topic.

My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda

Promoted to curated! I held off on curating this post for a while, first because it's long and it took me a while to read through it, and second because we already had a lot of AI Alignment posts in the curation pipeline and I wanted to make sure we have some diversity in our curation decisions. But overall, I really liked this post, and I also want to echo Rohin's comment: I found this version more useful than the version where you got everything right, because this way I got to see the contrast between your interpretation and Paul's responses, which helped me locate the right hypothesis more effectively than either would have on its own (even if more fleshed out).

Comparing Utilities

Yep, fixed. Thank you!

Judging from their URLs, those images were hosted on a domain that you could access but others could not: they were stored as Gmail image attachments, which you as the recipient can of course access, but which random LessWrong users cannot.

Comparing Utilities

Oh no! The two images starting from this point are broken for me: 

Updates and additions to "Embedded Agency"

Promoted to curated: These additions are really great, and they fill in a lot of the most confusing parts of the original Embedded Agency sequence, which was already one of my favorite pieces of content on all of LessWrong. So it seems fitting to curate this update, which improves the sequence even further.

Radical Probabilism

Promoted to curated: This post answers (not fully, of course, but in part) what seems to me to be one of the most important open questions in theoretical rationality, and I think it does so in a really thorough and engaging way. It also draws connections to a substantial number of other parts of your and Scott's work in a way that has helped me understand those parts much more thoroughly.

I am really excited about this post. I kind of wish I could curate it two or three times, because I really want a lot of people to have read this, and I expect it will change how I think about a substantial number of topics.

Looking for adversarial collaborators to test our Debate protocol

This sounds fun! I probably won't have enough time to participate, but I wish I did.