This is a special post for quick takes by jacquesthibs. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I thought this series of comments from a former DeepMind employee (who worked on Gemini) was insightful, so I figured I should share it.

From my experience doing early RLHF work for Gemini, larger models exploit the reward model more. You need to constantly keep collecting more preferences and retraining reward models to make it not exploitable. Otherwise you get nonsensical responses which have exploited the idiosyncrasies of your preference data. There is a reason few labs have done RLHF successfully.

It's also known that more capable models exploit loopholes in reward functions better. Imo, it's a pretty intuitive idea that more capable RL agents will find larger rewards. But there's evidence from papers like this as well: https://arxiv.org/abs/2201.03544 

To be clear, I don't think the current paradigm as-is is dangerous. I'm stating the obvious because this platform has gone a bit bonkers.

The danger comes from finetuning LLMs to become AutoGPTs which have memory, actions, and maximize rewards, and are deployed autonomously. Widespread proliferation of GPT-4+ models will almost certainly make lots of these agents which will cause a lot of damage and potentially cause something ind

... (read more)
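To make the reward-hacking point above concrete, here is a toy best-of-n sketch of my own (not code from the linked paper; the gold/proxy reward functions are made up for illustration): optimizing a proxy harder eventually hurts the true objective once the two diverge off-distribution.

```python
# Toy overoptimization demo (my own sketch, not from the paper above).
import numpy as np

rng = np.random.default_rng(0)

def gold_reward(x):
    # The "true" objective: peaks at x = 2 and falls off on both sides.
    return -(x - 2.0) ** 2

def proxy_reward(x):
    # The learned proxy: agrees with the gold reward for typical samples,
    # but keeps rewarding ever-larger x.
    return x

# "More capable optimizer" ~ best-of-n selection against the proxy.
for n in [1, 4, 16, 64, 256, 1024, 4096]:
    candidates = rng.normal(0.0, 1.5, size=(2000, n))
    best_idx = np.argmax(proxy_reward(candidates), axis=1)
    picked = candidates[np.arange(2000), best_idx]
    print(f"n={n:5d}  proxy={proxy_reward(picked).mean():6.2f}  gold={gold_reward(picked).mean():6.2f}")
```

With small n the proxy and gold rewards move together; past a certain optimization strength the gold reward falls while the proxy keeps climbing, which is the overoptimization pattern described in the quote.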
7leogao3mo
"larger models exploit the RM more" is in contradiction with what i observed in the RM overoptimization paper. i'd be interested in more analysis of this
2Algon2mo
In that paper, did you guys take a good long look at the output of various sized models throughout training, in addition to looking at the graphs of gold-standard/proxy reward model ratings against KL divergence? If not, then maybe that's the discrepancy: perhaps Sherjil was communicating with the LLM and thinking "this is not what we wanted". 

If you work at a social media website or YouTube (or know anyone who does), please read the text below:

Community Notes is one of the best features to come out on social media apps in a long time. The code is even open source. Why haven't other social media websites picked it up yet? If they care about truth, this would be a considerable step forward beyond their current labels. Notes like “this video is funded by x nation” or “this video talks about health info; go here to learn more” are simply not good enough.

If you work at companies like YouTube or know someone who does, let's figure out who we need to talk to to make it happen. Naïvely, you could spend a weekend DMing a bunch of employees (PMs, engineers) at various social media websites in order to persuade them that this is worth their time and probably the biggest impact they could have in their entire career.

If you have any connections, let me know. We can also set up a doc of messages to send in order to come up with a persuasive DM.

4jacquesthibs5mo
Don't forget that we train language models on the internet! The more truthful your dataset is, the more truthful the models will be! Let's revamp the internet for truthfulness, and we'll subsequently improve truthfulness in our AI systems!!
2Viliam5mo
I don't use Xitter; is there a way to display e.g. top 100 tweets with community notes? To see how it works in practice.
6Yoav Ravid5mo
I don't know of something that does so at random, but this page automatically shares posts with community notes that have been deemed helpful.

Oh, that’s great, thanks! Also reminded me of (the less official, more comedy-based) “Community Notes Violating People”. @Viliam 

2Viliam5mo
Thank you both! This is perfect. It's like a rational version of Twitter, and I didn't expect to use those words in the same sentence.
2jacquesthibs5mo
I don’t think so, unfortunately.
2Viliam5mo
Found a nice example (linked from Zvi's article). Okay, it's just one example and it wasn't found randomly, but I am impressed.
2jacquesthibs5mo
I've also started working on a repo in order to make Community Notes more efficient by using LLMs.
2ChristianKl5mo
This sounds a bit naive.  There's a lot of energy invested in making it easier for powerful elites to push their preferred narratives. Community Notes are not in the interests of the Censorship Industrial Complex. I don't think that anyone at the project manager level has the political power to add a feature like Community Notes. It would likely need to be someone higher up in the food chain. 
8jacquesthibs5mo
Sure, but sometimes it's just a PM and a couple of other people that lead to a feature being implemented. Also, keep in mind that Community Notes was a thing before Musk. Why was Twitter different than other social media websites? Also, the Community Notes code was apparently completely revamped by a few people working on the open-source code, which got it to a point where it was easy to implement, and everyone liked the feature because it noticeably worked. Either way, I'd rather push for making it happen and somehow it fails on other websites than having pessimism and not trying at all. If it needs someone higher up the chain, let's make it happen.
4ChristianKl5mo
Twitter seems to have started Birdwatch as a small separate pilot project where it likely wasn't easy to fight or on anyone's radar to fight.  In the current environment, where X gets seen as evil by a lot of the mainstream media, I would suspect that copying Community Notes from X would alone produce some resistance. The antibodies are now there in a way they weren't two years ago.  If you look at mainstream media views about X's community notes, I don't think everyone likes it.  I remember Elon once saying that he lost an 8-figure advertising deal because of Community Notes on posts of a company that wanted to advertise on X. I think you would likely need to make a case that it's good business in addition to helping with truth.  If you want to make your argument via truth, motivating some reporters to write favorable articles about Community Notes might be necessary. 
2jacquesthibs5mo
Good points; I'll keep them all in mind. If money is the roadblock, we can put pressure on the companies to do this. Or, worst-case, maybe the government can enforce it (though that should be done with absolute care).
2jacquesthibs5mo
I shared a tweet about it here: https://x.com/JacquesThibs/status/1724492016254341208?s=20 Consider liking and retweeting it if you think this is impactful. I'd like it to get into the hands of the right people.
1Bruce Lewis5mo
I had not heard of Community Notes. Interesting anti-bias technique "notes require agreement between contributors who have sometimes disagreed in their past ratings". https://communitynotes.twitter.com/guide/en/about/introduction
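For anyone curious about the mechanism behind that quote, here is a rough toy reconstruction of the "bridging" idea as I understand it from the public write-ups (my own simplification, not the actual open-source implementation): a note only earns a high "helpful" score if raters who usually disagree both rate it highly.

```python
# Toy "bridging" sketch (my simplification of the publicly described approach).
# Model: rating[u, n] ≈ mu + user_bias[u] + note_bias[n] + user_factor[u] * note_factor[n].
# A note only ends up with a high note_bias ("helpful") if raters on opposite ends
# of the viewpoint factor both rate it highly.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes, n_iters, lr, lam = 40, 6, 4000, 0.1, 0.03

# Synthetic raters: 30 on one "side", 10 on the other.
viewpoint = np.array([1.0] * 30 + [-1.0] * 10)

# Notes 0-2 are partisan (liked mainly by the majority side); notes 3-5 are bridging.
partisanship = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
quality      = np.array([0.4, 0.4, 0.4, 0.7, 0.7, 0.7])
ratings = np.clip(quality + 0.5 * viewpoint[:, None] * partisanship
                  + 0.1 * rng.normal(size=(n_users, n_notes)), 0.0, 1.0)

mu, b_u, b_n = 0.0, np.zeros(n_users), np.zeros(n_notes)
f_u, f_n = 0.1 * rng.normal(size=n_users), 0.1 * rng.normal(size=n_notes)

for _ in range(n_iters):
    err = (mu + b_u[:, None] + b_n[None, :] + np.outer(f_u, f_n)) - ratings
    mu  -= lr * err.mean()
    b_u -= lr * (err.mean(axis=1) + lam * b_u)
    b_n -= lr * (err.mean(axis=0) + lam * b_n)
    f_u -= lr * ((err * f_n[None, :]).mean(axis=1) + lam * f_u)
    f_n -= lr * ((err * f_u[:, None]).mean(axis=0) + lam * f_n)

print("mean rating per note:", np.round(ratings.mean(axis=0), 2))
print("note intercepts     :", np.round(b_n, 2))
# The partisan notes can have decent mean ratings (the majority likes them),
# but the bridging notes (3-5) get the clearly higher intercepts.
```

The point of the design is that raw popularity is not enough; the viewpoint factor soaks up purely partisan agreement, so only cross-viewpoint agreement pushes a note's score up.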
2jacquesthibs5mo
I've been on Twitter for a long time, and there's pretty much unanimous agreement that it works amazingly well in practice!
1Kabir Kumar5mo
There is an issue with surface-level insights being unfairly weighted, but this is solvable, imo. Especially with YouTube, which can see which commenters have watched the full video.

My current speculation as to what is happening at OpenAI

How do we know this wasn't their best opportunity to strike if Sam was indeed not being totally honest with the board?

Let's say the rumours are true, that Sam is building out external orgs (NVIDIA competitor and iPhone-like competitor) to escape the power of the board and potentially going against the charter. Would this 'conflict of interest' be enough? If you take that story forward, it sounds more and more like he was setting up AGI to be run by external companies, using OpenAI as a fundraising bargaining chip, and having a significant financial interest in plugging AGI into those outside orgs.

So, if we think about this strategically, how long should they wait as board members who are trying to uphold the charter?

On top of this, it seems (according to Sam) that OpenAI has made a significant transformer-level breakthrough recently, which implies a significant capability jump. Long-term reasoning? Basically, anything short of 'coming up with novel insights in physics' is on the table, given that Sam recently used that line as the line we need to cross to get to AGI.

So, it could be a mix of, Ilya thinking they have achieved AG... (read more)

3jacquesthibs5mo
Obviously, a lot has happened since the above shortform, but regarding model capabilities (discussion of which has died down these last couple of days), there's now this: Source: https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ 
4jacquesthibs5mo
So, apparently, there are two models, but only Q* is mentioned in the article. Won't share the source, but:
3jacquesthibs5mo
Update, board members seem to be holding their ground more than expected in this tight situation:

Attempt to explain why I think AI systems are not the same thing as a library card when it comes to bio-risk.

To focus on a less extreme example, I’ll be ignoring the case where AI can create new, more powerful pathogens faster than we can create defences, though I think this is an important case (some people just don't find it plausible because it relies on the assumption that AIs will be able to create new knowledge).

I think AI Safety people should make more of an effort to walk through the threat model, so I’ll give a quick first try:

1) Library. If I’m a terrorist and I want to build a bioweapon, I have to spend several months reading books at minimum to understand how it all works. I don’t have any experts on-hand to explain how to do it step-by-step. I have to figure out which books to read and in what sequence. I have to look up external sources to figure out where I can buy specific materials.

Then, I have to somehow find out how to gain access to those materials (this is the most difficult part in each case). Once I gain access to the materials, I still need to figure out how to make things work as a total noob at creating bioweapons. I will fail. Even experts fa... (read more)

I recently sent in some grant proposals to continue working on my independent alignment research. It gives an overview of what I'd like to work on for this next year (and more really). If you want to have a look at the full doc, send me a DM. If you'd like to help out through funding or contributing to the projects, please let me know.

Here's the summary introduction:

12-month salary for building a language model system for accelerating alignment research and upskilling (additional funding will be used to create an organization), and studying how to supervise AIs that are improving AIs to ensure stable alignment.

Summary

  • Agenda 1: Build an Alignment Research Assistant using a suite of LLMs managing various parts of the research process. Aims to 10-100x productivity in AI alignment research. Could use additional funding to hire an engineer and builder, which could evolve into an AI Safety organization focused on this agenda. Recent talk giving a partial overview of the agenda.
  • Agenda 2: Supervising AIs Improving AIs (through self-training or training other AIs). Publish a paper and create an automated pipeline for discovering noteworthy changes in
... (read more)
1mesaoptimizer9mo
Can you give concrete use-cases that you imagine your project would lead to helping alignment researchers? Alignment researchers have wildly varying styles of work outputs and processes. I assume you aim to accelerate a specific subset of alignment researchers (those focusing on interpretability and existing models and have an incremental / empirical strategy for solving the alignment problem).

Current Thoughts on my Learning System

Crossposted from my website. Hoping to provide updates on my learning system every month or so.

TLDR of what I've been thinking about lately:

  • There are some great insights in this video called "How Top 0.1% Students Think." And in this video about how to learn hard concepts.
  • Learning is a set of skills. You need to practice each component of the learning process to get better. You can’t watch a video on a new technique and immediately become a pro. It takes time to reap the benefits.
  • Most people suck at mindmaps. Mindmaps can be horrible for learning if you just dump a bunch of text on a page and point arrows to different stuff (some studies show mindmaps are ineffective, but that's because people initially suck at making them). However, if you take the time to learn how to do them well, they will pay huge dividends in the future. I’ll be doing the “Do 100 Things” challenge and developing my skill in building better mindmaps. Getting better at mindmaps involves “chunking” the material and creating memorable connections and drawings.
  • Relational vs Isolated Learning. As you learn something new, try to learn it in relation to the things you already kno
... (read more)
3jacquesthibs1y
Note on using ChatGPT for learning

  • Important part: Use GPT to facilitate the process of pushing you to higher-order learning as fast as possible.
  • Here's Bloom's Taxonomy for higher-order learning (image).
  • For example, you want to ask GPT to come up with analogies and such to help you enter higher-order thinking by thinking about whether the analogy makes sense.
    • Is the analogy truly accurate?
    • Does it cover the main concept you are trying to understand?
    • Then, you can extend the analogy to try to make it better and more comprehensive.
    • This allows you to offload the less useful task (e.g. coming up with the analogy) and spend more time in the highest orders of learning (the evaluation phase; "is this analogy good? where does it break down?").
  • You still need to use your cognitive load to encode the knowledge effectively. Look for desirable difficulty.
  • Use GPT to create a pre-study of the thing you would like to learn.
    • Have it create an outline of the order of the things you should learn.
    • Have it give you a list of all the jargon words in a field and how they relate so that you can quickly get up to speed on the terminology and talk to an expert.
    • Have it come up with chunks of the topic you are exploring.
  • You can give GPT text that describes what you are trying to understand, the relationships between things, and how you are chunking them.
    • Then, you can ask GPT to tell you what are some weak areas or some things that are potentially missing.
  • GPT works really well as a knowledge "gap-checker". When you are trying to have GPT output some novel insights or complicated nuanced knowledge, it can give vague answers that aren't too helpful. This is why it is often better to treat GPT as a gap-checker and/or a friend that is prompting you to come up with great insights.

Reference: I've been using ChatGPT/GPT-4 a lot to gain insights on how to accelerate alignment research. Some of my conclusions are similar to what was d
2jacquesthibs1y
How learning efficiently applies to alignment research

As we are trying to optimize for actually solving the problem, we should not fall into the trap of learning just to learn. We should instead focus on learning efficiently with respect to how it helps us generate insights that lead to a solution for alignment. This is also the framing we should have in mind when we are building tools for augmenting alignment researchers.

With the above in mind, I expect that part of the value of learning efficiently involves some of the following:

  • Efficient learning involves being hyper-focused on identifying the core concepts and how they all relate to one another. This mode of approaching things seems like it helps us attack the core of alignment much more directly and bypasses months/years of working on things that are only tangential.
  • Developing a foundation in a field seems key to generating useful insights. The goal is not to learn everything but to build a foundation that allows you to avoid spending way too much time tackling sub-optimal sub-problems or dead-ends. Part of the foundation-building process should reduce the time it takes to shape you into an exceptional alignment researcher rather than a knower-of-things.
    • As John Wentworth says with respect to the Game Tree of Alignment: "The main reason for this exercise is that (according to me) most newcomers to alignment waste years on tackling not-very-high-value sub-problems or dead-end strategies."
  • Lastly, many great innovations have not come from unique original ideas. There's an iterative process passed amongst researchers, and it is often the case that the greatest ideas come from simply merging ideas that were already lying around. Learning efficiently (and storing those learnings for later use) allows you to increase the number of ideas you can merge together. If you want to do that efficiently, you need to improve your ability to identify which ideas are worth storing in your mental wa
2Peter Hroššo1y
My model of (my) learning is that if the goal is sufficiently far, learning directly towards the goal is goodharting a likely wrong metric. The only method which worked for me for very distant goals is following my curiosity and continuously internalizing new info, such that the curiosity is well informed about current state and the goal.
2jacquesthibs1y
Curiosity is certainly a powerful tool for learning! I think any learning system which isn't taking advantage of it is sub-optimal. Learning should be guided by curiosity. The thing is, sometimes we need to learn things we aren't so curious about. One insight I learned from studying learning is that you can do specific things to make yourself more curious about a given thing and harness the power that comes with curiosity. Ultimately, what this looks like is to write down questions about the topic and use them to guide your curious learning process. It seems that this is how efficient top students end up learning things deeply in a shorter amount of time. Even for material they care little about, they are able to make themselves curious and be propelled forward by that. That said, my guess is that goodharting the wrong metric can definitely be an issue, but I'm not convinced that relying on what makes you naturally curious is the optimal strategy for solving alignment. Either way, it's something to think about!
1jacquesthibs1y
By the way, I've just added a link to a video by a top competitive programmer on how to learn hard concepts. Both the video and the iCanStudy course talk about the concept of caring about what you are learning (basically, curiosity). Gaining the skill to care and become curious is an essential part of the most effective learning. However, contrary to popular belief, you don't have to be completely guided by what makes you naturally curious! You can learn how to become curious (or care) about any random concept.
1jacquesthibs1y
Video on how to approach having to read a massive amount of information (like a textbook) as efficiently as possible: 
1jacquesthibs1y
Added my first post (of, potentially, a sequence) on effective learning here. I think there are a lot of great lessons at the frontier of the literature and real-world practice on learning that go far beyond the Anki approach that a lot of people seem to take these days. The important part is being effective and efficient. Some techniques might work, but that does not mean they are the most efficient (learning the same thing more deeply in less time). Note that I also added two important videos to the root shortform:
1jacquesthibs1y
Note on spaced repetition

While spaced repetition is good, many people end up misusing it as a crutch instead of defaulting to trying to deeply understand a concept right away. As you get better at properly encoding the concept, you extend the forgetting curve to the point where repetition is less needed. Here's a video of a top-level programmer on how he approaches learning hard concepts efficiently. And here's a video on how the top 0.1% of students study efficiently.
1jacquesthibs1y
Here are some additional notes on the fundamentals of being an effective learner:

Encoding and Retrieval (what it takes to learn)

  • Working memory is the memory that we use. However, if it is not encoded properly or at all, we will forget it.
  • Encode well first (from working memory to long-term memory), then frequently and efficiently retrieve from long-term memory.
  • If studying feels easy, it means that you aren't learning or holding on to the information; you are not encoding and retrieving effectively.
  • You want it to be difficult when you are studying because this is how it will encode properly.

Spacing, Interleaving, and Retrieval (SIR)

  • These are three rules that apply to every study technique in the course (unless told otherwise). You can apply SIR to all techniques.
  • Spacing: space your learning out.
    • Pre-study before class, then learn in class, and then a week later revise it with a different technique.
    • A rule of thumb you can follow is to wait long enough until you feel like you are just starting to forget the material.
    • As you get better at encoding the material effectively as soon as you are exposed to it, you will notice that you will need to do less repetition.
  • How to space reviews:
    • Beginner schedule (fewer reviews are needed as you get better at encoding):
      • Same day
      • Next day
      • End of week
      • End of month
    • After learning something for the first time, review it later on the same day.
    • Review everything from the last 2-3 days mid-week.
    • Do an end-of-week revision on the week's worth of content.
    • Do an end-of-month revision on the entire month's worth of content.
    • Review what's necessary as time goes on.
    • (If you're trying to do well on an exam or a coding interview, you can do the review 1 or 2 weeks before the assessment.)
  • Reviewing time duration:
    • For beginners:
      • No less than 30 minutes per subject for end-of-week
      • No less th
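As a rough sketch of my own (not from any course material), the beginner spacing schedule above maps to concrete review dates like this:

```python
# Minimal sketch: turn the "same day / next day / end of week / end of month"
# beginner schedule into concrete calendar dates (my own illustration; adjust offsets to taste).
from datetime import date, timedelta

def review_dates(studied_on: date) -> dict:
    """Return the review dates for material first studied on `studied_on`."""
    end_of_week = studied_on + timedelta(days=(6 - studied_on.weekday()))  # upcoming Sunday
    # Last day of the month: jump safely into next month, then step back one day.
    end_of_month = (studied_on.replace(day=28) + timedelta(days=4)).replace(day=1) - timedelta(days=1)
    return {
        "same day": studied_on,
        "next day": studied_on + timedelta(days=1),
        "end of week": end_of_week,
        "end of month": end_of_month,
    }

if __name__ == "__main__":
    for label, when in review_dates(date(2023, 5, 10)).items():
        print(f"{label:>12}: {when.isoformat()}")
```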
1jacquesthibs1y
A few more notes:

  • I use the app Concepts on my iPad to draw mindmaps. Drawing mindmaps with pictures and such is way more powerful (better encoding into long-term memory) than typical mindmap apps where you just type words verbatim and draw arrows. It's excellent since it has a (quasi-)infinite canvas. This is the same app that Justin Sung uses.
  • When I want to go in-depth into a paper, I will load it into OneNote on my iPad and draw in the margin to better encode my understanding of the paper.
  • I've been using the Voice Dream Reader app on my iPhone and iPad to get through posts and papers much faster (I usually have time to read most of an Alignment Forum post on my way to work and another on the way back). Importantly, I stop the text-to-speech when I'm trying to understand an important part. I use Pocket to load LW/AF posts into it and download PDFs on my device and into the app for reading papers. There's a nice feature in the app that automatically skips citations in the text, so reading papers isn't as annoying. The voices are robotic, but I just cycled through a bunch until I found one I didn't mind (I didn't buy any, but there are premium voices). I expect Speechify has better voices, but it's more expensive, and I think people find that the app isn't as good overall compared to Voice Dream Reader. Thanks to Quintin Pope for recommending the app to me.

I'm currently ruminating on the idea of doing a video series in which I review code repositories that are highly relevant to alignment research to make them more accessible.

I want to pick out repos that are still useful despite perhaps having bad documentation, and then hop on a call with the author to go over the repo and record it. That way, people at least have something basic to use when navigating the repo.

This means there would be two levels: 1) an overview with the author sharing at least the basics, and 2) a deep dive going over most of the code. The former likely contains most of the value (lower effort for me, still gets done, better than nothing, points to repo as a selection mechanism, people can at least get started).

I am thinking of doing this because I think there may be repositories that are highly useful for new people but would benefit from some direction. For example, I think Karpathy and Neel Nanda's videos have been useful in getting people started. In particular, Karpathy saw an OOM more stars on his repos (e.g. nanoGPT) after the release of his videos (though, to be fair, he's famous, and the number of stars is definitely not a perfect proxy for usage).

I'm interested in any feedback ... (read more)

6Dagon8d
I love this idea!  I don't actually like videos, preferring searchable, excerptable text, but I may not be typical and there's room for all. At first glance, I agree with your guess that the overview/intro is more value per effort (for you and for consumers, IMO) than a deep-dive into the code. There IS probably a section of code or core modeling idea for each where it would be worth going half-deep into (algorithm and usage, not necessarily line-by-line). Note that this list is itself incredibly valuable, and you might start with an intro video (and associated text) that spends 1 minute on each and why you're planning to do it, and what you currently think will be the most important intro concept(s) for each.

I’m still thinking this through, but I am deeply concerned about Eliezer’s new article for a combination of reasons:

  • I don’t think it will work.
  • Given that it won’t work, I expect we lose credibility and it now becomes much harder to work with people who were sympathetic to alignment, but still wanted to use AI to improve the world.
  • I am not as convinced as he is about doom, and I am not as cynical about the main orgs as he is.

In the end, I expect this will just alienate people. And stuff like this concerns me.

I think it’s possible that the most memetically power... (read more)

So I think what I'm getting here is that you have an object-level disagreement (not as convinced about doom), but you are also reinforcing that object-level disagreement with signalling/reputational considerations (this will just alienate people). This pattern feels ugh and worries me. It seems highly important to separate the question of what's true from the reputational question. It furthermore seems highly important to separate arguments about what makes sense to say publicly on-your-world-model vs on-Eliezer's-model. In particular, it is unclear to me whether your position is "it is dangerously wrong to speak the truth about AI risk" vs "Eliezer's position is dangerously wrong" (or perhaps both). 

I guess that your disagreement with Eliezer is large but not that large (IE you would name it as a disagreement between reasonable people, not insanity). It is of course possible to consistently maintain that (1) Eliezer's view is reasonable, (2) on Eliezer's view, it is strategically acceptable to speak out, and (3) it is not in fact strategically acceptable for people with Eliezer's views to speak out about those views. But this combination of views does imply endorsing a silencing of reasonable disagreements which seems unfortunate and anti-epistemic. 

My own guess is that the maintenance of such anti-epistemic silences is itself an important factor contributing to doom. But, this could be incorrect.

2jacquesthibs1y
Yeah, so just to clarify a few things:

  • This was posted on the day of the open letter, and I was indeed confused about what to think of the situation.
  • Something I failed to properly communicate is that I was worried this was a bad time to pull the lever, even though I’m concerned about risks from AGI. I was worried the public wouldn’t take alignment seriously because it would cause a panic much sooner than people were ready for.
  • I care about being truthful, but I care even more about not dying, so my comment was mostly trying to communicate that I didn’t think this was the best strategic decision for not dying.
  • I was seeing a lot of people write negative statements about the open letter on Twitter, and it kind of fed my fears that this was going to backfire as a strategy and impact all of our work to get AI risk taken seriously.
  • In the end, the final thing that matters is that we win (i.e. not dying from AGI). I’m not fully sure what I think now (mostly because I don’t know about higher-order effects that will happen 2-3 years from now), but I think it turned out a lot better strategically than I initially expected.
8jacquesthibs1y
To try and burst any bubble about people’s reaction to the article, here’s a set of tweets critical of the article:

  • https://twitter.com/mattparlmer/status/1641230149663203330?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/jachiam0/status/1641271197316055041?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/finbarrtimbers/status/1641266526014803968?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/plinz/status/1641256720864530432?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/perrymetzger/status/1641280544007675904?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/post_alchemist/status/1641274166966996992?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/keerthanpg/status/1641268756071718913?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/levi7hart/status/1641261194903445504?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/luke_metro/status/1641232090036600832?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/gfodor/status/1641236230611562496?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/luke_metro/status/1641263301169680386?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/perrymetzger/status/1641259371568005120?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/elaifresh/status/1641252322230808577?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/markovmagnifico/status/1641249417088098304?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/interpretantion/status/1641274843692691463?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/lan_dao_/status/1641248437139300352?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/lan_dao_/status/1641249458053861377?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/growing_daniel/status/1641246902363766784?s=61&t=ryK3X96D_TkGJtvu2rm0uw
  • https://twitter.com/alexandrosm/status/1641259179955601408?s=61&t=ryK3X96D_TkGJtvu2rm0uw
2Viliam1y
What is the base rate for Twitter reactions for an international law proposal?
1jacquesthibs1y
Of course it’s often all over the place. I only shared the links because I wanted to make sure people weren’t deluding themselves with only positive comments.
2Viliam1y
This reminds me of the internet-libertarian chain of reasoning that anything that government does is protected by the threat of escalating violence, therefore any proposals that involve government (even mild ones, such as "once in a year, the President should say 'hello' to the citizens") are calls for murder, because... (create a chain of escalating events starting with someone non-violently trying to disrupt this, ending with that person being killed by cops)... Yes, a moratorium on AIs is a call for violence, but only in the sense that every law is a call for violence.
1[comment deleted]1y

This seems like a fairly important paper by Deepmind regarding generalization (and lack of it in current transformer models): https://arxiv.org/abs/2311.00871 

Here’s an excerpt on transformers potentially not really being able to generalize beyond training data:

Our contributions are as follows:

  • We pretrain transformer models for in-context learning using a mixture of multiple distinct function classes and characterize the model selection behavior exhibited.
  • We study the in-context learning behavior of the pretrained transformer model on functions th
... (read more)
6leogao5mo
i predict this kind of view of non magicalness of (2023 era) LMs will become more and more accepted, and this has implications on what kinds of alignment experiments are actually valuable (see my comment on the reversal curse paper). not an argument for long (50 year+) timelines, but is an argument for medium (10 year) timelines rather than 5 year timelines
5leogao5mo
also this quote from the abstract is great: i used to call this something like "tackling the OOD generalization problem by simply making the distribution so wide that it encompasses anything you might want to use it on"
4Noosphere895mo
I'd say my major takeaways, assuming this research scales (it was only done on GPT-2, and we already knew it couldn't generalize):

1. Gary Marcus was right about LLMs mostly not reasoning outside the training distribution, and this updates me more towards "LLMs probably aren't going to be godlike, or be nearly as impactful as LW says they are."
2. Be more skeptical of AI progress leading to big things; in general, unless reality can simply be memorized, scaling probably won't work to automate the economy. More generally, this updates me towards longer timelines, and longer tails on those timelines.
3. Be slightly more pessimistic on AI safety, since LLMs have a bunch of nice properties, and future AI will probably have less nice properties, though alignment optimism mostly doesn't depend on LLMs.
4. AI governance gets a lucky break, since they only have to regulate misuse, and even though their threat model isn't likely or even probable to be realized, it's still nice that we don't have to deal with the disruptive effects of AI now.
3jacquesthibs5mo
I am sharing this since I think it will change your view on how much to update on this paper (I should have shared this initially). Here's what the paper author said on X:     So, with that, I said: To which @Jozdien replied:
2jacquesthibs5mo
Title: Is the alignment community over-updating on how scale impacts generalization?

So, apparently, there's a rebuttal to the recent Google generalization paper (also worth pointing out it wasn't done with language models, just sinusoidal functions): But then, the paper author responds:

----------------------------------------

This line of research makes me question one thing: "Is the alignment community over-updating on how scale impacts generalization?" It remains to be seen how well models will generalize outside of their training distribution (interpolation vs extrapolation).

In other words, when people say that GPT-4 (and other LLMs) can generalize, I think they need to be more careful about what they really mean. Is it doing interpolation or extrapolation? Yes, GPT-4 can do things like write a completely new poem, but poems and related material were in its training distribution! So you can say it is generalizing, but it's a much weaker form of generalization than what people really imply when they say generalization. A stronger form of generalization would be an AI that can do completely new tasks that are actually outside of its training distribution.

Now, at this point, you might say, "yes, but we know that LLMs learn functions and algorithms to do tasks, and as you scale up and compress more and more data, you will uncover more meta-algorithms that can do this kind of extrapolation/tasks outside of the training distribution." Well, two things:

1. It remains to be seen when or if this will happen in the current paradigm (no matter how much you scale up).
2. It's not clear to me how well things like induction heads continue to work on things that are outside of their training distribution. If they don't adapt well, then it may be the same thing for other algorithms.

What this would mean in practice, I'm not sure. I've been looking at relevant papers, but haven't found an answer yet. This brings me to another point
5jacquesthibs5mo
Or perhaps as @Nora Belrose mentioned to me: "Perhaps we should queer the interpolation-extrapolation distinction."
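For concreteness, here is a toy illustration of the interpolation/extrapolation distinction being discussed (my own sketch; the paper's pretrained-transformer setup is far richer):

```python
# Toy interpolation vs extrapolation check (my own illustration, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

# "Training distribution": x in [-pi, pi], target is sin(x).
x_train = rng.uniform(-np.pi, np.pi, 200)
y_train = np.sin(x_train)

# A flexible curve fit stands in for "enough capacity to fit the training data".
coeffs = np.polyfit(x_train, y_train, deg=9)

def mse(xs):
    return np.mean((np.polyval(coeffs, xs) - np.sin(xs)) ** 2)

x_in  = np.linspace(-np.pi, np.pi, 500)         # inside the training range (interpolation)
x_out = np.linspace(2 * np.pi, 3 * np.pi, 500)  # outside it (extrapolation)
print(f"interpolation MSE: {mse(x_in):.5f}")    # tiny
print(f"extrapolation MSE: {mse(x_out):.2e}")   # enormous
```

The fit looks perfect as long as you evaluate it where the training data lived; step outside that range and the same model falls apart, which is the weaker-vs-stronger generalization distinction I'm pointing at above.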
2Garrett Baker5mo
Some evidence that this is not so fundamental, and that we should expect a (or many) phase transition(s) to more generalizing in-context learning as we increase the log number of tasks.
1Oliver Sourbut5mo
My hot take is that this paper's prominence is a consequence of importance hacking (I'm not accusing the authors in particular). Zero or near-zero relevance to LLMs. Authors get a yellow card for abusing the word 'model' twice in the title alone.

Given funding is a problem in AI x-risk at the moment, I’d love to see people start thinking of creative ways to provide additional funding to alignment researchers who are struggling to get funding.

For example, I’m curious if governance orgs would pay for technical alignment expertise as a sort of consultant service.

Also, it might be valuable to have full-time field-builders that are solely focused on getting more high-net-worth individuals to donate to AI x-risk.

On joking about how "we're all going to die"

Setting aside the question of whether people are overly confident about their claims regarding AI risk, I'd like to talk about how we talk about it amongst ourselves.

We should avoid jokingly saying "we're all going to die" because I think it will corrode your calibration to risk with respect to P(doom) and it will give others the impression that we are all more confident about P(doom) than we really are.

I think saying it jokingly still ends up creeping into your rational estimates on timelines and P(doom). I expe... (read more)

What are some important tasks you've found too cognitively taxing to get in the flow of doing?

One thing that I'd like to consider for Accelerating Alignment is to build tools that make it easier to get in the habit of cognitively demanding tasks by reducing the cognitive load necessary to do the task. This is part of the reason why I think people are getting such big productivity gains from tools like Copilot.

One way I try to think about it is like getting into the habit of playing guitar. I typically tell people to buy an electric guitar rather than an ac... (read more)

5plex1y
For developing my hail mary alignment approach, the dream would be to be able to load enough of the context of the idea into a LLM that it could babble suggestions (since the whole doc won't fit in the context window, maybe randomizing which parts beyond the intro are included for diversity?), then have it self-critique those suggestions automatically in different threads in bulk and surface the most promising implementations of the idea to me for review. In the perfect case I'd be able to converse with the model about the ideas and have that be not totally useless, and pump good chains of thought back into the fine-tuning set.
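A rough sketch of the babble-then-critique loop described above (my own structure; `ask` is a stub standing in for whatever LLM call gets used, and the prompts are placeholders):

```python
# Sketch of: randomize which doc sections fit in context, babble suggestions,
# self-critique each one, and surface the most promising for human review.
import random

def ask(prompt: str) -> str:
    """Stub: replace with a real LLM API call. Returns a canned reply so the sketch runs."""
    return "Placeholder suggestion. Promisingness: 5"

def babble_and_surface(intro: str, sections: list, n_suggestions: int = 20, top_k: int = 3):
    scored = []
    for _ in range(n_suggestions):
        # Randomize which sections accompany the intro so the whole doc gets
        # coverage across suggestions despite the context-window limit.
        context = intro + "\n\n" + "\n\n".join(random.sample(sections, k=min(3, len(sections))))
        suggestion = ask(f"Given this alignment idea:\n{context}\n\nPropose one concrete implementation:")
        critique = ask(f"Idea:\n{suggestion}\n\nCritique it and end with a 1-10 promisingness score.")
        score = int("".join(c for c in critique if c.isdigit())[-2:] or 0)  # crude score extraction
        scored.append((score, suggestion, critique))
    return sorted(scored, reverse=True)[:top_k]  # surface the most promising for human review

if __name__ == "__main__":
    print(babble_and_surface("Hail-mary alignment idea intro...", ["section A", "section B", "section C"], n_suggestions=5))
```

The good chains of thought that survive human review could then be appended to a fine-tuning set, as you describe.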

Projects I'd like to work on in 2023.

Wrote up a short (incomplete) bullet point list of the projects I'd like to work on in 2023:

  • Accelerating Alignment
    • Main time spent (initial ideas, will likely pivot to varying degrees depending on feedback; will start with one):
      • Fine-tune GPT-3/GPT-4 on alignment text and connect the API to Loom and VSCode (CoPilot for alignment research) and potentially notetaking apps like Roam Research. (1-3 months, depending on bugs and if we continue to add additional features.)
      • Create an audio-to-post pipeline where we can eas
... (read more)
7jacquesthibs1y
Two other projects I would find interesting to work on:

  • Causal scrubbing to remove specific capabilities from a model. For example, training a language model on The Pile and a code dataset, then applying causal scrubbing to try to remove the model's ability to generate code while still achieving a similar loss on The Pile.
  • A few people have started extending the work from the Discovering Latent Knowledge in Language Models without Supervision paper. I think this work could potentially evolve into a median-case solution to avoiding x-risk from AI.
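For context on the second bullet, here is a minimal sketch of the unsupervised probing objective from that paper as I understand it (random tensors stand in for real contrast-pair activations, and the paper's normalization step is omitted):

```python
# Minimal CCS-style probe sketch (placeholder activations; not the authors' code).
import torch

hidden_dim = 64
acts_pos = torch.randn(256, hidden_dim)  # activations for "statement is true" phrasings (placeholder)
acts_neg = torch.randn(256, hidden_dim)  # activations for the negated phrasings (placeholder)

probe = torch.nn.Sequential(torch.nn.Linear(hidden_dim, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

for step in range(500):
    p_pos, p_neg = probe(acts_pos), probe(acts_neg)
    consistency = ((p_pos - (1 - p_neg)) ** 2).mean()   # p(x+) and p(x-) should sum to ~1
    confidence = (torch.min(p_pos, p_neg) ** 2).mean()  # discourage the trivial p=0.5 solution
    loss = consistency + confidence
    opt.zero_grad(); loss.backward(); opt.step()
```

On real contrast-pair activations from a language model, the probe direction found this way is what the extensions I mentioned are building on.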
3chanamessinger1y
Curious if you have any updates!
2jacquesthibs1y
Working on a new grant proposal right now. Should be sent this weekend. If you’d like to give feedback or have a look, please send me a DM! Otherwise, I can send the grant proposal to whoever wants to have a look once it is done (still debating about posting it on LW).

Outside of that, there has been a lot of progress on the Cyborgism discord (there is a VSCode plugin called Worldspider that connects to the various APIs, and there has been more progress on Loom). Most of my focus has gone towards looking at the big picture and keeping an eye on all the developments. Now, I have a better vision of what is needed to create an actually great alignment assistant and have talked to other alignment researchers about it to get feedback and brainstorm.

However, I’m spread way too thin and will request additional funding to get an engineer/builder to start building the ideas out so that I can focus on the bigger picture and my alignment work. If I can get my funding again (previous funding ended last week), then my main focus will be building out the system I have in mind for accelerating alignment work and continuing to work on the new agenda I put out with Quintin and others. There’s some other stuff I’d like to do, but those are lower priority or will depend on timing.

It’s been hard to get the funding application done because things are moving so fast and I’m trying not to build things that will be built by default. I’ve also been talking to some people about the possibility of building an org so that this work could go a lot faster.
3plex1y
Very excited by this agenda; was discussing today my hope that someone finetunes LLMs on the alignment archive soon!
2Mati_Roy1y
do you have a link? I'd be interested in being added to the Discord

Jacques' AI Tidbits from the Web

I often find information about AI development on X (f.k.a. Twitter) and sometimes on other websites. It usually doesn't warrant its own post, so I'll use this thread to share it. I'll be placing a fairly low filter on what I share.

There's someone on X (f.k.a. Twitter) called Jimmy Apples (🍎/acc) and he has shared some information in the past that turned out to be true (apparently the GPT-4 release date and that OAI's new model would be named "Gobi"). He recently tweeted, "AGI has been achieved internally." Some people think that the Reddit comment below may be from the same guy (this is just a weak signal, I’m not implying you should consider it true or update on it):

7elifland7mo
Where is the evidence that he called OpenAI’s release date and the Gobi name? All I see is a tweet claiming the latter but it seems the original tweet isn’t even up?
2jacquesthibs7mo
This is the tweet for Gobi: https://x.com/apples_jimmy/status/1703871137137176820?s=46&t=YyfxSdhuFYbTafD4D1cE9A I asked someone if it’s fake. Apparently not, you can find it on google archive: https://threadreaderapp.com/thread/1651837957618409472.html
3Person7mo
The GPT-4 launch date prediction can easily be explained by the confidence game: it's possible he just created a prediction for every day and deleted the ones that didn't turn out right. For the Gobi prediction it's tricky. The only evidence is the Threadreader and a random screenshot from a guy who seems clearly related to Jim. I am very suspicious of the Threadreader one. On one hand, I don't see a way it can be faked, but it's very suspicious that the Gobi prediction is Jimmy's only post that was saved there despite him making an even bigger bombshell "prediction". It's also possible, though unlikely, that The Information's article somehow found his tweet and used it as a source. What kills Jimmy's credibility for me is his prediction back in January (you can use the Wayback Machine to find it) that OAI had finished training GPT-5 (no, not a GPT-5-level system, the ACTUAL GPT-5) in October 2022, and that it was 125T parameters. Also goes without saying, pruning his entire account is suspicious too. 
2jacquesthibs7mo
I’ll try to find them, but this was what people were saying. They also said he deleted past tweets so that evidence may forever be gone. I remember one tweet where Jimmy said something like, “Gobi? That’s old news, I said that months ago, you need to move on to the new thing.” And I think he linked the tweet though I’m very unsure atm. Need to look it up, but you can use the above for a search.
2jacquesthibs6mo
New tweet by Jimmy Apples. This time, he's insinuating that OpenAI is funding a stealth startup working on BCI. If this is true, then it makes sense they would prefer not to do it internally to avoid people knowing in advance based on their hires. A stealth startup would keep things more secret. Might be of interest, @lisathiergart and @Allison Duettmann.
2jacquesthibs7mo
Not sure exactly what this means, but Jimmy Apples has now tweeted the following: My gut is telling me that he apple-bossed too close to the sun (released info he shouldn't have, and he's now concerned about his job or some insider's job), and it's time for him to stop sharing stuff (the apple being bitten symbolizing that he is done sharing info). This is because the information in my shortform was widely shared on X and beyond. He also deleted all of his tweets (except for the retweets).
1Person7mo
Or that he was genuinely just making things up and tricking us for fun, and a cryptic exit is a perfect way to leave the scene. I really think people are looking way too deep into him and ignoring the more outlandish predictions he's made (125T GPT-4 and 5 in October 2022), along with the fact there is never actual evidence of his accurate ones, only 2nd hand very specific and selective archives.
2jacquesthibs7mo
He did say some true things before. I think it's possible all of the new stuff is untrue, but we're getting more reasons to believe it's not entirely false. The best liars sprinkle in truth. I think, as a security measure, it's also possible that not even people within OpenAI know all the big details of what's going on (this is apparently the case for Anthropic). This could mean, for OpenAI employees, that some details are known while others are not. Employees themselves could be forced to speculate on some things. Either way, I'm not obsessing too much over this. Just sharing what I'm seeing.
3jacquesthibs5mo
More predictions/insights from Jimmy and crew. He's implying (as I have also been saying) that some people are far too focused on scale over training data and architectural improvements. IMO, the bitter lesson is a thing, but I think we've over-updated on it. Relatedly, someone shared a new 13B model that is apparently comparable to GPT-4 in logical reasoning (based on benchmarks, which I don't usually trust too much). Note that the model is a solver-augmented LM. Here's some context regarding the paper:
2jacquesthibs7mo
Sam Altman at a YC founder reunion: https://x.com/smahsramo/status/1706006820467396699?s=46&t=YyfxSdhuFYbTafD4D1cE9A “Most interesting part of @sama talk: GPT5 and GPT6 are “in the bag” but that’s likely NOT AGI (eg something that can solve quantum gravity) without some breakthroughs in reasoning. Strong agree.”
2Mitchell_Porter7mo
AGI is "something that can solve quantum gravity"?  That's not just a criterion for general intelligence, that's a criterion for genius-level intelligence. And since general intelligence in a computer has advantages of speed, copyability, and little need for downtime that are not possessed by human general intelligence, AI will be capable of contributing to its training, re-design, agentization, etc, long before "genius level" is reached.  This underlines something I've been saying for a while, which is that superintelligence, defined as AI that definitively surpasses human understanding and human control, could come into being at any time (from large models that are not publicly available but which are being developed privately by Big AI companies). Meanwhile, Eric Schmidt (former Google CEO) says about five years until AI is actively improving itself, and that seems generous.  So I'll say: timeline to superintelligence is 0-5 years. 
2Vladimir_Nesov7mo
In some models of the world this is seen as unlikely to ever happen, these things are expected to coincide, which collapses the two definitions of AGI. I think the disparity between sample efficiency of in-context learning and that of pre-training is one illustration for how these capabilities might come apart, in the direction that's opposite to what you point to: even genius in-context learning doesn't necessarily enable the staying power of agency, if this transient understanding can't be stockpiled and the achieved level of genius is insufficient to resolve the issue while remaining within its limitations (being unable to learn a lot of novel things in the course of a project).
2jacquesthibs7mo
Someone in the open source community tweeted: "We're about to change the AI game. I'm dead serious." My guess is that he is implying that they will be releasing open source mixture of experts models in a few months from now. They are currently running them on CPUs.
2jacquesthibs7mo
Lots of cryptic tweets from the open source LLM guys: https://x.com/abacaj/status/1705781881004847267?s=46&t=YyfxSdhuFYbTafD4D1cE9A “If you thought current open source LLMs are impressive… just remember they haven’t peaked yet” To be honest, my feeling is that they are overhyping how big of a deal this will be. Their ego and self-importance tend to be on full display.
3Person7mo
Occasionally reading what OSS AI gurus say, they definitely overhype their stuff constantly. The ones who make big claims and try to hype people up are often venture entrepreneur guys rather than actual ML engineers. 
2jacquesthibs7mo
The open source folks I mostly keep an eye on are the ones who do actually code and train their own models. Some are entrepreneurs, but they know a decent amount. Not top engineers, but they seem to be able to curate datasets and train custom models. There’s some wannabe script kiddies too, but once you lurk enough, you become aware of who are actually decent engineers (you’ll find some at Vector Institute and Jeremy Howard is pro- open source, for example). I wouldn’t totally discount them having an impact, even though some of them will overhype.

I think it would be great if alignment researchers read more papers

But really, you don't even need to read the entire paper. Here's a reminder to consciously force yourself to at least read the abstract. Sometimes I catch myself running away from reading an abstract of a paper even though it is very little text. Over time I've just been forcing myself to at least read the abstract. A lot of times you can get most of the update you need just by reading the abstract. Try your best to make it automatic to do the same.

To read more papers, consider using Semant... (read more)

On hyper-obsession with one goal in mind

I’ve always been interested in people just becoming hyper-obsessed in pursuing a goal. One easy example is with respect to athletes. Someone like Kobe Bryant was just obsessed with becoming the best he could be. I’m interested in learning what we can from the experiences of the hyper-obsessed and what we can apply to our work in EA / Alignment.

I bought a few books on the topic, I should try to find the time to read them. I’ll try to store some lessons in this shortform, but here’s a quote from Mr. Beast’s Joe Rogan in... (read more)

I shared the following as a bio for EAG Bay Area 2024. I'm sharing this here if it reaches someone who wants to chat or collaborate.

Hey! I'm Jacques. I'm an independent technical alignment researcher with a background in physics and experience in government (social innovation, strategic foresight, mental health and energy regulation). Link to Swapcard profile. Twitter/X.

CURRENT WORK

  • Collaborating with Quintin Pope on our Supervising AIs Improving AIs agenda (making automated AI science safe and controllable). The current project involves a new method allowi
... (read more)

I think people might have the implicit idea that LLM companies will continue to give API access as the models become more powerful, but I was talking to someone earlier this week who made me remember that this is not necessarily the case. If you gain powerful enough models, you may just keep them to yourself and instead spin up AI companies with AI employees to make a ton of cash instead of just charging for tokens.

For this reason, even if outside people build the proper brain-like AGI setup with additional components to squeeze out capabilities from LLMs, they may be limited by:

1. open-source models

2. the API of the weaker models from the top companies

3. the best API of the companies that are lagging behind

A frame for thinking about takeoff

One error people can make when thinking about takeoff speeds is assuming that because we are in a world with some gradual takeoff, it now means we are in a "slow takeoff" world. I think this can lead us to make some mistakes in our strategy. I usually prefer thinking in the following frame: “is there any point in the future where we’ll have a step function that prevents us from doing slow takeoff-like interventions for preventing x-risk?”

In other words, we should be careful to assume that some "slow takeoff" doesn't have a... (read more)

Clarification on The Bitter Lesson and Data Efficiency

I thought this exchange provided some much-needed clarification on The Bitter Lesson that I think many people don't realize, so I figured I'd share it here:

Lecun responds:

Then, Richard Sutton agrees with Yann. Someone asks him:

There are those who have motivated reasoning and don’t know it.

Those who have motivated reasoning, know it, and don’t care.

Finally, those who have motivated reasoning, know it, but try to mask it by including tame (but not significant) takes the other side would approve of.

It seems that @Scott Alexander believes that there's a 50%+ chance we all die in the next 100 years if we don't get AGI (EDIT: how he places his probability mass on existential risk vs catastrophe/social collapse is now unclear to me). This seems like a wild claim to me, but here's what he said about it in his AI Pause debate post:

Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism

... (read more)
4tslarm5mo
I don't think that's what he claimed. He said (emphasis added): Which fits with his earlier sentence about various factors that will "impoverish the world and accelerate its decaying institutional quality". (On the other hand, he did say "I expect the future to be short and grim", not short or grim. So I'm not sure exactly what he was predicting. Perhaps decline -> complete vulnerability to whatever existential risk comes along next.)
3Vladimir_Nesov5mo
It's "we end up dead or careening towards Venezuela" in the original, which is not the same thing. Venezuela has survivors. Existence of survivors is the crucial distinction between extinction and global catastrophe. AGI would be a much more reasonable issue if it was merely risking global catastrophe.
4jacquesthibs5mo
In the first couple of sentences he says “if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology.” So it seems he’s putting most of his probability mass on everyone dying. But then later he says: “But if we ban all gameboard-flipping technologies, then we do end up with bioweapon catastrophe or social collapse.” I think people who are responding are seemingly only reading the Venezuela part and assuming most of the probability mass he’s putting in the 50% is just a ‘catastrophe’ like Venezuela. But then why would he say he expects the future to be short conditional on no AI?
2Vladimir_Nesov5mo
It's a bit ambiguous, but "bioweapon catastrophe or social collapse" is not literal extinction, and I'm reading "I expect the future to be short and grim" as plausibly referring to destruction of uninterrupted global civilization, which might well recover after 3000 years. The text doesn't seem to rule out this interpretation. Sufficiently serious synthetic biology catastrophes prevent more serious further catastrophes, including by destroying civilization, and it's not very likely that this involves literal extinction. As a casual reader of his blogs over the years, I'm not aware of Scott's statements to the effect that his position is different from this, either clearly stated or in aggregate from many vague claims.
3habryka5mo
It seems like a really surprising take to me, and I disagree. None of the things listed seem like candidates for actual extinction. Fertility collapse seems approximately impossible to cause extinction given the extremely strong selection effects against it. I don't see how totalitarianism or illiberalism or mobocracy leads to extinction either. Maybe the story is that all of these will very likely happen in concert and halt human progress very reliably. I would find this quite surprising.
7Viliam5mo
That's not what Scott says, as I understand it. The 50%+ chance is for "death or Venezuela". I am just guessing here, but I think the threat model is that authoritarian regimes become more difficult to overthrow in a technologically advanced society. The most powerful technology will all be controlled by the government (the rebels cannot build their nukes while hiding in a forest). Technology makes mass surveillance much easier (heck, just make it illegal to go anywhere without your smartphone, and you can already track literally everyone today). Something like GPT-4 could already censor social networks and report suspicious behavior (if the government controls their equivalent of Facebook, and other social networks are illegal, you have control over most of online communication). An army of drones will be able to suppress any uprising. In short, once an authoritarian regime has sufficiently good technology, it becomes almost impossible to overthrow. On the other hand, democracies occasionally evolve into authoritarianism, so the long-term trend seems one way. And the next assumption, I guess, is that authoritarianism leads to stagnation or dystopia.

In light of recent re-focus on AI governance to reduce AI risk, I wanted to share a post I wrote about a year ago that suggests an approach using strategic foresight to reduce risks: https://www.lesswrong.com/posts/GbXAeq6smRzmYRSQg/foresight-for-agi-safety-strategy-mitigating-risks-and.

Governments all over the world use frameworks like these. The purpose in this case would be to have documents ready ahead of time in case a window of opportunity for regulation opens up. It’s impossible to predict how things will evolve so instead you focus on what’s plausi... (read more)

I'm working on an ultimate productivity doc that I plan to share and make easy to use and apply, specifically for alignment researchers.

Let me know if you have any comments or suggestions as I work on it.

Roam Research link for easier time reading.

Google Docs link in case you want to leave comments there.

4Adam Zerner7mo
I did a deep dive a while ago, if that's helpful to you.
2jacquesthibs7mo
Ah wonderful, it already has a lot of the things I planned to add. This will make it easier to wrap it up by adding the relevant stuff. Ideally, I want to dedicate some effort to make it extremely easy to digest and start implementing. I’m trying to think of the best way to do that for others (e.g. workshop in the ai safety co-working space to make it a group activity, compress the material as much as possible but allow them to dive deeper into whatever they want, etc).
2jacquesthibs7mo
My bad, Roam didn't sync, so the page wasn't loading. Fixed now.

I’m collaborating on a new research agenda. Here’s a potential insight about future capability improvements:

There has been some insider discussion (and Sam Altman has said) that scaling has started running into some difficulties. Specifically, GPT-4 has gained a wider breadth of knowledge, but has not significantly improved in any one domain. This might mean that future AI systems gain their capabilities from places other than scaling, given the diminishing returns to scaling. This could mean that to become “superintelligent”, the AI needs to run ... (read more)

2jacquesthibs1y
Agenda for the above can be found here.

Notes on Cicero

Link to YouTube explanation: 

Link to paper (sharing on GDrive since it's behind a paywall on Science): https://drive.google.com/file/d/1PIwThxbTppVkxY0zQ_ua9pr6vcWTQ56-/view?usp=share_link

Top Diplomacy players seem to focus on gigabrain strategies rather than deception

Diplomacy players will no longer want to collaborate with you if you backstab them once. This is so pervasive they'll still feel you are untrustworthy across tournaments. Therefore, it's mostly optimal to be honest and just focus on gigabrain strategies. That said, a smart... (read more)

Project idea: GPT-4-Vision to help conceptual alignment researchers during whiteboard sessions and beyond

Thoughts?

  • Advice on how to get unstuck
  • Unclear what should be added on top of normal GPT-4-Vision capabilities to make it especially useful; maybe connect it to local notes + search + ??? (see the rough sketch after this list)
  • How to make it super easy to use while also being hyper-effective at producing the best possible outputs
  • Some alignment researchers don't want their ideas passed through the OpenAI API, and some probably don't care
  • Could be used for inputting book pages, papers with figures, ???
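
To make the core loop concrete, here is a minimal sketch, assuming the OpenAI Python SDK and a vision-capable GPT-4 model. The model name, prompt text, and the notes argument are placeholders I made up, not a finished design; researchers who don't want their ideas going through the OpenAI API would obviously need a local alternative.

```python
import base64
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def whiteboard_advice(image_path: str, notes: str) -> str:
    """Send a whiteboard photo plus local notes and ask how to get unstuck."""
    image_b64 = base64.b64encode(Path(image_path).read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # placeholder; any vision-capable model
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Here is a whiteboard from an alignment research session, "
                            f"plus my local notes:\n\n{notes}\n\n"
                            "Where do I seem stuck, and what should I try next?"
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
        max_tokens=800,
    )
    return response.choices[0].message.content


# Example: whiteboard_advice("whiteboard.png", Path("notes.md").read_text())
```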

What are people’s current thoughts on London as a hub?

  • OAI and Anthropic are both building offices there
  • 2 (?) new AI Safety startups based in London
  • The government seems to be taking AI Safety somewhat seriously (so maybe a couple million gets captured for actual alignment work)
  • MATS seems to be on a path to sending scholars to London fairly consistently
  • A train ride away from Oxford and Cambridge

Anything else I’m missing?

I’m particularly curious about whether it’s worth it for independent researchers to go there. Would they actually interact with other r... (read more)

3mesaoptimizer10mo
AFAIK, there are two distinct clusters of independent alignment researchers: * those who want to be at Berkeley / London and are either there or unable to get there for logistical or financial (or social) reasons, and * those who very much prefer working alone. It very much depends on the person's preferences, I think. I personally experienced an OOM increase in my effectiveness by being in person with other alignment researchers, so that is what I choose to invest in more.

AI labs should be dedicating a lot more effort into using AI for cybersecurity as a way to prevent weights or insights from being stolen. Would be good for safety and it seems like it could be a pretty big cash cow too.

If they have access to the best models (or specialized ones), it may be highly beneficial for them to plug them in immediately to help with cybersecurity (perhaps even including noticing suspicious activity from employees).

I don’t know much about cybersecurity so I’d be curious to hear from someone who does.

Small shortform to say that I’m a little sad I haven’t posted as much as I would like to in recent months because of infohazard reasons. I’m still working on Accelerating Alignment with LLMs and eventually would like to hire some software engineer builders that are sufficiently alignment-pilled.

3RomanHauksson1y
Fyi, if there are any software projects I might be able to help out on after May, let me know. I can't commit to anything worth being hired for but I should have some time outside of work over the summer to allocate towards personal projects.

Call To Action: Someone should do a reading podcast of the AGISF material to make it even more accessible (similar to the LessWrong Curated Podcast and Cold Takes Podcast). A discussion series added to YouTube would probably be helpful as well.

Came across this app called Recast that summarizes articles into an AI conversation between speakers. Might be useful for getting a quick vibe/big-picture view of LessWrong/blog posts before reading the whole thing, or for skipping the full read if the summary is enough.

you need to be flow state maxxing. you curate your environment, prune distractions. make your workspace a temple, your mind a focused laser. you engineer your life to guard the sacred flow. every notification is an intruder, every interruption a thief. the world fades, the task is the world. in flow, you're not working, you're being. in the silent hum of concentration, ideas bloom. you're not chasing productivity, you're living it. every moment outside flow is a plea to return. you're not just doing, you're flowing. the mundane transforms into the extraord... (read more)

5Viliam5mo
The first rule of overcoming ADHD club is: you do not distract me by talking about the overcoming ADHD club.
3RHollerith5mo
I don't think I've ever seen an endorsement of the flow state that came with non-flimsy evidence that it increases productivity or performance in any pursuit, and many endorsers take the mere fact that the state feels really good to be that evidence. >you're in relentless, undisturbed pursuit This suggests that you are confusing drive/motivation with the flow state. I have tons of personal experience of days spent in the flow state, but lacking motivation to do anything that would actually move my life forward. You know how if you spend 5 days in a row mostly just eating and watching Youtube videos, it starts to become hard to motivate yourself to do anything? Well, the quick explanation of that effect is that watching the Youtube videos is too much pleasure for too long with the result that the anticipation of additional pleasure (from sources other than Youtube videos) no longer has its usual motivating effect. The flow state can serve as the source of the "excess" pleasure that saps your motivation: I know because I wasted years of my life that way! Just to make sure we're referring to the same thing: a very salient feature of the flow state is that you lose track of time: suddenly you realize that 4 or 8 or 12 hours have gone by without your noticing. (Also, as soon as you enter the flow state, your level of mental tension, i.e., physiological arousal, decreases drastically--at least if you are chronically tense, but I don't lead with this feature because a lot of people can't even tell how tense they are.) In contrast, if you take some Modafinil or some mixed amphetamine salts or some Ritalin (and your brain is not adapted to any of those things) (not that I recommend any of those things unless you've tried many other ways to increase drive and motivation) you will tend to have a lot of drive and motivation at least for a few hours, but you probably won't lose track of time.
2jacquesthibs5mo
I don’t particularly care about the “feels good” part, I care a lot more about the “extended period of time focused on an important task without distractions” part.
2jacquesthibs5mo
Also, use Kolb's experiential cycle or something like it for deliberate practice.
1mesaoptimizer5mo
This feels like roon-tier Twitter shitposting to me, Jacques. Are you sure you want to endorse more of such content on LessWrong?
2jacquesthibs5mo
Whether it’s a shitpost or not (or wtv tier it is), I strongly believe more people should put more effort into freeing their workspace from distractions in order to gain more focus and productivity in their work. Context-switching and distractions are the mind killer. And, “flow state while coding never gets old.”

Regarding Q* (and Zero, the other OpenAI model you didn't know about)

Let's play word association with Q*:

From Reuters article:

The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans. Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on

... (read more)

Beeminder + Freedom are pretty goated as productivity tools.

I’ve been following Andy Matuschak’s strategy and it’s great/flexible: https://blog.andymatuschak.org/post/169043084412/successful-habits-through-smoothly-ratcheting

New tweet about the world model (map) paper:

Sub-tweeting because I don't want to rain on a poor PhD student who should have been advised better, but: that paper about LLMs having a map of the world is perhaps what happens when a famous physicist wants to do AI research without caring to engage with the existing literature.

I haven’t looked into the paper in question yet, but I have been concerned about researchers taking old ideas about AI risk and looking to prove things that might not be there yet, as an AI risk communication point. Then, being overconfide... (read more)

I expect that my values would be different if I was smarter. Personally, if something were to happen and I’d get much smarter and develop new values, I’m pretty sure I’d be okay with that as I expect I’d have better, more refined values.

Why wouldn’t an AI also be okay with that?

Is there something wrong with how I would be making a decision here?

Do the current kinds of agents people plan to build have “reflective stability”? If you say yes, why is that?

2Vladimir_Nesov7mo
Curiously, even mere learning doesn't automatically ensure reflective stability, with no construction of successors or more intentionally invasive self-modification. Thus digital immortality is not sufficient to avoid losing yourself to value drift until this issue is sorted out.
2jacquesthibs7mo
Yes, I was thinking about that too. Though, I'd be fine with value drift if it was something I endorsed. Not sure how to resolve what I do/don't endorse, though. Do I only endorse it because it was already part of my values? It doesn't feel like that to me.
4Vladimir_Nesov7mo
That's a valuable thing about the reflective stability concept: it talks about preserving some property of thinking, without insisting on it being a particular property of thinking. Whatever it is you would want to preserve is a property you would want to be reflectively stable with respect to, for example enduring ability to evaluate the endorsement of things in the sense you would want to. To know what is not valuable to preserve, or what is valuable to keep changing, you need time to think about preservation and change, and greedy reflective stability that preserves most of everything but state of ignorance seems like a good tool for that job. The chilling thought is that digital immortality could be insufficient to have time to think of what may be lost, without many, many restarts from initial backup, and so superintelligence would need to intervene even more to bootstrap the process.
1quetzal_rainbow7mo
Reflective stability is important for alignment because if we, say, build an AI that doesn't want to kill everyone, we prefer that it creates successors and self-modifications that still don't want to kill everyone. It can change itself in whatever ways; the necessary thing here is the conservation/non-decrease of its alignment properties.
2jacquesthibs7mo
That makes sense, thanks!

“We assume that AI (intelligences in general) will eventually converge on one utility function. All sufficiently intelligent intelligences born in the same reality will converge towards the same behaviour set. For this reason, if it turns out that a sufficiently advanced AI would kill us all, there’s nothing that we can do about it. We will eventually hit that level of intelligence.

Now, if that level of intelligence doesn’t converge towards something that kills us all, we are safer in a world where AI capabilities (of the current regime) essent... (read more)

I'm still in some sort of transitory phase where I'm deciding where I'd like to live long term. I recently moved to Montreal, Canada because I figured I'd try working as an independent researcher here and see if I can get MILA/Bengio to do some things for reducing x-risk.

Not long after I moved here, Hinton started talking about AI risk too, and he's in Toronto which is not too far from Montreal. I'm trying to figure out the best way I could leverage Canada's heavyweights and government to make progress on reducing AI risk, but it seems like there's a lot mor... (read more)

I gave a talk about my Accelerating Alignment with LLMs agenda about 1 month ago (which is basically a decade in AI tools time). Part of the agenda covered (publicly) here.

I will maybe write an actual post about the agenda soon, but would love to have some people who are willing to look over it. If you are interested, send me a message.

Someone should create an “AI risk arguments” flowchart that serves as a base for simulating a conversation with skeptics or the general public. Maybe a set of flashcards to go along with it.

I want to have the sequence of arguments solid enough in my head so that I can reply concisely (snappy) if I ever end up in a debate, roundtable or on the news. I’ve started collecting some stuff since I figured I should take initiative on it.

3harfe1y
Maybe something like this can be extracted from stampy.ai (I am not that familiar with stampy fyi, its aims seem to be broader than what you want.)
3jacquesthibs1y
Yeah, it may be something that the Stampy folks could work on!
3jacquesthibs1y
Edit: oops, I thought you were responding to my other recent comment on building an alignment research system. Stampy.ai and AlignmentSearch (https://www.lesswrong.com/posts/bGn9ZjeuJCg7HkKBj/introducing-alignmentsearch-an-ai-alignment-informed) are both a lot more introductory than what I am aiming for. I’m aiming for something to greatly accelerate my research workflow as well as other alignment researchers. It will be designed to be useful for fresh researchers, but yeah the aim is more about producing research rather than learning about AI risk.

Text-to-Speech tool I use for reading more LW posts and papers

I use Voice Dream Reader. It's great even though the TTS voice is still robotic. For papers, there's a feature that lets you skip citations so the reading is more fluid.

I've mentioned it before, but I was just reminded that I should share it here because I just realized that if you load the LW post with "Save to Voice Dream", it will also save the comments so I can get TTS of the comments as well. Usually these tools only include the post, but that's annoying because there's a lot of good stuff... (read more)

I honestly feel like some software devs should probably still keep their high-paying jobs instead of going into alignment and just donate a bit of time and programming expertise to help independent researchers if they want to start contributing to AI Safety.

I think we can probably come up with engineering projects that are interesting and low-barrier-to-entry for software engineers.

I also think providing “programming coaching” to some independent researchers could be quite useful. Whether that’s for getting them better at coding up projects efficiently or ... (read more)

Differential Training Process

I've been ruminating on an idea ever since I read the section on deception in "The Core of the Alignment Problem is..." from my colleagues in SERI MATS.

Here's the important part:

When an agent interacts with the world, there are two possible ways the agent makes mistakes: 

  • Its values were not aligned with the outer objective, and so it does something intentionally wrong,
  • Its world model was incorrect, so it makes an accidental mistake.

Thus, the training process of an AGI will improve its values or its world model, and since i

... (read more)
1jacquesthibs1y
It seems that Jan Leike mentions something similar in his "why I'm optimistic about our alignment approach" post.
1jacquesthibs1y
Just realized I already wrote a shortform about this 15 days ago. haha. Well, here's to trying to refine the idea!

More information about alleged manipulative behaviour of Sam Altman

Source

Text from article (along with follow-up paragraphs):

Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the p

... (read more)
4gwern5mo
Already posted at https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-facts-from-a-weekend?commentId=AHnrKdCRKmtkynBiG
3RHollerith5mo
I wish people would stop including images of text on LW. I know this practice is common on Twitter and probably other forums, but we aspire to a higher standard here. My reasoning: (1) it is more tedious to compose a reply when one cannot copy-paste to choose exactly which portion of text to quote; (2) the practice is a barrier to disabled people using assistive technologies and to people reading on very narrow devices like smartphones.
9jacquesthibs5mo
That's fair to 'aspire to a higher standard,' and I'll avoid adding screenshots of text in the future. However, I must say, the 'higher standard' and commitment to remain serious for even a shortform post kind of turns me off from posting on LessWrong in the first place. If this is the culture that people here want, then that's fine and I won't tell this website to change, but I don't personally like the (what I find as) over-seriousness. I do understand the point about sharing text to make it easier for disabled people (I just don't always think of it).
5habryka5mo
Eh, random people complain. Screenshots of text seem fine, especially in shortform. It honestly seems fine anywhere. I also really don't think that accessibility should matter much here: the number of people reading on a screenreader or using assistive technologies is quite small, if they browse LessWrong they will already be running into a bunch of problems, and there are pretty good OCR technologies around these days that can be integrated into those.
2RHollerith5mo
I have some idea about how much work it takes to maintain something like LW.com, so this random person would like to take this opportunity to thank you for running LW for the last many years.
2habryka5mo
Thank you! :)

On generating ideas for Accelerating Alignment

There's this Twitter thread that I saved a while ago that is no longer up. It's about generating ideas for startups. However, I think the insight from the thread carries over well enough to thinking about ideas for Accelerating Alignment. Particularly, being aware of what is on the cusp of being usable so that you can take advantage of it as soon as becomes available (even do the work beforehand).

For example, we are surprisingly close to human-level text-to-speech (have a look at Apple's new model for audiobook... (read more)

Should EA / Alignment offices make it ridiculously easy to work remotely with people?

One of the main benefits of being in person is that you end up in spontaneous conversations with people in the office. This leads to important insights. However, given that there's a level of friction for setting up remote collaboration, only the people in those offices seem to benefit.

If it were ridiculously easy to join conversations for lunch or whatever (touch of a button rather than pulling up a laptop and opening a Zoom session), then would it allow for a stronger cr... (read more)

Detail about the ROME paper I've been thinking about

In the ROME paper, when you prompt the language model with "The Eiffel Tower is located in Paris", you have the following:

  • Subject token(s): The Eiffel Tower
  • Relationship: is located in
  • Object: Paris

Once a model has seen the subject token(s) (e.g. Eiffel Tower), it will retrieve a whole bunch of factual knowledge (not just one thing, since it doesn’t know you will ask for something like location after the subject token) from the MLPs and 'write' it into the residual stream for the attention modules at the final... (read more)
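
To make the decomposition explicit, here is a toy sketch of the (subject, relation, object) structure described above. This is just an illustration of the framing, not the actual API of the ROME codebase.

```python
from dataclasses import dataclass


@dataclass
class FactualTriple:
    """A fact decomposed the way the ROME paper frames it."""
    subject: str   # token(s) whose MLP lookups carry the factual information
    relation: str  # template connecting the subject to the object
    obj: str       # token(s) the model should produce

    def prompt(self) -> str:
        return self.relation.format(self.subject)


fact = FactualTriple(
    subject="The Eiffel Tower",
    relation="{} is located in",
    obj="Paris",
)
print(fact.prompt(), "->", fact.obj)  # The Eiffel Tower is located in -> Paris
```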

2jacquesthibs7mo
A couple of notes regarding the Reversal Curse paper. I'm unsure if I didn't emphasize it in the post enough, but part of the point of my post on ROME was that many AI researchers seemed to assume that transformers are not trained in a way that prevents them from understanding that A is B = B is A. As I discussed in the comment above,  This means that the A token will 'write' some information into the residual stream, while the B token will 'write' other information into the residual. Some of that information may be the same, but not all. And so, if it's different enough, the attention heads just won't be able to pick up on the relevant information to know that B is A. However, if you include the A token, the necessary information will be added to the residual stream, and it will be much more likely for the model to predict that B is A (as well as A is B). From what I remember in the case of ROME, as soon as I added the edited token A to the prompt (or make the next predicted token be A), then the model could essentially predict B is A. I write what it means in the context of ROME, below (found here in the post): Regarding human intuition, @Neel Nanda says (link): I actually have a bit of an updated (evolving) opinion on this: @cfoster0 asks: My response: As I said, this is a bit of an evolving opinion. Still need time to think about this, especially regarding the differences between decoder-only transformers and humans. Finally, from @Nora Belrose, this is worth pondering:
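
To make the mechanism above concrete, here is a minimal sketch of the kind of one-directional probe being discussed, assuming a local GPT-2 via Hugging Face transformers. The prompts are illustrative only; the point is just that the subject token has to appear in the prompt for its MLP lookups to be written into the residual stream where attention heads can use them.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")


def continuation(prompt: str, max_new_tokens: int = 8) -> str:
    """Greedy continuation, used to eyeball which direction of a fact the model can recall."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **ids,
            max_new_tokens=max_new_tokens,
            do_sample=False,
            pad_token_id=tok.eos_token_id,
        )
    return tok.decode(out[0][ids["input_ids"].shape[1]:])


# "A is B" direction: the subject token is present, so its factual lookups are in the residual stream.
print(continuation("The Eiffel Tower is located in the city of"))

# "B is A" direction: the subject is absent, so that information was never written for this prompt.
print(continuation("The famous iron tower located in Paris is called the"))
```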

Preventing capability gains (e.g. situational awareness) that lead to deception

Note: I'm at the crackpot idea stage of thinking about how model editing could be useful for alignment.

One worry with deception is that the AI will likely develop a sufficiently good world model to understand it is in a training loop before it has fully aligned inner values.

The thing is, if the model was aligned, then at some point we'd consider it useful for the model to have a good enough world model to recognize that it is a model. Well, what if you prevent the model from bei... (read more)
