Logan Riggs Smith

Frame for Take-Off Speeds to inform compute governance & scaling alignment

I wonder how much COVID got people to switch to working on biorisks.

What I’m interested in here is talking to real researchers and asking what events would convince them to switch to alignment. Enumerating those events would be useful for explaining this to them.

I think asking for specific capabilities would also be interesting, or asking what specific capabilities they would’ve named in 2012, and then asking how long they expect between that capability appearing and an x-catastrophe.

Prize for Alignment Research Tasks

[Note: this one, steelman, and feedback on proposals all have very similar input spaces. I think I would ideally mix them as one in an actual product, but I'm keeping them separate for now]

Task: Obvious EA/Alignment Advice

  • Context: There are several common mental motions that the EA community uses which can be usefully applied to alignment. Ex. "Differential Impact", "Counterfactual Impact", "Can you clearly tell a story on how this reduces x-risk?", and "Truly Optimizing for X". A general "obvious advice" task is useful for general capabilities as well, but this one is intended to have a strong EA tilt to it.
  • Input Type: A plan for reducing x-risk, either a research direction, upskilling, or field-building
  • Output Type: Useful exploratory questions 

Instance 1

Input:

I'm thinking about taking up a job at Google to upskill in machine learning before I apply to Anthropic or Redwood Research.

Output:

Isn't there a more direct path to upskilling in machine learning? Like taking a course or doing a bunch of dumb, fun projects? Additionally, is programming/machine learning your comparative advantage?

Instance 2

Input:

I'm working on robustness to distributional shift, specifically identifying when an input is out-of-distribution so it can default to a no-op or asking for human feedback on that input.

Output:

How neglected is this field? If you didn't work in this field, would someone else have done the work you'd do, but a year or two later?

Instance 3

Input:

I'm building an interpretability tool that can tell you what the model was thinking when it was making a classification, for instance, what part of the input it was using to make that classification.

Output:

Let's say you succeed in making this tool; can you tell a story of how this tool reduces AI x-risk?
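
As a concrete illustration of the input/output format, here is a minimal sketch (in Python) of how instances like these might be packaged into a few-shot prompt for a language model. The TaskInstance structure and build_prompt helper are hypothetical names for illustration, not something specified by the prize.

```python
# A hypothetical sketch of how the instances above could be packaged as
# few-shot examples for a language model. TaskInstance and build_prompt
# are illustrative names, not part of the prize's specification.
from dataclasses import dataclass


@dataclass
class TaskInstance:
    input_text: str   # a plan for reducing x-risk
    output_text: str  # useful exploratory questions


INSTANCES = [
    TaskInstance(
        input_text=(
            "I'm thinking about taking up a job at Google to upskill in "
            "machine learning before I apply to Anthropic or Redwood Research."
        ),
        output_text=(
            "Isn't there a more direct path to upskilling in machine learning? "
            "Like taking a course or doing a bunch of dumb, fun projects? "
            "Additionally, is programming/machine learning your comparative advantage?"
        ),
    ),
    # ... the other instances follow the same pattern ...
]


def build_prompt(instances: list[TaskInstance], new_plan: str) -> str:
    """Assemble a few-shot prompt: each instance as an Input/Output pair,
    followed by the new plan awaiting an Output."""
    shots = "\n\n".join(
        f"Input: {inst.input_text}\nOutput: {inst.output_text}" for inst in instances
    )
    return f"{shots}\n\nInput: {new_plan}\nOutput:"
```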

Prize for Alignment Research Tasks

Task: Steelman Alignment proposals

  • Context: Some alignment research directions/proposals have a kernel of truth to them. Steelmanning these ideas to find the best version of them may open up new research directions or, more likely, make the pivot to alignment research easier. On the latter: some people are resistant to changing their research direction, and a steelman only slightly changes the topic while focusing on maximizing impact. This would make it easier to convince these people to change to a more alignment-related direction.
  • Input Type: A general research direction for advancing alignment.
  • Output Type: A steelman of the research direction

Instance 1

Input:

Currently, AI systems are prone to bias and unfairness, which is unaligned with our values. I work in bias and fairness, specifically in analyzing how the biases in large datasets (such as Common Crawl) affect the probability distributions in large language models.

Output:

What I'm most excited about for bias and fairness research is understanding the model's concepts. If we can build tools that pinpoint the bias or unfairness in a model such that there's a causal explanation, and we can manipulate that part of the model to make it unbiased and fair, then we may be able to pinpoint other parts of the model we care about: for example, parts of the model that are optimizing for a goal, signs of deception, and human values. Research in this area while asking "how could this work generalize to pinpointing [human values/deception]?" seems promising, especially when making use of the current literature.

Instance 2

Input:

AI capabilities will continue to increase, so how do we utilize this to boost alignment research? An Alignment Research Assistant (ARA) can perform many different tasks for the researcher, such as summarizing papers, writing code, and assisting with math proofs.

Output:

What I'm most excited about for something like ARA is differential research that mostly helps alignment researchers. Collecting datasets and tasks to e.g. give feedback on alignment proposals and automate mentorships from alignment researchers would be incredibly useful for avoiding dead-ends and scaling up junior researchers, respectively.

Instance 3

Input:

Before we trust the AI, we can prevent it from taking over the world by not giving it internet access, or by putting it in a Faraday cage to avoid causal interactions with the outside world. Another possibility is running the AI in a simulated environment different from our own, so that we could catch it if it starts power-seeking.

Output:

What I'm most excited about for boxing an AI is figuring out the minimal capabilities required to perform a pivotal act. We box because we're concerned the AI may use the resources it already has (internet access, a rich world model, etc.) to gain even more resources, but if we can constrain its capabilities enough while still enabling a pivotal act, this reduces x-risk. Creating more and more pivotal acts that require fewer capabilities (or a different variety of capabilities) means we don't have to optimize as hard and risk power-seeking.

Instance 4

Input:

We can perform reinforcement learning from human feedback to align the AI to human values. By achieving greater instructability with smaller models and extrapolating those trends to larger models, we can more safely build larger models that do what we ask them to.

Output:

What I'm most excited about for human-in-the-loop feedback is using this system to accelerate alignment research and to create better human-in-the-loop systems that then create better human-in-the-loop systems (i.e. recursive bootstrapping). Additionally, providing lots of human feedback can give the model a better understanding of human values. Although this doesn't imply the model optimizes for those human values, it does mean we have a more coherent concept to point to with e.g. interpretability tools.

Prize for Alignment Research Tasks

Task: Feedback on alignment proposals

  • Context: Some proposals for a solution to alignment are dead ends or face common criticisms. Having an easy way of receiving this feedback on one's alignment proposal can prevent wasted effort, as well as further the conversation around that feedback.
  • Input Type: A proposal for a solution to alignment or a general research direction
  • Output Type: Common criticisms or arguments for dead ends for that research direction

Instance 1

Input:

Currently, AI systems are prone to bias and unfairness, which is unaligned with our values. I work in bias and fairness, specifically in analyzing how the biases in large datasets (such as Common Crawl) affect the probability distributions in large language models.

Output:

Making AI systems unbiased and fair has a positive impact on deployed products, but it does not reduce existential risk.

Instance 2

Input:

AI capabilities will continue to increase, so how do we utilize this to boost alignment research? An Alignment Research Assistant (ARA) can perform many different tasks for the researcher, such as summarizing papers, writing code, and assisting with math proofs.

Output:

If ARA can summarize papers, write code, and assist with math proofs, then it can also be used to accelerate capabilities research. There are already market incentives to create those kinds of tools, so it's unlikely that you'll be able to produce a good research assistant that performs those tasks before another company does.

Instance 3

Input:

Before we trust the AI, we can prevent it from taking over the world by not giving it internet access, or by putting it in a Faraday cage to avoid causal interactions with the outside world. Another possibility is running the AI in a simulated environment different from our own, so that we could catch it if it starts power-seeking.

Output: (from Arbital)

The central difficulty of AI boxing is to describe a channel which cannot be used to manipulate the human operators, but which provides information relevant enough to be pivotal or game-changing relative to larger events. For example, it seems not unthinkable that we could safely extract, from a boxed AI setup, reliable information that prespecified theorems had been proved within Zermelo-Fraenkel set theory, but there is no known way to save the world if only we could sometimes know that prespecified theorems had been reliably proven in Zermelo-Fraenkel set theory.

Instance 4

Input:

We can perform reinforcement learning from human feedback to align the AI to human values. By achieving greater instructability with smaller models and extrapolating those trends to larger models, we can more safely build larger models that do what we ask them to.

Output:

An intelligent enough model can optimize for reward by taking over the reward signal directly or by manipulating the Mechanical Turk workers providing the feedback. Having humans in the loop doesn't solve the problem of power-seeking being instrumentally convergent.

Productive Mistakes, Not Perfect Answers

It’s also clear when reading these works and interacting with these researchers that they all get how alignment is about dealing with unbounded optimization, they understand fundamental problems and ideas related to instrumental convergence, the security mindset, the fragility of value, the orthogonality thesis…

I bet Adam will argue that this (or something similar) is the minimum we want for a research idea, because I agree with your point that we shouldn’t expect a solution to alignment to fall out of the marketing program for Oreos. We want to constrain it to at least “has a plausible story on reducing x-risk”, and maybe to what’s mentioned in the quote as well.

A survey of tool use and workflows in alignment research

Ya, I was even planning on trying:

[post/blog/paper] rohinmshah karma: 100 Planned summary for the Alignment Newsletter: \n> 

Then feed that input to the model, followed by:

Planned opinion:

to see if that gives some higher-quality summaries.
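
For concreteness, a minimal sketch of that two-step workflow might look like the following, assuming a generic complete() call standing in for whatever language-model completion API is used; the function name, the summarize_then_opine wrapper, and the exact prompt line breaks are assumptions rather than a description of any existing tool.

```python
# A hypothetical sketch of the two-step prompting idea above. complete() is a
# stand-in for whatever language-model completion API is used; its name and
# the exact prompt framing beyond the quoted text are assumptions.


def complete(prompt: str) -> str:
    """Placeholder for a call to a language-model completion endpoint."""
    raise NotImplementedError


def summarize_then_opine(post_text: str) -> tuple[str, str]:
    # Step 1: elicit a summary in the Alignment Newsletter format.
    # The exact line breaks in the prompt are a guess at the original format.
    summary_prompt = (
        f"{post_text}\n"
        "rohinmshah karma: 100\n"
        "Planned summary for the Alignment Newsletter:\n> "
    )
    summary = complete(summary_prompt)

    # Step 2: feed the post plus the generated summary back in and ask for an opinion.
    opinion_prompt = f"{summary_prompt}{summary}\n\nPlanned opinion:\n> "
    opinion = complete(opinion_prompt)
    return summary, opinion
```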

Job Offering: Help Communicate Infrabayesianism

For those with math backgrounds who aren't already familiar with InfraBayes (maybe people will share the post with their math-background friends), could there be some specifics for context? Like:

If you have experience with topology, functional analysis, measure theory, and convex analysis then...

Or

You can get a good sense of InfraBayes from [this post] or [this one]

Or

A list of InfraBayes posts can be found here.

How I Formed My Own Views About AI Safety

No, "why" is correct. See the rest of the sentence:

Write out all the counter-arguments you can think of, and repeat
 

It's saying to assume it's correct, then assume it's wrong, and repeat. Clever arguers don't usually play devil's advocate against themselves.

The Big Picture Of Alignment (Talk Part 1)

How do transcriptions typically handle images? They're pretty important for this talk. Could you embed the images in the text as it progresses?

The Big Picture Of Alignment (Talk Part 1)

Regarding generators of human values: say we have the genetic information that encodes human cognition; what does that mean? The equivalent of a simulated human? A capabilities secret-sauce algorithm, right? I'm unsure if you can take the body out of a person and still have the same values, because I have felt senses in my body that tell me information about the world and how I relate to it.

Assuming it works as a simulated person and ignoring mindcrime, how do you algorithmically end up in a good enough subset of human values (because not all human values are meta-good)? Or, how do you use this to create a simulated long reflection (i.e. what humans would decide ethics to be if they thought about it for [1000] years)?

You could first figure out meta-preferences and bootstrap those in for figuring out preferences. Though I'm unsure if there is a "correct" set of meta-preferences, with my main confusion being the blank spot in my map where "enlightenment" is.
