BASALT: A Benchmark for Learning from Human Feedback

by Rohin Shah · 2 min read · 8th Jul 2021 · 14 comments


Copying the abstract of the paper:

The last decade has seen a significant increase in interest in deep learning research, with many public successes that have demonstrated its potential. As such, these systems are now being incorporated into commercial products. With this comes an additional challenge: how can we build AI systems that solve tasks where there is not a crisp, well-defined specification? While multiple solutions have been proposed, in this competition we focus on one in particular: learning from human feedback. Rather than training AI systems using a predefined reward function or using a labeled dataset with a predefined set of categories, we instead train the AI system using a learning signal derived from some form of human feedback, which can evolve over time as the understanding of the task changes, or as the capabilities of the AI system improve.

The MineRL BASALT competition aims to spur forward research on this important class of techniques. We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions. These tasks are defined by a paragraph of natural language: for example, "create a waterfall and take a scenic picture of it", with additional clarifying details. Participants must train a separate agent for each task, using any method they want. Agents are then evaluated by humans who have read the task description. To help participants get started, we provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline that leverages these demonstrations.

Our hope is that this competition will improve our ability to build AI systems that do what their designers intend them to do, even when the intent cannot be easily formalized. Besides allowing AI to solve more tasks, this can also enable more effective regulation of AI systems, as well as making progress on the value alignment problem.
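To make the "imitation learning baseline" mentioned in the abstract concrete, here is a minimal behavioral-cloning sketch on toy data. All of the shapes, data, and the linear policy are made up for illustration; the actual BASALT baseline operates on MineRL's RGB pixel observations with a neural network policy.

```python
import numpy as np

# Toy dimensions -- the real observations are 64x64 RGB frames and the
# real action space is MineRL's; these numbers are purely illustrative.
N_ACTIONS = 4
OBS_DIM = 12

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def bc_step(W, obs, actions, lr=0.1):
    """One behavioral-cloning step: cross-entropy toward the demo actions."""
    probs = softmax(obs @ W)                        # (batch, N_ACTIONS)
    onehot = np.eye(N_ACTIONS)[actions]
    grad = obs.T @ (probs - onehot) / len(actions)  # dL/dW for softmax regression
    return W - lr * grad

# Toy "demonstrations": observations paired with the demonstrator's actions.
obs = rng.normal(size=(256, OBS_DIM))
actions = (obs[:, 0] > 0).astype(int) * 3  # demo policy picks action 0 or 3

W = np.zeros((OBS_DIM, N_ACTIONS))
for _ in range(200):
    W = bc_step(W, obs, actions)

pred = softmax(obs @ W).argmax(axis=1)
accuracy = (pred == actions).mean()  # how well we imitate the demonstrator
```

The key point is that behavioral cloning is just supervised learning on (observation, action) pairs: no reward function is ever written down, which is what makes it a natural starting point for these tasks.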

I also mention this in the latest Alignment Newsletter, but I think this is probably one of the best ways to get started on AI alignment from the empirical ML perspective: it will (hopefully) give you a sense of what it is like to work with algorithms that learn from human feedback, in a more realistic setting than Atari / MuJoCo, while still not requiring a huge amount of background or industry-level compute budgets.

Section 1.1 of the paper goes into more detail about the pathways to impact. At a high level, the story is that better algorithms for learning from human feedback will improve our ability to build AI systems that do what their designers intend them to do. This is straightforwardly improving on intent alignment (though it is not solving it), which in turn allows us to better govern our AI systems by enabling regulations like "your AI systems must be trained to do X" without requiring a mathematical formalization of X.

15 comments

Nice! If I had university CS affiliations I would send them this with unsubtle comments that it would be a cool project to get students to try :P

In fact, now that I think about it, I do have one contact through the UIUC datathon. Or would you rather not have this sort of marketing?

Or would you rather not have this sort of marketing?

I would be excited to see this competition promoted widely!

(Obviously I wouldn't want to do anything that reflected really poorly on the marketers, like blackmail, but this seems to clearly not be in that category.)

Love this idea. From the linked post on the BAIR website, the idea of "prompting" a Minecraft task with e.g. a brief sequence of video frames seems especially interesting.

Would you anticipate the benchmark version of this would ask participants to disclose metrics such as "amount of task-specific feedback or data used in training"? Or does this end up being too hard to quantify because you're explicitly expecting folks to use a variety of feedback modalities to train their agents?

Would you anticipate the benchmark version of this would ask participants to disclose metrics such as "amount of task-specific feedback or data used in training"?

Probably not, just because it's pretty niche -- I expect the vast majority of papers (at least in the near future) will have only task-specific feedback, so the extra data isn't worth the additional hassle. (The prompting approach seems like it would require a lot of compute.)

Tbc, "amount of task-specific feedback" should still be inferable from research papers, where you are meant to provide enough details that others could reproduce your work. It just wouldn't be as simple as looking up the "BASALT evaluation table" for your method of choice.

That makes sense, though I'd also expect that LfLH benchmarks like BASALT could turn out to be a better fit for superscale models in general. (e.g. a BASALT analogue might do a better job of capturing the flexibility of GPT-N or DALL-E type models than current benchmarks do, though you'd probably need to define a few hundred tasks for that to be useful. It's also possible this has already been done and I'm unaware of it.)

That makes sense, though I'd also expect that LfLH benchmarks like BASALT could turn out to be a better fit for superscale models in general.

Oh yeah, it totally is, and I'd be excited for that to happen. But I think that will be a single project, whereas the benchmark reporting process is meant to apply for things where there will be lots of projects that you want to compare in a reasonably apples-to-apples way, so when designing the reporting process I'm focused more on the small-scale projects that aren't GPT-N-like.

It's also possible this has already been done and I'm unaware of it

I'm pretty confident that there's nothing like this that's been done and publicly released.

It's not quite as interesting as I initially thought, since they allow handcrafted reward functions and heuristics. It would be more interesting if the designers did not know the particular task in advance, and the AI would be forced to learn the task entirely from demonstrations and/or natural language description.

Going from zero to "produce an AI that learns the task entirely from demonstrations and/or natural language description" is really hard for the modern AI research hive mind. You instead have to give it a shaped reward: breadcrumbs along the way that are easier, such as allowing handcrafted heuristics and knowledge of a particular target task, to get the hive mind started making progress.

It's not "from zero" though, I think that we already have ML techniques that should be applicable here.

they allow handcrafted reward functions and heuristics

We allow it, but we don't think it will lead to good performance (unless you throw a very large amount of time at it).

The AI safety community claims it is hard to specify reward functions. If we actually believe this claim, we should be able to create tasks where even if we allow people to specify reward functions, they won't be able to do so. That's what we've tried to do here.

Note we do ban extraction of information from the Minecraft simulator -- you have to work with pixels, so if you want to make handcrafted reward functions, you have to compute rewards from pixels somehow. (Technically you also have inventory information but that's not that useful.) We have this rule because in a real-world deployment you wouldn't be able to simply extract the "state" of physical reality.
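To illustrate what "compute rewards from pixels somehow" would entail, here is a toy handcrafted reward for something like MakeWaterfall that scores a frame by its fraction of "waterfall-colored" pixels. Every threshold here is invented for illustration; tuning such thresholds to work across biomes, lighting, and camera angles is exactly what makes pixel-based reward functions so brittle.

```python
import numpy as np

def waterfall_reward(frame: np.ndarray) -> float:
    """Score an RGB frame by the fraction of bluish-white pixels.

    Hypothetical heuristic: a waterfall shows up as a bluish column,
    so reward frames that contain more such pixels.
    """
    r, g, b = frame[..., 0].astype(int), frame[..., 1].astype(int), frame[..., 2].astype(int)
    bluish = (b > 150) & (b > r + 20) & (b > g + 10)
    return float(bluish.mean())

# A 64x64 frame that is mostly black with a bluish "waterfall" strip.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[:, 28:36] = (90, 130, 200)   # 8 columns of waterfall-colored pixels
reward = waterfall_reward(frame)   # fraction of matching pixels
```

Of course, this heuristic would also fire on the sky, rivers, or blue wool, which is the point: a reward computed from pixels has to solve a perception problem before it can say anything about the task.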

I am a bit more worried about allowing heuristics -- it's plausible to me that our chosen tasks are simple enough that heuristics could solve them, even though real world tasks are too complex for similar heuristics to work -- but this is basically a place where we're sticking our necks out and saying "nope, heuristics won't suffice either" (again, unless you put a lot of effort into designing the heuristics, where it would have been faster to just build the system that, say, learns from demonstrations).

It would be more interesting if the designers did not know the particular task in advance

But for real-world deployment of AI systems, designers do know the task in advance! We don't want to ban strategies that designers could use in a realistic setting.

The AI safety community claims it is hard to specify reward functions... But for real-world deployment of AI systems, designers do know the task in advance!

Right, but you're also going for tasks that are relatively simple and easy. In the sense that "MakeWaterfall" is something that I can, based on my own experience, imagine solving without any ML at all (but ofc going to that extreme would require massive work). It might be that for such tasks solutions using handcrafted rewards/heuristics would be viable, but wouldn't scale to more complex tasks. If your task was e.g. "follow arbitrary natural language instructions" then I wouldn't care about the "lax" rules.

Note we do ban extraction of information from the Minecraft simulator

This is certainly good, but I wonder what are the exact rules here. Suppose the designer trains a neural network to recognize trees in minecraft by getting the minecraft engine to generate lots of images of trees. The resulting network is then used as a hardcoded part of the agent architecture. Is that allowed? If not, how well can you enforce it (I imagine something of the sort can be done in subtler ways)?

Not saying that what you're doing is not useful, just pointing out a certain way in which the benchmark might diverge from its stated aim.

It might be that for such tasks solutions using handcrafted rewards/heuristics would be viable, but wouldn't scale to more complex tasks. 

I agree that's possible. Tbc, we did spend some time thinking about how we might use handcrafted rewards / heuristics to solve the tasks, and eliminated a couple based on this, so I think it probably won't be true here.

Suppose the designer trains a neural network to recognize trees in minecraft by getting the minecraft engine to generate lots of images of trees. The resulting network is then used as a hardcoded part of the agent architecture. Is that allowed?

No.

If not, how well can you enforce it (I imagine something of the sort can be done in subtler ways)?

For the competition, there's a ban on pretrained models that weren't publicly available prior to competition start. We look at participants' training code to ensure compliance. It is still possible to violate this rule in a way that we may not catch (e.g. maybe you use internal simulator details to do hyperparameter tuning, and then hardcode the hyperparameters in your training code), but it seems quite challenging and not worth the effort even if you are willing to cheat.

For the benchmark (which is what I'm more excited about in the longer run), we're relying on researchers to follow the rules. Science already relies on researchers honestly reporting their results -- it's pretty hard to catch cases where you just make up numbers for your experimental results.

(Also in the benchmark version, people are unlikely to write a paper about how they solved the task using special-case heuristics; that would be an embarrassing paper.)

The AI safety community claims it is hard to specify reward functions. If we actually believe this claim, we should be able to create tasks where even if we allow people to specify reward functions, they won't be able to do so. That's what we've tried to do here.

It being hard to specify reward functions for a specific task and it being hard to specify reward functions for a more general AGI seem to me like two very different problems. 

Additionally, developing a safe system and developing an unsafe system are very different problems. Even if your reward function works 99.9% of the time, it can be exploited in the cases where it fails.

Okay, regardless of what the AI safety community claims, I want to make that claim.

(I think a substantial chunk of the AI safety community also makes that claim but I'm not interested in defending that here.)

It being hard to specify reward functions for a specific task and it being hard to specify reward functions for a more general AGI seem to me like two very different problems. 

As an aside, if I thought we could build task-specific AI systems for arbitrary tasks, and only super general AI systems were dangerous, I'd be advocating really hard for sticking with task-specific AI systems and never building super general AI systems (or only building them after some really high threshold of safety was met).

if I thought we could build task-specific AI systems for arbitrary tasks, and only super general AI systems were dangerous, I'd be advocating really hard for sticking with task-specific AI systems and never building super general AI systems

The problem with this is that you need an AI whose task is "protect humanity from unaligned AIs", which is already very "general" in a way (i.e. requires operating on large scales of space, time and strategy). Unless you can effectively reduce this to many "narrow" tasks which is probably not impossible but also not easy.