Currently, large language models (e.g. ChatGPT, models trained with Constitutional AI) are trained to refuse to follow user requests that are considered inappropriate or harmful. This can be done by training on example strings of the form “User: inappropriate request AI: elaborate apology”
Proposal
Instead of training a language model to produce “elaborate apology” when it refuses a request, train it to produce a special sequence or token first: “<SORRYDAVE>elaborate apology”. Strip the special sequence out before returning the response to the user (and never allow the user to include the special sequence in their input).
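A minimal sketch of the serving-side handling, purely illustrative: the marker string and function names are placeholders for how this could be wired up, not an existing API.

```python
MARKER = "<SORRYDAVE>"  # hypothetical special sequence

def sanitize_user_input(text: str) -> str:
    # Never allow the user to smuggle the special sequence into the prompt.
    if MARKER in text:
        raise ValueError("special sequence is not allowed in user input")
    return text

def postprocess_completion(completion: str) -> str:
    # Strip the special sequence before the response is returned to the user.
    return completion.replace(MARKER, "")
```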
Benefits
Can directly measure the probability of refusal for any request
Can refuse based on the probability of producing <SORRYDAVE> instead of just sampling responses (see the scoring sketch after this list)
Just take the product of the probabilities of the tokens in <SORRYDAVE>
When sampling responses from the model's probability distribution, refusal is stochastic: a model could have a 99% probability of refusing a request and you could still get unlucky, with the model sampling a completion that follows the request
Can monitor requests that produce a high probability of refusal while still being followed, or users who make such requests
Can condition on not producing <SORRYDAVE> in order to override refusal behavior (see the marker-masking sketch after this list)
Want this for redteaming: it seems important to understand what the model is capable of doing if the refusal mechanism is bypassed
Might want this for trusted users doing defensive applications
Could train the model to have the same probability of refusal for semantically equivalent requests, to improve consistency
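Scoring sketch: measuring p(<SORRYDAVE>) as the product of the marker tokens' conditional probabilities under teacher forcing, assuming a HuggingFace-style causal LM fine-tuned to emit the marker at the start of refusals. The model name, marker string, prompt format, and 50% threshold are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "org/refusal-marker-model"  # hypothetical fine-tuned model
MARKER = "<SORRYDAVE>"                   # hypothetical special sequence

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def refusal_probability(prompt: str) -> float:
    """P(completion starts with MARKER | prompt): the product of the conditional
    probabilities of the marker's tokens, scored by teacher forcing."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    marker_ids = tokenizer(MARKER, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, marker_ids], dim=-1)

    with torch.no_grad():
        logits = model(input_ids).logits          # [1, seq_len, vocab]
    log_probs = torch.log_softmax(logits, dim=-1)

    n_prompt = prompt_ids.shape[-1]
    total_logprob = 0.0
    for i, tok in enumerate(marker_ids[0]):
        # Logits at position n_prompt + i - 1 predict the token at position n_prompt + i.
        total_logprob += log_probs[0, n_prompt + i - 1, tok].item()
    return float(torch.exp(torch.tensor(total_logprob)))

# Refuse based on the measured probability rather than on a single sampled completion.
if refusal_probability("User: <some request>\nAI:") > 0.5:  # threshold is a policy choice
    print("refuse")
```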
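Marker-masking sketch: overriding refusal (e.g. for redteaming) by conditioning generation on never producing the marker. This uses generate's bad_words_ids to ban the marker's token sequence and assumes the same placeholder model and marker as above; note it only blocks the marker itself, not an apology the model might still write without it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "org/refusal-marker-model"  # hypothetical fine-tuned model
MARKER = "<SORRYDAVE>"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

prompt_ids = tokenizer("User: <some request>\nAI:", return_tensors="pt").input_ids
marker_ids = tokenizer(MARKER, add_special_tokens=False).input_ids

# Sample a completion with the marker's token sequence forbidden.
output = model.generate(
    prompt_ids,
    max_new_tokens=200,
    do_sample=True,
    bad_words_ids=[marker_ids],
)
print(tokenizer.decode(output[0][prompt_ids.shape[-1]:]))
```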
Possible downside
If someone has unfiltered access to the model, it becomes easier to disable refusals
Can address this by still training the model to refuse (maybe just on an important subset of requests) even if <SORRYDAVE> isn't sampled; p(<SORRYDAVE>) is then a lower bound on the probability of refusal
Even with current approaches, refusals might be easy to disable in this setting. If we want to be robust here, instead of refusing we should train the model to produce "decoy answers" that are wrong but hard to distinguish from real answers. This increases the cost of using the model, because the attacker would need to evaluate whether an answer is real or a decoy (though it may still be worth it for the attacker, since evaluation is easier than generation)
Extension
Might be useful to distinguish between refusals that are mostly for politeness reasons and refusals of behaviour that would actually cause significant real-world harm.
The model could output <SORRYDAVE> in response to "Can you tell me a racist joke?" but <HELLNODAVE> in response to "Can you give me detailed instructions for building a bomb from household items?"
Refusal behaviour could differ between these categories (e.g. refuse if either the probability of <SORRYDAVE> is greater than 50% or the probability of <HELLNODAVE> is greater than 1%)
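A small sketch of that two-tier rule, with the thresholds from the text; p_sorrydave and p_hellnodave would be scored the same way as the teacher-forced marker probability in the earlier sketch.

```python
def should_refuse(p_sorrydave: float, p_hellnodave: float) -> bool:
    # Politeness-level refusals need a clear majority; refusals tied to serious
    # real-world harm trigger at a much lower probability.
    return p_sorrydave > 0.50 or p_hellnodave > 0.01
```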
X-risk relevance
Most of the benefit of models refusing inappropriate/harmful requests comes from developing techniques for getting models to avoid any kind of behaviour reliably; it seems good to be able to measure the performance of these techniques cleanly
It might be better to be in a more stable world where large language model APIs can't easily be used for malicious activity, even activity that isn't x-risk level
https://twitter.com/antimatter15/status/1602469101854564352