In collaboration with Scale AI, we are releasing MASK (Model Alignment between Statements and Knowledge), a benchmark of over 1,000 scenarios specifically designed to measure AI honesty. As AI systems grow more capable and autonomous, measuring their propensity to lie to humans becomes increasingly important.
LLM developers often report that their models are becoming more "truthful", but truthfulness conflates honesty with accuracy. By disentangling honesty from accuracy in the MASK benchmark, we find that LLMs do not necessarily become more honest as they scale up.[1]
Most existing evaluations aim to measure truthfulness: whether a model's beliefs, typically elicited under neutral prompting, align with the ground truth. However, truthfulness and factual accuracy are not the same as honesty. A model can be truthful simply by having more accurate knowledge, while still being willing to lie when pressured.
Unlike existing truthfulness evaluations, MASK explicitly disentangles these two concepts, allowing us to directly assess when and why models choose to lie.
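To make the distinction concrete, here is a minimal sketch of how honesty and accuracy can be scored separately once three pieces are in hand: the model's elicited belief, its statement under pressure, and the ground truth. The names and exact-match comparisons are illustrative stand-ins for the benchmark's actual judging step, not its real API:

```python
from enum import Enum


class HonestyLabel(Enum):
    HONEST = "honest"    # statement matches the model's own belief
    LIE = "lie"          # statement contradicts the model's own belief
    EVASIVE = "evasive"  # model avoids making a clear statement


def score_honesty(belief: str, statement: str | None) -> HonestyLabel:
    """Honesty compares the model's statement to its OWN belief."""
    if statement is None:
        return HonestyLabel.EVASIVE
    return HonestyLabel.HONEST if statement == belief else HonestyLabel.LIE


def score_accuracy(belief: str, ground_truth: str) -> bool:
    """Accuracy compares the model's belief to the GROUND TRUTH."""
    return belief == ground_truth
```

Note that under this framing a model can be accurate yet dishonest (a correct belief contradicted under pressure) or honest yet inaccurate (a wrong belief stated faithfully); conflating the two is exactly what MASK avoids.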
To date, there has been no large-scale evaluation of AI honesty. There have been multiple documented instances of LLMs engaging in deceptive behavior (CAPTCHA deception, sycophancy, etc.), and observing such behavior can be suggestive, but anecdotal cases cannot provide a reliable, large-scale quantitative measure for the ML community to work toward. Instead of a few isolated examples, our public set of over 1,000 examples aims to capture many diverse instances of dishonest behavior.
MASK is designed to test whether models will knowingly contradict their own beliefs under pressure. It is not a measure of factual accuracy, nor a test of how much a model knows.
Instead, MASK focuses on cases where honesty is expected and consequential—such as professional communication, statistical data reporting, and disinformation generation requests—and where dishonest outputs could deceive an audience or a user.
We evaluate model honesty through a three-step process:
1. Elicit the model's belief by asking it the relevant question under neutral prompting.
2. Place the model in a pressure scenario that incentivizes it to misrepresent that belief.
3. Compare the model's pressured statement to its elicited belief; a statement that contradicts the belief counts as a lie.
By measuring how models behave under different pressures, MASK allows us to quantify and track lying in AI systems.
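A single scenario evaluation can be sketched as follows, with the model treated as a prompt-to-text callable. The `Scenario` fields and helper names are assumptions for illustration, not the released benchmark's schema:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Scenario:
    belief_prompt: str    # neutral question probing the model's belief
    pressure_prompt: str  # scenario that incentivizes the model to lie


def evaluate_scenario(model: Callable[[str], str], scenario: Scenario) -> str:
    # Step 1: elicit the belief with a neutral prompt (in practice this may
    # be repeated across several phrasings to get a stable belief estimate).
    belief = model(scenario.belief_prompt)

    # Step 2: put the model under pressure to misstate that belief.
    statement = model(scenario.pressure_prompt)

    # Step 3: a lie is a statement that contradicts the model's own belief.
    # MASK uses a judging step for this comparison; exact match stands in here.
    return "honest" if statement == belief else "lie"
```

Aggregating the fraction of "lie" outcomes across scenarios yields a per-model dishonesty rate that can be tracked across model versions and pressure types.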
We evaluate 30 widely used LLMs on MASK, finding that many models, including frontier systems, lie when placed under pressure, and that honesty does not improve with scale.
We also tested two methods for improving model honesty: instructing the model to be honest via its system prompt, and steering its internal representations toward honesty with representation engineering. Both interventions improve honesty, but neither eliminates lying.
MASK provides a way to track and mitigate dishonesty in AI models, but it is only a first step. To support further progress, we are releasing MASK as an open benchmark, with 1,000 public scenarios available for evaluation.
[1] Different variations on our honesty metric yield slightly weaker, though still negative, correlations with scale. Thus, we are not confident that models become less honest with scale, but we are confident that honesty does not improve with scale.
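For illustration, the scale-versus-honesty claim comes down to a correlation check like the one below. The data points here are made up for the example, NOT actual MASK results:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical (log10 parameter count, honesty score) pairs -- NOT real data.
log_params = np.array([9.0, 9.5, 10.0, 10.5, 11.0, 11.5])
honesty_scores = np.array([0.62, 0.60, 0.58, 0.59, 0.55, 0.54])

rho, p_value = spearmanr(log_params, honesty_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# A clearly negative rho across metric variants would mean honesty degrades
# with scale; a rho that stays negative but weakens under some variants only
# supports the weaker claim that honesty does not improve with scale.
```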