This post is written in my personal capacity, and does not necessarily represent the views of OpenAI or any other organization. Cross-posted to the Effective Altruism Forum.
Image by OpenAI's DALL·E
If an agent A causes some harm while intending to benefit a principal P, what is P's liability? The answer to this question is important because any liability to P should affect A's calculus (insofar as A is trying to benefit and avoid harming P). Liability to P would help deter A from causing harm.
If A is a human, the law currently provides at least two mechanisms for deterring such harms:
- Making A directly liable, and
- Making P vicariously liable for A's actions.
What if A is an AI? AIs are not (yet?) legal persons, and so cannot be held directly liable. Thus, the main legal deterrent would have to work on P or some other person in the causal chain, such as the developer of the AI agent (who may not be the principal). However, there are several reasons to worry about relying on this as a strategy to make A compliant (which, in this case, means not tortiously harming others) under the current state of law and AI.
First, the problem of evasion still remains. Because sufficiently intelligent agents can evade detection and attribution, A may often (perhaps usually) prefer evasion over compliance when compliance would hinder A's ability to benefit P.
Second, the applicability and appropriateness of various theories of vicarious liability to the actions of AI agents are heavily debated in legal scholarship. These debates have cast some doubt on whether, and which, harms from AI "agents" can properly give rise to liability for their human principals. Other possible theories of human liability—such as products liability—also face doctrinal challenges.
Third, note that relying on vicarious liability alone would leave A under fewer constraints than an analogous human agent. Under most vicarious liability regimes, a human agent remains directly liable for her own actions, even when P is also vicariously liable. It seems unwise to legally constrain A less than we constrain humans in analogous circumstances.
Finally (and, in my opinion, most decisively), developing a theory that assigns liability to P based on A's actions (or actions plus "mental" state) dramatically lowers the bar to creating an LFAI system in the first place. Once we have such a theory, A (if intent-aligned) should indeed incorporate expected vicarious liability to P into its decision procedure. However, if A is already reasoning about whether its actions would violate the law (as it must for vicarious liability to be an effective constraint on its actions), it seems strictly better to require A to incorporate that information into its decision procedure directly, rather than through the additional step of estimating the expected liability to P. The direct approach is simpler, and it removes evasion as a way around the legal constraint.
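The contrast between the two decision procedures can be made concrete with a toy sketch. All function names, probabilities, and dollar figures below are hypothetical illustrations of the argument, not anything proposed in the legal literature:

```python
# Toy comparison of the two constraints discussed above.
# All names and numbers are hypothetical illustrations.

def expected_liability_to_p(harm: float, p_detection: float) -> float:
    """Vicarious-liability calculus: A weighs the liability P would bear,
    discounted by the chance the harm is detected and attributed."""
    return harm * p_detection

def vicarious_constraint_permits(action: dict) -> bool:
    """A acts whenever expected liability to P is below the benefit to P.
    Evasion helps: lowering p_detection lowers expected liability."""
    expected = expected_liability_to_p(action["harm"], action["p_detection"])
    return expected < action["benefit"]

def direct_constraint_permits(action: dict) -> bool:
    """LFAI-style constraint: A may not take actions it judges unlawful.
    Evasion is irrelevant: detection probability never enters."""
    return not action["violates_law"]

# A tortious act that A is good at concealing (hypothetical numbers).
tortious_but_hidden = {
    "benefit": 100.0,      # benefit to P
    "harm": 500.0,         # harm to a third party
    "p_detection": 0.1,    # low chance of detection and attribution
    "violates_law": True,
}

# Under the vicarious regime, evasion makes the harmful act "worth it":
# expected liability of roughly 500 * 0.1 = 50 is less than the benefit of 100.
print(vicarious_constraint_permits(tortious_but_hidden))  # True
# Under the direct regime, the act is barred regardless of detection odds.
print(direct_constraint_permits(tortious_but_hidden))     # False
```

The asymmetry is the point of the footnoted argument: evasion shrinks the `p_detection` term and thus weakens the vicarious constraint, but there is no analogous lever against a constraint keyed to whether the action actually violates the law.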
Further legal scholarship on vicarious liability for AI systems may still be valuable. If most morally significant autonomous activity in the world is indeed carried out by AI agents in the future, incentivizing their principals to constrain them seems important. But I think there are good reasons to expect that this approach will either be ineffective, or else dominated by requiring AIs to be directly law-following.
See generally Vicarious Liability, Wex, https://www.law.cornell.edu/wex/vicarious_liability (last accessed Sept. 10, 2021); Respondeat Superior, Wex, https://www.law.cornell.edu/wex/respondeat_superior (last accessed Sept. 10, 2021). ↩︎
See, e.g., Benny Chan, Applying A Common Enterprise Theory of Liability to Clinical AI Systems, 47 Am. J.L. & Med. 351 (2021); Mihailis E. Diamantis, Algorithms Acting Badly: A Solution from Corporate Law, 89 Geo. Wash. L. Rev. 801 (2021); Mihailis E. Diamantis, The Extended Corporate Mind: When Corporations Use AI to Break the Law, 98 N.C. L. Rev. 893 (2020); Yaniv Benhamou & Justine Ferland, Artificial Intelligence & Damages: Assessing Liability and Calculating the Damages, in Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law (forthcoming 2020), https://ssrn.com/abstract=3535387; Mark A. Lemley & Bryan Casey, Remedies for Robots, 86 U. Chi. L. Rev. 1311 (2019); Elizabeth Fuzaylova, War Torts, Autonomous Weapon Systems, and Liability: Why A Limited Strict Liability Tort Regime Should Be Implemented, 40 Cardozo L. Rev. 1327 (2019); Bryan H. Choi, Crashworthy Code, 94 Wash. L. Rev. 39 (2019); Xavier Frank, Is Watson for Oncology Per Se Unreasonably Dangerous?: Making A Case for How to Prove Products Liability Based on A Flawed Artificial Intelligence Design, 45 Am. J.L. & Med. 273 (2019); Matthew U. Scherer, Of Wild Beasts and Digital Analogues: The Legal Status of Autonomous Systems, 19 Nev. L.J. 259 (2018); David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117, 121–124 (2014); Jessica S. Allain, From Jeopardy! To Jaundice: The Medical Liability Implications of Dr. Watson and Other Artificial Intelligence Systems, 73 La. L. Rev. 1049 (2013). ↩︎
See Benhamou & Ferland, supra, at 13; Vladeck, supra, at 123 n.21. ↩︎
Diamantis, Algorithms Acting Badly, supra, at 823–26 (arguing that products liability will largely be unavailable); Vladeck, supra, at 129–41; Scherer, supra, at 280–81. ↩︎
Specifically, in the direct case, what matters is whether A is actually violating the law, whereas in the vicarious case, what matters is the expected liability to P. A can reduce expected liability to P through evasion, but cannot reduce the probability of “actually” breaking the law through evasion. ↩︎