This is a post in a series on "AI Benefits." It is cross-posted from my personal blog. For other entries in this series, navigate to the AI Benefits Blog Series Index page.

This post is also discussed on the Effective Altruism Forum.

For comments on this series, I am thankful to Katya Klinova, Max Ghenis, Avital Balwit, Joel Becker, Anton Korinek, and others. Errors are my own.

If you are an expert in a relevant area and would like to help me further explore this topic, please contact me.

AI Benefits Post 4: Outstanding Questions on Selecting Benefits

The final posts in this series contain questions I am currently contemplating about AI Benefits, which I’d love help and feedback on! There are two such posts: questions about selecting Benefits (this post) and questions about governing the Benefits process (the next post). The line between these categories is blurry because the governance process will (and indeed is designed to) influence the selection process. However, an intuitive—if imperfect—distinction between these is:

  • “Selection” questions focus on identifying and choosing between the types of Benefits that a Benefactor could produce.
  • “Governance” questions focus on the social institutions that affect the selection process and enhance its legitimacy.

Framing: A Portfolio Approach to Benefits

I currently think that large-scale AI Benefactors (e.g., technology companies like OpenAI or DeepMind) should initially adopt a portfolio approach to Benefits. This is certainly true for Benefactors that are risk-averse. However, even risk-neutral Benefactors should consider a portfolio approach due to:

  1. A need to try many different Benefits to discover which are most beneficial and learn how to deploy Benefits well, and
  2. The heterogeneity of the world, which means that an undiversified approach to Benefits might be too unequal.[1]

Topics in this post generally concern how to allocate resources within the overall portfolio of Benefits. Note that these topics are not fully independent, and so might intersect.

Direct versus Indirect Benefits

As noted in a previous post, a Benefactor must choose between providing direct (AI-driven) and indirect (e.g., cash) Benefits. The best approach will probably vary between organizations given differences in existing resources.

Organizations of all types often have to choose between various methods of meeting their ultimate goals (usually maximizing profit). In the context of AI Benefits, this process is complicated by the multiobjective and difficult-to-measure nature of the goal.

I expect that navigating the choice between direct and indirect Benefits, and measuring their relative impact, will require insights from economics, ethics, decision theory, organizational psychology, and business decision-making. I also expect lessons learned from impact-based charity evaluators like GiveWell, Animal Charity Evaluators, and Open Philanthropy to be helpful in guiding these decisions.

Explore or Exploit

Benefactors might have to choose between high- and low-confidence approaches to generating Benefits. A Benefactor's confidence in a Benefit here means how much knowledge the Benefactor has about the expected value of the Benefit. High-confidence approaches might include cash transfers or subsidizing an existing AI service; because these approaches are well-studied, the Benefactor should be able to form a reasonably confident estimate of their beneficiality. Low-confidence approaches might include developing a speculative new AI application or adapting existing products to new tasks; because these require new work and investment that might fail with unknown probability, they are low-confidence.

This issue in Benefits allocation resembles a multi-armed bandit problem, suggesting that the multi-armed bandit literature should be useful. More general insights from economics, statistics, computer science, and decision theory should also help.
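To make the bandit analogy concrete, here is a minimal sketch in Python of an epsilon-greedy allocation over hypothetical Benefit options. The option names and payoff numbers are invented purely for illustration; a real Benefactor would observe noisy evidence of beneficiality, not simulated rewards.

```python
import random

# Hypothetical per-unit value of each Benefit option. These numbers are
# purely illustrative; in practice the Benefactor does not know them.
TRUE_VALUES = {"cash_transfer": 1.0, "subsidized_service": 0.9, "new_application": 1.6}

def noisy_payoff(option: str) -> float:
    """Simulate one observation of a Benefit's value: true value plus noise."""
    return TRUE_VALUES[option] + random.gauss(0, 0.5)

def epsilon_greedy(rounds: int = 5000, epsilon: float = 0.1, seed: int = 0):
    """Split effort between exploring options and exploiting the best estimate."""
    random.seed(seed)
    counts = {o: 0 for o in TRUE_VALUES}
    means = {o: 0.0 for o in TRUE_VALUES}
    for _ in range(rounds):
        if random.random() < epsilon:
            option = random.choice(list(TRUE_VALUES))  # explore a random option
        else:
            option = max(means, key=means.get)         # exploit the best estimate
        reward = noisy_payoff(option)
        counts[option] += 1
        # Incremental update of the running mean for this option.
        means[option] += (reward - means[option]) / counts[option]
    return counts, means

counts, means = epsilon_greedy()
```

The sketch illustrates why some exploration of low-confidence Benefits is worthwhile: a purely exploitative Benefactor could lock in on a familiar, well-studied option and never discover a more beneficial one.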

Supranational versus National Allocation

One important decision is whether Benefits should be allocated at the supranational (global) level or at a national level for each nation.[2] Motivating this problem is the AI Benefactor’s desire to provide Benefits to many people and benefit people equally.

I think the distinction between national and supranational Benefits is somewhat intuitive, but perhaps hard to define precisely. Supranational Benefits, as I use the term, are things that citizens of nearly any nation can benefit from and are generally non-rivalrous. Since they are non-rivalrous, supranational Benefits will be public or club goods. For example, using AI to develop new public-domain vaccines is a supranational benefit, since the knowledge of the vaccine can be used everywhere. (Of course, downstream necessary steps like vaccine manufacturing and distribution are not supranational, but the benefit here is the knowledge of how to make the vaccine.) Universally accessible resources like free web applications would also qualify.[3]

If the AI Benefactor decides to provide some Benefits not on the supranational level, the obvious alternative is to divide up remaining resources between countries on the basis of population and make further decisions for each nation.[4] Within-nation allocation could also include the provision of national public or club goods.

Given their non-rivalrous nature, supranational goods probably tend to have lower overhead and upkeep costs per person benefitted. Once the formula for a vaccine exists in the world, the creator does not need to direct additional resources to its upkeep (other than recording the vaccine formula somewhere) and communications technologies can freely disseminate the formula.

However, many supranational goods would likely impact different people differently. In the vaccine example, people who would never have suffered from the disease it prevents (e.g., because of where they live or their existing access to medical care) derive little, if any, actual benefit from the existence of the vaccine. Even a universally accessible web application will be of varying usefulness: some people use free resources like Wikipedia or a public library much more than others, even when many people use them. Furthermore, most supranational Benefits of this sort can be hampered downstream, such as by a government blocking a website or an inability to distribute a vaccine.

National Benefits, by contrast, can be much better tailored to a national (or even sub-national) population’s needs and available resources. However, reaching each member of each country’s population requires a lot of work. Furthermore, dividing up a pool of Benefits to each country per capita is probably not worthwhile for small amounts of Benefits, since each national “slice” of Benefits might be quite small.[5]

Given all these considerations, it seems like most Benefactors with a commitment to universality and equality should commit to supranational Benefits for small pools of resources. Even though such Benefits might not be truly equal or universal, they approximate these values much better than national approaches and are probably more effective given the high fixed costs of attempting national distributions. However, if an organization generates truly large amounts of Benefits, a national per capita strategy seems more appealing.[6]
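The scale argument can be made concrete with back-of-the-envelope arithmetic. The pool size and population figures below are hypothetical, chosen only to show how thin a strict per-capita split can become:

```python
# Hypothetical figures for illustration only.
pool = 50_000_000           # a $50M pool of distributable Benefits
world_population = 8_000_000_000

# A strict global per-capita split gives each person well under one cent.
per_capita = pool / world_population

def national_slice(national_population: int) -> float:
    """A nation's share of the pool under a per-capita division."""
    return pool * national_population / world_population

small_nation = national_slice(5_000_000)   # a nation of 5 million people
print(f"Per person: ${per_capita:.4f}")
print(f"Slice for a nation of 5M: ${small_nation:,.0f}")
```

At this scale, the fixed costs of standing up a separate distribution program in each country could easily swamp each national slice, which is the intuition behind preferring supranational goods for small pools.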

Thus, key questions that influence this allocative decision include:

  • When does a pool of Benefits become big enough to justify allocating Benefits to each nation per capita?
  • More generally: For any given size of the pool of Benefits, what is the proper allocation to supranational and national approaches?
  • What types of supranational Benefits best advance the values I enumerated above?
  • For resources not allocated on a supranational level, when should allocations be made at a national level versus a subnational (e.g., regional or provincial) level?

I expect that answering these questions will require insights from economics, political science, public administration, sociology, psychology, international relations, and decision theory.

Producing Benefits Now or Later

Benefactors also face a temporal choice of when to deliver Benefits: whether to invest present resources in hopes of generating even greater future Benefits, or to distribute existing resources immediately. This choice resembles a decision that for-profit businesses face regularly: whether to benefit shareholders by reinvesting profits (e.g., in new capital, new product lines, or corporate acquisitions) or by declaring dividends. The same temporal trade-off has also been discussed in the context of the optimal timing of philanthropy. As in those contexts, this decision for Benefactors will require insights from economics, decision theory, psychology, and ethics.
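A minimal sketch of this timing comparison, assuming a single growth rate g for reinvested resources and a discount rate r on future Benefits (both figures are hypothetical, and real versions of this decision involve deep uncertainty about both):

```python
def value_of_waiting(resources: float, years: int, g: float, r: float) -> float:
    """Present value of investing now and distributing Benefits after `years`,
    assuming compound growth at rate g and discounting future Benefits at rate r."""
    return resources * ((1 + g) / (1 + r)) ** years

budget = 1_000_000.0

# If expected growth outpaces the discount rate, waiting looks better...
wait_wins = value_of_waiting(budget, years=10, g=0.10, r=0.05)
# ...and if discounting dominates, immediate distribution looks better.
wait_loses = value_of_waiting(budget, years=10, g=0.03, r=0.08)
```

The comparison turns entirely on the gap between g and r, which is why estimating those two quantities, and how they change over time, is where the disciplinary expertise is needed.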

Expert Input

If you are an expert in one of the identified fields and are interested in contributing to this project, I would love to hear from you!


  1. Put differently: by tailoring Benefits to salient local differences in the world, more people should be able to derive significant Benefits. ↩︎

  2. Several commenters to an earlier draft of this piece questioned whether geographical/national variation was the most important variation to consider for more granular allocation of Benefits. One might instead differentiate Benefits by income bands. My intuition is that nationality is a uniquely strong proxy for relevant conditions like market access, rule of law, income, and healthcare access. However, I am open to the possibility that other variables should be used for sub-global groupings, and I think this question deserves a lot of attention. ↩︎

  3. Of course, there are currently significant issues with access to internet resources. ↩︎

  4. Each AI Benefactor could also focus on his or her own nation. Of course, this would likely lead to highly unequal outcomes globally, unless each nation had a national Benefactor. ↩︎

  5. A final wrinkle in this discussion is that, if a Benefactor allocates Benefits on a national level, then the Benefactor faces a similar decision for that nation: the Benefactor could provide national public or club goods or provide individuated Benefits like cash or some bundle of resources. ↩︎

  6. The analogy is imperfect because in all cases the ultimate benefit shareholders derive is monetary (from dividends or heightened stock prices, which are a proxy for future dividends), whereas many direct forms of AI Benefits may not be monetary (e.g., the vaccine example above). ↩︎
