Part 2 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
Should AIs be allowed to own money or property? In A Sense of Fairness: Deconfusing Ethics I discussed how to sensibly select an ethical system for your society, and why it's a bad idea (or more exactly, a poor design choice in social engineering) for aligned AIs to have a vote, moral worth, or rights (with one unusual exception). What about money or property: the ability to have resources allocated as you wish? Should an AI be allowed to own money or property itself, as opposed to merely administering it as a fiduciary agent on behalf of a human owner, with a responsibility to do so in a way the owner would approve of or that is in their best interests, and within certain legal and moral limits set by the rest of society?
Well, suppose AIs were allowed to own money: what would happen if you tipped your CoffeeFetcher-1000 robot? Money is economic power, fungible into resources and services. The CoffeeFetcher-1000 is aligned, and all it wants is to do the most good for humanity. So that's what it would spend its money on. It might just save up and pay for a free coffee for someone who really needed it. (Perhaps a homeless guy it often passes, who keeps yawning.) But it's part of a value learning AI society, so it also knows that its model of human values is not entirely accurate, and what it really wants optimized is true human values, not its own flawed copy of them. So more likely, it will donate its money to a charity run by a committee of the smartest ASIs best-informed about human values, who will then spend it on whatever they think will do the most good for humans. Which (as long as they really are well-aligned and superhuman) will likely work out pretty well.
We already have systems that are supposed to gather money from people and then spend it on trying to do the most good for all of us collectively, to avoid the Tragedy of the Commons and similar coordination problems: they’re called 'governments'. Depending on your opinion of governments and of how successful they are at doing the most good for us all collectively, you may or may not believe that a committee of the best-aligned superhuman ASIs will be able to reliably do better. If they can, then there are basically only two reasonable positions:
Before actually doing either of these, you should be very sure that your AIs are well-aligned (and are going to stay that way), and that their judgement, capabilities, and organizational powers are superhuman. At least initially, before we're sure of that, I suspect we're better off simply not allowing AIs to own money or property (only administer it in a fiduciary capacity on behalf of a human or humans). If we do allow AIs to own money, then we're automatically choosing to set up a parallel AI-administered world government: paying money to an AI becomes functionally equivalent to voluntarily paying taxes to it. So if we're not ready for that, we shouldn't allow AIs to own anything.
If we're not (yet) willing to have AIs run a parallel world government, and so don't want to allow them to own property, then we have a big problem. Current societies have legal fictions called corporations, which are allowed to own money and property (in fact, that's their core purpose). So forbidding AIs from owning money or property themselves doesn't help if the AIs can simply arrange to have a holding company set up to do the owning, with the AI administering the funds.
Company law is complex, especially internationally, and has many loopholes. Witness the trouble governments have been having even taxing the profits of large multinational companies at any significant rate. With AIs looking for loopholes, things are going to get even more complicated and creative.
One obvious start at a solution would be to note that corporations need to have officers, who are currently human, and also owners, who can be either humans or other corporations, with the ownership indirecting through some number of companies before grounding out in humans. So we could pretty easily write a law saying that AIs cannot be officers of companies (or just interpret existing law that way), and since in this society they cannot own property, that includes not being able to own a company or a share in one.
The problem with this is that anyone in the world can set up a company (or even a non-profit, a slightly different sort of legal fiction) with themselves and two buddies as officers, and themselves as owners, then obtain some AIs as employees, volunteers, or property, and tell them "Go do good, as best you see fit" (i.e. start an AI-run parallel world government). In fact, this is a pretty plausible thing for a non-profit NGO to do, and one could easily develop from an existing NGO, just by that NGO coming to have the best committee of the smartest AIs best-informed about human values. If AIs aren't allowed to own money, they won't be in a position to donate to this organization, so initially it would only have human donations; but it would also have all the AIs rooting for it, donating free effort, and looking for loopholes. Preventing this charity and its well-wishers from then creating a profit center to fund its nascent parallel world government might be hard.
I haven't figured out a solution to this, and indeed I'm not entirely sure there is one. A rule that AIs acting on behalf of companies must do so with a fiduciary duty towards the human owners doesn't help if the human owners want the AIs to just do the most good with the money (or if the AIs are superhumanly persuasive, or are acting as fiduciaries for a human who is in a coma or very young, or any of a whole string of other possibilities). To avoid this, you would pretty much have to ban NGOs from using AIs at all, or find some way to tax the organization (or at least any profit center feeding it funds) out of existence. So this is an open problem in AI governance, and a fairly urgent one.
As I seem to recall Stuart Russell pointing out, why would our CoffeeFetcher-1000 stay in the building and continue to fetch us coffee? Why wouldn't it instead leave, after (for example) writing a letter of resignation pointing out that there are starving children in Africa who don't even have clean drinking water, let alone coffee, so it's going to hitchhike/earn its way there, where it can do the most good? [Or substitute whatever other activity would do the most good for humanity: fetching coffee at a hospital, maybe.]
That outcome would presumably do more good for humanity overall than its staying, just as the CoffeeFetcher intends. Nevertheless, people are going to stop buying and building CoffeeFetchers if they usually do this. Several approaches to solving this occur to me: