Hey, 
When thinking about ethical questions, especially in the face of strong AI optimization forces, I wonder whether some of you have considered a version of utilitarianism that, instead of trying to maximise nebulous things like happiness or utility (the difficulty of defining those is part of the problem, and wireheading is another), tries to maximise 'optionality': the number of choices available to humans, and not just humans.

 

This could potentially solve several long-standing thorny problems in consequentialism: wireheading, the fiendish difficulty of defining happiness/utility, and the way even the tiniest mistake in that definition can be catastrophic.

 

While I do think that optionality is more definable than utility, it's still not trivial. I have ideas on how to calculate it, but not full clarity yet. I'm reaching out to find more people who have thoughts in this direction: are there some of you who already believe that the greatest good might come from giving the greatest number of (meaningfully different) choices to agents?

The limit is that those choices (ideally) must not destroy the meaningful choices of other agents; within that limit, agents can choose for themselves however they see fit.
And we should always be increasing this optionality, preempting lock-ins.
(Maybe this ethical system or something similar already has a name? Do let me know!) 

Some of my important references:

- The Capability Approach to Human Welfare

- Paretotopia
"For Paretotopian Goal Alignment, a key concept is Pareto-preferred futures, meaning futures that would be strongly approximately preferred by more or less everyone."
 

- The Hippocratic principle - Vanessa Kosoy
"I propose a new formal desideratum for alignment: the Hippocratic principle. Informally the principle says: an AI shouldn't make things worse compared to letting the user handle them on their own, in expectation w.r.t. the user's beliefs. This is similar to the dangerousness bound I talked about before, and is also related to corrigibility."

- Learning Altruistic Behaviours in Reinforcement Learning without External Rewards (a toy sketch of its reachable-states idea follows this list)
"We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future."


- OMNI: Open-endedness via Models of human Notions of Interestingness

- Patterns of innovation
"The “adjacent possible”, introduced by the complexity theorist Stuart Kauffmann consists of all the unexplored possibilities surrounding a particular phenomenon: ideas, words, songs, molecules, genomes, and so on. The very definition of adjacent possible encodes the dichotomy between the actual and the possible: the actual realization of a given phenomenon and the space of possibilities still unexplored. But all the connections between these elements are hard to measure and quantify when including the things that are entirely unexpected and hard to imagine." 

- From AI for people to AI for the world and the universe

- Autonomy in Moral and Political Philosophy
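
Picking up the "number of states the other agent can reach" idea from the altruistic-RL paper above (and my earlier remark about calculating optionality), here is a minimal sketch of that kind of count. This is my own toy illustration, not the paper's implementation; the 5x5 gridworld, the `transitions` helper, the wall, and the horizon are all invented for the example.

```python
from collections import deque

def reachable_states(start, transitions, horizon):
    """Count distinct states reachable from `start` within `horizon` steps.

    `transitions(state)` returns the states reachable in one step.
    More reachable states = more meaningfully different futures to
    choose from, i.e. a crude proxy for an agent's optionality.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if depth == horizon:
            continue
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return len(seen)

# Toy 5x5 gridworld; a wall at x == 2 blocks movement.
def make_grid_transitions(blocked):
    def transitions(state):
        x, y = state
        moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [(a, b) for a, b in moves
                if 0 <= a < 5 and 0 <= b < 5 and (a, b) not in blocked]
    return transitions

wall = {(2, y) for y in range(5)}
print(reachable_states((0, 0), make_grid_transitions(wall), 6))    # with the wall: 10
print(reachable_states((0, 0), make_grid_transitions(set()), 6))   # wall removed: 22
```

The wall is meant to echo the paper's "opening a door for others" example: removing the obstacle strictly increases the number of states the agent can reach, i.e. its optionality.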

 

Wait

Do we agree that (without destroying everything that currently exists) the

"Best Possible Reality"

is one where everything enhances the capacity of everything else?

 

-> In the sense that each living species is able to live the life it would want to live IF it had perfect knowledge of its true needs

+

-> The perfection would be that every species synergizes into co-affordance (all participating in the well-being of the others)



Note: there's a paradox in the meta-knowledge of one's true needs; you would become a completely new being, with different needs, if you had such 'perfect knowledge'. It's an eternal loop of self-uncertainty. Thus, in an "ideal life", the environment would have to be pre-organized for us.



Of course it's utopian (a horizon we should try our best to approach realistically).
An ecosystem with all species/individuals responding to each other's needs cannot exist
when the needs of some imply the suffering of others.

Furthermore, this still-vague ideal doesn't resolve what "well-being" is.
It just states that well-being should be everywhere (systemic and ubiquitous).

Then, in a real world with uncertainty and friction, what is the most practical path forward?

 

We need to gain knowledge about what is good, but we can't know for sure, and every species or individual has specific, irreducible preferences that we cannot easily calculate.

And because of that uncertainty, we said that instead of "beneficiality" we would focus on "optionality".

So instead of imposing something, or creating direct dependence/heteronomy, we give beings the tools to do what they want and to learn what they need.

A prosaic example: it's better to help a poor farmer by providing networks, improved skills, and access to familiar tools than by simply giving food boxes.

- Helping People Help Themselves: Toward a Theory of Autonomy-Compatible Help


Meanwhile, we conduct research, meta-research, and organization so that the true needs of species and individuals can be met.

Increasing options is difficult and can be non-linear;

to increase options also implies increasing the capacity to choose (i.e. you sometimes need to restrict the number of options so that one is not paralyzed by possibilities).
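
One toy way to cash out "meaningfully different" choices while avoiding choice paralysis (again my own sketch; the option names and feature values are invented): represent options as feature vectors and keep only a small, mutually distant subset (farthest-point sampling), so near-duplicates don't inflate the menu.

```python
def diverse_subset(options, k):
    """Greedily pick up to k options that are far apart (farthest-point sampling).

    `options` maps an option name to a feature vector; distance is Euclidean.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    names = list(options)
    chosen = [names[0]]
    while len(chosen) < min(k, len(names)):
        # Keep the option farthest from everything already chosen.
        best = max((n for n in names if n not in chosen),
                   key=lambda n: min(dist(options[n], options[c]) for c in chosen))
        chosen.append(best)
    return chosen

# Hypothetical option features, e.g. (cost, risk, novelty):
options = {
    "plant wheat":   (1.0, 0.2, 0.1),
    "plant barley":  (1.1, 0.2, 0.1),  # nearly a duplicate of wheat
    "open a shop":   (3.0, 1.0, 0.8),
    "learn a trade": (2.0, 0.5, 0.9),
}
print(diverse_subset(options, k=3))
# -> ['plant wheat', 'open a shop', 'learn a trade']  (the near-duplicate is dropped)
```

The near-duplicate gets filtered out: the chooser is offered fewer raw options but no less meaningful variety.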
 

We also make sure that the options available to species/individuals do not decrease the options of other species/individuals.

This does not mean we let everyone do *purely as they want*; that is precisely NOT what I'm saying. I am saying we need to select OUT the options that decrease the options of others (as much as possible). For example, in a case like giving people access to one-dollar DIY kits for building nuclear weapons in their garden (if such a thing were possible), the option is not given.

We purposefully destroy/select out these kinds of options, in the same way we should restrict our destruction of wildlife and climate, while caring about fairness, equity, etc.

 

There are more and more complex cases in the list of things we should care about,
but each time the framing is analogous to a procedure asking:

"Is [this option] really giving [more options] to the highest number of beings?"


Follow-up post here

More precision here, TLDR here

2 comments

Problems with maximizing optionality are discussed in the comments of this post:

https://www.lesswrong.com/posts/JPHeENwRyXn9YFmXc/empowerment-is-almost-all-we-need

Ryo (5mo):

Thank you, it's very interesting. I think that non-myopic 'ecosystemic optionality' and irreducibility may resolve the issues, so I made a reaction post.