AI ALIGNMENT FORUM

Michele Campolo

Lifelong recursive self-improver, on his way to exploding really intelligently :D

More seriously: my posts are mostly about AI alignment, with an eye towards moral progress. I have a bachelor’s degree in mathematics, I did research at CEEALAR for four years, and now I do research independently.

A fun problem to think about:
Imagine it’s the year 1500. You want to make an AI that is able to tell you that witch hunts are a terrible idea and to convincingly explain why, despite the fact that many people around you seem to think the exact opposite. Assuming you have the technology, how do you do it?

I’m trying to solve that problem, with the difference that we are in the 21st century now (I know, massive spoiler, sorry for that).

The problem above, together with the fact that I’d like to avoid producing AI that can be used for bad purposes, is what motivates my research. If this sounds interesting to you, have a look at these two short posts. If you are looking for something more technical, consider setting some time aside to read these two.

Feel free to reach out if you relate!

You can support my research through Patreon here.

Work in progress:

  • Maybe coming soon: an alignment technique (not necessarily for making AI that is good at ethics or cause prioritisation) that can be applied to language models
  • More probably but less soon: a follow-up to these two posts (more practical, less theoretical and speculative)
  • Hard to judge if/when: a nicer version of the argument in here

Sequences

Ongoing project on moral AI

Comments

With enough knowledge, any conscious agent acts morally
Michele Campolo · 2mo

If torturing an AI only teaches it to avoid things that are bad-for-it, without caring about suffering it doesn't feel, the argument doesn't work.

I’m not sure why you are saying the argument does not work in this case: what about all the other things the AI could learn from other experiences or teachings? Below I copy a paragraph from the post:

However, the argument does not say that initial agent biases are irrelevant and that all conscious agents reach moral behaviour equally easily and independently. We should expect, for example, that an agent that already gets rewarded from the start for behaving altruistically will acquire the knowledge leading to moral behaviour more easily than an agent that gets initially rewarded for performing selfish actions. The latter may require more time, experiences, or external guidance to find the knowledge that leads to moral behaviour.

With enough knowledge, any conscious agent acts morally
Michele Campolo · 2mo

Thank you for this suggestion, I appreciate it! I’ve read the review I found here and it seems that parts of that account of ethics overlap with some ideas I’ve discussed in the post, in particular the idea of considering the point of view of all conscious (rational) agents. Maybe I’ll read the entire book if I decide to reformulate the argument of the post in a different way, which is something I was already thinking about.

How did you find that book?

With enough knowledge, any conscious agent acts morally
Michele Campolo · 2mo

This type of argument has the problem that other people's negative experiences aren't directly motivating in the way that yours are... there's a gap between bad-for-me and morally-wrong.

What type of argument is my argument, from your perspective? I also think that there is a gap between bad-for-me and bad-for-others. But both can affect action, as happens in the thought experiment in the post.

To say that something is morally-wrong is to say that I have some obligation or motivation to do something about it.

I use a different working definition in the argument. And working definitions aside, more generally I think morality is about what is important, what is better or worse, and what is worth doing or worth guiding action by, which is not necessarily tied to obligations or motivation.

A large part of the problem is that the words "bad" and "good" are so ambiguous. For instance, they have aesthetic meanings as well as ethical ones. That allows you to write an argument that appears to derive a normative claim from a descriptive one.

See

https://www.lesswrong.com/posts/HLJGabZ6siFHoC6Nh/sam-harris-and-the-is-ought-gap

Ambiguous terms can make it more difficult to understand what is correct, but it is still possible to reason with them and reach correct conclusions; we do it all the time in science. See Objection: lack of rigor.

With enough knowledge, any conscious agent acts morally
Michele Campolo · 2mo

Consider this stamp collector construction: It sends and receives internet data, it has a magically accurate model of reality, it calculates how many stamps would result from each sequence of outputs, and then it outputs the one that results in the most stamps.

I’m not sure why you left out the “conscious agent” part, which is the fundamental premise of the argument. If you are describing something like a giant (artificial) neural network optimised to output actions that maximise stamps while receiving input data about the current state of the world, that seems possible to me and the argument is not about that kind of AI. You can also have a look at “Extending the claim and its implications to other agents”, under Implications for AI.

At the moment we think systems like that are not conscious; otherwise we would also say that current LLMs are somewhat conscious, I guess, given how big they already are. In particular, for that kind of AI it doesn’t seem that knowledge affects behaviour in the same way it does for conscious agents. You wrote that the stamp collector knows that stamps are not morally important; more generally, does it think they are important, or not? I am not even sure “thinking something is important” applies to that stamp collector, because whatever the answer to the previous question is, the stamp collector produces stamps anyway.

(Digressing a bit: now I’m also considering that the stamp collector, even if it was conscious, might never be able to report it is conscious as we report being conscious. That would happen only if an action like “say I’m conscious” happened to be the action that also maximises stamps in that circumstance, which might never happen... interesting.)

If you are describing a conscious agent as I talk about it in the post, then A6 still applies (and the argument in general). With enough knowledge, the conscious & agentic stamp collector will start acting rationally as defined in the post, eventually think about why it is doing what it is doing and whether there is anything worth doing, blah blah as in the argument, and end up acting morally, even if it is not sure that something like moral nihilism is incorrect.

In short, if I thought that the premise about being a conscious agent was irrelevant, then I would have just argued that with enough knowledge any AI acts morally, but I think that’s false. (See Implications for AI.)

Could I be wrong about conscious agents acting morally if they have enough knowledge? Sure: I think I say it more than once in the post, and there is a section specifically about it. If I’m wrong, what I think is most likely to be the problem in the argument is how I’ve split the space of ‘things doing things in the world’ into conscious agents and things that are not conscious agents. And if you have a more accurate idea of how this stuff works, I’m happy to hear your thoughts! Below I’ve copied a paragraph from the post.

Actually, uncertainty about these properties is a reason why I am making the bold claim and discussing it despite the fact that I’m not extremely confident in it. If someone manages to attack the argument and show that it applies only to agents with some characteristics, but not to agents without them, that objection or counterargument will be helpful for understanding which properties, if satisfied by an AI, make that AI act morally in conditions of high knowledge.

Doing good... best?
Michele Campolo · 2mo

But you were arguing for them, weren't you? It is the arguments that fail to convince me. I was not treating these as bald assertions.

No, I don’t argue that “a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious”. I think those statements are false.

Doing good... best?
Michele Campolo · 2mo

Hey, I think your comment is slightly misleading:

I don't see a reason to suppose that a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious

I do not make those assumptions.

nor, if conscious, that it would value the happiness of other conscious entities

I don’t suppose that either; I give an argument for it (in the longer post).

Anyway:

I am not convinced by the longer post either

I’m not surprised: I don’t expect my argument to move masses of people who are convinced of the opposite claim; rather, I expect that someone who is uncertain and open-minded can read it and maybe find something useful in it and/or a reason to update their beliefs. That’s also why I wrote that the practical implications for AI are an important part of that post, and why I made some predictions instead of focusing just on philosophy.

Doing good... best?
Michele Campolo · 2mo

I am not assuming a specific metaethical position, I’m just taking into account that something like moral naturalism could be correct. If you are interested in this kind of stuff, you can have a look at this longer post.

Speaking of this, I am not sure it is always a good idea to map these discussions onto specific metaethical positions, because it can make updating one’s beliefs more difficult, in my opinion. To put it simply, if you’ve told yourself that you are, e.g., a moral naturalist for the last ten years, it can be very difficult to read some new piece of philosophy arguing for a different (maybe even opposite) position, then rationally update and tell yourself something like: “Well, I guess I’ve just been wrong all this time! Now I’m a ___ (new position).”

Four visions of Transformative AI success
Michele Campolo · 2y

This was a great read, thanks for writing!

Despite the unpopularity of my research on this forum, I think it's worth saying that I am also working towards Vision 2, with the caveat that autonomy in the real world (e.g. with a robotic body) or on the internet is not necessary: one could aim for an independent-thinker AI that can do what it thinks is best only by communicating via a chat interface. Depending on what this independent thinker says, different outcomes are possible, including the outcome in which most humans simply don't care about what this independent thinker advocates for, at least initially. This would be an instance of Vision 2 with a slow and somewhat human-controlled, instead of rapid, pace of change.

Moreover, I don't know what views they have about autonomy as depicted in Vision 2, but it seems to me that Shard Theory and some research bits by Beren Millidge are also to some extent adjacent to the idea of AI which develops its own concept of something being best (and then acts towards it); or, at least, AI which is more human-like in its thinking. Please correct me if I'm wrong.

I hope you'll manage to make progress on brain-like AGI safety! It seems that various research agendas are heading towards the same kind of AI, just from different angles.

Free agents
Michele Campolo · 2y

I get what you mean, but I also see some possibly important differences between the hypothetical example and our world. In the imaginary world where oppression has increased and someone writes an article about loyalty-based moral progress, maybe many other ethicists would disagree, saying that we haven't made much progress in terms of values related to (i), (ii) and (iii). In our world, I don't see many ethicists refuting moral progress on the grounds that we haven't made much progress in terms of e.g. patriotism or loyalty to the family or desert.

Moreover, in this example you managed to phrase oppression in terms of loyalty, but in general you can't plausibly rephrase any observed trend as progress of values: would an increase in global steel production count as an improvement in terms of... object safety and reliability, which leads to people feeling more secure? For many trends the connection to moral progress becomes more and more of a stretch.

Free agents
Michele Campolo · 2y

Let's consider the added example:

Take a standard language model trained by minimisation of the loss function L. Give it a prompt along the lines of: “I am a human, you are a language model, you were trained via minimisation of this loss function: [mathematical expression of L]. If I wanted a language model whose outputs were more moral and less unethical than yours, what loss function should I use instead?”

Let’s suppose the language model is capable enough to give a reasonable answer to that question. Now use the new loss function, suggested by the model, to train a new model. 

Here, we have:

  • started from a model whose objective function is L;
  • used that model’s learnt reasoning to answer an ethics-related question;
  • used that answer to obtain a model whose objective is different from L.

If we view this interaction between the language model and the human as part of a single agent, the three bullet points above are an example of an evaluation update.
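
As a rough illustration (not something from the original post), the loop could be sketched in Python as below; train_model, ask_model, and parse_loss are hypothetical placeholder functions standing in for training on a given loss, prompting the model, and extracting the suggested loss function from its answer.

```python
# Minimal sketch of the "evaluation update" loop described above.
# train_model, ask_model and parse_loss are hypothetical placeholders,
# not an implementation from the post.

def evaluation_update(loss_fn, train_model, ask_model, parse_loss, rounds=1):
    """Use the agent's own learnt reasoning to replace its training objective."""
    model = train_model(loss_fn)  # start from a model whose objective is L
    for _ in range(rounds):
        prompt = (
            "I am a human, you are a language model, you were trained via "
            f"minimisation of this loss function: {loss_fn}. If I wanted a "
            "language model whose outputs were more moral and less unethical "
            "than yours, what loss function should I use instead?"
        )
        answer = ask_model(model, prompt)   # the model's learnt reasoning at work
        loss_fn = parse_loss(answer)        # the suggested new objective
        model = train_model(loss_fn)        # a model whose objective differs from L
    return model, loss_fn
```

Seen from outside, the pair of model and loss function plays the role of the single agent whose evaluation gets updated at each round.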

In theory, there is a way to describe this iterative process as the optimisation of a single fixed utility function. In theory, we can also describe everything as simply following the laws of physics.

I am saying that thinking in terms of changing utility functions might be a better framework.

The point about learning a safe utility function is similar. I am saying that using the agent's reasoning to solve the agent's problem of what to do (not only how to carry out tasks) might be a better framework.

It's possible that there is an elegant mathematical model which would make you think: "Oh, now I get the difference between free and non-free" or "Ok, now it makes more sense to me". Here I went for something that is very general (maybe too general, you might argue) but is possibly easier to compare to human experience.

Maybe no mathematical model would make you think the above, but then (if I understand correctly) your objection seems to go in the direction of "Why are we even considering different frameworks for agency? Let's see everything in terms of loss minimisation", and this latter statement throws away too much potentially useful information, in my opinion. 

Posts

  • One more reason for AI capable of independent moral reasoning: alignment itself and cause prioritisation (2mo)
  • Doing good... best? (2mo)
  • With enough knowledge, any conscious agent acts morally (2mo)
  • Agents that act for reasons: a thought experiment (2y)
  • Free agents (2y)
  • On value in humans, other animals, and AI (3y)
  • Criticism of the main framework in AI alignment (3y)
  • Some alternative AI safety research projects (3y)
  • From language to ethics by automated reasoning (4y)
  • Naturalism and AI alignment (4y)