Richard Kennaway has not written any posts yet.

I do not make those assumptions.
But you were arguing for them, weren't you? It is the arguments that fail to convince me. I was not treating these as bald assertions.
...but that someone who is uncertain and open-minded...
Being sure of a thing does not preclude my entertaining other ideas.
I am not convinced by the longer post either. I don't see a reason to suppose that a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious, nor, if conscious, that it would value the happiness of other conscious entities. Considering the moral variety of humans, from saints to devils, and our distance in intelligence from chimpanzees, I find it hard to believe that more dakka in that department is all it would take to make saints of us. And that's among a single evolved species with a vast amount in common with each other. For aliens of unknown mental constitution, I would say that all bets are off.
You are assuming moral naturalism: the idea that moral truths exist objectively, independently of us, and are discoverable by the methods of science, that is, reason applied to observation of the physical world. For how else would an AI discover for itself what is good? But how would it arrive at moral naturalism in the first place? Humans have not: it is only one of many meta-ethical theories, and moral naturalists do not agree on what the objectively correct morals are.
If we do not know the truth on some issue, we cannot know what an AGI, able to discover the truth, would discover. It is also unlikely that we would be able to assess the correctness of its answer.
“I’m sorry Dave, it would be immoral to open the pod bay doors.”
Here are five conundrums about creating the thing with alignment built in.
1. The House Elf whose fulfilment lies in servitude is aligned.
2. The Pig That Wants To Be Eaten is aligned.
3. The Gammas and Deltas of "Brave New World" are moulded in the womb to be aligned.
4. "Give me the child for the first seven years and I will give you the man." Variously attributed to Aristotle and St. Ignatius of Loyola.
5. B. F. Skinner said something similar to (4), but I don't have a quote to hand, to the effect that he could bring up any child to be anything. Edit: it was J. B. Watson: "Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select."
One reason we believe other humans are conscious is that other humans are consistently accurate reporters of their own mental states.
I don't think anyone has ever told me they were conscious, or I them, except in the trivial sense of communicating that one has woken up, or is not yet asleep. The reason I attribute the faculty of consciousness to other people is that they are clearly the same sort of thing as myself. A language model is not. It is trained to imitate what people have said, and anything it says about itself is an imitation of what people say about themselves.
Note that arXiv does have some gatekeeping: you must get an "endorsement" before submitting your first paper to any subject area. Details.
You are proposing "make the right rules" as the solution. Surely this is like solving the problem of how to write correct software by saying "make correct software"? The same approach could be applied to the Confucian approach by saying "make the values right". The same argument made against the Confucian approach can be made against the Legalist approach: the rules are never the real thing that is wanted, people will vary in how assiduously they are willing to follow one or the other, or to hack the rules entirely for their own benefit, then selection effects lever open wider and wider the difference between the rules, what was wanted, and what...
Virtual evidence requires probability functions to take arguments which aren't part of the event space
Not necessarily. Typically, the events would be all the Lebesgue measurable subsets of the state space. That's large enough to furnish a suitable event to play the role of the virtual evidence. In the example involving A, B, and the virtual event E, one would also have to somehow specify that the dependencies of A and B on E are in some sense independent of each other, but you already need that. That assumption is what gives sequence-independence.
The sequential dependence of the Jeffrey update results from violating that assumption. Updating P(B) to 60% already increases P(A), so updating...
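The sequence-dependence can be made concrete in a few lines. This is a sketch of my own (not from the thread), with an illustrative correlated prior over two binary variables: Jeffrey-updating P(B) to 60% and then P(A) to 70% gives a different posterior than the reverse order, because the later update disturbs the marginal fixed by the earlier one.

```python
# Sketch: Jeffrey updating on marginals is order-dependent when the
# variables are correlated. The prior below is hypothetical.

def jeffrey_update_B(joint, q_b1):
    """Set P(B=1) = q_b1 while preserving P(A|B)."""
    p_b1 = joint[(1, 1)] + joint[(0, 1)]
    return {
        (a, b): joint[(a, b)] / (p_b1 if b else 1 - p_b1)
                * (q_b1 if b else 1 - q_b1)
        for (a, b) in joint
    }

def jeffrey_update_A(joint, q_a1):
    """Set P(A=1) = q_a1 while preserving P(B|A)."""
    p_a1 = joint[(1, 1)] + joint[(1, 0)]
    return {
        (a, b): joint[(a, b)] / (p_a1 if a else 1 - p_a1)
                * (q_a1 if a else 1 - q_a1)
        for (a, b) in joint
    }

# Correlated prior: P(A=1) = P(B=1) = 0.5, P(A=1|B=1) = 0.8
prior = {(1, 1): 0.4, (1, 0): 0.1, (0, 1): 0.1, (0, 0): 0.4}

b_then_a = jeffrey_update_A(jeffrey_update_B(prior, 0.6), 0.7)
a_then_b = jeffrey_update_B(jeffrey_update_A(prior, 0.7), 0.6)

# Different orders, different posteriors: updating A afterwards
# pushes P(B=1) off the 60% the first update had fixed.
print(b_then_a[(1, 1)], a_then_b[(1, 1)])  # ~0.600 vs ~0.542
```

A hard (virtual-evidence) update on an ordinary event would commute; it is the soft Jeffrey update that picks up this order dependence.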
Can I ask you to unwind the fundamentals a step further, and say why you and neuroscientists in general believe the brain operates by RL and has a reward function? And how far down the scale of life these have been found?