Thanks, done. LW makes it harder than EAF to make sequences, so I didn't realize any community member could do so.
If some law is so obviously a good idea in all possible circumstances, the AI will follow it whether it is law-following or human-preference-following.
As explained in the second post, I don't agree that that's implied if the AI is intent-aligned but not aligned with some deeper moral framework like CEV.
The question isn't whether there are laws that are better than nothing. It's whether we are better off encoding what we want the AI to do into laws or into the terms of a utility function, and which of those formats (or perhaps some other format) is best for encoding our preferences.
I agree that that is an important question. I think we have a very long track record of embedding our values into law. The point of this sequence is to argue that we should therefore at a minimum explore pointing to (some subset of) laws, which has a number of benefits relative to trying to integrate values into the utility function objectively. I will defend that idea more fully in a later post, but to briefly motivate the idea, law (as compared to something like the values that would come from CEV) is more or less completely written down, much more agreed-upon, much more formalized, and has built-in processes for resolving ambiguities and contradictions.
If the human has never imagined mind uploading, does A go up to the human and explain what it is, asking if maybe that law should be changed?
A cartoon version of this may be that A says "It's not clear whether that's legal, and if it's not legal it would be very bad (murder), so I can't proceed until there's clarification." If the human still wants to proceed, they can try to:
(I realized the second H in that blockquote should be an A)
I appreciate your engagement! But I think your position is mistaken for a few reasons:
First, I explicitly define LFAI to be about compliance with "some defined set of human-originating rules ('laws')." I do not argue that AI should follow all laws, which does indeed seem both hard and unnecessary. But I should have been clearer about this. (I did have some clarification in an earlier draft, which I guess I accidentally excised.) So I agree that there should be careful thought about which laws an LFAI should be trained to follow, for the reasons you cite. That question itself could be answered ethically or legally, and the answer could vary with the system. But to make this a compelling objection against LFAI, you would have to make, I think, a stronger claim: that the set of laws worth having AI follow is so small or unimportant as to be not worth trying to follow. That seems unlikely.
Second, you point to a lot of cases where the law would be underdetermined as to some out-of-distribution (from the caselaw/motivations of the law) action that the AI wanted to do, and say that:
I don't know about you, but I want such a decision made by humans seriously considering the issue, or by an AI's view of our best interests. I don't want it made by some pedantic letter-of-the-law interpretation of some act written hundreds of years ago, where the decision comes down to arbitrary phrasing decisions and linguistic quirks.
But I think LFAI would actually facilitate the result you want, not hinder it:
Furthermore, all of this has to be considered against the backdrop of a non-LFAI system. An LFAI seems much more likely to facilitate the deliberative result than an AI that is totally ignorant of the law.
Your point about the laws being imperfect is well-taken, but I believe overstated. Certainly many laws are substantively bad or shaped by bad processes. But I would bet that most people, probably including you, would rather live among agents that scrupulously followed the law than agents who paid it no heed and simply pursued their objective functions.
Thanks! I'm a bit confused by this though. Could you point me to some background information on the type of tracking that is done there?