Improved formalism for corruption in DIRL

by Vanessa Kosoy
30th Nov 2017

We give a treatment of advisor corruption in DIRL that is more elegant and general than our previous formalism.

The following definition replaces the original Definition 5.

Definition

Consider a meta-universe $\upsilon=(\mu,r)$ and $\beta:(0,\infty)\rightarrow(0,\infty)$. A metapolicy $\alpha$ is called $\beta$-rational for $\upsilon$ (as opposed to before, we assume $\alpha$ is an $I$-metapolicy rather than an $\bar{I}$-metapolicy; this is purely for notational convenience and it is straightforward to generalize the definition) when there exists $\{L^{\alpha}_t:\operatorname{hdom}\mu\times\mathcal{A}\rightarrow[0,\infty]\}_{t\in(0,\infty)}$ s.t.

i. For any $t\in(0,\infty)$ and $h\in\operatorname{hdom}{\mu^t}$, there is $a\in\mathcal{A}$ s.t. $L^{\alpha}_t(ha)=0$.

ii. $\alpha^t(h)(a)=\exp\left(-\beta(t)L^{\alpha}_t(ha)\right)\max_{a^*\in\mathcal{A}}\alpha^t(h)(a^*)$

iii. For any $\pi\in\Pi$:

$$\lim_{t\rightarrow\infty}\min\left(\mathbb{E}_{x\sim\mu^t\bowtie\pi}\left[\sum_{n=0}^{\infty}e^{-n/t}L^{\alpha}_t(x_{:n+1/2})\right]-\mathbb{E}_{x\sim\mu^t\bowtie\pi}\left[\sum_{n=0}^{\infty}e^{-n/t}\left(V^t_{\upsilon}(x_{:n})-Q^t_{\upsilon}(x_{:n+1/2})\right)\right],\ 0\right)=0$$


In condition ii, $\exp(-\infty)$ is understood to mean 0. Conditions i+ii can be seen as the definition of $L^{\alpha}_t$ given $\alpha^t$ (see the sketch below). A notable special case of condition iii is when for any $x\in\operatorname{hdom}{\mu^t}$:

$$\sum_{n=0}^{\infty}e^{-n/t}L^{\alpha}_t(x_{:n+1/2})\geq\sum_{n=0}^{\infty}e^{-n/t}\left(V^t_{\upsilon}(x_{:n})-Q^t_{\upsilon}(x_{:n+1/2})\right)$$
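
To make the inversion in conditions i+ii concrete: given the advisor distribution $\alpha^t(h)$ over a finite action set, condition ii gives $L^{\alpha}_t(ha)=-\frac{1}{\beta(t)}\ln\frac{\alpha^t(h)(a)}{\max_{a^*}\alpha^t(h)(a^*)}$, with zero-probability actions assigned $\infty$. The following is a minimal sketch of this computation; the function and variable names are illustrative and the numbers are made up.

```python
import math

def loss_from_advisor(alpha_t_h, beta_t):
    """Recover L^alpha_t(ha) from the advisor distribution alpha^t(h) via
    conditions i+ii: alpha^t(h)(a) = exp(-beta(t) L^alpha_t(ha)) * max_a* alpha^t(h)(a*).
    Zero-probability actions get L = +inf, matching the exp(-inf) = 0 convention."""
    p_max = max(alpha_t_h.values())
    losses = {}
    for a, p in alpha_t_h.items():
        losses[a] = math.inf if p == 0 else -math.log(p / p_max) / beta_t
    return losses

# Toy advisor distribution over three actions at some history h, with beta(t) = 2.0.
alpha_t_h = {"a0": 0.7, "a1": 0.3, "a2": 0.0}
print(loss_from_advisor(alpha_t_h, beta_t=2.0))
# a0 (the most probable action) gets loss 0, satisfying condition i;
# a2 is an action the advisor never takes, so its loss is infinite.
```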

As a simple example, we can have a set of corrupt states $\{C_t\subseteq(\mathcal{A}\times\mathcal{O})^*\}_{t\in(0,\infty)}$ in which the behavior of the advisor becomes arbitrary, but for each $h\in C_t$ there is $g\in(\mathcal{A}\times\mathcal{O})^*\times\mathcal{A}$ s.t. $g\sqsubset h$ and $L^{\alpha}_t(g)=\infty$ (i.e., to corrupt the advisor one has to take an action that the advisor would never take). As opposed to before, this formalism can also account for partial corruption: e.g., if for each $h\notin C_t$ and $a\in\mathcal{A}$ we have $L^{\alpha}_t(ha)\geq V^t_{\upsilon}(h)-Q^t_{\upsilon}(ha)$ (like in strict $\beta$-rationality), whereas for $h\in C_t$ we only have $L^{\alpha}_t(ha)\geq V^t_{\upsilon}(h)-Q^t_{\upsilon}(ha)-\delta$ for some constant $\delta>0$, then to ensure $\beta$-rationality it is sufficient that for each $h=a_0o_0a_1o_1\ldots\in C_t$:

$$\sum_{n=0}^{\max\{m\mid h_{:m}\notin C_t\}}e^{-n/t}\left(L^{\alpha}_t(h_{:n}a_n)-\left(V^t_{\upsilon}(h_{:n})-Q^t_{\upsilon}(h_{:n}a_n)\right)\right)\geq\frac{\delta e^{-(\max\{m\mid h_{:m}\notin C_t\}+1)/t}}{1-e^{-1/t}}$$
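
As a numerical sanity check of this sufficient condition, the sketch below compares the discounted surplus accumulated before corruption against the discounted $\delta$-deficit afterwards. The function name, the per-step surplus values, $\delta$ and $t$ are all made-up toy inputs, not part of the formalism.

```python
import math

def partial_corruption_margin(surplus, delta, t):
    """Check the sufficient condition for beta-rationality under partial corruption.
    surplus[n] stands for L^alpha_t(h_:n a_n) - (V(h_:n) - Q(h_:n a_n)) at the
    non-corrupt steps n = 0..M; after step M the history is assumed corrupt and
    each step may fall short of the strict-rationality bound by at most delta."""
    M = len(surplus) - 1
    lhs = sum(math.exp(-n / t) * s for n, s in enumerate(surplus))
    rhs = delta * math.exp(-(M + 1) / t) / (1 - math.exp(-1 / t))
    return lhs, rhs, lhs >= rhs

# Toy numbers: 5 pre-corruption steps, each with surplus 0.2, delta = 0.05, t = 10.
print(partial_corruption_margin([0.2] * 5, delta=0.05, t=10.0))
```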

Theorem

Consider $\mathcal{H}=\{\upsilon_k\}_{k\in\mathbb{N}}$, a countable family of $I$-meta-universes, and $\beta:(0,\infty)\rightarrow(0,\infty)$ s.t. $\beta(t)=\omega(t^{2/3})$. Let $\{\alpha_k\}_{k\in\mathbb{N}}$ be a family of $I$-metapolicies s.t. for every $k\in\mathbb{N}$, $\alpha_k$ is $\beta$-rational for $\upsilon_k$. Define $\bar{\mathcal{H}}:=\{\bar{\upsilon}_k[\alpha_k]\}_{k\in\mathbb{N}}$. Then, $\bar{\mathcal{H}}$ is learnable.

Proof of Theorem

We don't spell out the proof in detail, but only describe the modifications with respect to the original.

As in the proof of the original theorem, we can assume without loss of generality that $\mathcal{H}$ is finite. Define $\pi^*$ the same way as in Lemma A, but with $L_t$ redefined as

$$L_t(ha):=\mathbb{E}_{k\sim\zeta_t(h)}\left[L^{\alpha_k}_t(ha)\right]$$

Similarly, define $\pi^!$ the same way as in the proof of Lemma A, but with $L_t$ redefined as

$$L_t(ha):=\mathbb{E}_{k\sim\zeta^!_t(h)}\left[L^{\alpha_k}_t(ha)\right]$$
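
For concreteness, the redefined $L_t$ is just a $\zeta$-weighted mixture of the per-hypothesis losses. A toy sketch follows; the weights and loss values are made-up numbers, and the interpretation of the $\zeta$'s as distributions over hypothesis indices $k$ is as in the expectations above.

```python
# Toy illustration of the redefined L_t: a zeta-weighted mixture of the
# per-hypothesis losses L^{alpha_k}_t(ha) for a fixed pair (h, a).
zeta_t_h = {0: 0.5, 1: 0.3, 2: 0.2}                      # made-up weights over hypotheses k
loss_per_hypothesis = {0: 0.0, 1: 0.7, 2: float("inf")}  # made-up L^{alpha_k}_t(ha) values

L_t_ha = sum(zeta_t_h[k] * loss_per_hypothesis[k] for k in zeta_t_h)
print(L_t_ha)  # inf: a hypothesis with positive weight under which the advisor never takes a
               # makes the mixed loss infinite as well
```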

As in the proof of Lemma A, we have

$$\frac{1}{N}\sum_{k<N}\left(\mathrm{EU}^*_{\bar{\upsilon}_k[\alpha_k]}(t)-\mathrm{EU}^{\pi^!_k}_{\bar{\upsilon}_k[\alpha_k]}(t)\right)=\sum_{n=0}^{\infty}e^{-n/t}\,\mathbb{E}_{(k,x)\sim\rho^!_t}\left[V^t_{\upsilon_k[\alpha_k]}(x_{:n})-Q^t_{\upsilon_k[\alpha_k]}(x_{:n}\pi^!_k(x_{:n}))\right]$$

Using condition iii of the Definition, we conclude that for some function $\delta:(0,\infty)\rightarrow[0,\infty)$ with $\lim_{t\rightarrow\infty}\delta(t)=0$

$$\frac{1}{N}\sum_{k<N}\left(\mathrm{EU}^*_{\bar{\upsilon}_k[\alpha_k]}(t)-\mathrm{EU}^{\pi^!_k}_{\bar{\upsilon}_k[\alpha_k]}(t)\right)\leq\sum_{n=0}^{\infty}e^{-n/t}\,\mathbb{E}_{(k,x)\sim\rho^!_t}\left[[[\pi^!_k(x_{:n})\neq\bot]]\,L^{\alpha_k}_t(\underline{x}_{:n}\pi^!_k(x_{:n}))+[[\pi^!_k(x_{:n})=\bot]]\,\mathbb{E}_{a\sim\alpha^t_k(\underline{x}_{:n})}\left[L^{\alpha_k}_t(\underline{x}_{:n}a)\right]\right]+\delta(t)$$

We can now repeat the same arguments as in the proof of Lemma A to get

$$\frac{1}{N}\sum_{k<N}\left(\mathrm{EU}^*_{\bar{\upsilon}_k[\alpha_k]}(t)-\mathrm{EU}^{\pi^*}_{\bar{\upsilon}_k[\alpha_k]}(t)\right)\leq\left(\frac{1}{t+1}+\frac{8|\mathcal{A}|^3\ln N}{e(1-e^{-1})^2}\right)\frac{t^{2/3}}{\beta(t)}+\frac{N-1}{t^{1/3}}+\delta(t)$$
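
Every term on the right-hand side vanishes as $t\rightarrow\infty$ once $\beta(t)=\omega(t^{2/3})$. A minimal numeric sketch, assuming the grouping of constants as reconstructed above, taking $\beta(t)=t$ as one admissible choice of $\beta$, and using an arbitrary stand-in for the vanishing function $\delta(t)$:

```python
import math

def regret_bound(t, N, n_actions, beta, delta):
    """Right-hand side of the final bound; the exact constant grouping is
    immaterial here, since every term tends to 0 when beta(t) = omega(t^(2/3))."""
    const = 1 / (t + 1) + 8 * n_actions**3 * math.log(N) / (math.e * (1 - math.exp(-1))**2)
    return const * t**(2/3) / beta(t) + (N - 1) / t**(1/3) + delta(t)

# beta(t) = t satisfies beta(t) = omega(t^(2/3)); delta is a stand-in for some
# vanishing function, since the proof only guarantees delta(t) -> 0.
beta = lambda t: t
delta = lambda t: 1 / math.sqrt(t)
for t in (1e2, 1e4, 1e6):
    print(t, regret_bound(t, N=3, n_actions=2, beta=beta, delta=delta))
```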

The desired result follows.