Tamsin Leake

hi! i'm tammy :3

i research the QACI plan for formal-goal AI alignment at orthogonal.

check out my blog and my twitter.

Wiki Contributions


one solution to this problem is to simply never use that capability (running expensive computations) at all; or to not use it until the iterated counterfactual researchers have developed proofs that any expensive computation they run is safe; or until they have very slowly and carefully built dath-ilan-style corrigible aligned AGI.

nothing, fundamentally; the user has to be careful about which computations they invoke.

an approximate illustration of QACI: (image)