Alex Flint

Independent AI safety researcher

Sequences

The accumulation of knowledge

Comments

The Blackwell order as a formalization of knowledge

Yes, I believe everything you have said here is consistent with how the Blackwell order is defined.

Provide feedback on Open Philanthropy’s AI alignment RFP

Thank you for posting this, Asya and Nick. After reading it, I realized it connects to something I've been thinking about for a while, which seems like it might actually be a fit for this RFP under research direction 3 or 4 (interpretability, truthful AI). I drafted a very rough 1.5-pager this morning in a way that hopefully connects fairly obviously to what you've written above:

https://docs.google.com/document/d/1pEOXIIjEvG8EARHgoxxI54hfII2qfJpKxCqUeqNvb3Q/edit?usp=sharing

Interested in your thoughts.

Feedback from everyone is most welcome, too, of course.

AI Risk for Epistemic Minimalists

Yes. Thank you. Would love to hear more about your work on goal-directedness. Let me know if you're up for chatting.

AI Risk for Epistemic Minimalists

Seems excellent to me. Thank you as always for your work on the newsletter, Rohin.

Agency in Conway’s Game of Life

Yup, Life is not time-reversible, so it does not preserve phase-space volume under time evolution, and therefore does not obey the thermodynamic laws that hold under our physics.
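
To make the irreversibility concrete, here is a minimal sketch (my own illustration, not from the post): Life's update map sends two distinct states to the same successor, so no inverse dynamics can exist.

```python
# Sketch: Life's update map is not injective, hence not reversible.
# Two distinct states evolve to the same successor state.

import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life (toroidal grid)."""
    # Count live neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell lives next step iff it has 3 neighbors, or is alive with 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

empty = np.zeros((5, 5), dtype=int)

lone_cell = empty.copy()
lone_cell[2, 2] = 1  # a single live cell dies of underpopulation

# Two different states, one common successor: no inverse map can exist.
assert not np.array_equal(empty, lone_cell)
assert np.array_equal(life_step(empty), life_step(lone_cell))
```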

But one could still investigate whether there is some analog of thermodynamics in Life.

There is also a cellular automaton called Critters that is time-reversible.

The ground of optimization

Thank you for this comment, Chantiel. Yes, a container engineered to evaporate water poured anywhere into it and condense it into a central area would be an optimizing system by my definition. That is a bit like a ball rolling down a hill, which is also an optimizing system and also has nothing resembling agency.

The bottle cap example was actually about putting a bottle cap onto a bottle and asking whether, since the water now stays inside the bottle, it should be considered an optimizer. I pointed out that this would not qualify as an optimizing system because if you moved a water molecule from the bottle and placed it outside the bottle, the bottle cap would not act to put it back inside.
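
As a rough illustration of the perturbation test that separates these two cases (a sketch of mine, with purely illustrative names, not code from the post): the ball-on-a-hill system re-converges to its target set after a perturbation, while the capped bottle merely holds its state.

```python
# Sketch of the perturbation test: an optimizing system returns to its
# target set after a perturbation; the capped bottle does not.

def ball_on_hill(x, steps=1000, lr=0.01):
    """Gradient descent on a bowl-shaped potential V(x) = x^2,
    standing in for the ball rolling to the bottom of a hill."""
    for _ in range(steps):
        x -= lr * 2 * x  # follow -dV/dx toward the minimum at x = 0
    return x

# Start anywhere in the basin, perturb, and watch the system re-converge.
settled = ball_on_hill(1.0)
perturbed = settled + 0.5           # knock the ball partway up the hill
resettled = ball_on_hill(perturbed)
assert abs(resettled) < 1e-3        # back in the target set: optimizing

# By contrast, a capped bottle merely holds its state. Displace a
# molecule (flip one flag) and nothing acts to restore it:
bottle = {"molecule_inside": True}
bottle["molecule_inside"] = False   # perturbation
# ...no dynamics push it back to True, so it fails the perturbation test.
```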

Problems facing a correspondence theory of knowledge

Your summaries are excellent, Rohin. This looks good to me.

Agency and the unreliable autonomous car

I think that I should modify 5a from "Search for a proof that this sentence is consistent with your model of the world, up to a maximum proof length of one million characters" to "Search for a proof of this sentence, using your model of the world as a set of starting assumptions". This is indeed a significant change to the algorithm, and I thank you for pointing it out. I think that would resolve your second concern, about the problem with 5a itself, yes?
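
For concreteness, here is a toy sketch of the revised step 5a (my own illustration, with hypothetical sentences and a single inference rule, not the actual agent from the post): the agent forward-chains from its world model, treated as axioms, searching for a derivation of the target sentence itself rather than a consistency proof.

```python
# Toy sketch of revised step 5a: search for a derivation of the target
# sentence from the world model, treated as axioms. Sentences are plain
# strings; the only inference rule here is modus ponens.

def prove(target, axioms, rules, max_rounds=10):
    """Forward-chain with modus ponens from `axioms` using implication
    `rules` (a list of (premise, conclusion) pairs); return True iff
    `target` is derived within `max_rounds` rounds."""
    derived = set(axioms)
    for _ in range(max_rounds):
        new = {c for (p, c) in rules if p in derived and c not in derived}
        if not new:
            break
        derived |= new
        if target in derived:
            return True
    return target in derived

# Hypothetical world model and implications, for illustration only.
world_model = {"ignition is on"}
rules = [
    ("ignition is on", "engine runs"),
    ("engine runs", "car reaches destination"),
]

# Step 5a, revised: search for a proof of the sentence itself.
assert prove("car reaches destination", world_model, rules)
```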

I think it might also resolve your first concern, about the unsoundness of the logical system. The agent does not have a deduction rule that says that if P can be proved then P is true. Rather, it reasons from the fact that P can be proved, together with the fact that this particular agent chooses its actions based on proof search and will behave in such-and-such a way if it finds a proof of this particular P, to conclude P. This only works for certain particular sentences P, such as the one constructed in this story, since this particular sentence P is one that, if the agent finds a proof of it, will cause the agent to take actions that lead to P itself being true.
