Andrew Critch

This is Dr. Andrew Critch's professional LessWrong account. Andrew is currently a full-time Research Scientist at the Center for Human-Compatible AI (CHAI) at UC Berkeley, and spends around half a day per week volunteering for the Berkeley Existential Risk Initiative. He earned his Ph.D. in mathematics at UC Berkeley studying applications of algebraic geometry to machine learning models. During that time, he cofounded the Center for Applied Rationality and SPARC. Dr. Critch has been offered university faculty and research positions in mathematics, mathematical biosciences, and philosophy; has worked as an algorithmic stock trader at Jane Street Capital's New York City office; and has served as a Research Fellow at the Machine Intelligence Research Institute. His current research interests include logical uncertainty, open source game theory, and mitigating race dynamics between companies and nations in AI development.

Comments

The ground of optimization

This post reminds me of thinking from the 1950s, when people taking inspiration from Wiener's work on cybernetics tried to operationalize "purposeful behavior" in terms of robust convergence to a goal state:

https://heinonline.org/HOL/Page?collection=journals&handle=hein.journals/josf29&id=48&men_tab=srchresults

> When an optimizing system deviates beyond its own rim, we say that it dies. An existential catastrophe is when the optimizing system of life on Earth moves beyond its own outer rim.

I appreciate the direct attention to this process as an important instance of optimization. The first talk I ever gave in the EECS department at UC Berkeley (to the full EECS faculty) included a diagram of Earth drifting out of the region of phase space where humans would exist. Needless to say, I'd like to see more explicit consideration of this type of scenario.

Syntax, semantics, and symbol grounding, simplified

I may write more on this later, but for now I just want to express exuberance at someone in the x-risk space thinking and writing about this :)