Lifelong recursive self-improver, on his way to exploding really intelligently :D
More seriously: my posts are mostly about AI alignment, with an eye towards moral progress and creating a better future, not just reducing risk.
At the moment I am doing research at CEEALAR on agents whose behaviour is driven by a reflective process analogous to human moral reasoning, rather than by a designer-specified metric. I'll probably post a short article on this topic before the end of 2023.
Here are some suggested readings from what I've written so far: