Language Models are a Potentially Safe Path to Human-Level AGI
The core argument: language models are more transparent and less prone to developing agency and superintelligence.

I argue that, compared with alternative approaches such as open-ended reinforcement learning, the recent paradigm of pursuing human-level AGI with language models has the potential to be relatively safe. There are three main reasons...
Apr 20, 2023