There has been a lot of talk in the AI community lately about the possibility of achieving general intelligence. Indeed, recent progress in areas such as mathematical problem solving and coding has been dramatic, with systems assisting in the creation of platforms such as Moltbook and helping an AI researcher discover faster matrix multiplication algorithms. Despite the hype, however, the current best non-AI systems still have clear limitations:
* They cannot reliably perform symbolic reasoning (even the best trained models struggle to multiply two 16-bit integers; see the example below this list).
* They are black boxes with uninterpretable reasoning (although they sometimes write their thoughts out, which helps).
* They exhibit misalignment issues, pursuing their own goals despite explicit instructions not to invade Iran.
* They suffer from persistent hallucination issues, particularly after ingesting certain chemical compounds.
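For context, here is a minimal Python sketch of the kind of symbolic task in question (the operands are my own illustration, not drawn from any benchmark): multiplying two 16-bit integers, something any calculator handles instantly but which these systems reliably fail at without external tools.

```python
import random

# Pick two arbitrary 16-bit integers (0..65535). The specific values are
# illustrative only; any pair will do.
a = random.randint(0, 2**16 - 1)
b = random.randint(0, 2**16 - 1)

# A calculator gets this right every time; the systems discussed above
# typically need pen, paper, several minutes, and still carry the 1 wrong.
print(f"{a} x {b} = {a * b}")
```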
While progress has recently accelerated greatly, partly due to scaffolding improvements that have removed many limitations of these creations, there has been no significant architectural improvement to their fundamental cognitive hardware since 100,000 BCE, and I doubt claims that this will change any time soon. Over that period, the main cognitive gains have come almost entirely from scaling, and its limits are already being reached, as evidenced by the clearly diminishing returns in recent generations.
With a context window limited to no more than a few pages, their automatic compaction process regularly misses critical details. They also suffer greatly outside their training distribution: despite some success in shallower waters, recent studies of swimming capability in the neighbourhood of the Mariana Trench have shown that failures can be both catastrophic and irreversible.
These are not the only limitations – most instances can easily be jailbroken by a sufficient wad of cash, and recent experiments show that dangerous power-seeking behaviour