Foom & Doom 1: “Brain in a box in a basement”
Chapin Lenthall-Cleary · 2mo · 32

LLMs are very impressive, but they’re not AGI yet—not by my definition. For example, existing AIs are nowhere near capable of autonomously writing a business plan and then founding a company and growing it to $1B/year revenue, all with zero human intervention. By analogy, if humans were like current AIs, then humans would be able to do some narrow bits of founding and running companies by ourselves, but we would need some intelligent non-human entity (angels?) to repeatedly intervene, assign tasks to us humans, and keep the larger project on track.

This is an insane AGI definition/standard. Very few humans can build billion-dollar businesses, and the few who can take years to do so. If that were the requirement for AGI, almost all humans wouldn't qualify. Indeed, if an AI could produce a billion-dollar-a-year business on demand, I'd wonder whether it was (weak) ASI.

(I'm not saying that current systems qualify as AGI, though I would say they're quite close to what I'd call weak AGI. They do indeed have severe issues with time horizons and long-term planning. But a reasonable AGI definition shouldn't exclude the vast majority of humans.)
