Yann LeCun on AGI and AI Safety

by Chris_Leong
6th Aug 2023

This is a linkpost for https://drive.google.com/file/d/1wzHohvoSgKGZvzOWqZybjm4M4veKR6t3/view
Yann recently gave a presentation at MIT on Objective-Driven AI, with his specific proposal based on a Joint Embedding Predictive Architecture (JEPA).

He claims that his proposal will make AI safe and steerable, so I thought it was worth copying the slides at the end, which provide a very quick and accessible overview of his perspective.

Here's a link to the talk itself.
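
For readers who want a concrete picture of what a Joint Embedding Predictive Architecture involves, here is a minimal sketch of a generic JEPA-style training step. This is an illustrative assumption on my part, not code from LeCun's slides: the layer sizes, the frozen target encoder, and the mean-squared loss in embedding space are all placeholder choices.

# Minimal, generic JEPA-style sketch (illustrative only; not from LeCun's slides).
# Idea: encode a context x and a target y, then train a predictor to map the
# context embedding to the target embedding, with the loss computed in
# embedding space rather than in input space.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyJEPA(nn.Module):
    def __init__(self, input_dim=128, embed_dim=32):
        super().__init__()
        # Context encoder: trained by gradient descent.
        self.context_encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim))
        # Target encoder: frozen here as a placeholder choice; real systems
        # often update it as an exponential moving average of the context encoder.
        self.target_encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim))
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        # Predictor: maps the context embedding to a predicted target embedding.
        self.predictor = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim))

    def loss(self, x, y):
        s_x = self.context_encoder(x)      # embedding of the observed context
        with torch.no_grad():
            s_y = self.target_encoder(y)   # embedding of the target to predict
        s_y_hat = self.predictor(s_x)      # prediction made in embedding space
        return F.mse_loss(s_y_hat, s_y)    # prediction error in embedding space

# Usage: one training step on random placeholder data.
model = ToyJEPA()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 128), torch.randn(16, 128)
loss = model.loss(x, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()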

I find it interesting how he says that there is no such thing as AGI, but acknowledges that machines will "eventually surpass human intelligence in all domains where humans are intelligent", as that would meet most people's definition of AGI.

I also observe that he has framed his responses to safety around the question "How to solve the alignment problem?". I think this is important: it suggests that even people who think aligning AGI will be easy have started to think a bit more about this problem, and I see this as a victory in and of itself.

You may also find it interesting to read Steven Byrnes' skeptical comments on this proposal.

Crossposted to the EA Forum.

Comments

Steve Byrnes:

> I find it interesting how he says that there is no such thing as AGI, but acknowledges that machines will "eventually surpass human intelligence in all domains where humans are intelligent" as that would meet most people's definition of AGI.

The somewhat-reasonable-position-adjacent-to-what-Yann-believes would be: “I don’t like the term ‘AGI’. It gives the wrong idea. We should use a different term instead. I like ‘human-level AI’.”

I.e., it’s a purely terminological complaint. And it’s not a crazy one! Lots of reasonable people think that “AGI” was a poorly-chosen term, although I still think it’s possibly the least-bad option.

Yann’s actual rhetorical approach tends to be:

  • Step 1: (re)-define the term “AGI” in his own idiosyncratic and completely insane way;
  • Step 2: say there’s no such thing as “AGI” (as so defined), and that anyone who talks about AGI is a moron.

I talk about it in much more detail here.

Mentioned in
Refinement of Active Inference agency ontology