Comments

Nisan · 16d

The subject of this post appears in the "Did you know..." section of Wikipedia's front page (archived) right now.

Nisan · 23d

I'm saying "transformers" every time I am tempted to write "LLMs" because many modern LLMs also do image processing, so the term "LLM" is not quite right.

"Transformer"'s not quite right either because you can train a transformer on a narrow task. How about foundation model: "models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks".

Nisan · 23d

I agree 100%. It would be interesting to explore how the term "AGI" has evolved, maybe starting with Goertzel and Pennachin 2007, who define it as:

a software program that can solve a variety of complex problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions

On the other hand, Stuart Russell testified that AGI means

machines that match or exceed human capabilities in every relevant dimension

so the experts seem to disagree. (Confusingly, Russell and Norvig's textbook cites Goertzel and Pennachin 2007 when it mentions AGI.)

In any case, I think it's right to say that today's best language models are AGIs for any of these reasons:

  • They're not narrow AIs.
  • They satisfy the important parts of Goertzel and Pennachin's definition.
  • The tasks they can perform are not limited to a "bounded" domain.

In fact, GPT-2 is an AGI.

Nisan · 4mo

I wonder why Gemini used RLHF instead of Direct Preference Optimization (DPO). DPO was written up 6 months ago; it's simpler and apparently more compute-efficient than RLHF (see the sketch of the DPO objective after this list).

  • Is the Gemini org structure so sclerotic that it couldn't switch to a more efficient training algorithm partway through a project?
  • Is DPO inferior to RLHF in some way? Lower quality, less efficient, more sensitive to hyperparameters?
  • Maybe they did use DPO, even though they claimed it was RLHF in their technical report?
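For reference, DPO's training signal is just a logistic loss on log-probability ratios against a frozen reference model; no reward model and no on-policy sampling are needed, which is where the claimed simplicity and compute savings come from. A minimal sketch (the function name, tensor layout, and beta value are my illustrative assumptions, not anything from the Gemini report):

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (Rafailov et al. 2023).

    Each argument is a tensor of summed per-token log-probabilities of the
    preferred ("chosen") or dispreferred ("rejected") completion, under the
    trained policy or the frozen reference model. beta controls how far the
    policy may drift from the reference.
    """
    # Implicit reward of each completion: beta * log(pi_theta / pi_ref)
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss pushes the chosen reward above the rejected reward
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```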
Nisan · 6mo

Thanks! For convex sets of distributions: If you weaken the definition of fixed point to $f(x) \le x$, then the set $\{x \mid f(x) \le x\}$ has a least element, which really is a least fixed point.
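The reason the least pre-fixed point is a genuine fixed point is the standard Knaster–Tarski step. A sketch, assuming $f$ is monotone and the pre-fixed points admit arbitrary meets (assumptions I'm supplying; the comment doesn't state them):

```latex
Let $p = \bigwedge\,\{x \mid f(x) \le x\}$ be the meet of all pre-fixed points.
For each pre-fixed point $x$ we have $p \le x$, so monotonicity gives
$f(p) \le f(x) \le x$; taking the meet over all such $x$ yields $f(p) \le p$.
Applying $f$ again gives $f(f(p)) \le f(p)$, so $f(p)$ is itself a pre-fixed
point, hence $p \le f(p)$. Therefore $f(p) = p$, and $p$ is the least fixed
point.
```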

Nisan · 9mo

Conception is a startup trying to do in vitro gametogenesis for humans!

Nisan · 2y

CFAR used to have an awesome class called "Be specific!" that was mostly about concreteness. Exercises included:

  • Rationalist taboo
  • A group version of rationalist taboo where an instructor holds an everyday object and asks the class to describe it in concrete terms.
  • The Monday-Tuesday game
  • A role-playing game where the instructor plays a management consultant whose advice is impressive-sounding but contentless bullshit, and where the class has to force the consultant to be specific and concrete enough to be either wrong or trivial.
  • People were encouraged to make a habit of saying "can you give an example?" in everyday conversation. I practiced it a lot.

IIRC, Eliezer taught the class in May 2012? He talks about the relevant skills here and here. Then I ran it a few times, and then CFAR dropped it; I don't remember why.

Nisan · 2y

Yep, I skimmed it by looking at the colorful plots that look like Ising models and reading the captions. Those are always fun.

Nisan · 2y

No, I just took a look. The spin glass stuff looks interesting!

Nisan · 2y

I think you're saying , right? In that case, since embeds into , we'd have embedding into . So not really a step up.

If you want to play ordinal games, you could drop the requirement that agents are computable / Scott-continuous. Then you get the whole ordinal hierarchy. But then we aren't guaranteed equilibria in games between agents of the same order.
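For context, the usual reason continuity buys guaranteed fixed points (and so equilibria, if an equilibrium is a fixed point of a continuous best-response-style map; that framing is my assumption, not something stated in the thread) is Kleene's fixed-point theorem:

```latex
If $(D, \sqsubseteq)$ is a dcpo with least element $\bot$ and $f : D \to D$
is Scott-continuous, then the least fixed point of $f$ exists and equals
\[
  \mathrm{lfp}(f) = \bigsqcup_{n \in \mathbb{N}} f^{n}(\bot).
\]
Dropping continuity, a monotone $f$ on a complete lattice still has a least
fixed point (Knaster--Tarski), but reaching it can require transfinite
iteration through the ordinals rather than $\omega$ steps.
```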

I suppose you could have a hybrid approach: Order $n$ is allowed to be discontinuous in its order-$n$ beliefs, but higher orders have to be continuous? Maybe that would get you to $\omega + 1$.
