AI ALIGNMENT FORUM
Henry Russell
Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems)
Hruss · 4mo · 13

I find that studies criticizing current models are often cited long after the issue has been fixed, or without regard for what the study actually showed. I wish technology reporting were more careful, since much of this misunderstanding seems to originate in journalistic coverage. Examples:

Hands in diffusion models - largely fixed in newer image models

Text in diffusion models - newer models render text far more reliably

Water usage

Model collapse - not an issue for actual commercial AI models; the original study concerned synthetic data production, directly feeding model outputs back as the exclusive training data

LLMs = Autocorrect - chat models undergo RLHF post-training, so the comparison to simple text prediction breaks down

Nightshade/Glaze - ineffective against modern training methods

AI understanding - yes, individual weights are not interpretable, but the overall architecture is well understood
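
The model collapse point above can be illustrated with a toy sketch (a hypothetical simulation of mine, not from the original study): fit a simple Gaussian model, then train each subsequent generation exclusively on samples drawn from the previous generation's fit. Under that degenerate setup the distribution drifts and narrows over generations, whereas mixing in real data each round avoids the effect.

```python
import random
import statistics

# Toy illustration of recursive training on exclusively synthetic data.
# The "model" here is just a fitted Gaussian (mean, stdev); each
# generation's training set is sampled only from the previous fit,
# with no real data mixed back in.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # generation 0: real data

for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # Exclusive synthetic training data: sample only from the fitted model.
    data = [random.gauss(mu, sigma) for _ in range(1000)]

# Estimation error compounds across generations; with no real data
# re-entering the loop, the fitted distribution slowly degrades.
print(statistics.fmean(data), statistics.stdev(data))
```

Commercial training pipelines do not resemble this loop: synthetic data, where used at all, is curated and mixed with real data rather than fed back exclusively.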


It is surprising how often I hear these claims repeated with false or outdated context.
