I find that studies criticizing current models are often cited long after the issue has been fixed, or without regard for what they actually showed. I wish technology reporting were more careful, since much of this misunderstanding seems to come from journalistic sources. Examples:
Hands in diffusion models
Text in diffusion models
Water usage
Model collapse - not an issue for actual commercial AI models; the original study was about synthetic data generation, specifically feeding model output back in as the exclusive training data (a toy sketch of that setup follows this list)
LLMs = autocorrect - chat models have RLHF post-training, so they are not raw next-token predictors
Nightshade/Glaze - useless against modern training methods
AI "not being understood" - yes, the learned weights are not interpretable, but the overall architecture is well understood
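To make the model-collapse point concrete, here is a minimal toy simulation (my own sketch, not the original study's setup or any real training pipeline): treating "training" as fitting a Gaussian, the exclusive-feedback loop the papers describe loses diversity over generations, while mixing real data back in keeps the distribution anchored.

    # Toy numeric illustration (my own sketch, not the study's code or any real
    # training pipeline): a "model" here is just a Gaussian fit to its training data.
    import numpy as np

    rng = np.random.default_rng(0)
    real_data = rng.normal(loc=0.0, scale=1.0, size=20)  # stand-in for human-made data

    def fit(data):
        # "Training": estimate mean and std of the data.
        return data.mean(), data.std()

    def sample(model, n):
        # "Generation": draw synthetic data from the fitted model.
        mu, sigma = model
        return rng.normal(mu, sigma, size=n)

    # Regime the model-collapse papers studied: each generation is trained
    # exclusively on the previous generation's output. Diversity (std) tends
    # to shrink toward zero over generations.
    model = fit(real_data)
    for _ in range(200):
        model = fit(sample(model, 20))
    print("synthetic-only training, std after 200 generations:", round(model[1], 3))

    # Closer to how production models are trained: synthetic data is mixed with
    # real data, which keeps anchoring the distribution, so it does not collapse.
    model = fit(real_data)
    for _ in range(200):
        synthetic = sample(model, 20)
        model = fit(np.concatenate([real_data, synthetic]))
    print("mixed real+synthetic training, std after 200 generations:", round(model[1], 3))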
It is surprising how often I hear these repeated with false context.