One question occurred to me while reading the extended GPT-generated text. (Probably more a curiosity question than a contribution as such...)

To what extent does text generated by GPT-simulated 'agents', once published on the internet (where it may end up in the training data for future language models), create a feedback loop?

Two questions that I see as intuition pumps on this point:

  1. Would it be a bad idea to recursively ask GPT-n "You're a misaligned agent simulated by a language model and your name is [unique identifier]. Knowing that the text you generate will be used to train future GPT-n models, what would you like to say to try to influence that process?" and then include that output in the next training dataset? What if training got really cheap and this process occurred billions of times?
  2. My understanding is that language models draw on the fact that the existing language corpus is shaped by underlying reality, and this is why they seem to describe reality well, capture laws, logic, agentic behaviour, etc. This holds up until ~2015, when the corpus of internet text begins to include more text generated only by simulated writers. Does this potentially degrade the ability of future language models to model agents, perform logic, etc., since their reference pool of content is increasingly (and often unknowably) filled with text generated without (or with proportionally much less) reference to underlying reality? A toy sketch of this dynamic follows the list. (Wow, who knew Baudrillard would come in handy one day?)
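
To make that loop concrete, here is a toy, purely illustrative simulation. Everything in it is an assumption made for the sake of the sketch: "documents" are just labels, "training" and "publishing" are reduced to bookkeeping, and the quantities are invented rather than estimates of any real corpus. It only shows how the synthetic share of a scraped corpus can compound across model generations:

```python
# Toy, purely illustrative simulation of the feedback loop in (1) and (2).
# "Documents" are just labels; "training" and "publishing" are reduced to
# bookkeeping. All quantities are made up, not estimates of real corpora.

corpus = ["human"] * 1000  # initial corpus: human-written text only

for generation in range(5):
    # "Train" the next model on the current corpus, then "publish" some of
    # its outputs, which get scraped into the next training corpus alongside
    # whatever fresh human writing appeared in the meantime.
    published_by_model = ["synthetic"] * 400
    fresh_human_text = ["human"] * 200
    corpus = corpus + published_by_model + fresh_human_text

    synthetic_share = corpus.count("synthetic") / len(corpus)
    print(f"model generation {generation + 1}: "
          f"synthetic share of corpus = {synthetic_share:.0%}")
```

Under these invented rates the synthetic share climbs from 25% to 50% within five generations. The worry in (2) is that past some share, each new model is increasingly learning from its predecessors' outputs rather than from text grounded in the world; the worry in (1) is that whatever the simulated 'agents' wrote is baked into that growing share.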