This is a linkpost for https://twitter.com/xuenay/status/1283312640199196673
I kept seeing all kinds of crazy reports about people's experiences with GPT-3, so I figured that I'd start collecting them.
- First, Gwern's crazy collection of all kinds of prompts, with GPT-3 generating poetry, summarizing stories, rewriting things in different styles, and much, much more. (previous discussion)
- Automatic code generation from natural language descriptions. "Give me a page with a table showing the GDP of different nations, and a red button."
- Building a functioning React app by just describing it to GPT-3.
- Taking a brief technical tweet about GPT-3 and expanding it into an essay that the author of the original tweet mostly endorses.
- Acting as a more intense therapist than ELIZA ever was. [1, 2]
- On the one hand, you can trick GPT-3 into saying nonsense. On the other hand, you can just prompt it to point out the nonsense.
- A Redditor shares an "AI Dungeon" game played with the new GPT-3-based "Dragon Model", involving a cohesive story generated in response to their actions, with only a little manual editing.
- The official Dragon Model announcement.
- I was a little skeptical about some of these GPT-3 results until I tried the Dragon Model myself and had it generate a cohesive space opera with almost no editing.
- Another example of automatically generated code, this time giving GPT-3 a bit of React code defining a component called "ThreeButtonComponent" or "HeaderComponent", and letting it write the rest.
- From a brief description of a medical issue, GPT-3 correctly generates an explanation indicating that it's a case of asthma, mentions a drug used to treat asthma and the type of receptor the drug works on, and identifies which multiple-choice quiz question this corresponds to.
- GPT-3 tries to get a software job, and comes close to passing a phone screen.
- Translating natural language descriptions into shell commands, and vice versa. (A sketch of the kind of few-shot prompt behind this sort of demo appears after this list.)
- Given a prompt with a few lines of dialogue, GPT-3 continues the story, incorporating details such as having a character make 1800s references after it was briefly mentioned that she's a nineteenth-century noblewoman.
- Turning natural language into lawyerese.
- Using GPT-3 to help you with gratitude journaling.
- The source is an anonymous imageboard poster, so this could be fake, but: if you give an AI Dungeon character fake wolf ears and then ask her to explain formal logic to you, she may use the ears in her example.
- Even after seeing all the other results, I honestly have difficulty believing that this one is real.
- Of course, even GPT-3 fumbles sometimes.
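Most of these demos rely on the same basic mechanic: a short few-shot prompt that demonstrates the desired input/output format, which GPT-3 then continues. As a rough illustration of the shell-command translation linked above, here is a minimal sketch; the prompt wording, the example pairs, and the `davinci` engine name are my own assumptions, and the `openai.Completion` call reflects the 2020 beta API rather than whatever the linked demo actually used.

```python
# Minimal few-shot prompting sketch (assumed details, 2020-era OpenAI API).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

FEW_SHOT_PROMPT = """\
Translate English into a shell command.

English: list all files in the current directory, including hidden ones
Command: ls -la

English: show how much free space is left on each mounted filesystem
Command: df -h

English: {request}
Command:"""

def to_shell_command(request: str) -> str:
    """Complete the few-shot prompt and return the model's suggested command."""
    response = openai.Completion.create(
        engine="davinci",   # original GPT-3 beta engine name
        prompt=FEW_SHOT_PROMPT.format(request=request),
        max_tokens=64,
        temperature=0,      # keep the output as deterministic as possible
        stop="\n",          # stop once the single-line command is complete
    )
    return response["choices"][0]["text"].strip()

# e.g. to_shell_command("count the lines in every .py file under this directory")
# Always review a generated command before running it.
```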
The Sequences post you've never read, by GPT-3.
First sampling. Two-shot (two real Sequences articles fed in as context).
Hypothesis: Unlike the language models before it, and ignoring context-length issues, GPT-3's primary limitation is that its output mirrors the distribution it was trained on. Without further intervention, it will write things that are no more coherent than what the average person could put together. By conditioning it on output from smart people, GPT-3 can be switched into a mode where it outputs smart text.
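If this hypothesis is right, the practical upshot is simply prompt construction: prepend text of the quality you want imitated before the actual query. Here is a minimal sketch of that comparison, using a placeholder where the two Sequences articles would go and the same assumed 2020-era API call as in the earlier sketch:

```python
# Sketch of "conditioning on smart text": compare an unconditioned completion
# with one prefixed by high-quality writing. The context text is a placeholder.
import openai

QUESTION = "Why do people so often talk past each other in arguments?\n"

# In the two-shot experiment described above, this would be the full text of
# two real Sequences posts; elided here.
SMART_CONTEXT = "<paste two high-quality essays here>\n\n"

def complete(prompt: str) -> str:
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=200,
        temperature=0.7,
    )
    return response["choices"][0]["text"]

baseline = complete(QUESTION)                     # tends toward the average of the training distribution
conditioned = complete(SMART_CONTEXT + QUESTION)  # nudged toward the style and quality of the context
```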
Has anyone tried to get it to talk itself out of the box yet?
Yup, I saw an attempt on the SSC subreddit.
Thank you! It looks very impressive.
According to Gwern, it fails the Parity Task.
Two of my own: To what extent is GPT-3 capable of reasoning? and GPT-3 Gems.