AI ALIGNMENT FORUM
GPT
Written by Ruben Bloom, Multicore, Ben Pace, A_donor, et al.; last updated 27th Aug 2022

GPT (Generative Pretrained Transformer) is a family of large transformer-based language models created by OpenAI. Its ability to generate remarkably human-like text makes it relevant to discussions of AGI.

External links:

- GPT-3 Paper
- GPT-3 Website

Posts tagged GPT (showing 15 of 85)

- 24 · Collection of GPT-3 results (Kaj Sotala, 5y)
- 24 · To what extent is GPT-3 capable of reasoning? [Question] (Alex Turner, Daniel Kokotajlo, 5y)
- 18 · $1000 bounty for OpenAI to show whether GPT3 was "deliberately" pretending to be stupider than it is (Bird Concept, 5y)
- 46 · Alignment As A Bottleneck To Usefulness Of GPT-3 (johnswentworth, 5y)
- 29 · How "honest" is GPT-3? [Question] (Abram Demski, gwern, 5y)
- 61 · larger language models may disappoint you [or, an eternally unfinished draft] (nostalgebraist, 3y)
- 36 · Developmental Stages of GPTs (orthonormal, 5y)
- 33 · Can you get AGI from a Transformer? (Steve Byrnes, 5y)
- 47 · Are we in an AI overhang? (Andy Jones, 5y)
- 5 · Analyzing the Problem GPT-3 is Trying to Solve (Adam Shimi, 5y)
- 78 · interpreting GPT: the logit lens (nostalgebraist, 5y)
- 72 · Hiring engineers and researchers to help align GPT-3 (Paul Christiano, 5y)
- 50 · the scaling "inconsistency": openAI's new insight (nostalgebraist, 5y)
- 61 · How LLMs are and are not myopic (janus, 2y)
- 39 · Extrapolating GPT-N performance (Lukas Finnveden, 4y)