AI ALIGNMENT FORUM

Jessica Rumbelow

AI researcher

Comments

SolidGoldMagikarp (plus, prompt generation)
Jessica Rumbelow · 2y

This OpenAI help page (https://help.openai.com/en/articles/6824809-embeddings-frequently-asked-questions) says that token embeddings are normalised to length 1, but a quick inspection of the embeddings available through the Hugging Face model shows this isn't the case. I think that's the extent of our claim. For prompt generation, we normalise the embeddings ourselves and constrain the search to that space, which results in better performance.
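
A minimal sketch of this check, assuming the standard Hugging Face "gpt2" checkpoint (the specific model, and the normalisation step at the end, are illustrative; this is not the authors' actual prompt-generation code):

```python
# Minimal sketch (illustrative, not the authors' pipeline): check whether
# GPT-2's token embeddings are unit-norm. Assumes the standard Hugging Face
# "gpt2" checkpoint; the models discussed in the post may differ.
import torch.nn.functional as F
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
emb = model.get_input_embeddings().weight.detach()  # (vocab_size, d_model)

# 1. Inspect the raw norms. If the embeddings were normalised to length 1,
#    min/max/mean would all be ~1.0; in practice they vary.
norms = emb.norm(dim=-1)
print(f"min {norms.min().item():.3f}  "
      f"max {norms.max().item():.3f}  "
      f"mean {norms.mean().item():.3f}")

# 2. Normalise the embeddings ourselves, so a prompt-generation search can
#    be constrained to the unit hypersphere, as described in the comment.
emb_normed = F.normalize(emb, dim=-1)  # every row now has length 1
```

F.normalize projects each row onto the unit sphere, which gives the constrained search space the comment describes.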

Posts

SolidGoldMagikarp III: Glitch token archaeology · 20 karma · 2y · 3 comments
SolidGoldMagikarp II: technical details and more recent findings · 25 karma · 2y · 0 comments
SolidGoldMagikarp (plus, prompt generation) · 138 karma · 2y · 17 comments