Archetypal Transfer Learning

Archetypal Transfer Learning (ATL) is a proposal by @whitehatStoic for what is argued by the author to be a fine tuning approach that "uses archetypal data" to "embed Synthetic Archetypes". These Synthetic Archetypes are derived from patterns that models assimilate from archetypal data, such as artificial stories. The method yielded a shutdown activation rate of 57.3333% % in the GPT-2-XL model after fine-tuning. 

The team, consisting of @MiguelDev, @marc/er, @Abhay Chowdhry is working to improve the build to a 100%. 

Related Tags: Corrigibility, Inner Alignment, Outer Alignment 

The team, consisting of @MiguelDev, @marc/er, @Abhay Chowdhry, and Mazianni is working to improve the build to a 100%. 

Archetypal Transfer Learning (ATL) is a proposal by @whitehatStoic for what is argued by the author to be a fine tuning approach that "uses archetypal data" to "embed Synthetic Archetypes". These Synthetic Archetypes are derived from patterns that models assimilate from archetypal data, such as artificial stories. The method yielded a shutdown activation rate of 50.67%57.33% in the GPT-2-XL model after fine-tuning. 

The team, consisting of @MiguelDev, @marc/er, @Abhay Chowdhry, Mazianni and @Linda LinseforsMazianni is working to improve the build to a 100%. 

Archetypal Transfer Learning (ATL) is a proposal by @whitehatStoic for what is argued by the author to be a fine tuning approach that "uses archetypal data" to "embed Synthetic Archetypes". These Synthetic Archetypes are derived from patterns that models assimilate from archetypal data, such as artificial stories. The method yielded a shutdown activation rate of 29.33%50.67% allowing GPT2-xl.in the GPT-2-XL model after fine-tuning. 

Archetypal Transfer Learning (ATL) is a proposal by @whitehatStoic for what is argued by the author to be a fine tuning approach that "uses archetypal data" to "embed Synthetic Archetypes". These Synthetic Archetypes are derived from patterns that models assimilate from archetypal data, such as artificial stories. The method yielded a shutdown activation rate of 38.6%29.33% allowing GPT2-medium to shutdown itself 386 times in 1,000 tries in the event its intelligence exceeded that of humans.xl. 

The team, consisting of @MiguelDev, @marc/er, @Abhay Chowdhry, Mazianni and @Linda Linsefors is working to improve the build to a 100%. The project proposal is found here.