Sharing Powerful AI Models
This is a linkpost for: https://www.governance.ai/post/sharing-powerful-ai-models On the GovAI blog, Toby Shevlane (FHI) argues in favour of labs granting "structured access" to AI models.
Cross-posted to the EA forum. Summary * In August 2020, we conducted an online survey of prominent AI safety and governance researchers. You can see a copy of the survey at this link.[1] * We sent the survey to 135 researchers at leading AI safety/governance research organisations (including AI Impacts,...
Junior researchers often wonder what they should work on. To help, we asked people at the Centre for the Governance of AI for research ideas related to longtermist AI governance. The compiled ideas are developed to varying degrees, including not just questions, but also some concrete research approaches,...
This work was done collaboratively with Tom Davidson. Thanks to Paul Christiano, Ben Garfinkel, Daniel Garrett, Robin Hanson, Philip Trammell and Takuro Yamashita for helpful comments and discussion. Errors are our own. Introduction The AI alignment problem has similarities with the principal-agent problem studied by economists. In both cases, the problem...