AI ALIGNMENT FORUM

JustinShovelain
Ω13110

I am the co-founder of and researcher at the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last sixteen years I have worked with MIRI, CFAR, EA Global, and Founders Fund, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.

Comments (sorted by newest)

Some background for reasoning about dual-use alignment research
JustinShovelain · 2y · 30

Complementary ideas to this article:

  • The origin of the fuel tank metaphor Raemon refers to in these comments: https://www.lesswrong.com/posts/BfKQGYJBwdHfik4Kd/fai-research-constraints-and-agi-side-effects
  • Extending things further to handle higher order derivatives and putting things within a cohesive space: https://forum.effectivealtruism.org/posts/TCxik4KvTgGzMowP9/state-space-of-x-risk-trajectories
  • A typology for mapping downside risks: https://www.lesswrong.com/posts/RY9XYoqPeMc8W8zbH/mapping-downside-risks-and-information-hazards
  • A set of potential responses for what to do with potentially dangerous developments and a heuristic for triggering that evaluation: https://www.lesswrong.com/posts/6ur8vDX6ApAXrRN3t/information-hazards-why-you-should-care-and-what-you-can-do
  • A general heuristic for what technology to develop and how to distribute it: https://forum.effectivealtruism.org/posts/4oGYbvcy2SRHTWgWk/improving-the-future-by-influencing-actors-benevolence
  • A coherence-focused framework that is more fundamental than the link just above and from which it can be derived: https://www.lesswrong.com/posts/AtwPwD6PBsqfpCsHE/aligning-ai-by-optimizing-for-wisdom

Wikitag Contributions

No wikitag contributions to display.

Posts (sorted by new)

  • Updating Utility Functions (12 points · 3y · 0 comments)
  • Using vector fields to visualise preferences and make them consistent (8 points · 6y · 3 comments)