Application deadline:
* Main deadline: Midnight, 7th December, Sydney time
* If we have unfilled slots, we may still accept applications until the 14th of December

Location: Sydney (definite); Melbourne (likely; contingent on sufficient high-quality applications)

When: January/February 2026, with remote activities before and after the main fellowship (detail further...
This week for Wise AI Wednesdays, I'll be sharing something a bit different: the announcement post of a competition that has already ended (the AI Impacts essay competition on the Automation of Wisdom and Philosophy). If you're wondering why I'm sharing it even though some of the specific discussion...
This is a list of projects[1] to consider for folks who want to use Wise AI to steer the world towards positive outcomes. Some of these projects are listed because they're impactful. Others are listed because I believe they would be good projects for someone who is just getting started. Please note...
Why did I create both a post and a wiki article? Posts are best for making sure people see the initial version of the content, whilst wiki articles are best for long-term maintenance. Posting an article that links to a wiki page provides the best of both worlds. For the...
How does my summary compare to the original paper? Obviously, the summary is shorter :-). Additionally, I have:

• Streamlined the user journey with collapsible sections
• Communicated some ideas in an easier-to-digest format
• Added commentary and a glossary

Whilst I've tried as hard as possible to...
> We may not know precisely what form wise AI will take—but it must surely be preferable to folly. Read the full, original paper Paper Authors: Samuel G. B. Johnson, Amir-Hossein Karimi, Yoshua Bengio, Nick Chater, Tobias Gerstenberg, Kate Larson, Sydney Levine, Melanie Mitchell, Iyad Rahwan, Bernhard Schölkopf, Igor Grossmann...
Here’s my take on why the distinction between inner and outer alignment is weird/unclear/ambiguous in some circumstances: My understanding is that these terms were originally used when talking about AGI. So outer alignment involves writing down a reward or utility function that captures all of human values, and inner alignment involves...