AI ALIGNMENT FORUM

Wikitags

Free Will

Edited by Eliezer Yudkowsky et al., last updated 1st Oct 2020

Free will is one of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is generally considered fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own.

Non-spoiler posts

The following posts set up the problem of "free will" and explain what constitutes a good solution from a reductionist perspective, without entirely giving away the solution. (When Yudkowsky wrote these posts, he thought he could get away with leaving the problem as a practice exercise, and some of the posts state that it will be left open. However, Yudkowsky eventually found that he needed to write out the whole solution.)

  • How An Algorithm Feels From Inside (see also the wiki page of the same name)
  • Dissolving the Question - this is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.
  • Wrong Questions
  • Righting a Wrong Question

For spoiler posts, see Free will (solution).

See also

  • How an algorithm feels from the inside (wiki page)
  • Free will (solution)
Posts tagged Free Will

  • What is the subjective experience of free will for agents? [question] by Gordon Seidoh Worley
  • A critical agential account of free will, causation, and physics by Jessica Taylor