Open Problems On Less Wrong

This page collects Open Problems on LessWrong. It was last updated in 2012 and is rather out of date; some newer pages and questions have appeared since then [links needed].

This is a bad first draft, and may be drastically revised. But, as a starting vision, I'm hoping these pages will do three things:

  1. Provide a good landing page for newcomers. Let newcomers see what we're up to, why a person might care about it, and which past posts can help them with which of their own questions.
  2. Provide a good review, or summary, of what has been discussed or figured out where, for current community members who'd like a better bird's-eye picture of our project.
  3. Provide a list of "open problems" and "articles someone should write", together with a picture of how progress on those problems would contribute to LW's project, and an index of progress to date. This would let would-be authors see useful avenues to contribute, and let some portion of LW posts visibly contribute to a useful, cumulative project, instead of being like a succession of randomly entertaining newspaper articles that no one much cares about afterwards. (The idea is not that all LW posts should do this, but that it would be nice if some portion did, and if LW readers ended up building some cumulative competencies over time.)

What follows is a rough-draft list of such questions. Each question should eventually link to a page summarizing progress to date (on OB/LW, and to a lesser extent elsewhere) and the open sub-questions on which further progress would be valuable.

Questions about the accuracy of our beliefs:

  1. How accurate are people's current beliefs (e.g., about themselves, about their immediate social environment or careers, and about the larger world)?
    • a. What are the specific biases?
    • b. To what extent do people, and people in various sub-populations, aim for accuracy in their beliefs?
    • c. How accurate are most people's beliefs, overall? How accurate are the beliefs of relevant subpopulations, e.g., scientists, avid LW users, or people who believe they "actually try"?
  2. How can we measure our own rationality (our own tendency to form accurate beliefs across varied domains, controlling for domain knowledge and intelligence), and the rationality of other individuals and groups? (One illustrative approach is sketched after these two lists.)
  3. What practical techniques can improve the accuracy of individuals' beliefs?
  4. What practical techniques can improve the accuracy of groups' beliefs?
  5. Disagreements on Less Wrong.
  6. Foundational questions: How is it that people can form accurate beliefs at all? How would an ideal accurate belief-former form its beliefs? Would such a belief-former use probabilities? What are probabilities? Where do priors come from?

Questions about achieving our goals:

  1. To what extent does forming more accurate beliefs tend to help people achieve happiness, positive social relationships, income, longevity, actually useful philanthropy, or other goals?
  2. It would be nice to have more closure on the "Should Rationalists Win?" question; the latest salvo here is "Extreme Rationality: It's Not That Great", which essentially answered "not given the present state of the art".
  3. What techniques, other than improving the accuracy of our beliefs, can help us achieve important goals?
  4. Foundational questions: What do humans really care about, and what formalisms can help us descriptively or normatively model human concerns? Does it make sense to discuss value as distinct from human preferences? How would an ideal goal-maximizer think? What other foundations do we need to think non-confusedly about the grand unified problem of What To Do?
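
One illustrative (and admittedly partial) way to approach question 2 in the first list, measuring our own rationality, is calibration testing: state probabilities for many claims whose truth will later be known, then score how closely the stated probabilities match the outcomes. The sketch below is only an example of that idea, not a method this page endorses; the data are invented for illustration.

```python
# Minimal calibration-scoring sketch (illustrative only; the data are made up).
# Each entry pairs a stated probability with whether the claim turned out true.
predictions = [
    (0.9, True),
    (0.7, False),
    (0.6, True),
    (0.8, True),
    (0.3, False),
]

# Brier score: mean squared gap between the stated probability and the outcome
# (counted as 1.0 if true, 0.0 if false). 0.0 is a perfect score; always
# answering 0.5 scores 0.25.
brier = sum((p - (1.0 if outcome else 0.0)) ** 2
            for p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```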

Technical Issues

(2010-09-10) The Atom feed on Special:RecentChanges is broken in both Firefox and Akregator. In Firefox, the message is:

XML Parsing Error: undefined entity
Location: jar:file:///usr/lib/firefox-3.6.8/chrome/browser.jar!/content/browser/feeds/subscribe.xhtml
Line Number 22, Column 12:
    <title>&feedPage.title;</title>
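
If one wants to check whether the feed itself is malformed XML (as opposed to a problem in the browser's feed-preview page), it can be fetched and parsed directly. Below is a minimal sketch; the feed URL is an assumption (the standard MediaWiki path for a RecentChanges Atom feed), not something confirmed on this page.

```python
# Minimal sketch: test whether the RecentChanges Atom feed is well-formed XML.
# The URL is an assumed standard MediaWiki path, not confirmed on this page.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://wiki.lesswrong.com/index.php?title=Special:RecentChanges&feed=atom"  # assumed

def check_feed(url: str) -> None:
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    try:
        root = ET.fromstring(data)
        print("Feed parsed OK; root element:", root.tag)
    except ET.ParseError as err:
        # An undefined entity or other well-formedness problem would surface here.
        print("Feed is not well-formed XML:", err)

check_feed(FEED_URL)
```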

See also: a related essay by LessWrong user Yvain, and :Category:Problems for a list of unresolved problems in human rationality.
