AI ALIGNMENT FORUM

Community

Edited by Ruby and jimrandomh; last updated 6th Oct 2022

The LessWrong Community is the people who write on and contribute to the mission of LessWrong: refining the art of human rationality. This tag covers community events, analysis of the quality, direction, and norms of the community, and discussion of communities in general and how to keep the rationality community healthy.

LessWrong also has many sibling communities, such as the Berkeley Rationality Community, SlateStarCodex, Effective Altruism, and AI Alignment, whose members participate here. To see upcoming LessWrong events, go to the community section.

This tag applies to any post about:

  • Specific projects, orgs, and prizes [e.g. 1, 2, 3, 4, 5]
  • Requests and offers for help [1, 2, 3]
  • Announcements, retrospectives, funding requests, and AMAs from orgs [1, 2, 3, 4, 5]
  • Discussions of orgs in the LessWrong/Rationalist cluster [1, 2]
  • Discussions about the LessWrong, Rationalist, and related communities [1, 2, 3]

The Community tag does not apply to Event posts, which are not taggable in any case.

Community Sub-Topics

  • Research Agendas
  • Solstice Celebration
  • Petrov Day
  • Ritual

While the World Optimization core tag is for posts discussing how to do good in general, the Community tag is for the specific, concrete efforts of our community to execute plans.

Posts tagged Community
  • 2019 AI Alignment Literature Review and Charity Comparison · Larks · 6y · 39 karma · 8 comments
  • 2018 AI Alignment Literature Review and Charity Comparison · Larks · 7y · 40 karma · 4 comments
  • Applications for AI Safety Camp 2022 Now Open! · adamShimi · 4y · 28 karma · 0 comments
  • Alignment Grantmaking is Funding-Limited Right Now · johnswentworth · 2y · 116 karma · 14 comments
  • Alignment Research Field Guide · abramdemski · 6y · 58 karma · 8 comments
  • What I would do if I wasn’t at ARC Evals · LawrenceC · 2y · 87 karma · 4 comments
  • Hiring engineers and researchers to help align GPT-3 · paulfchristiano · 5y · 72 karma · 7 comments
  • We Are Conjecture, A New Alignment Research Startup · Connor Leahy · 3y · 57 karma · 0 comments
  • Announcing the Inverse Scaling Prize ($250k Prize Pool) · Ethan Perez, Ian McKenzie, Sam Bowman · 3y · 59 karma · 1 comment
  • Many arguments for AI x-risk are wrong · TurnTrout · 2y · 45 karma · 47 comments
  • Prizes for ELK proposals · paulfchristiano · 4y · 63 karma · 69 comments
  • Refine: An Incubator for Conceptual Alignment Research Bets · adamShimi · 3y · 45 karma · 4 comments
  • Evaluations (of new AI Safety researchers) can be noisy · LawrenceC · 3y · 61 karma · 5 comments
  • Call for research on evaluating alignment (funding + advice available) · Beth Barnes · 4y · 51 karma · 10 comments
  • Announcing Encultured AI: Building a Video Game · Andrew_Critch, Nick Hay · 3y · 40 karma · 2 comments
(Showing 15 of 79 tagged posts.)