
Community

Edited by Ruby, jimrandomh, et al. last updated 6th Oct 2022

The LessWrong Community consists of the people who write on LessWrong and contribute to its mission of refining the art of human rationality. This tag covers community events, analysis of the health, norms, and direction of the community, and discussion of how to understand communities in general.

LessWrong also has many sibling communities, such as the Berkeley Rationality Community, SlateStarCodex, Effective Altruism, and AI Alignment, whose members participate here. To see upcoming LessWrong events, go to the community section.


Community Sub-Topics

All

Bounties (active)
Grants & Fundraising
Growth Stories
Online Socialization
Petrov Day
Public Discourse
Research Agendas
Ritual
Solstice Celebration

LessWrong

Events (Community)
Site Meta
GreaterWrong Meta
LessWrong Events
LW Moderation
Meetups (topic)
Moderation (topic)
The SF Bay Area
Tagging

Not all Community posts are tagged with subtopics.


This tag applies to any post about:

  • Specific projects, orgs, and prizes [e.g. 1, 2, 3, 4, 5]
  • Requests and offers for help [1, 2, 3]
  • Announcements, retrospectives, funding requests, and AMAs from orgs [1, 2, 3, 4, 5]
  • Discussions of the orgs in the LessWrong, Rationalist cluster [1, 2]
  • Discussions about the LessWrong, Rationalist, and related communities [1, 2, 3]

While the World Optimization core tag is for posts about how to do good in general, the Community tag is for the specific, concrete efforts of our community to execute its plans.

Posts tagged Community
39 · 2019 AI Alignment Literature Review and Charity Comparison · Larks · 6y · 8 comments
40 · 2018 AI Alignment Literature Review and Charity Comparison · Larks · 7y · 4 comments
28 · Applications for AI Safety Camp 2022 Now Open! · adamShimi · 4y · 0 comments
116 · Alignment Grantmaking is Funding-Limited Right Now · johnswentworth · 2y · 14 comments
58 · Alignment Research Field Guide · abramdemski · 6y · 8 comments
87 · What I would do if I wasn’t at ARC Evals · LawrenceC · 2y · 4 comments
72 · Hiring engineers and researchers to help align GPT-3 · paulfchristiano · 5y · 7 comments
57 · We Are Conjecture, A New Alignment Research Startup · Connor Leahy · 3y · 0 comments
59 · Announcing the Inverse Scaling Prize ($250k Prize Pool) · Ethan Perez, Ian McKenzie, Sam Bowman · 3y · 1 comment
45 · Many arguments for AI x-risk are wrong · TurnTrout · 1y · 47 comments
63 · Prizes for ELK proposals · paulfchristiano · 4y · 69 comments
45 · Refine: An Incubator for Conceptual Alignment Research Bets · adamShimi · 3y · 4 comments
61 · Evaluations (of new AI Safety researchers) can be noisy · LawrenceC · 3y · 5 comments
51 · Call for research on evaluating alignment (funding + advice available) · Beth Barnes · 4y · 10 comments
40 · Announcing Encultured AI: Building a Video Game · Andrew_Critch, Nick Hay · 3y · 2 comments
(Showing 15 of 79 posts tagged Community)