Peter Singer - https://newint.org/features/1997/04/05/peter-singer-drowning-child-new-internationalist
> One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
> I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
> Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself?[[1]](#fncz1wzc3buf9)
It is not clear why, under many moral systems, we should care more about people in our own country than about those elsewhere. Yet people in developing nations can often be helped roughly 100x more cheaply than those in the US.
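The "roughly 100x" claim is a back-of-the-envelope ratio of costs per life saved. A minimal sketch, using purely illustrative figures I have assumed here (not actual charity cost-effectiveness estimates):

```python
# Back-of-the-envelope for the "~100x cheaper" claim.
# Both cost figures below are assumed for illustration only.

cost_per_life_developing = 5_000    # assumed, e.g. a cheap health intervention
cost_per_life_us = 500_000          # assumed domestic intervention cost

multiplier = cost_per_life_us / cost_per_life_developing
print(f"Same budget helps ~{multiplier:.0f}x more people abroad")  # ~100x

# The same point in absolute terms: a fixed budget saves far more lives
# where each life is cheaper to save.
budget = 1_000_000
lives_abroad = budget / cost_per_life_developing   # 200 lives
lives_domestic = budget / cost_per_life_us         # 2 lives
```

The ratio, not the specific numbers, carries the argument: whatever the true figures, a large cost gap means geography dominates the impact of a donation.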
On a deeper level, EAs argue that species membership is not the marker of moral worth. If we had evolved from dolphins rather than apes, would we be less deserving of moral consideration? If this logic holds, it implies significant low-cost opportunities to improve welfare.
If many unrelated factors all point towards the same action, beware that you may be engaging in motivated reasoning[[2]](#fnz97ytpiywqf).
Historical EA funding data: https://forum.effectivealtruism.org/posts/ZbaDmowkXbTBsxvHn/historical-ea-funding-data (spreadsheet: https://docs.google.com/spreadsheets/d/1IeO7NIgZ-qfSTDyiAFSgH6dMn1xzb6hB2pVSdlBJZ88/edit#gid=771773474)
[how much lower or higher is the risk of existential catastrophe as a result?][[3]](#fnqlydrfpz78)
The winners of the EA Criticism and Red Teaming Contest: https://forum.effectivealtruism.org/posts/YgbpxJmEdFhFGpqci/winners-of-the-ea-criticism-and-red-teaming-contest
**[^](#fnrefcz1wzc3buf9)**
**[^](#fnrefz97ytpiywqf)**
https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence
**[^](#fnrefqlydrfpz78)**
Because existential risk matters so much compared to everything else, if there is some chance that EA has made it slightly worse, EA could be a net-negative enterprise overall.
**[^](#fnrefipt32j7op0s)**
https://twitter.com/KerryLVaughan/status/1545063368695898112?s=20&t=xgaSuh22V6y44Wkcebo22Q
**[^](#fnref8bu6gauz733)**
https://twitter.com/xriskology/status/1579832304503259136?s=20&t=e8IFDZuxC5gLO2vdCldwyg
The Scale, Neglectedness, Tractability (and Personal Fit) criteria.
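The Scale/Neglectedness/Tractability criteria are usually combined multiplicatively, with units chosen so that the product gives "good done per extra unit of resources". A toy sketch, with all scores invented for illustration:

```python
# Toy sketch of the Scale / Tractability / Neglectedness framework.
# Units chain so the product gives good done per extra dollar:
#   scale:         good done / % of problem solved
#   tractability:  % of problem solved / % increase in resources
#   neglectedness: % increase in resources / extra dollar
# All numbers below are invented for illustration, not real estimates.

def marginal_value(scale, tractability, neglectedness):
    """Good done per extra dollar spent on the problem."""
    return scale * tractability * neglectedness

# Hypothetical problems with made-up scores.
problems = {
    "Problem A": marginal_value(scale=100, tractability=0.1, neglectedness=0.01),
    "Problem B": marginal_value(scale=10, tractability=0.5, neglectedness=0.5),
}

# Rank problems by marginal value per extra dollar.
for name, value in sorted(problems.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.2f} units of good per extra dollar")
```

Note how neglectedness can dominate: the smaller, more neglected problem comes out ahead of the larger, crowded one. Personal fit is usually applied afterwards as a per-person multiplier on this score.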