This reads like a strawman perspective on the concept of 'effective altruism.'
The 'effective' part in the term is doing a lot of the heavy lifting when it comes to the definition. It seems reasonable to focus on actually 'effective' actions with good outcomes, rather than comparatively ineffective virtue signalling or martyrdom.
Becoming rich and then helping millions of people with your means is more impactful than being poor and helping dozens. Both should be admired and encouraged, but one is clearly more 'effective' than the other.
>Becoming rich and then helping millions of people with your means is more impactful than being poor and helping dozens.
It's also more self-serving for the better part of it (since your focus is on becoming rich), and you might never reach the second part (actually becoming rich and helping others).
It's also bogus: not many can become rich, both for reasons of circumstance, which make it much harder to nigh impossible for billions of people, and for purely logical reasons: wealth is relative, and if much bigger numbers of people were rich, the gains would be lost to inflation.
So, for most people, this effectively amounts to a pie-in-the-sky idea of what to become and when to help, as opposed to encouraging them to help dozens (or just a few) here and now.
Sure. So it's possible to focus on getting rich and then never give back. There's a solution to that, which is continuing to give a percentage of your income.
Some people are quite charitable and give 10%. I'm a little more selfish and give 1.5%, but this is already currently helping people.
Is it really pie-in-the-sky idealism if (statistically) I've saved around 6 lives so far (and helped many more)? I'm still early in my career, with lots of time to go. I hope to cross 100 someday. But I started just as I graduated and have kept it up every year.
So I suppose the next question in my rebuttal would be to ask what alternative you suggest?
Who cares if it's more self-serving, as long as they do get to the second part? There is such a thing as a win-win outcome.
"It's also bogus: not many can become rich"
No need to become rich. You can have a big impact and save many lives on a dev salary. EA isn't premised on becoming a billionaire, as you seem to incorrectly believe.
But that assumes that
1. A significant portion of the people who want to get rich to help people actually get rich
2. Once they are rich they actually help
3. The process of getting rich doesn't make others poorer. (For example: does Bezos donating some portion of his wealth do more good than him actually paying people more? The same question goes for the other part-owners of Amazon.)
All these points are far from obviously true. In fact I can come up with many examples where they obviously are not.
Regarding (1) - Becoming rich isn't part of the premise of EA. Getting a $100k job is often sufficient to have extra cash that can save real lives.
Regarding (2) - There are examples of successful EA in action. See the FTX founder.
Regarding (3) - Getting rich can't make people poorer (on net) unless you're exploiting negative externalities, which we would both agree need to be addressed by government. It can transfer wealth from A to B (e.g., by deprecating one industry and replacing it with another), for sure, but I don't see that as automatically bad.
I don't argue against EA in general in the comment (I do have tons of reservations about it, but no time to write about them here atm). I'm responding to the grandparent's comment, specifically the part I explicitly quoted -- which does mention becoming rich.
That said:
>Who cares if it's more self-serving, as long as they do get to the second part?
People like me, who don't consider whether it's altruistic or not to be orthogonal to the success of the second part.
Are you using 'orthogonal' to mean "unrelated to", or do you mean "in opposition to"?
Because depending on which meaning you intend, your final sentence can be read in two very different ways. If it's the first, I'm not completely sure you and fighterpilot are necessarily disagreeing? If it's the second meaning, then it's a somewhat bold claim to say that personal benefit in any form invalidates altruism.
>People like me, who don't consider whether it's altruistic or not to be orthogonal to the success of the second part.
>Are you using 'orthogonal' to mean "unrelated to", or do you mean "in opposition to"?
I'm using it to mean "unrelated to" (the canonical CS meaning of "orthogonal" as well).
But, and here's the catch, note that I used a double negative in my phrasing. So I'm not saying they are orthogonal, I'm saying they're not orthogonal. That is, I don't consider whether it's altruistic or not to be orthogonal to the success of the second part (the second part being the "helping" we were discussing).
So it's not that "personal benefit in any form invalidates altruism", it's that a lack of altruism works against effectively helping people.
>It's also bogus: not many can become rich, both for reasons of circumstance, which make it much harder to nigh impossible for billions of people, and for purely logical reasons: wealth is relative, and if much bigger numbers of people were rich, the gains would be lost to inflation.
Yes, uninvested wealth (cash and cash equivalents) is zero-sum, and that is why we have inflation, but invested wealth can be positive-sum.
If you use your uninvested wealth to invest in a poor country, then you end up doing more good than harm.
Not to mention that becoming rich usually requires participating in some kind of scheme that transfers money from a vast number of people poorer than you into your pocket.
Not necessarily; most wealth isn't zero-sum. Everyone benefits from a new miracle drug or a new industrial technique, both those who become rich from it and those who don't.
The modern world compared to 1700 is a great example of how wealth can (but doesn't always, see Saudi Arabia) benefit all.
Sure. Sometimes the scheme is introducing a new miracle tech to the masses of poor people, and it benefits them. However, some additional money beyond the cost of the tech is taken from them in return.
Other times the scheme is just cheating people, or restricting their access to a hoarded, limited resource.
The miracle drug is a good example. Sure, everyone benefits if it is not expensive. Take the Covid vaccines, for example: there are many places (e.g. in India) that could produce vaccines, but the companies (and the Western countries) do not want to open up the patents, even though we would all benefit (even those of us in the 1st world), because eliminating covid as quickly as possible would reduce the chance of mutations.
A lot of people believe Q-anon. This is such a shitty, drive-by way of asserting something. Do you believe it? If so, then make the case, but the way you've dropped this steamy innuendo here without even taking a position on it yourself contributes less than nothing to the discussion.
> MS. GATES: Well, we, along with many other partners, supported Oxford in saying what really needs to have happened there, they had incredible science, but they had never brought a vaccine to market. And so, they needed to partner with a pharmaceutical company who had expertise bringing a vaccine to market. And so, it was us and many partners that said that's a partnership that ought to happen. Ultimately, Oxford made that decision.
I can't imagine why. Maybe because Melinda Gates said it herself that they advised Oxford to find a strong commercial partner for their vaccine instead of not patenting it?
Sure, but it doesn't fit into the traditional framework of "effective altruism". Collective efforts-based altruism is just very different in so many ways.
Why not? A collective, efforts-based, altruistic organization needs funding. If such an organization could demonstrate that their efforts lead to good outcomes (improve happiness, save lives), then effective altruists would donate to them.
A legitimate argument is that not everyone can make tons of money and donate because someone has to do the work. But there are plenty of "effective" organizations that are not yet overfunded.
The thing is that money without involvement is poison to collective movements beyond a certain point. Money and involvement together are much better.
If all you want is to distribute, then sure. Collective movements aren't even the best at straight distribution, so someone trying to maximize the marginal utility of their dollar probably won't even donate. But collective action is generally not just about distribution; it's more about fixing the structure of society to make distribution from rich donors unnecessary.
I agree that both money and involvement are necessary. But in our current world, nonprofits lack money, not involvement. Nonprofits talk all the time about how difficult it is to fundraise. And all the charities GiveWell recommends still lack capital.
I think EAs would 100% want to fix the structure of society, if that method were resource-efficient. If you believe that changing the structure of society is more resource-efficient (in terms of time or money) than donating to AMF, GiveDirectly, or Deworm the World, please publish your analysis.
How much {money, time, etc.} would it take to convince a government or people to adopt a certain policy? What would the benefits of that policy be? How much pushback would you get from opponents? What are the risks? If you can successfully make an argument that changing a policy would be more resource efficient than current efficient charities, that would convince EAs to direct more resources to politics.
Any time humans get together to solve a problem, band together to form a charity, coordinate over a social network to send out PPE, etc., they are not simply allocating optimally at the margins; they are aggregating resources and spending them toward a directed goal more efficiently. These are all examples of altruism. If you narrowly scope altruism to "marginal donation optimization", then yes, effective altruism is indeed a fairly trivially optimal way to allocate these resources.
> As with Dada, altruism doesn't survive intellectual evaluation.
Sure and I'm not as interested in discussing the philosophical ideal of altruism. My interest in charitable work, as I suspect many interested in EA feel, stems from trying to do the greatest good with my limited allocation of resources, be that money, time, knowledge, manual labor, or otherwise. In that regard I'm unconcerned about the usual moral philosophical questions about motive and goodness.
Neither of these requirements makes sense. First, with activism, being known pretty often costs you. And with the second: you are not altruistic if you don't die?
(Plus, people who helped the right cause and did not die have done more good than those who died for a bad cause. The ultimate sacrifice for something bad does not make you better.)
> Anything less is open to the usual questions of motive.
But then the focus is on "if someone theoretically learned about me existing, do I leave room for that person to attribute some intentions to me?" And the answer to that should be "who cares".
A single poor person may be able to have a large impact on ~dozens of people, but even a small impact on millions of people is likely more valuable and 'altruistic' in terms of effect.
But again, both paths should be encouraged. There will always be relatively few 'rich' people with the means to help millions.
> Becoming rich and then helping millions of people with your means is more impactful than being poor and helping dozens.
Is it though? E.g. Bill Gates got insanely rich because his company engaged in anti-competitive behavior, thus causing lots of harm. You can easily make the point that often the damage caused to others is larger than the gain for the individual, so even if they spent 100% of their wealth, they couldn't make up for the damage they have caused.
In that case, maybe it's better to look at medicine and the Hippocratic Oath: first, do no harm.
When a hedge fund analyst meddles with the food supply in Ethiopia and makes a killing while a few thousand starve, but then builds a school to educate a hundred kids in Ethiopia... how much good has he done?
But the "effective" part means more than just that. From their site:
> It is a research field which uses high-quality evidence and careful reasoning to work out how to help others as much as possible.
This assumes that reasoning and evidence are sufficient to determine what will be most effective. Compare this to the Buddhists, for example, who believe that the networks of causality are so complex and intricate that trying to make such determinations (at least on any humanly reasonable scale) is doomed, both because it misses a lot of the picture and because it incorporates many faulty assumptions.
In other words, what is ultimately most effective at ending suffering may not fit into the metaphysical framework that EA implicitly adopts.
This is a common philosophical problem. EA fits in a philosophical framework broadly known as utilitarianism (which is also popular among rationalists, a group known to be fans of EA). Buddhism's ideas of moral philosophy often lie in a mixture of what is commonly known as Divine Will Ethics and Virtue Ethics. Utilitarianism is quite popular in modern STEM discourse, but there are moral alternatives out there that have been sufficiently explored. The idea that decision making may just be too complex to be effectively optimized is one of utilitarianism's long-standing critiques (e.g., in the trolley problem, what happens if you save the 3 people over the 1 person, but one of those 3 people ends up becoming (or is) a dictator who orchestrates a genocide?)
The trolley problem seems pretty straightforward under utilitarianism once you add probability to it; I've never found the "what if one of the 3/4/5 could have been Hitler?" argument particularly convincing. The dictator assumption would apply to every human born, and we as a species clearly don't punish birth accordingly, which we would if the dictator risk were weighted that heavily.
Axiom 1: Human life has value, saving a life is thus positive utility.
Assumption 1: We know nothing of the people tied to the trolley -> All humans are probabilistically equal.
Thus, the utilitarian presses the button and saves 3 at the cost of 1.
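As a rough sketch in Python (the numbers are invented, just to make the structure of the argument explicit):

    # Made-up numbers: v = utility of one saved life, p = prior that any
    # given person is a future genocidal dictator, d = disutility of that
    # genocide, in the same units.
    v = 1.0    # utility of saving one life
    p = 1e-9   # prior probability of "hitlerism" for a random person
    d = 1e7    # disutility of the genocide they would cause

    ev_press = 3 * (v - p * d)     # expected utility of pressing the button
    ev_dont_press = 1 * (v - p * d)  # expected utility of doing nothing

    print(ev_press > ev_dont_press)  # True

Because the dictator term (p * d) applies equally to every person under Assumption 1, it scales both sides the same way, and the comparison reduces to 3 > 1 whenever v - p * d stays positive.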
I'm not sure what probabilities have to do with anything here. It sounds like you're applying a prior that each individual on each side has some "badness" that is uniformly distributed. In effect, a uniform distribution is probably one of the weakest priors and leads to some of the highest-variance results (save perhaps the Jeffreys prior). I would go so far as to say that assuming a uniform probability of "badness" is probably a bad model of how humans actually discriminate between other humans.
Probability is the reflection of the state of your knowledge. If you know someone might become the next Hitler, but you don't know who that might be, or if they're even on the tracks right now, then this is reflected by assigning the same probability of "hitlerism" to all the people on tracks.
And even if you had some suspicions, once you honestly factor in your certainty about the predictive value of those suspicions, it multiplies down to nothing - so in practice, a uniform prior is a pretty good choice.
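To make "multiplies down to nothing" concrete, here's a back-of-the-envelope Python sketch (all numbers invented for illustration; for rare events, posterior ≈ prior × likelihood ratio):

    base_rate = 1e-9       # prior that a random person is "the next Hitler"
    likelihood_ratio = 10  # how strongly your suspicious observation favors that
    confidence = 0.1       # how much you trust the suspicion itself

    # Mixture: with probability `confidence` the suspicion is informative,
    # otherwise you fall back to the base rate.
    posterior = confidence * (base_rate * likelihood_ratio) \
        + (1 - confidence) * base_rate
    print(posterior)  # ~1.9e-9, still vanishingly small

Even a 10x suspicion held at 10% confidence barely moves a one-in-a-billion prior, so it's swamped by the certainty of the deaths on the tracks.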
Yes, it's true that all this stuff is not computable in practice, that it's too complex - but that's not really an obstacle! Everything in life is like that. When you're cooking and a recipe calls for 200g of an ingredient, you're fine with anything between 180g and 220g, and don't care how much of it is water. A restaurant may need to hit 200g ± 5g. A chemical plant may want 200g ± 0.5g, after first demoisturizing. Nobody tries to hit 200g to ± 1 dalton, because that would be ridiculously complex to do, and would require controlling for phenomena that most people don't even know exist.
We've been trading precision for compute time ever since humans first figured out that things can be measured. The solution is what it's always been: calculating with heuristics and approximations, while keeping track of the bounds of those heuristics and approximations.
> And even if you had some suspicions, once you honestly factor in your certainty about the predictive value of those suspicions, it multiplies down to nothing - so in practice, a uniform prior is a pretty good choice.
Does it? You seem to be marginalizing across the entire population, but it's eminently obvious that most humans do _not_ see a human and erase all aspects of their being and then consider them to be one human in a uniformly distributed pool. Humans make snap judgements based on hundreds of subconscious factors all the time. Most humans look at a person and immediately bucket them into a cohort using some mix of physical attributes and prior beliefs of these attributes. Once we're in a cohort, relative probabilities can change. In this particular example, what happens if one of the 3 happens to be wearing military fatigues? What if they have a well-known authoritarian political symbol on their clothing?
Anyway, lots of intelligent philosophers have argued against utilitarianism and have brought up flaws in utilitarianism, such as Utility Monsters; and then there's the issue of even being able to impose a utility-based ordering between disparate things such as "tasty candy" and "rocks underneath my feet". Feel free to read those on the internet.
> Most humans look at a person and immediately bucket them into a cohort using some mix of physical attributes and prior beliefs of these attributes. Once we're in a cohort, relative probabilities can change.
That indeed happens, but going by the gut isn't a bullet-proof moral philosophy. Snap judgements are something we can evaluate and consciously calibrate when we're not in the heat of the moment.
> lots of intelligent philosophers have argued against utilitarianism and have brought up flaws in utilitarianism
Yeah, I read some of those, many are really solid objections. I'm not arguing that utilitarianism is the be-all, end-all moral philosophy - just that it's a surprisingly good heuristic that can be pushed quite far before it starts to go "off the rails". And primarily, I'm arguing that working with approximations and heuristics is something we know how to do - not being able to compute a perfectly precise answer isn't a valid argument against a method.
> (e.g., in the trolley problem, what happens if you save the 3 people over the 1 person, but one of those 3 people ends up becoming (or is) a dictator who orchestrates a genocide?)
Presumably by this logic, you'd have no opinion on whether a midwife should save a newborn baby's life, as it's unknowable whether the baby will one day orchestrate a genocide?
Well this gets into the absurdity of relative utility of goodness altogether. If I'm comparing a child's happiness from eating a candy to the unhappiness of stepping on a pebble, what are the relative magnitudes?
Regardless, there are lots of criticisms of utilitarianism, and utilitarianism is far from the ascendant moral philosophy, so feel free to find others online.
I'll bite. Probability comes into play. The vast majority of babies do not go on to orchestrate a genocide, but instead live lives worth living, and in turn contribute in some way to shared human progress and wellbeing. So the expected value of the baby's life is positive, and the baby should be saved.
The very low chance of a future genocide does not weigh highly in this decision.
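Putting rough numbers on it (all invented, just to show the shape of the expected-value argument):

    p_genocide = 1e-9  # chance this particular baby becomes a genocidal dictator
    u_genocide = -1e7  # utility if they do (millions of lives lost)
    u_normal = 1.0     # utility of an ordinary life worth living

    ev = p_genocide * u_genocide + (1 - p_genocide) * u_normal
    print(ev > 0)  # True: ev is about 0.99, so save the baby

The genocide term contributes only about -0.01 here, which is exactly the "does not weigh highly" point above.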
Thanks, but elsewhere Karrot_Kream says "I'm not sure what probabilities have to do with anything here" so I'm really hoping to hear their non-probability-based thinking.
Why the fuck are you worrying about "metaphysical frameworks"? The things EA mostly focuses on are people dying of easily preventable starvation and disease. There is no need to bring up highly abstract ideas about "metaphysics" (whatever you mean by that - morality isn't generally considered to be in the field of metaphysics) when the suffering is so blatantly concrete.
At the EA conference I attended, one of the talks that got people excited was about false vacuum decay of the universe (which would end all life everywhere), and why QFT research was therefore the most effective cause. Even if that's not their major focus now, it fit nicely into their framework.