Well, if you work for Facebook, ethics is not your priority in the first place.
On the other hand, if I ever decided to throw mine in the toilet and work for a company that manipulates people and performs mass spying on them, then I would go all the way and just do things like this.
You misunderstood. I gave other examples of good things.
To elaborate, I said “work on removing the negatives”. If you won’t work at FB, others will. But when you work there, you actually have the power to change the place so that they do less of the stuff that you described.
I wouldn’t work for Facebook because I don’t hate myself enough. But I also don’t clutch my pearls and think that any for-profit company is feeding starving children.
You can make a negative case for any of the large tech companies.
> You can make a negative case for any of the large tech companies.
Yes you can, but it's a spectrum. Microsoft abusing its monopolistic position to strangle competition and adding adware and spyware to an OS people pay for is ethically and morally bad, but it is much less bad than Facebook profiting from, and doing nothing to stop, an actual active genocide. Facebook is by far the worst offender, ethically and morally, of the big tech companies not actively involved in the military-industrial complex.
> The invisible hand is a metaphor used by the Scottish moral philosopher Adam Smith that describes the unintended greater social impacts brought about by individuals acting in their own self-interests. Smith originally mentioned the term in his work Theory of Moral Sentiments in 1759, but it has actually become known from his main work The Wealth of Nations, where the phrase is mentioned only once, in connection with import restrictions.
Can FB even do anything meaningful against "an active genocide"?
It's important to assign some responsibility to those who have/had the ability to prevent it, but it's not clear to me that FB could have prevented it.
In close races like the US presidential election it's plausible that FB was the deciding factor, but places where genocides happen are not known for close races.
And to be clear, it's not like FB couldn't have done a lot more, for a lot less money, to filter out certain kinds of content, but they intentionally wanted some kind of neutrality, right? And that's probably noble (and the maximally harmless thing) when it comes to - let's say - Norwegian politics, but not so great in a lot of other contexts. However, engaging in moderation with the intent to prevent social ills is very non-trivial.
To me it looks like they realized they were in a hard place, basically bailed on the hard problem, and instead went all in on trying to make as much money as they can.
The question is not “how could a social network like Facebook have done any better given small tweaks to content moderation?” It’s “is Facebook fundamentally designed to create these kinds of misinformation bubbles?”
That second question is much more painful to ask if you’re an engineer at a company like this, because it means there’s no changing it from within.
But it's also false, with a likelihood of at least 99%.
Facebook is fundamentally designed to be a social graph, a representation of people's lives, blablabla. The more time people spend representing their lives on FB, the more attached they are to their online persona/profile/connections (and to the dopamine rewards from likes and engagement), and the more money it makes.
Misinformation bubbles? Who cares. If that is what people want to represent, FB provides it, sure, but it's not what it is designed for.