
Here's how I imagine it was introduced:

Engineer A: Hey, how would we know how loading this image as a .webp vs. a .jpg performs? Our lab testing is one thing, but the hardware out there vastly differs from phone to phone -- we need some data on whether the average phone has enough optimizations for one versus the other.

Engineer B: Oh, what if we just run a background task on some of the phones that tries loading an image that we don't show, just to get the metrics?

Engineer A: Hmm, that could work. We could load just one image across some small % of the user base to get a representative sample, report that data to system X so we can get some statistics, and build heuristics off of that.
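The "small % of the user base" step in that imagined plan is usually done with deterministic bucketing. As a hedged sketch (all names hypothetical, Python for brevity): hash the user id together with the experiment name, and enrol the user if the hash falls below the sampling fraction.

```python
import hashlib

def in_experiment(user_id: str, experiment: str, sample_pct: float) -> bool:
    """Deterministically bucket a user into an experiment.

    Hashing (experiment, user_id) yields a stable, roughly uniform
    value in [0, 1]; users whose value falls below sample_pct are in.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < sample_pct

# Roughly 1% of users land in the hypothetical image-decode experiment.
enrolled = sum(
    in_experiment(f"user{i}", "webp_vs_jpg_decode", 0.01)
    for i in range(100_000)
)
```

The same user always gets the same answer, so the cohort stays stable across sessions without any server-side state.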

Ethics is not a strong suit of -- or required reading for -- software engineers, so not every engineer will read the above and detect something wrong. Hell, running experiments on wide swaths of users without consent isn't quite above board ethically either, but we've got a whole technology tree and industry dedicated to it.

Most of the time the person with enough ethics knowledge to realize there's a problem here is more incentivized to alert legal (so they can add it to the EULA/ToS that users don't read) rather than stop the behavior.

I suspect many engineers could read the conversation above and not think there was anything wrong there to begin with. And of course, once X teams across Y companies start using this, we've got a problem.

Note that I use engineers for all the roles on purpose -- the idea that it's only non-engineers doing all the sketchy shit is a deflection of responsibility. I for one am pretty proud of George here -- he's turned down what is very likely a lot of money to do the right thing. We can only guess at how many do not take this route.

[EDIT] I want to note that I do not put myself above this conversation -- being able to recognize that this is wrong immediately is not some innate skill that everyone has, it has to be reasoned about, and often where to draw the line comes down to widely-enough-held social mores/morality.

For example, I run ethical ads on my personal blog -- I know that ads incur unnecessary load on the machines of users who visit, in turn burning unnecessary battery on visitors' machines. Am I the same as Facebook's engineers who work on this system? Probably not, but explaining that completely (and convincing yourself or anyone else) is more complicated -- maybe it's only a matter of degree.



Well, if you work for Facebook, ethics is not your priority in the first place.

On the other hand, if I ever decided to throw mine in the toilet and work for a company that manipulates people and performs mass spying on them, then I would go all the way and just do things like this.


There are plenty of worse companies (direct scams, sketchy loans, Ponzi schemes); I’d say going there would be “going all the way”.

There are good things you can do at FB too, e.g. expand the positive areas or work on removing the negatives.


Good things like inciting war and political division? Maybe building tools to easily spread misinformation that drowns out factual information?


You misunderstood. I gave other examples of good things.

To elaborate, I said “work on removing the negatives”. If you won’t work at fb, others will. But when you work there, you actually have the power to change the place so that they do less of the stuff that you described.


I wouldn’t work for Facebook because I don’t hate myself enough. But I also don’t clutch my pearls and think that any for profit company is feeding starving children.

You can make a negative case for any of the large tech companies.


> You can make a negative case for any of the large tech companies.

Yes you can, but it's a spectrum. Microsoft abusing its monopolistic position to strangle competition and adding adware and spyware to an OS people pay for is ethically and morally bad, but it is much less bad than Facebook profiting from, and doing nothing to stop, an actual active genocide. Of the big tech companies not actively involved in the military-industrial complex, Facebook is by far the worst ethical and moral offender.


And just when you think Facebook is the worst, TikTok comes around.


"Der Markt regelt das." (The market will regulate itself.)


> The invisible hand is a metaphor used by the Scottish moral philosopher Adam Smith that describes the unintended greater social impacts brought about by individuals acting in their own self-interests.[1][2] Smith originally mentioned the term in his work Theory of Moral Sentiments in 1759, but it has actually become known from his main work The Wealth of Nations, where the phrase is mentioned only once, in connection with import restrictions.

https://en.wikipedia.org/wiki/Invisible_hand

(It should also help to look up the historical criticism of it that has accumulated since then.)


Can FB even do anything meaningful against "an active genocide"?

It's important to assign some responsibility to those who have/had the ability to prevent it, but it's not clear to me that FB could have prevented it.

In close races like the US presidential election it's plausible that FB was the deciding factor, but places where genocides happen are not known for close races.

And to be clear, it's not like FB couldn't have done a lot more with a lot less money to filter out certain kinds of content, but they intentionally wanted some kind of neutrality, right? And that's probably noble (and the maximally harmless thing) when it comes to - let's say Norwegian politics - but not so great in a lot of other contexts. However engaging in moderation with the intent to prevent social ills is very non-trivial.

To me it looks like they realized they were in a hard place and then basically bailed out from the hard problem and instead went all in on trying to make as much money as they can.


The question is not “how could a social network like Facebook have done any better given small tweaks to content moderation?” It’s “is Facebook fundamentally designed to create these kinds of misinformation bubbles?”

That second question is much more painful to ask if you’re an engineer at a company like this, because it means there’s no changing it from within.


But it's also false with a likelihood of at least 99%.

Facebook is designed fundamentally to be a social graph, a representation of people's lives, blablabla. the more time people spend representing their lives on FB, the more attached they are to their online persona/profile/connections (and the dopamine rewards through likes and engagement), the more money it makes.

misinformation bubble? who cares. if that is what people want to represent, FB provides it, sure, but it's not what it is designed for.


My version:

Hey, Apple is now blocking our ads. They will eat our lunch. What can we do about it?

Hmm... Let's drain the batteries of iPhone users faster. They will switch to Android.

What if we get caught?

We'll just make up some story about A/B testing. Oh, and drain some Android users' batteries too, for plausible deniability.

Great plan, implement it!


No one sane would believe that people would switch to Android over that; that's some imagination you have there.

Not caring about battery use? Yeah sure, even the enlightened developers of HN rarely give a crap about that.

Deliberately draining battery to force people to replace their phones? C'mon.


You wouldn't get the tech-savvy people with that, no. But it's easy to imagine someone barely making it through a day, not quite cottoning on, but hearing how Android phones last longer - and dismissing reports about iPhones lasting longer because their iPhone... doesn't.

They're not replacing their phones over this, but for their next phone they're choosing an Android. For battery reasons. Fast-forward 2-3 years and most of these folks are switching (with renewal of subscription).

My main question regarding such a scenario: could they find the right balance, i.e., drain the battery enough to annoy, but not so much as to attract investigation that would definitively identify Facebook's app as the culprit?

That I doubt.


"load just one image across some small % of the user base" sounds harmless, especially while the app is open. Different world from intentional draining.


Running it until the battery drains say 30% probably gives more useful data than once. And much less harmless.


But why not just limit such benchmarks to times when the phone is being charged? If they write internal guides on how to do this "thoughtfully" one would think they thought more about this.
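That kind of guard is cheap to express. A minimal sketch of such a gate (names hypothetical; real mobile power APIs differ per platform), assuming the client can read charging state and battery level:

```python
def may_run_benchmark(power_plugged: bool, battery_pct: int,
                      screen_on: bool, min_battery_pct: int = 80) -> bool:
    """Conservative gate for an opportunistic on-device benchmark.

    Only run while charging, nearly full, and with the screen off,
    so the measurement costs the user essentially nothing.
    """
    return power_plugged and battery_pct >= min_battery_pct and not screen_on
```

Whether plugged-in numbers generalize to on-battery behavior is a separate question, since devices often switch power-management profiles while charging.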


That would not give representative data: many devices use different power-management profiles when plugged in.


Sure, they could (maybe they did, I have no idea) -- but note that George had this to say about the doc:

> The document included examples of how to run such tests. After reading the document, Hayward said that it appeared to him that Facebook had used negative testing before. He added, "I have never seen a more horrible document in my career."


I would imagine a company the size of Facebook can look at their internal stats, see which devices most people use, buy ten units each of the devices covering the top 80% of usage, and create a lab environment to test them -- including battery usage.


I would agree that it is not a trivial black-or-white question of ethics, for a number of reasons:

1. Facebook did not invent testing products on different audiences. This was happening long before the invention of computers. A cook altering the ingredients of a soup, or a garment producer changing their supplier of fabric -- they all altered the features of their product, sometimes genuinely believing there would be more good than harm in it.

2. Customers may have a right to know when something has changed in a product they love, so they can make informed choices. With complex products like a digital platform or a car, it is no longer feasible to become aware of all the changes, and it is absolutely impossible to understand them all. So it is now a matter of trust in regulatory supervision more than of "do you want to try our new recipe?"

3. Engineers may have a duty to build their systems responsibly, with respect for the needs and rights of society. But expecting them to exercise the right judgement on complex legal matters is a big stretch, especially given the diversity of modern engineering teams and their cultural backgrounds. What is acceptable in the USA may be completely unacceptable in Germany or Pakistan, and vice versa.

Finding where to draw the line is important, and it is not just (and mostly not) on engineers to do it. Engineers solve technical problems. This is a problem of trust and regulation, which must be considered by society as a whole.


Distributed unit testing ... as you note I like the idea.

If a 'unit test' is "high CPU load during usage", then at their scale they kill people, simply because some people can't call 911.


>at their scale they kill people, simply because some people can't call 911

In the same way that everyone who drives a car is a murderer, a ridiculous appeal to emotion.


A better example would be a car manufacturer testing a new production method that reduces the safety of seat belts, or an aircraft manufacturer testing new software that compensates for significant changes in hardware design without the airlines knowing about it. Though the question is whether this is really about seat belts, or about the diameter of a cup holder, in terms of safety.


No, neither of those are even remotely comparable.


I have edited my post to be more relevant to the discussion, see the last part.


I would think that Facebook being nonchalant about battery usage has far more in common with cup-holder diameters than with seat-belt designs.


Would you put holdouts of good features in the same bucket as negative testing? Let’s say someone develops an ML model that cuts down spam by 90%. It’s launched to all but 1% of users, so that you can consistently measure that the spam reduction is 90%, that the users are benefiting from it, and that the false positive rate is low.

From a different POV you’re intentionally worsening the experience for 1% of users. Is this a negative test? If so, is it morally different?
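For concreteness, the holdout arithmetic in that example (numbers hypothetical) is just a relative comparison against the untreated 1%:

```python
def relative_reduction(holdout_rate: float, treated_rate: float) -> float:
    """Relative reduction in spam, measured against the holdout group.

    E.g. if the 1% holdout sees 2% spam and treated users see 0.2%,
    the model's measured reduction is 90%.
    """
    return 1.0 - treated_rate / holdout_rate

reduction = relative_reduction(0.02, 0.002)  # ~0.9
```

Measurement-wise this is symmetric with a negative test; the asymmetry people debate is whether the 1% is being denied an improvement or handed a regression.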


But there is a beta version for that, right? I don't see anything wrong with doing this in a beta. If I installed a beta, I consented to being a lab rat.


You're right -- this is a mitigation that highly ethical/moral companies will take. It's likely that EULAs/ToS permit experimentation at any time on any version of an application though.

There's also the question of how much of a lab rat people really consent to being -- given that you can't really know what experiments will be run ahead of time.

What are the disclosure rules? What is an experiment versus what is not? What is legal to use to experiment on people with? This is the kind of place regulation is normally present, and it hasn't caught up yet.


Sounds more than plausible, but I don't really see how that would be "horrible" (quote from the ex-employee). Also, I don't think it jibes with the reported name "negative testing".


A potentially simpler explanation is an A/B test of a potentially CPU-intensive feature, and a misinformed data scientist.

Testing a higher-compression image format might effectively be testing relative behavior between an experiment arm with higher data usage and lower battery usage and an arm with lower data usage and higher battery usage. I.e., "testing draining users' batteries."

There are too few details in the blog post or the original article to really figure out what's being discussed.



