
>That would be illegal

As the other commenter has mentioned, it's passed on via higher prices in the future.

>This is beside the point though. The government asks "would the average person be willing to pay $x to lower their chance of death by y%". The corporate executive asks "would I be willing to pay >$0 to lower their chance of death by y%". "Some of you may die, but it's a sacrifice I am willing to make."

Okay, but surely you don't agree that Boeing should spend infinite amounts of money making their planes safe? For instance, we don't install backup engines on the off chance that both engines fail. That's all I'm trying to argue: the cold calculation/cost-benefit analysis mentioned in the OP isn't where Boeing went wrong, it's that they undervalued a human life. This was specifically mentioned in my original comment.
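
A rough sketch of how the government question quoted above turns into a dollar figure (the "value of a statistical life"); the numbers below are made up purely for illustration:

  # Sketch of the cost-benefit test described in the quoted comment:
  # a VSL is implied by how much people will pay for a small risk reduction.
  # All numbers here are illustrative, not real regulatory figures.

  willingness_to_pay = 1_500       # $x the average person would pay...
  risk_reduction = 1 / 10_000      # ...to lower their chance of death by y

  vsl = willingness_to_pay / risk_reduction
  print(f"implied value of a statistical life: ${vsl:,.0f}")
  # -> implied value of a statistical life: $15,000,000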



What I am trying to argue is that valuing a human life accurately doesn't matter if you're weighing it against your own profits. The correct comparison is against societal benefit; profit is completely morally irrelevant.

I agree that the ceiling for how much money you could spend trying to make an airplane perfectly safe is infinite, so by definition they have to stop somewhere. However, I disagree that finding that line of where to stop has anything to do with the statistical value of a human life.

For example, imagine the Boeing CEO says "we could spend $20 billion on R&D and manufacturing of a new safety system that would on average prevent 1 crash per year, but a 737 MAX carries 200 people, at $15M per life that's only $3 billion in value, so it's obviously better for us to skip it and pass that $20 billion on to shareholders". This would be criminal, and not because their value of a life is off and they got the math a little bit wrong. It's criminal because they are consciously choosing to kill people unnecessarily.
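
Spelled out, the arithmetic in that hypothetical (using only the numbers invented above, not real Boeing figures) looks like this:

  # The hypothetical CEO's comparison, made explicit.
  # Every number comes from the example above; none are real Boeing data.

  vsl = 15_000_000                     # $ per life in the hypothetical
  passengers = 200                     # seats on a 737 MAX in the example
  crashes_prevented_per_year = 1

  annual_benefit = vsl * passengers * crashes_prevented_per_year
  system_cost = 20_000_000_000         # R&D + manufacturing in the example

  print(f"benefit per year: ${annual_benefit:,}")   # $3,000,000,000
  print(f"cost of system:   ${system_cost:,}")      # $20,000,000,000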

If it is physically and financially possible to make your product safer, you do it, without any thought to how much a life is worth. If it is not possible, because you can't figure out how to solve a problem or it would be so expensive to fix that your business couldn't survive, then you sit down and think about whether it's worth selling your product at all. Are there safer alternatives available? Could a better-funded company fix the flaws you've found? If you determine that your product is important, there are no alternatives out there, and it cannot be made safer, only then do you start weighing the benefits to society against the deaths you expect to occur. I don't really think this is a financial decision involving the value of a life - if you expect your product to kill people then the expected benefits need to be so overwhelming that doing the financial math is unnecessary.


>For example, imagine the Boeing CEO says "we could spend $20 billion on R&D and manufacturing of a new safety system that would on average prevent 1 crash per year, but a 737 MAX carries 200 people, at $15M per life that's only $3 billion in value, so it's obviously better for us to skip it and pass that $20 billion on to shareholders". This would be criminal, and not because their value of a life is off and they got the math a little bit wrong. It's criminal because they are consciously choosing to kill people unnecessarily.

It is not criminal and I have no idea how anyone can think it is.

The crime is in the first paragraphs of the article:

>Boeing has violated a 2021 agreement that shielded it from criminal prosecution after two 737 Max disasters killed 346 people overseas, the Justice Department told a federal judge in a court filing Tuesday.

>According to the Justice Department, Boeing failed to "design, implement, and enforce a compliance and ethics program to prevent and detect violations of the U.S. fraud laws throughout its operations."

So to recap: to avoid criminal liability after killing a bunch of people, they agreed to implement internal rules that match US fraud laws, _which they failed to do_.


> If it is physically and financially possible to make your product safer, you do it, without any thought to how much a life is worth.

A cursory glance at how literally any product is designed will tell you that this is not true.

All engineering is fundamentally a compromise between utility functions and cost functions (where cost is monetary, or weight, or size, or poor usability, ugliness, etc.). There's always a balance point somewhere. It may be that the cost/benefit functions for the user differ from those of the manufacturer, and again from those of the regulators, so the product appears out of balance to one or the other, but a meta-balance was struck somewhere.

If every light switch was a 2-foot, 200kg cube with a titanium shell filled with monitoring electronics running on lockstepped processors, fire suppression and potting compound, and was tested individually for a year before sale, it's still not as safe as one with six grounding points (in case the first 5 fail) and triple-thick gold plating on the contacts. You have to stop somewhere.
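
To make the balance-point idea concrete, here's a toy sketch: a made-up utility curve with diminishing returns against a linear cost curve, and the spend level where the gap between them is largest. Both functions are invented; the point is only that such a maximum exists somewhere well short of infinite spend.

  # Toy model of the utility-vs-cost trade-off described above.
  # Both curves are invented for illustration; only the shape matters.
  from math import exp

  def utility(spend: float) -> float:
      # diminishing returns: each extra unit of spend buys less added safety
      return 100 * (1 - exp(-spend / 50))

  def cost(spend: float) -> float:
      return spend                      # cost grows without bound

  # the balance point is where net benefit (utility minus cost) peaks
  balance = max(range(0, 501), key=lambda s: utility(s) - cost(s))
  print(f"balance point at a spend of about {balance}")   # mid-30s here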


This isn't what the parent comment is saying. Functional safety is an entire discipline. It neither results in titanium light switches nor uses "value of a human life" calculations. The scenario described by the parent would be a civil and potentially criminal liability.


I know what functional safety is. It's just not described by "you must always make all possible efforts to make things safer under all possible circumstances".

In the case of a normal light switch, for example, two cable screws per wire rather than one would be safer, as the cables are less likely to come adrift. This would easily be affordable for the manufacturer and the buyer, and well within the manufacturer's abilities. And yet no light switch has them.

And every aircraft manufacturer could think of something that costs $20 billion to develop and adds safety, but they don't, unless the product is already unsafe. The one crash a year is, to be fair, solidly at the unsafe end of the spectrum, but would they spend the same to avoid a once-in-10,000-years crash, and if not, what's the cut-off? There must be a cut-off, or it would be impossible for any manufacturer of anything to ever make a profit, as all spare cash would have to go into safety-system research. And this is demonstrably not what they do.

Boeing has a slightly different problem in that they actively made something less safe and lied about it. They'd have gotten away with it, but they flubbed the execution with a single point of failure, which made it easier for the Swiss cheese holes to line up, and the rest is history.


> It's just not described by...

Yes, I know - I think I read the parent comment as "If it is physically and financially [reasonable] to make your product safer, you do it".

This is basically what I was taught working in automotive safety.

> ... what's the cut-off? There must be a cut-off...

There's no hard dollar (or other) cutoff, though; it's a soft cutoff that may come down to engineers and/or management arguing about what is really safer.

> Boeing has a slightly different problem in that they actively made something less safe and lied about it.

Yes, I agree, for Boeing it is different; reading about their management's intentional destruction of a positive safety culture is sickening.


> For instance, we don't install backup engines on the off chance that both engines fail.

We used to. https://en.m.wikipedia.org/wiki/ETOPS



