I think the generalization is overbroad. Risk of failure should be weighed against the consequences of failure.
If I'm writing code for robots as a hobby and my robots behave exactly as I intended all of the time, then I'm probably not learning anything, and I should try to make the robots do more sophisticated tasks. The consequences of failure are minimal, so the optimum failure rate is high.
If I'm at work writing avionics code, the cost of failure is astronomical. It's nice to push boundaries and learn things, but it's better to avoid plane crashes. The consequences of failure are high, so the optimum failure rate is low.
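The tradeoff in the two examples above can be sketched as a toy expected-value model. This is entirely my own illustration, not anything from the thread: assume learning has diminishing returns in how hard you push (here, proportional to the square root of the failure probability), while expected loss grows linearly with it.

```python
def optimal_failure_rate(learning_value, failure_cost):
    """Toy model: maximize L*sqrt(p) - C*p over failure probability p.

    Setting the derivative to zero gives p* = (L / (2*C))**2,
    capped at 1. The functional forms are illustrative assumptions.
    """
    p = (learning_value / (2 * failure_cost)) ** 2
    return min(p, 1.0)

# Hobby robots: failures are cheap, so the optimum failure rate is high.
print(optimal_failure_rate(learning_value=1.0, failure_cost=1.0))     # 0.25
# Avionics: failures are catastrophic, so the optimum is near zero.
print(optimal_failure_rate(learning_value=1.0, failure_cost=1000.0))  # 2.5e-07
```

The exact curves don't matter; the point is only that the optimum moves toward zero as the cost of a failure grows, which is the argument above in miniature.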
I think the problem with rocket science is really the tyranny of physics. All the potential and kinetic energy you give to the rocket has to be stored in chemical form on the launchpad. You have to sit right on the edge of catastrophe or you are not going to make it into space at all. We've been doing this for half a century and the safety record is, quite plainly, not very good.
When we learn how to do spaceflight safely, we'll do that.
Oh, I absolutely agree with you, and I did think about including the concept of failure costs in the post above, but decided against it because I thought it would make the post more convoluted. Even with hobby robotics there's a maximum failure cost you can stomach: there are only so many servos you can burn through, and you have only finite time to program, which limits the number of attempts you can make.
One great way to lower the number of expensive failures is to hedge them with a number of cheaper failures, whether those are models, prototypes, or test rigs for individual components of an airplane or rocket. If you look at the early days of aviation, there were many failures, some of them expensive, deadly, or embarrassing. But over time we built up a repertoire of testing methods to verify a given airplane design. Does that mean our airplanes don't fail? No. Does it mean we aren't trying to develop even better airplanes because of the risk involved? No. But all in all, it's a considerable improvement. We'll get there for rockets, just as we did for cars, ships, airplanes, and computers.
That's also why research, prototyping and product development should often be done in ways where failure is a lot cheaper and your optimum failure rate can be much higher, accelerating progress hugely.
For example at a smaller pilot scale or in test benches.
Yet, if your test bench is very complicated, slow, costly and introduces errors of its own, it might not be wise. Also some "flying" test configurations can be a dead end.
So in the big "Battlestar Galactica" NASA missions with lots of new technology, the cost of failure is very high. That's why they analyze a lot and test things on test benches. But those can be dead ends. That makes everything even more costly, making failure even more expensive, requiring still more tests. Schedules slip while you have zero science return to show... It's a vicious circle.
It might make more sense to, for example, just launch many smaller probes, each one somewhat better than the previous one in some respects. Some might crash, but if your audience understands that, it's not a political disaster. You're going to fly the next one in two years anyway. This way you also don't have to wait 20 years for your technology development to pay off.
So SpaceX launched Falcon 1 quite a few times, learning a lot about the technology as well as maturing as an organization. They crashed quite a few times too. But those crashes were nowhere near as expensive as Falcon 9 crashes would be at this point.
That's also why they flew different versions of Grasshopper and now the Falcon 9R. Retire risk. Allow crashes, when you can afford them. This reduces crashes later, when you cannot allow them.
So it is a slightly complex issue but nothing very out of the ordinary. Usually in the real world things settle into a good compromise between conflicting goals.
Actually, SpaceX only flew two successful F1s, flights 4 and 5, before moving on to the development of the F9. And there was only one demonstration flight of the F9 before it flew with the first Dragon. So there haven't actually been that many test flights.