
I don't know about the A320, but this was certainly the model for the Eurofighter. One of my university professors was on one of the teams; they were given the specs and not allowed to communicate with the other teams in any way during the hw and sw development.

> they were given the specs and not allowed to communicate with the other teams in any way during the hw and sw development.

Jeez, it would drive me _up the wall_. Even if I could somewhat justify the security concerns, this seems like it severely hampers the ability to design the system. And it seems like a safety concern.


What you are trying to minimize here is the error rate of the composite system, not the error rate of the individual modules. You take it as a given that all the teams are doing their human best to eliminate mistakes from their design. The idea of this is to make it likely that the mistakes that remain are different mistakes from those made by the other teams.

Provided the errors are independent, it's better to have three subsystems with 99% reliability in a voting arrangement than one system with 99.9% reliability.
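To make the arithmetic concrete, here's a quick sketch (assuming independent failures and 2-of-3 majority voting):

    from math import comb

    def voted_failure(p, n=3, k=2):
        """P(at least k of n independent modules fail), each failing with probability p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(voted_failure(0.01))  # ~2.98e-4 for three 99% modules under 2-of-3 voting
    print(1 - 0.999)            # 1.0e-3 for the single 99.9% system, ~3x worse

So the triple-redundant arrangement fails roughly three times less often, even though each module is ten times less reliable.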


This seems like it would need some referees who watch over the teams and intrude with "no, that method is already claimed by the other team, do something else"!

Otherwise, I can easily see teams doing parallel construction of the same techniques. So many developments seem to happen like this, due to everyone being primed by the same socio-technical environment...


The idea was to build three completely different systems to produce the same data, so that an error or problem in one could not reasonably be replicated in the others. In a case such as this, even shared ideas about algorithms could result in undesirable similarities that end up propagating an attack surface, a logic error, or a hardware weakness or vulnerability. The desired result is that the teams solve the problem separately, using distinct approaches and resources.

And did they?

Sometimes the solution is obvious, such that if you ask three engineers to solve it you’ll get three copies of the same solution, whereas that might not happen if they’re able to communicate.

I’m sure they knew what they were doing, but I wonder how they avoided that scenario.


I could write (and have in the past written) a long explanation of my experience with this, but…

Redundancy is a tool for reducing the probability of encountering statistical errors, which come from things like SEUs (single-event upsets).

Dissimilarity is a tool for reducing the “probability” of encountering non-statistical errors — aka defects, bugs — but it’s a bit of a category error to discuss the probability of a non-probabilistic event; either the bug exists or it does not, at best you can talk about the state coverage that corresponds to its observability, but we don’t sample state space uniformly.

There has been a trend in the past few decades, somewhat informed by NASA studies, to favor redundancy as the (only, effective) tool for mitigating statistical errors, but to lean against heavy use of dissimilarity for software development in particular. This is because of a belief that (a) independent software teams implement the same bugs anyway and (b) an hour spent on duplication is better spent on testing. But at the absolute highest level of safety, where development hours are a relatively low cost compared to verification hours, I know it’s still used; and I don’t know how the hardware folks’ philosophy has evolved.


Even with the same approach, I imagine the implementation could differ enough to still meet the goal. But I’m also curious whether the differences were actually quantified after the fact; that seems like an important step.

Not at Airbus. Ask a German, a French, and a British engineer the same question and you will never, ever get the same answer from each.

I think this would come down to team selection. At Airbus they have the advantage of cultural diversity to lean on; I have no doubt that the solutions would differ not only in implementation but also in design philosophy, compromises, and priorities.

How so? It’s a safety measure at least as much as a security one.

It’s essentially a very intentional trade-off between groupthink and the wisdom of crowds, but it lands on a very different point on that scale than most other systems.

Arguably the track record of Airbus’s fly-by-wire vindicates that decision.


Lenovo has been doing this in Spain for quite a while, and you can even buy them without any preinstalled OS at all for a still lower price. But at least here the option is typically available for ThinkPads rather than for cheaper consumer lines. Same with Acer. Dell offers Linux options but no "empty" option AFAIK.


Linux is free and proves the hardware isn't defective when you press the power button.


I work at an e-waste recycling company, where we use Linux in precisely this way. I take decommissioned corporate laptops, wipe them, install Linux, run fastfetch, take pictures, and post them on eBay.

https://www.ebay.com/str/evolutionecycling


How do you get your hands on the hardware?


Most of it comes from businesses decommissioning their IT assets, but we also accept people walking in to drop things off. Most of that is "certified", meaning that we go through everything they dropped off, remove the drives, scan serial numbers, and ensure data on the drives are destroyed (either by physically destroying the drives, or overwriting them[0]). Laptops with sufficiently high specs are stacked on pallets in the warehouse area I work in, where it becomes a sort of free-for-all with my 3 co-workers there. Most of what we have is 5 to 10 years old, but sometimes there's the occasional retro piece, or electronic equipment that isn't a computer, like radars, lidars, or A/V stuff. I've been excited to pull out a laptop with a recent 14th gen (or so) Intel Core CPU, only to be let down when I discover it has a broken screen or is unbootable.

[0] https://wipeos.com/


It's tangential, but this is the first time I've seen Fluent installed by simply decompressing a tar, instead of executing their big installer.


It is difficult, but there are modeling approaches that work, such as VoF (https://en.wikipedia.org/wiki/Volume_of_fluid_method). Basically, in addition to velocity, pressure, temperature, etc., you store an additional scalar in each cell of your computational mesh representing the liquid's volume fraction. Then, you solve an additional equation to transport that scalar.
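For illustration only, here's a toy 1-D version of that extra transport step (real VoF solvers use interface-sharpening schemes such as geometric reconstruction or CICSAM; this is just first-order upwind advection of the volume fraction, with a made-up velocity and initial condition):

    import numpy as np

    nx = 200                   # cells in a 1-D mesh of unit length
    dx = 1.0 / nx
    u = 1.0                    # prescribed constant velocity (assumed, u > 0)
    dt = 0.5 * dx / u          # time step for a CFL number of 0.5

    alpha = np.zeros(nx)       # liquid volume fraction in each cell
    alpha[20:60] = 1.0         # initial slug of liquid

    for _ in range(200):
        flux = u * alpha                               # upwind: donor-cell flux
        alpha[1:] -= dt / dx * (flux[1:] - flux[:-1])
        alpha = np.clip(alpha, 0.0, 1.0)               # fraction stays in [0, 1]

After 200 steps the slug has simply been carried downstream (and smeared a bit by numerical diffusion, which is exactly what the fancier schemes fight).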


Solving the Navier-Stokes equations numerically in 3D is very time-consuming, even on HPC clusters, not to mention the additional modeling required for multiphase flows. Your answer implies that the solutions are obtained almost instantaneously, which is not the case.


I think the reason these kinds of simulations are fast enough is that they are very coarse and approximate. Don't think of asking how exactly the foam swirls around the individual longerons; think more of a very rough estimate of which side of the tank the liquid has slumped to. Remember, it doesn't have to be "exact", just close enough to be useful.

By their very nature model predictive controllers operate in a world where not everything is perfectly modelled. Engineers do their best and whatever is left is the "error" the controller is trying to deal with.


Or you compute variations ahead of time and do a situation-based lookup, going through loops if a situation resembles another one.


Maybe they don't need to model the fluid dynamics; they just need to detect the mass movement / acceleration forces caused by it, and use those sensor inputs to inform a picture that's fed into their correction thrusting.

Sort of like how you can balance a few pitchers of beer on a tray in your hand by remaining aware of the weight, even when people remove one! hahaha :)


Still, if there indeed is "free" mass moving about, you need to make sure your control inputs don't make it slosh harder, which you then compensate for, making it slosh even harder, etc - basically, you need to avoid oscillation. :)


Yeah..ah, control theory. Heh :)


Oh no, apologies if that was the impression I gave!! I actually perform CFD simulations on HPC clusters, and in fact I'm an admin of the small cluster at my research institute =)

These are indeed heavy computations. What I meant is that VoF is one additional equation to be solved besides the N-S equations (either filtered as in LES or Reynolds-averaged as in RANS), the energy equation, your turbulence model equations, and so on. Certainly, not instantaneous at all, but simply an additional "simple" model that we can hook into our current way of doing CFD.

So, my point was, sloshing is a problem that we know how to simulate, although certainly you need HPC resources. Though, looking at those 100k NVIDIA H100 Elon has, I guess they have them! :P


I'm curious: how long, time-wise, do these kinds of "heavy computations" take on HPC clusters?


It really depends on the problem to be solved (domain size, complexity of physical phenomena such as the turbulence model, heat transfer, acoustics, multiphase flows, combustion, etc., number of time steps required...). In our case we perform, for instance, simulations of turbomachinery acoustics that can take 3-4 weeks running on a few hundred CPU cores, and combustion acoustics simulations that can take a week or two running on 1k-2k cores...


what if you had 100k H100s


In reality, few codes are capable of parallelizing effectively across that many processes. But how cool would it be?


They don't need to solve the Navier-Stokes equations, they don't care how the fluid is actually behaving, they just need to approximate how the mass is moving within a margin of error that the control system can handle.


Maybe the tank is just not a large hollow structure but contains fins/compartments/whatever to restrict the sloshing motion and it's not that big a contribution to the overall motion.

If it's no stronger than a sudden wind gust, it's just something the controller has to be able to take care of without a heads-up.


These are indeed part of the solution and are known as baffles. They have risks of their own, e.g.: https://wccftech.com/baffling-baffles-musk-explains-why-spac...


On the first SpaceX rocket, Musk thought it was a good idea not to install baffles. He learned from experience that they are indeed needed.


I remember a very similar anecdote about von Braun & the early Juno/Jupiter rockets - with someone pointing out issues with sloshing at a press conference & von Braun brushing it off as insignificant.

Then the next launch crashed due to slosh-induced oscillation - and the rocket after that had anti-slosh baffles. ;-)


That’s how tanks in race cars are made. Another solution is to fill the tank with some kind of sponge-like material.


Sometimes… the baffles break off, and then become surfboard projectiles inside the tank.

More fluid dynamics


That would be far too heavy in this case. :)


That is how they build the tank in Formula One (and probably many other race cars, I guess).


They probably pre-solved a bunch of scenarios and interpolate between the known solutions.
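If so, it could look something like gain scheduling; here's a sketch where the fill levels and gains are purely invented for illustration (a real scheme would be multidimensional):

    import numpy as np

    # Hypothetical table of controller gains pre-solved offline per fill level.
    fill_levels = np.array([0.1, 0.3, 0.5, 0.7, 0.9])  # tank fill fraction
    gains = np.array([4.2, 3.1, 2.5, 2.0, 1.6])        # made-up pre-solved gains

    def scheduled_gain(fill):
        """Linearly interpolate between the nearest pre-solved scenarios."""
        return float(np.interp(fill, fill_levels, gains))

    print(scheduled_gain(0.42))  # blends the 0.3 and 0.5 solutions -> 2.74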


That usually doesn't work for chaotic systems.


If the computation is too difficult, another approach is to build a test stand and try methods until something works.

Which is why we use wind tunnels, for example.


Wouldn't it burn most of the fuel to mitigate the effect?


I was thinking of the BFE...


There's also the issue of ingesting the turbulent/detached boundary layer of the fuselage at high angles of attack, causing inlet distortions and the corresponding risk of compressor stall.


This update broke my workflow!


I did warn him that people might complain about side effects

Edit: believe it or not, we got more complaints about this. Some people picked their top color specifically to go nicely with the orange Y! So I've reverted this change, except for the Christmas case. Users who have set their topcolor don't see the Christmas top bar anyhow, so don't have to worry.


Is it possible that NetNewsWire (and possibly other RSS readers as well) doesn't show the favicon for https://news.ycombinator.com/rss anymore because of this change? I noticed this a few days ago.


HN is still serving the orange Y for favicons. Only the Y to the left of "Hacker News" in the top bar now has a transparent background.

However, there was a brief period where we were serving the transparent Y for favicons before I noticed that and corrected it; I wonder if the transparent Y somehow got into a cache? Is there a way to force-refresh?


Unfortunately the issue still persists and it doesn't seem to be related to caching.

I think the problem might be that NetNewsWire doesn't support SVG favicons (/y18.svg) and that /favicon.ico isn't available as a fallback. Were there any changes regarding these files recently?


There were! That would likely be the issue then.

What do I need to do to make /favicon.ico available as a fallback?


https://news.ycombinator.com/favicon.ico would need to return the favicon instead of 404 Not Found


I've put it back. Does it work now?


Yes, it's working again. Thank you!


I use [bgcolor='#ff6600'] as a CSS selector for the <td> wrapping the header to remove the header bar color.

So this really did break my workflow, hah


That’s pretty amusing. Does switching the CSS selector to td[bgcolor] do the trick?
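Something like this, if so (an untested sketch of what I imagine the user stylesheet looks like):

    /* Match the header cell by the presence of bgcolor, not its value */
    td[bgcolor] { background-color: transparent !important; }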


Astonishingly, Harold Fisk did this in 1944 without any LiDAR: https://publicdomainreview.org/collection/maps-of-the-lower-...


And if you prefer professionals, not amateurs: https://windshape.com/technology-2/


Even inside itself: filmmakers went to then-remote Los Angeles in the early 1900s because Thomas Edison, in NJ, held most of the patents on motion picture cameras, and out west they were much more difficult to enforce.

