
The speed of detonation also matters: a meteorite detonation is surely slower than that of a nuclear bomb, which releases its energy in milliseconds.


This quite strongly depends on the composition of the meteorite and the angle at which it enters the atmosphere.

The Chelyabinsk meteor entered at a very shallow angle and was a chondrite with little internal strength, so it disintegrated over a longer distance, forming a “line burst” rather than a single-point airburst.

Had it entered more steeply, it would have concentrated its kinetic energy over a much smaller area, causing much more damage.

A nuclear bomb may take a few ms, but after a few hundred milliseconds you’ve got a dense, hot plasma in a small volume. Sounds a lot like the result of a meteorite airburst! And indeed, people use the same codes that are used to model nuclear explosions to model meteor airbursts.
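For a rough sense of the energies involved, here's a back-of-envelope calculation using commonly cited Chelyabinsk estimates of ~1.2×10^7 kg entering at ~19 km/s (both figures are approximate):

$$E = \tfrac{1}{2}mv^2 \approx \tfrac{1}{2}\,(1.2\times10^{7}\,\mathrm{kg})\,(1.9\times10^{4}\,\mathrm{m/s})^2 \approx 2.2\times10^{15}\,\mathrm{J} \approx 0.5\ \mathrm{Mt\ TNT}$$

That's the same order as a large thermonuclear warhead; the difference is mostly in how quickly and over how long a path the energy gets deposited, which is exactly what those shared simulation codes capture.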


Adding to that: as far as I understand the Tall el-Hammam paper, the only reason they ruled out a nuclear explosion is "because of the age of the site". Other than that (as I understand it), it's indistinguishable.

> Even though an atomic bomb blast is not applicable because of the historic absence of atomic explosions in the area, an atomic blast produces a wide range of melt products that are morphologically indistinguishable from the melted material found at TeH (Fig. 51). These include shocked quartz [64]; melted and decorated zircon grains (Fig. 51a, b); globules of melted material (Fig. 51c, d); meltglass containing large vesicles lined with Fe-rich crystals likely deposited by vapor deposition (Fig. 51e, f); spherules embedded in a meltglass matrix (Fig. 51g, h). Also, atomic detonations can replicate the physical destruction of buildings, the human lethality, and the incineration of a city, as occurred in World War II.

Added: I forgot to mention that they also found many chemical elements and compositions that are hard to find under normal conditions (atomic bomb tests included, as I understand it), but which are abundant in meteorites.


We should be able to find the geological records of these impacts and build a probability model.


Did even Tunguska leave a geological record? And since 75% of the Earth is covered in water, it stands to reason that roughly 75% of bolides exploded over an ocean, leaving literally nothing we could meaningfully measure.

My point is that human cities are "bolide sensors" and unevenly distributed in time and space, so the "measurements" are necessarily fewer than what occurs in nature.
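As a toy illustration of how few events such sparse "sensors" would catch, here's a minimal Monte Carlo sketch in C, with made-up numbers: ~29% of the surface is land, and I'm assuming only a few percent of that land is effectively watched by settlements (neither figure comes from the article):

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy model: a bolide bursts at a uniformly random spot on the globe;
 * it is "recorded" only if it bursts over land AND over a watched area. */
int main(void) {
    const double land_fraction   = 0.29;  /* ~29% of Earth's surface is land */
    const double watched_on_land = 0.03;  /* assumption: share of land near settlements */
    const int trials = 1000000;
    int recorded = 0;

    srand(42);
    for (int i = 0; i < trials; i++) {
        double over_land = (double)rand() / RAND_MAX;
        double watched   = (double)rand() / RAND_MAX;
        if (over_land < land_fraction && watched < watched_on_land)
            recorded++;
    }
    printf("recorded %.2f%% of simulated airbursts\n",
           100.0 * recorded / trials);
    return 0;
}
```

With these (invented) inputs, well under 1% of airbursts would ever leave a record we'd notice, which is the point: the historical "measurements" badly undercount the true rate.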


Best definition of a city ever: “bolide sensor”. Love it


Yes, there is geological evidence that the Tunguska event was a bolide explosion.


Trees flattened in a radial pattern might show up in the geological record somehow.


Unlikely after several years, let alone several thousand, right?


It would require coordinated excavation over how many square miles, to spot the radial pattern? (Assuming, as gota points out, that the fallen trees were somehow preserved.)

Then there's bolides over desert, savanna, tundra, ice sheets, etc.


Impacts of this size don’t leave enough evidence for you to know where to look, especially after thousands of years of man-made and natural erosion.

An airburst large enough to take out a city won’t leave a major impact crater. And back in the day, the perceived cause might even have contributed more to the decline of a city, or even an empire, than the physical damage did.

If a meteor blows up your town and you have no way to comprehend what just happened other than that the god/gods are angry, you're going to move to a less cursed place.

It can also cause social impacts such as the toppling of a given religious or leadership class because they angered the gods.


OP's analysis demonstrates that it's very possible to detect these events thousands of years later, if you examine areas near the explosion.


Yes, but they are testing the area because we know a settlement there was destroyed. We'd have to test random uninhabited (or at least currently uninhabited) areas for which there are no indications of anything special to get a better picture

And not even that guarantees a complete picture - what if a tsunami event erases or conceals the record of something like this? Volcanic activity? Desertification?

My point is that we can't reason about the rarity or uniqueness of these events as a main factor for accepting or refuting the hypothesis


Except that for decades REP MOVS/STOS were avoided on x86 because they were much slower than hand-written assembly. This only changed recently.


That was really only in the 286-486 era. On the 8086 it was the fastest, and since the Pentium II, which introduced cacheline-sized moves, it's basically the same as the huge unrolled SIMD implementations that are marginally faster in microbenchmarks.

Linus Torvalds has some good comments on that here: https://www.realworldtech.com/forum/?threadid=196054&curpost...


Linus seems to consider rep movs still too slow for small copies:

https://www.realworldtech.com/forum/?threadid=196054&curpost...

https://www.realworldtech.com/forum/?threadid=196054&curpost...

It seems to me that rep movs is so bad that you want to avoid it, but trying to write a fast generic memcpy results in so much bloat to handle edge cases that rep movs remains competitive in the generic case.
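For reference, the rep-based copy itself is tiny; the bloat is everything around it. A minimal sketch (x86-64, GCC/Clang inline asm; not glibc's actual memcpy, which layers small-size and alignment handling on top of approaches like this):

```c
#include <stddef.h>

/* Copy n bytes with REP MOVSB. On CPUs that advertise ERMSB this is
 * competitive with large unrolled SIMD copies; the hard part of a
 * production memcpy is all the special-casing for small and unaligned
 * buffers, not this loop. */
static void *memcpy_rep_movsb(void *dst, const void *src, size_t n)
{
    void *d = dst;
    __asm__ volatile("rep movsb"
                     : "+D"(d), "+S"(src), "+c"(n)
                     :
                     : "memory");
    return dst;
}
```

The argument in the linked thread is essentially about whether that surrounding special-case code earns its keep for real-world size distributions.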


Since there have been so many TLS security bugs due to its complexity, is there any push to replace it with something simpler, with fewer choices and less attack surface?

Google gave us HTTP/2/3, but don't seem to care about fixing TLS.


TLS 1.3 is much better than TLS 1.2, and has fewer options and knobs (e.g. no need to choose cipher suites), but it is not what a modern protocol designed from scratch would look like. For that you should look at WireGuard, or the general Noise Protocol Framework.

For custom protocols, libsodium is a popular modern approach. If you need compatibility with TLS, try locking down TLS to only version 1.3, or if you can't do that, lock it down to only TLS 1.2 with TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256.
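As a sketch of what that lock-down looks like with OpenSSL's C API (error handling omitted, the helper name is made up, and the suite string is OpenSSL's spelling of TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256; the fallback branch still permits 1.3 since 1.3 suites are configured separately):

```c
#include <openssl/ssl.h>

/* Sketch: an SSL_CTX restricted to TLS 1.3 only, or, if a fallback is
 * unavoidable, TLS 1.2+ with the pre-1.3 ciphers pinned to a single
 * ECDHE + AES-GCM suite. */
SSL_CTX *make_strict_ctx(int allow_tls12_fallback)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_method());
    if (ctx == NULL)
        return NULL;

    if (allow_tls12_fallback) {
        SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
        SSL_CTX_set_cipher_list(ctx, "ECDHE-RSA-AES128-GCM-SHA256");
    } else {
        SSL_CTX_set_min_proto_version(ctx, TLS1_3_VERSION);
    }
    return ctx;
}
```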


This has been useful advice for many years, although restricting to AEAD is best when possible.

https://hynek.me/articles/hardening-your-web-servers-ssl-cip...


TLS the protocol has been simplified in version 1.3, with the goal of reducing complexity to improve security.

OpenSSL the implementation was forked a few times also with the goal of improving security. Notable forks: LibreSSL, BoringSSL.

PS: for all those confused about why OpenSSL skipped version 2, it seems it's because the FIPS builds identified themselves as version 2 (thanks to the poster below!). Also, the changelog explains the new version naming scheme:

"""

Switch to a new version scheme using three numbers MAJOR.MINOR.PATCH.

Major releases (indicated by incrementing the MAJOR release number) may introduce incompatible API/ABI changes.

Minor releases (indicated by incrementing the MINOR release number) may introduce new features but retain API/ABI compatibility.

Patch releases (indicated by incrementing the PATCH number) are intended for bug fixes and other improvements of existing features only (like improving performance or adding documentation) and retain API/ABI compatibility.

"""

Quoted from: https://www.openssl.org/news/changelog.html

So there won't be a 3.0.0a, 3.0.0b, etc.; they want to make it clear it will be 3.0.1, 3.0.2, etc.


It's also because the FIPS builds of OpenSSL 1.x identified themselves as 2.x.


I didn't know! Yeah that seems to be the main reason


>Google gave us HTTP/2/3, but don't seem to care about fixing TLS.

Google is working on BoringSSL / Tink, which I believe is API compatible but supports a lot fewer features. However, I think a better way forward might be rustls, an implementation which is memory-safe. There is already support in curl [1], showing there is a path forward for usage in languages other than Rust.

[1] https://daniel.haxx.se/blog/2021/02/09/curl-supports-rustls/


LibreSSL is an alternative from OpenBSD.


IIRC HTTP/3/QUIC mandates TLS [1][2], so it still relies on TLS. It seems to be set on TLS 1.3 as a baseline, and I would hope the protocol negotiation is forward compatible, but I will admit I haven't fully read the RFCs.

[1] https://datatracker.ietf.org/doc/html/rfc9000#section-1 [2] https://datatracker.ietf.org/doc/html/rfc9001


Google employ at least one OpenSSL committer and have their own simpler version of the library, BoringSSL.


> I have no idea how challenging that is.

Technically it's easy: Moderna already has multi-target vaccines (for other diseases), one with 7 different targets. It can be as simple as literally creating 7 different vaccine liquids and then mixing them all in a single vial.

The problem is deciding what to put in, in what ratios, possible side effects, etc. Lots of combinations, which can mean lots of expensive and slow clinical trials.


> should prove exceptionally difficult for the virus to overcome through evolutionary pressure

Which is why we need gain of function experiments to accelerate this process and get ahead of the virus so that we are prepared for the eventual successful mutation.


Gain of function experiments are not all the same. If you use an artificial selection approach (wherein selective pressure is applied to a huge library of random variations), then maybe.

If you have labs picking and choosing which mutations to make (which is what usually happens), then no. That's just hubris. People stumble around in the dark, occasionally find a truffle, and pat themselves on the back for having such a great truffle-finding method.

Even for good artificial selection systems, mutational space is so gigantic that it's hard to cover properly: not just the point mutations (i.e. converting one amino acid to another) but also insertions, deletions and transpositions. There's no artificial selection system I'm aware of that can recapitulate mutational space.
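To put a rough number on how fast that space blows up: for a protein of length N with 20 possible amino acids per position (taking N ≈ 1273, roughly the length of the SARS-CoV-2 spike, purely as an illustrative figure), substitutions alone grow combinatorially:

$$19N \approx 2.4\times10^{4} \ \text{(singles)}, \qquad \binom{N}{2}\,19^{2} \approx 2.9\times10^{8} \ \text{(pairs)}, \qquad \binom{N}{3}\,19^{3} \approx 2.4\times10^{12} \ \text{(triples)}$$

And that's before counting any of the insertions, deletions and transpositions on top of the substitutions.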


Which is why we need "intelligent design" to "help" the virus jump over this combinatorial problem.

For example, we can try spikes from different related viruses, or pick and choose various other tricks, like furin cleavage sites, as in this rejected 2018 EcoHealth coronavirus research DARPA grant proposal:

https://twitter.com/JamieMetzl/status/1439989291858513929


One article said that the French contract was about 7 bil per diesel sub, while the new one will be 4 bil per nuclear sub.

I can't imagine how you can justify 7 bil for a diesel sub.

Is it possible that France thought that Aus had no alternative and ran the price up?


> One article said that the French contract was about 7 bil per diesel sub, while the new one will be 4 bil per nuclear sub

Both of those are wrong.

> Is it possible that France thought that Aus had no alternative and ran the price up?

No, there was an official procedure with multiple candidates (France, Japan and a third one I can't recall). Australia chose the French option, even though it required significant modifications to a nuclear design to make it diesel-electric.

> I can't imagine how you can justify 7 bil for a diesel sub

Furthermore, the contract included significant know-how transfers and construction in Australia, so whatever the price per sub turned out to be, it wouldn't be for the subs alone.

As for why the 4 bil per sub is wrong - they haven't decided anything. They don't know the design or what it will be based on (the smaller UK Astute class or the bigger US Virginias), etc. Lots of infrastructure needs to be built, and people have to be trained, since Australia has no nuclear sector to speak of. So any price projection as of today is purely theoretical.

Oh and the first subs under the new contract should be available in 2040, so probably a decade later. That's a long time.


This is the real question. From an outsider's perspective, it looks like the navies and militaries of the Western world are evolving towards extremely expensive, specialized high-tech "toys", produced in very low numbers, that require extensive years-long training just to get anything done. Most of these are useless against guerrilla insurgencies, and of doubtful utility against peer nation-state competitors. It feels more like a jobs program than an efficient armed force.

As a deterrent against nation states, it's questionable how a billion-dollar submarine would handle a swarm of thousands of suicide drones costing a fraction of the submarine's price. The submarine takes years to build; a no-name Chinese factory can pump out thousands of drones in a day.

Imho, the approach of the PLA, betting more on unmanned drones backed by China's industrial capacity, is a more logical evolution of warfare.

Some links: https://www.thedrive.com/the-war-zone/37062/china-conducts-t...

https://www.thedrive.com/the-war-zone/13284/americas-gaping-...


The main purpose of the submarine is to deliver nuclear missiles from near the enemy shore, so arguably its most important feature is to remain undetected.

It's unclear if modern tech can "reveal" where the subs are, rendering them useless.

I agree with your point regarding surface ships: they are the walking dead, since they can't defend against drone/missile swarming.


That's one class of submarines. One that doesn't really make sense for Australia, given they don't actually have any nukes to deliver.


Much more likely that the standard optimism that runs through people when they are starting new projects led them to dramatically underestimate the difficulty of the project - which ended up being "design a new sub from scratch" instead of the intention, which was "lightly modify an existing design".


That’s kind of irrelevant to the situation, though. It’s not Australia changing suppliers that’s the problem but how they did it. France found out they got dropped from the news, if I’m right.


AFAIK, the cost comes from Australia's original requirement to adapt an existing nuclear design into a diesel one. France was one of the few willing to tackle the work, which is also why this 180° change towards nuclear-powered subs is a bit weird.


Taleb favours "street smarts" over the "intellectual yet idiot". In this context, "street smarts" probably means being a one-boxer.


Do not talk about the basilisk.


While the article does have a point, it's too soon to conclude why some places had more or fewer covid infections/deaths.

If you look at the overall global picture - location, climate, size, population, demographics, education, GDP, mask mandates, ... - there is no clear conclusion; it's a causal mess.

It's safe to say that at this moment nobody can predict where a covid wave will hit and how big it will be, outside a very general sense - that it might be bad in the winter in unvaccinated places.


I'm not really sure it's fair to consider the article as making its point at all. At the outset the author seems to rail against "The Science" but then goes on to show data that kind of disputes their own point.

As just one example, they make the point that Sweden's recent mortality rate is pretty close to Finland's and Norway's, and show a graph of the rate of excess deaths per 100k people, with Sweden at 0.06 and Finland and Norway at 0.04.

That's 2,000 more excess deaths per 100k people in Sweden than in its neighbours. This equates to an additional 208k+ people who would die if the full country were infected, based on the presented mortality rate data.
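For transparency, the arithmetic behind those two figures (which only works if the plotted 0.06 and 0.04 are read as mortality fractions, i.e. 6% vs 4%, rather than literal deaths per 100k, and taking Sweden's population as roughly 10.4 million, which is my assumption):

$$(0.06 - 0.04)\times 100{,}000 = 2{,}000 \ \text{per 100k}, \qquad 0.02 \times 10.4\times10^{6} \approx 208{,}000$$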

If we want to be scientific about this point, we'd have to ask several questions to actually understand whether those numbers are meaningful in the way the author suggests:

- Is it typical for Sweden's all-cause mortality rate to be 50% higher than its neighbours'?
- If not, what is the typical difference?
- Could there be other unaccounted-for confounding factors?

That these questions and answers were omitted demonstrates, I think, pretty clearly that the author hasn't fully considered the point they are attempting to make, or suggests that the questions went unanswered because answering them might undermine the point they set out to make.


A major confounding factor with things like mask mandates is that there is no reliable data on adherence and no reliable data on the types of masks used by the general public. N95 masks are better than ordinary surgical masks, and surgical masks are better than neck gaiters or cheap thin cloth masks.


Germany had an N95 mask mandate (called FFP2 in Europe), nothing else was allowed, and yet had a very similar infection curve to Sweden (according to the article's picture), which had no mask mandate.

Mask quality is definitely a factor, but it can't be the major factor, unless people wear masks 100% of the time and never take them off outside of the house, which we know is not true. There is also the "eye protection" factor, which seems to exist.


We know it helps doctors, as doctors had a lower rate than the general population in the USA, and we also know the severity of a case is linked to the severity of exposure, so it's safe to assume masks have helped a ton.


The mechanics are so simple it’s weird to me that people argue otherwise.

