The show makes reference to "Midnight in Chernobyl" in its epilogue, so I think it's safe to say it was one of the main sources of information for the show (though of course they took liberties, since it is a historical drama).
Mazin has talked about "Voices From Chernobyl" more than any other source.
"I used as many sources as I could find. I was looking at research articles in scientific journals; I was looking at governmental reports; I was looking at books written by former Soviet scientists who were at Chernobyl; I was reading books by Western historians who had looked at Chernobyl. I watched documentaries; I read first-person documents.
And then there was Voices From Chernobyl, which is unique. What Svetlana Alexievich did there, I think, was capture an aspect of history we rarely see, which is the story of the people who you wouldn't otherwise even know existed. We look at history from the point of view of the big movers, the big players, and she looks at history through the eyes of human beings. They're all equal to her: Whether they are generals or party leaders or peasants, it doesn't matter. And I thought that was just beautiful. It really inspired me."
So, again, this idea that anything not in the Legasov tapes was invented: no. The show is a fictionalized retelling, but that criticism doesn't stand.
Mazin almost completely ignored the INSAG-7 report, and in the last episode the show retells the same fictional story blaming the operators that Legasov himself presented to the IAEA meeting in Vienna in 1986, which was published as the original INSAG-1 report.
I agree there are some claims in this article that should be further scrutinized, but it's true that the levels of radiation were not as high as one might assume. The direction of the wind during and immediately following the disaster slowed the spread of radioactive material over Pripyat (this is also why southern Belarus was hit so hard). The prevailing winds in that region are northeast, and the Chernobyl power plant was on the north side of the city. By the time of the May Day parade the winds had shifted such that Kiev was downwind from Chernobyl.
The KGB did their best to contain information about the disaster in general, and the USSR wanted the May Day parade to go on as planned to make it look like things were fine. Even those with enough power or connections to be aware of the danger were pressured to participate. The parade was later remembered with infamy by the Ukrainian independence movement that followed the disaster.
Most of my information comes from what I remember of reading "Midnight in Chernobyl" and "Chernobyl: The History of a Nuclear Catastrophe".
It seems that, in practice, no, it's not possible, based on what I've read from people much closer to programming language design and compiler work:
"In practice, the challenge of programming language design is not one of expanding a well-defined frontier, it is grappling with a neverending list of fundamental tradeoffs between mutually incompatible features.
Subtyping is tantalizingly useful but makes complete type inference incredibly difficult (and in general, provably impossible). Structural typing drastically reduces the burden of assigning a name to every uninteresting intermediate form but more or less precludes taking advantage of Haskell-style typeclasses or Rust-style traits. Dynamic dispatch substantially assists decoupling of software components but can come with a significant performance cost without a JIT. Just-in-time compilation can use runtime information to optimize code in ways that permit more flexible coding patterns, but JIT performance can be unpredictable, and the overhead of an optimizing JIT is substantial. Sophisticated metaprogramming systems can radically cut down on boilerplate and improve program concision and even readability, but they can have substantial runtime or compile-time overhead and tend to thwart automated tooling. The list goes on and on."
I do agree with this, but also, I don't really understand a lot of these tradeoffs; at least to me, they read as false tradeoffs.
Her first example is excellent. In Haskell, we have global type inference, but we've found it to be impractical. So by far the best practice is not to rely on it; at the very least, all top-level items should have type annotations.
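A tiny illustration (my own example, not hers): GHC will happily infer the signature below, but the convention is to write it out anyway, and GHC even ships a `-Wmissing-signatures` warning to enforce exactly this.

```haskell
-- GHC infers this type on its own, but writing the signature documents
-- intent and keeps type errors local instead of propagating module-wide.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)
```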
The second one, structural typing: have your language support both structural types and nominal types, then? This is basically analogous to how Haskell solved the problem: add type roles. Nominal types can't convert to one another, whereas structural types can; see the sketch below. Not that Haskell is the paragon of language design, but... there might be some other part of this I'm missing; still, given the obviousness of this solution, the fact that I haven't seen it mentioned is striking.
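Concretely, here's the Haskell machinery I'm referring to (my own sketch, my own type names):

```haskell
{-# LANGUAGE RoleAnnotations #-}

import Data.Coerce (coerce)

-- A newtype shares its runtime representation with Int, so conversion
-- via coerce is free:
newtype Age = Age Int

toAge :: Int -> Age
toAge = coerce

-- Declaring the parameter's role as nominal forbids coercing through it,
-- even though nothing about the runtime representation would change:
data Tagged a = Tagged Int
type role Tagged nominal

-- Rejected by the type checker, because Tagged's parameter is nominal:
-- bad :: Tagged Int -> Tagged Age
-- bad = coerce
```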
On dynamic dispatch: allow it to be customized by the user; this is done today in many cases! Problem solved. Plus, with a global optimizing compiler, if you can live with a big executable, you can have your cake and eat it too.
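A sketch of what "let the user choose" looks like, in Haskell terms (my example, not from the article): the same class supports specializable static calls and, where you explicitly opt in, a runtime-dispatched wrapper.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

class Shape a where
  area :: a -> Double

data Circle = Circle Double
data Square = Square Double

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Square where
  area (Square s) = s * s

-- Static dispatch: the list is homogeneous and calls can be specialized.
totalStatic :: Shape a => [a] -> Double
totalStatic = sum . map area

-- Dynamic dispatch, opted into explicitly: the wrapper carries the method
-- table at runtime, allowing heterogeneous lists at the cost of an
-- indirect call.
data AnyShape = forall a. Shape a => AnyShape a

totalDynamic :: [AnyShape] -> Double
totalDynamic = sum . map (\(AnyShape s) -> area s)
```

Usage would look like `totalDynamic [AnyShape (Circle 1), AnyShape (Square 2)]`; nothing forces you to pay the indirection where you don't need it.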
On JIT: yes, JIT compilation takes time; it is not free. But JIT can make sense even in languages that are AOT compiled, since in general it optimizes code based on observed usage patterns. If AOT loop unrolling makes sense in C, then I certainly think runtime optimization of fully AOT-compiled code must be advantageous too. And today you can just about always count on getting yourself a spare core to run this kind of thing on; we have so many of them available and don't have the tools to easily saturate them. Even if you can saturate N cores today, you probably won't be able to on the next generation, when you have N+M. Sure, there has to be some overhead when swapping out the code, but I really don't think that's where the mentioned overhead comes from.
Metaprogramming systems are another great example: yes, if we keep them the way they are today, at the _very least_ we're saying we need some kind of LSP integration to make them reasonable for tooling to interact with. Except, guess what: every language of any reasonable community size needs an LSP nowadays anyway. Beyond that, there are lots of other ways to think about metaprogramming than just the macros we commonly have today.
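For instance, Template Haskell plus the lens library's `makeLenses` is the classic boilerplate-killer, and it generates exactly the kind of names tooling has to be taught to see (a sketch assuming the lens package is available):

```haskell
{-# LANGUAGE TemplateHaskell #-}

import Control.Lens

data Point = Point { _px :: Double, _py :: Double }

-- Runs at compile time and generates lenses `px` and `py` from the
-- underscored fields; an editor/LSP has to understand the splice to
-- know those names exist at all.
makeLenses ''Point

shiftRight :: Double -> Point -> Point
shiftRight dx = over px (+ dx)
```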
I get her feeling; balancing all of this is hard. One thing you can't really get away from here is that all of this increases language, compiler, and runtime complexity, which makes everything much harder to do.
But I think that's the real tradeoff here: implementation complexity. The more you address these tradeoffs, the more complexity you add to your system, and the harder the whole thing is to think about and work on. The more constructs you add to the semantics of your language, the more difficult it is to prove the things you want about its semantics.
But, that's the whole job, I guess? I think we're way beyond the point where a tiny compiler can pick a new set of these tradeoffs and make a splash in the ecosystem.
Would love to have someone tell me how I'm wrong here.
It is truly risk-free. You always buy the calls using the customer's money, but you only give them the calls if every leg of the parlay is correct. Assuming they charge a commission on top of the asset price to cover transaction processing, they shouldn't lose money.
Edit: I don't really know how pricing these things usually works, but I could see them taking on some risk to price these attractively.
I get that Robinhood's business model is stealing from the poor and giving to themselves, and their customers are mostly unsophisticated, but why wouldn't the customers just buy the underlying call options if the price to buy the parlay is the sum of the underlying options?
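To put made-up numbers on the risk-free claim above: say a two-leg parlay costs $50 + $50 = $100 plus a $2 commission. The broker takes $102, spends $100 buying both calls, and keeps the $2 no matter what. If both legs hit, the customer receives both calls; if either misses, the broker keeps whatever the winning leg is now worth. The broker never has uncovered exposure, which is also why, at sum-of-the-legs pricing, a customer would be strictly better off buying the legs directly and keeping any individual winners.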
It was implemented as a generic type in this design because the way that uncertainty "pollutes" underlying values maps well onto monads, which were expressed through generics in this case.
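A minimal sketch of that idea in Haskell (my own names, not the design's): once any input is uncertain, everything derived from it stays uncertain, which is exactly what bind enforces.

```haskell
-- A value is either exact or merely approximate.
data Uncertain a = Exactly a | Roughly a
  deriving Show

-- Uncertainty "pollutes": once Roughly, always Roughly.
taint :: Uncertain a -> Uncertain a
taint (Exactly a) = Roughly a
taint (Roughly a) = Roughly a

instance Functor Uncertain where
  fmap f (Exactly a) = Exactly (f a)
  fmap f (Roughly a) = Roughly (f a)

instance Applicative Uncertain where
  pure = Exactly
  Exactly f <*> u = fmap f u
  Roughly f <*> u = taint (fmap f u)

instance Monad Uncertain where
  Exactly a >>= f = f a
  Roughly a >>= f = taint (f a)

-- Roughly 3 >>= \a -> pure (a + 1)  ==>  Roughly 4
```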
The desktop background reminds me of some screens that MEPIS OS used [0] back when I was first getting into Linux in high school and the idea of live distributions blew my mind. I assume it's a coincidence; people just like pyramids, I guess.
I don't agree with the conclusion put forward in the article. I'm reminded of my time trying to get into Urbit many years ago, or DAPs. Those certainly required me to think differently about things, but that didn't make me, or many other people, want them. Having to think differently might be a necessary precondition for stickiness, but it certainly isn't equivalent to it. Barriers to adoption lock people out and, once they have been overcome, lock people in.
I've done a number of text-based slide presentations with `marp` and I've been pleased with the results. Mostly it's just plain markdown slides, but if you want to get into the weeds with HTML and have a two-column slide or something, you can do it. https://marp.app/
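For anyone curious, a minimal marp deck is just markdown with a bit of front matter; a standalone `---` starts a new slide (a small sketch, from marp's documented conventions):

```markdown
---
marp: true
theme: default
---

# Title slide

Some intro text.

---

## Second slide

- plain markdown bullets
- raw HTML is allowed when you need multi-column layouts
```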
If I'm understanding the parent comment correctly: a fact may have political implications, but it doesn't depend on politics. In other words, reality is independent of our interpretation of it (i.e., philosophical realism). The rub, of course, is that coming to know facts about most things is a highly social process, filtered through interpretation and biases. Everything can become political if it needs to be decided upon by a group.
EDIT: I have avoided using "truth" here because it's a more general term than "fact", which carries the connotation of referring to something concrete.
I'm glad to see more sites giving visibility to long-lasting products. This reminds me of https://buymeonce.com/ but I like the community-driven aspect of this one. I'll definitely be keeping an eye on it.