Cheap QLC drives become super slow once they start to get fairly full and the controller starts garbage collecting (folding SLC-cached writes into QLC at maybe 10 MB/s). IMHO this is not good enough for an OS drive.
Very easy to reproduce: 1. Buy a cheap QLC drive. 2. Fill it with Steam games. 3. Delete some Steam games and download new ones. 4. Watch write speeds tank to zero for long periods while downloading.
It's due to garbage collection on very slow QLC NAND. You won't see it until the drive gets 60%+ full. Until then, the drive pretends to be an SLC drive with very fast writes, but then it starts to show its true colors. Yuck.
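If you want to see the cliff for yourself, here's a rough sketch (not a rigorous benchmark): keep writing and syncing fixed-size chunks and print each chunk's throughput. The file name, chunk size, and chunk count below are placeholder values; a real test needs to write far more data than the drive's pseudo-SLC cache holds, and with incompressible data.

```rust
use std::fs::{remove_file, File};
use std::io::{self, Write};
use std::time::Instant;

// Write `total` chunks of `chunk` bytes, syncing each one to the device,
// and return the per-chunk throughput in MB/s. On a nearly full QLC drive,
// later chunks should slow down drastically once the SLC cache is exhausted.
fn measure(path: &str, chunk: usize, total: usize) -> io::Result<Vec<f64>> {
    let buf = vec![0u8; chunk]; // use incompressible data on real hardware
    let mut f = File::create(path)?;
    let mut rates = Vec::with_capacity(total);
    for _ in 0..total {
        let t0 = Instant::now();
        f.write_all(&buf)?;
        f.sync_data()?; // force the write to the device, not just the page cache
        rates.push(chunk as f64 / t0.elapsed().as_secs_f64() / 1e6);
    }
    drop(f);
    remove_file(path)?;
    Ok(rates)
}

fn main() -> io::Result<()> {
    for (i, mbps) in measure("testfile.bin", 16 << 20, 4)?.iter().enumerate() {
        println!("chunk {i}: {mbps:.0} MB/s");
    }
    Ok(())
}
```

Point the path at the drive under test and bump the chunk count until the total written exceeds the cache size (often tens to hundreds of GB).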
About the Lobster language used: The first thing I do when encountering a new language is look at its memory management, since what I want to do with a piece of code is usually build and manipulate data in a safe and efficient manner, so this is central. I am happy to see that Lobster seems to take a new(ish) and pragmatic approach to memory management, and that there is a whole document describing it in detail (https://aardappel.github.io/lobster/memory_management.html), which means the language creator agrees that this is important. I am also happy to see the language seems to support fast memory management in a multi-threaded environment, which is absolutely not self-evident in many languages.
Thanks for sharing, it's indeed a great way to quickly see what a language has to offer.
From what I understand, the main innovation of Lobster here is that `class Foo` is a boxed type, while `struct Bar` will be inlined. I'm not sure I see how that's an improvement over choosing between `Foo` and `Box<Foo>` at each instantiation. It also does reference counting by default, and tries to optimise it out at compile time by assigning a single owner and treating other references as borrows.
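To make the comparison concrete, here's my own rough Rust analogy (not Lobster's actual implementation; `Int2` and `Monster` are made-up example types): a Lobster `struct` behaves like a plain `Copy` struct stored inline, while a Lobster `class` behaves like a heap allocation with a single owner.

```rust
// Like a Lobster `struct`: inlined, always passed by value.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Int2 {
    x: i32,
    y: i32,
}

#[derive(Debug, PartialEq)]
struct Monster {
    pos: Int2,
    hp: i32,
}

// Like instantiating a Lobster `class`: heap-allocated, single owner.
fn spawn(pos: Int2, hp: i32) -> Box<Monster> {
    Box::new(Monster { pos, hp })
}

fn main() {
    let a = Int2 { x: 1, y: 2 }; // lives inline on the stack
    let b = a; // a plain copy: nothing for an owner/borrow analysis to track
    let m = spawn(b, 100);
    println!("{a:?} {b:?} {m:?}");
    // The difference: in Lobster the choice is made once at the type
    // declaration (`struct` vs `class`); in Rust it is made at every use
    // site (`T` vs `Box<T>`).
}
```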
We often see complaints that Rust's ownership puts a lot of burden on the programmer, but there really is a point at which it clicks and you almost entirely stop having to fight the borrow checker.
Lobster is meant to be a more high-level language than Rust, so encoding what you want 99% of the time in the type made sense. It also makes code easier for people to read, knowing that common types like int2 are always by value.
That said, it would be easy to add annotations to force the other use case; it's just that so far that has not been needed :)
There is an interesting discussion about the need for ray tracing in one of the later Digital Foundry videos. The argument goes that sometimes baked lighting is impractical because of the size of the maps and how much dynamic lighting you need. The latest Doom game is one such case: its light maps would be hundreds of GBs. But I guess most other games are fine with baked lighting.
There are also much cheaper methods of dynamic lighting than real-time ray tracing. You can approximate, you can cheat, and it will look almost as good.
Maybe buyers started migrating to IPv6 when they saw how expensive IPv4 addresses are, and there is a delay before the migration actually shows up in the market. IPv4 addresses are way more expensive than I thought, upwards of $60 per address, jeez...
At 5% ROI though, even $60 an address works out to only $3 a year. Address owners typically charge far more -- Amazon, for example, charges about $43 per address per year.
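The arithmetic behind those two numbers, as a quick sketch (the $0.005/hour figure I use for AWS's public-IPv4 fee is my assumption of the rate behind that ~$43; plug in the current price):

```rust
// Yearly opportunity cost of the capital tied up in one purchased address.
fn carrying_cost(purchase_price: f64, annual_rate: f64) -> f64 {
    purchase_price * annual_rate
}

// Yearly cost of renting an address billed at an hourly fee.
fn yearly_rental(hourly_fee: f64) -> f64 {
    hourly_fee * 24.0 * 365.0
}

fn main() {
    println!("owning:  ${:.2}/year", carrying_cost(60.0, 0.05)); // $3.00
    println!("renting: ${:.2}/year", yearly_rental(0.005)); // $43.80
}
```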
Splitting a piece of software into multiple pieces and shipping the pieces (dependencies) independently is sometimes a good idea, but it has its limits. Maybe the limit should be dependencies that are very stable and used by many packages (libc, etc.). The hard-line policy Debian enforces here obviously is not working. I'm happy to see other distros solve this better. This might become really problematic for Debian in the future.