I've also wondered about this recently, also in regard to Rust, which is hailed as the great savior but takes the same minimal approach to its standard library and needs loads of dependencies.
Rust doesn't have a very broad stdlib, but it has an extremely deep stdlib. Rust's stdlib is huge for the things it provides. Classical JS's stdlib was neither deep nor broad.
Furthermore, tons of those "loads of dependencies" that people point to are effectively blessed crates. Crates like regex are maintained by the Rust project itself, and others like serde by core project members; they're much closer to first-party dependencies than to random third-party code.
I don't think that more minutes of contact is better for anybody.
As a patient, I want to spend as little time with a doctor as possible and still receive maximally useful treatment.
As a doctor, I would want to extract maximal comp from insurance, which I don't think is tied to time spent with the patient but rather to the number of different treatments given.
Also please note that in most of the western world medical personnel are currently massively overstretched, so reducing their overall workload would likely lead to a better result per treatment given.
The GPU memory model is quite different from the CPU memory model, with application-level explicit synchronization, coherency, and so on. I don't think transparent compression would be possible, and even if it were, it would surely carry a drastic perf downside.
This is the real killer feature of Vulkan/DX12: it makes writing a generalized renderer so much easier because you don't need to batch draw calls per vertex layout of individual meshes. Personally I use Buffer Device Address for connecting Multi-Draw Indirect calls to mesh definitions and materials as well.
I just wish there were more literature about this, especially about the perf implications. Also, synchronization is very painful, which may be why this is hard to do at the driver level inside OpenGL.
I turned on fastmath in Python's Numba compiler while thinking "of course I want faster math, duh". It took me a while to figure out it was the cause of many "fun" subtle bugs. Never touching that stuff again.
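A rough illustration of the class of bug involved, in plain Python with no Numba required: fastmath permits the compiler to reassociate floating-point operations and to assume NaNs never occur, and both assumptions change observable results.

```python
# Why fastmath-style reassociation changes results: float addition
# is not associative, so reordering (which fastmath allows) matters.

big, small = 1e16, 1.0

# Left-to-right: the small term is absorbed by the big one...
left_to_right = (big + small) - big   # 0.0

# ...but a reassociated evaluation order keeps it.
reassociated = big - big + small      # 1.0

assert left_to_right != reassociated

# fastmath also assumes NaNs don't exist, so `x != x` NaN checks
# (True for NaN under strict IEEE semantics) can silently be
# "optimized" into False by a fastmath-enabled compiler.
nan = float("nan")
assert nan != nan  # holds in strict mode; not guaranteed under fastmath
```

Neither result is a bug in Python itself; the point is that code relying on a specific evaluation order or on NaN propagation quietly breaks once those guarantees are traded away for speed.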
LINQ in Entity Framework certainly isn't perfect, but frankly it's so far ahead of anything else available in other languages. It was a brilliant idea to add an expression type to the language AND to create a standard set of interfaces that enable collection interop AND to then create universal clients for local and remote collections that use all this machinery to deliver first-class DX.
Having worked in TS with Prisma for a bit, what stands out is how a Prisma query is effectively trying to express an expression tree in structural form.
The difference being that the C# code is actually passing an expression tree, and the code itself is not evaluated, which allows C# to read the code and convert it to SQL.
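As a rough sketch of what "an expression tree in structural form" means (all names here are hypothetical, not Prisma's or LINQ's API): operator overloading can capture a comparison as data instead of evaluating it, and a translator can then walk that data and emit SQL, which is roughly what C# expression trees give LINQ providers for free.

```python
# Minimal hand-rolled expression tree (hypothetical API, for illustration):
# the comparison builds a data structure instead of computing a boolean.

class Col:
    """A column reference whose comparisons build tree nodes."""
    def __init__(self, name):
        self.name = name

    def __gt__(self, value):
        return ("gt", self.name, value)  # capture, don't evaluate

def to_sql(expr):
    """Walk the captured node and emit SQL instead of running it."""
    op, col, val = expr
    ops = {"gt": ">"}
    return f"{col} {ops[op]} {val!r}"

age_filter = Col("age") > 18   # no boolean is computed here
print(to_sql(age_filter))      # prints: age > 18
```

Prisma-style structural queries express the same idea with nested object literals; C# instead has the compiler reify ordinary-looking code into `System.Linq.Expressions` trees, so the query provider sees the full structure of the predicate.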
Proper macros operate on the AST, which is to say, they are exactly like the Expression stuff in C#, except they can represent the entirety of the language instead of some subset of it (C# doesn't even fully support the entirety of System.Linq.Expressions; e.g. if you want a LoopExpression, you have to spell the tree out explicitly yourself).
Or you can do your own parsing, meaning you can handle pretty much any syntax that can be lexed as Rust tokens. That means, e.g., the C# LINQ syntactic sugar (from ... select ... where, etc.) can also be implemented as a macro. Or you could handle raw SQL and type-check it.
It looks great, but I'm missing what's innovative about it. AAA procedural foliage has been done for 20 years, terrain too, and Blender has had procedural geometry nodes for a long time as well. What is so interesting here?