But regardless, the GP post was not talking about matrix math; it seems it was talking about sending an HTTP request and waiting for a response, which genuinely is I/O-bound on the TCP socket.
The systems that use it as a native threading model are obsolete, but there's also this sentence there:
>Cooperative multitasking is used with await in languages with a single-threaded event-loop in their runtime, like JavaScript or Python.
There's no reason Rust can't have an executor that does the same, and you only use that within the event loop on your one or two HTTP worker threads. If you're waiting in a thread for an HTTP request to return, that's never going to be CPU-bound. I'm still failing to see what the problem here is, besides a complaint about some Rust crate only supporting a multi-threaded executor, which again is a different problem from whether it's done with async futures or not. One could just as easily write some C code that forces the use of threads.
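To make the single-threaded event-loop model concrete, here's a minimal sketch in Python (one of the languages the quoted comment names). The "HTTP requests" are hypothetical: `asyncio.sleep` stands in for waiting on a TCP socket, since the point is only that all the waits overlap on one thread with no extra OS threads involved.

```python
import asyncio
import time

async def fake_http_get(url: str, delay: float) -> str:
    # Stand-in for an HTTP request: the coroutine is parked on the
    # event loop while "waiting", without blocking the thread.
    await asyncio.sleep(delay)
    return f"response from {url}"

async def main() -> list[str]:
    start = time.monotonic()
    # Three "requests" waiting concurrently on a single thread.
    results = await asyncio.gather(
        fake_http_get("a.example", 0.1),
        fake_http_get("b.example", 0.1),
        fake_http_get("c.example", 0.1),
    )
    elapsed = time.monotonic() - start
    # The waits overlapped: total time is ~0.1s, not 0.3s.
    assert elapsed < 0.3
    return results

if __name__ == "__main__":
    print(asyncio.run(main()))
```

A single-threaded Rust executor (e.g. one built on `block_on`-style polling) does the same thing: the futures cooperate on one thread, which is all you need when the work is waiting on sockets rather than burning CPU.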
Those languages are also well known for not handling multiple CPU-bound threads well. (And for that matter, it's simply wrong about Python, which uses native threads but serializes them behind the GIL: you need to write native code to use more than one core effectively from one process.)
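The Python point can be demonstrated directly: the threads below are real kernel threads (each reports a distinct native thread id), even though the GIL means their pure-Python work is serialized rather than run in parallel. A sketch, using a `Barrier` so all four threads are provably alive at once:

```python
import threading

barrier = threading.Barrier(4)
lock = threading.Lock()
native_ids = []

def worker():
    # Pure-Python CPU work: holds the GIL, so the four threads
    # take turns rather than running on four cores.
    total = sum(i * i for i in range(100_000))
    with lock:
        native_ids.append(threading.get_native_id())
    barrier.wait()  # keep all four threads alive simultaneously

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Four distinct OS threads ran, despite the GIL serializing them.
print(len(set(native_ids)))
```

So Python threads are fine for I/O-bound waiting (the GIL is released around blocking syscalls), but for CPU-bound parallelism you need native extensions or multiple processes.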
The goal is to NOT do what you're suggesting. It's a holdover from when native threads were much more expensive than they are today, and multiple cores on a single CPU were rare.