The GPUs, sure. The mainboards and CPUs can be used in clusters for general-purpose computing, which is still more prevalent in most scientific research as far as I am aware. My alma mater has a several-thousand-core cluster that any student can request time on as long as they have reason to do so, and it's all CPU compute. Getting non-CS majors to write GPU code is unlikely in that scenario.
I provide infrastructure for such a cluster that is also available to anyone at the university free of charge. Every year we swap out the oldest 20% of the cluster, as we run a five-year depreciation schedule. In the last three years, we've mostly been swapping in GPU resources at a ratio of about 3:1. That's in response to both usage reports and community surveys.
Insane question, asked for the purposes of discussion: Would it make sense if those GPUs were top-of-the-line for years? Like if TSMC were destroyed?
Even then, I don't understand why being a landlord to the place where AI is trained would be financially exciting... Wouldn't investing in NVIDIA make a lot more sense?
> I'm sure there's alien civilisations that are more aggressive than us, but also ones that are less so.
What is the minimum amount of aggression necessary to evolve sentience? What is the maximum amount of aggression in an interstellar space-faring species? Where is humanity on that scale?
A super-aggressive species would likely self-annihilate before it could harness enough energy to travel interstellar distances... So the jury's still out on us.
There are definitely categories of code where you could realistically expect lift-and-shift from C which you're confident is correct to safe Rust that's maybe not very idiomatic but understandable.
I believe Microsoft has a tool that did this for some bit-twiddling crypto code. They have high confidence that their C crypto code is correct; they run the process and get safe Rust, so that confidence transfers. And because it's now safe Rust, it drops straight into Rust software with all of their existing confidence intact.
But it's never going to be all code, and it might well never be most code.
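To give a feel for what "safe but not idiomatic" output might look like, here's a hypothetical hand transliteration of a small C-style bit-twiddling routine into safe Rust. This is not the output of Microsoft's tool or any real translator; the function and constants are made up for illustration.

```rust
/// Hypothetical example: a C-style 32-bit mixing round transliterated
/// into safe Rust. The structure mirrors an imagined C original
/// (explicit index loop, wrapping arithmetic) rather than idiomatic
/// Rust (iterators, zip).
fn mix_round(state: &mut [u32; 4], key: &[u32; 4]) {
    let mut i: usize = 0;
    while i < 4 {
        // C's `a += k[i]` with wraparound becomes wrapping_add.
        state[i] = state[i].wrapping_add(key[i]);
        // C's `a = (a << 7) | (a >> 25)` becomes rotate_left.
        state[i] = state[i].rotate_left(7);
        // C's `a ^= a >> 16` carries over unchanged.
        state[i] ^= state[i] >> 16;
        i += 1;
    }
}

fn main() {
    let mut state = [0x0123_4567u32, 0x89ab_cdef, 0xfedc_ba98, 0x7654_3210];
    let key = [1u32, 2, 3, 4];
    mix_round(&mut state, &key);
    println!("{:x?}", state);
}
```

Nobody would write Rust this way from scratch, but bounds are checked and overflow is explicit, which is exactly where the transferred confidence comes from.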
"you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language."
You could also inherit a massive codebase old enough to need a prostate exam that was written by many people who wanted to prove just how much of the language spec they could use.
If selecting a job mostly under the Veil of Ignorance, I'll take a large legacy C project over C++ any day.
I can't say definitively, but based on my experience in 1991 with a 68K-based PowerBook 100, I doubt it. Even with a 1-bit monochrome display and no custom co-processors executing in parallel, the battery life wasn't great. Also, most LCD screens don't support some of the weird field/frame and timing things the Amiga could do with CRT displays. Even rounding up from the 59.94 Hz NTSC field rate to 60 fps would have caused some Amiga software to have display issues.
Consider a spoon-feeding spectrum for AIs working in large codebases. Where on it is state-of-the-art AI?
"Here's a bug report, fix it."
"Here's a bug report, an explanation of what's triggering it, fix it."
"Here's a bug report, an explanation of what's triggering it, and ideas for what needs to change in code, fix it."
"Here's a bug report, an explanation of what's triggering it, and an exact plan for changing it, fix it."
If I have to spoon-feed as much as in the last case, then I might as well just do it myself. The second-to-last case is about the level of a fresh hire who is still ramping up and would still be considered a drain under Brooks's Law.
I suppose the other axis is: How much do I dread performing the resultant code review?
Put them together and you have a "spoon-fed / dread" graph of AI programmer performance.
Another thing is that working on a large codebase is not even mostly about writing code; it's about verifying the change. There are a lot of tickets in our backlog that I could roll through "fixing" by just joyriding my IDE through the codebase, but verifying each of those changes will in some cases take days (I work on a platform supporting most of the company's business).
I guess the AI folks will insist that the next step is "agentic" AIs that push the changes to a test environment they keep up to date, add and modify tests while making sure those tests still check intent, create an MR, argue with the other agents in the review, check the nightly integration report, and support the change into production.
Wouldn't it be nice if popular libraries could export to .so files, so the best language for a task could use the bits & pieces it needed without the programmer needing to know Python (and possibly C)?
Were I to write a scripting language, trivial export to .so files would be a primary design goal.
Unfortunately the calling conventions and memory models are all different, so there's usually hell to pay going between languages. Perl passes arguments on a stack, Lisp often uses tagged integers, Fortran stores matrices in the other order, ... it goes on and on. SWIG (https://swig.org) can help a lot, but it's still a pain.
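As a concrete illustration of why the C ABI ends up being the lingua franca, here's a minimal sketch of calling into a hypothetical C-ABI shared library from Rust. The library name (libthing.so) and function are made up; the point is that everything at the boundary has to be expressed in C terms: raw pointers, explicit lengths, no tagged integers or language-specific calling conventions.

```rust
#[link(name = "thing")]
extern "C" {
    // Assumed C prototype: double thing_sum(const double *xs, size_t n);
    fn thing_sum(xs: *const f64, n: usize) -> f64;
}

fn main() {
    let data = [1.0_f64, 2.0, 3.0];
    // unsafe because the compiler cannot check the foreign contract:
    // the pointer/length pair must describe valid, readable memory.
    let total = unsafe { thing_sum(data.as_ptr(), data.len()) };
    println!("sum = {total}");
}
```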
Exporting to .so (a) makes it non-portable (you suddenly need to ship a whole compatibility matrix of .so files, including a Windows DLL or several) and (b) severely constrains the language design. It's very hard to do this without either forcing the developer to do explicit heap management using the caller's heap or very carefully hiding your VM inside the shared object... which has interesting implications once you have multiple such libraries. Also, you don't have a predefined entry point (there's no equivalent of DllMain), so your caller is forced to manage that and any multithreading implications.
It basically forces your language to be very similar to C.
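You can see this pressure even in a language that was designed with this use case in mind. Here's a minimal sketch of the producer side: a Rust library built as a cdylib (crate-type = ["cdylib"] in Cargo.toml) exporting a C ABI. The names are illustrative, not from any real library.

```rust
// Everything at the boundary is pushed toward C idioms: an opaque
// pointer, explicit create/destroy, no generics or rich types.
pub struct Counter {
    value: u64,
}

#[no_mangle]
pub extern "C" fn counter_new() -> *mut Counter {
    // The caller owns this allocation and must hand it back to
    // counter_free; there is no GC to clean up after them.
    Box::into_raw(Box::new(Counter { value: 0 }))
}

#[no_mangle]
pub extern "C" fn counter_increment(c: *mut Counter) -> u64 {
    // Null checks and lifetime discipline are the caller's problem,
    // exactly as they would be in C.
    let c = unsafe { &mut *c };
    c.value += 1;
    c.value
}

#[no_mangle]
pub extern "C" fn counter_free(c: *mut Counter) {
    if !c.is_null() {
        unsafe { drop(Box::from_raw(c)) };
    }
}
```

A language with a garbage collector or a VM has a much harder time than this: its objects can't be handed out as plain pointers without pinning or wrapping them, which is exactly the "hide your VM inside the shared object" problem described above.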