No - because this either eliminates the majority of the work entirely or shifts it from GPU to CPU - and Nvidia does not sell CPUs.
If the AI market gets 10x bigger, and GPU work gets 50% smaller (which is still 5x larger than today) - but Nvidia is priced for 40% annual growth over the next ten years (roughly 28x larger) - there is a price mismatch.
It is theoretically possible for a massive reduction in GPU usage, or a shift from GPU to CPU, to benefit Nvidia if it causes the market to grow enough - but it seems unlikely.
Also, I believe (someone please correct if wrong) DeepSeek is claiming a 95% overall reduction in GPU usage compared to traditional methods (not the 50% in the example above).
If true, that is a death knell for Nvidia's growth story after the current contracts end.
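A quick back-of-the-envelope check of those numbers (a minimal sketch in C; the 10x market, the 50% and claimed 95% GPU reductions, and the ten years of 40% compounding all come from the comments above, and the variable names are just illustrative):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Nvidia priced for 40% annual growth, compounded over ten years */
    double priced_in = pow(1.40, 10);           /* ~28.9x today's revenue */

    /* Scenario above: market 10x bigger, GPU work halved */
    double market_growth = 10.0;
    double gpu_after_50 = market_growth * 0.50; /* 5x today - far below ~29x */

    /* DeepSeek's claimed 95% reduction in GPU usage */
    double gpu_after_95 = market_growth * 0.05; /* 0.5x - smaller than today */

    printf("priced in: %.1fx, 50%% cut: %.1fx, 95%% cut: %.1fx\n",
           priced_in, gpu_after_50, gpu_after_95);
    return 0;
}
```

Compiles with `cc -lm`. Even in the optimistic 10x-market case, a 95% efficiency gain leaves GPU demand below today's level, let alone the ~29x a 40%-compounding valuation implies.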
I can see close to zero possibility that the majority of the work will be shifted to the CPU. Anything a CPU can do can just be done better with specialised GPU hardware.
Then why do we have powerful CPUs instead of a bunch of specialized hardware? It's because the value of a CPU is in its versatility and ubiquity. If a CPU can do a thing well enough, then most programs/computers will do that thing on a CPU rather than take on the increased complexity and cost of a GPU, even if a GPU would do it better.
We have both? Modern computing devices like smartphones use SoCs with integrated GPUs. GPUs aren't really specialized hardware, either; they are general-purpose hardware useful in many scenarios (built for graphics originally, but clearly useful in other domains, including AI).
People have been saying the exact same thing about other workloads for years, and have always been wrong - mostly claiming custom chips or FPGAs will beat out general-purpose CPUs.
Yes, I was too hasty in my response. I should have been more specific that I mean ML/AI type tasks. I see no way that we end up on general purpose CPUs for this.
In terms of inference (and training) of AI models, sure, most things that a CPU core can do would be done cheaper per unit of performance on either typical GPU or NPU cores.
On desktop, CPU decoding is passable, but it's still better to have a graphics card for 4K. On mobile, you definitely want to stick to codecs like H.264/HEVC/AV1 that are supported by your phone's decoder chips.
CPU chipsets have borrowed video decoder units and SSE instructions from GPU-land, but the idea that video decoding is now a generic CPU task is not really true.
Now maybe every computer will come with an integrated NPU and it won't be made by Nvidia, although so far integrated GPUs haven't supplanted discrete ones.
I tend to think today's state-of-the-art models are ... not very bright, so it might be a bit premature to say "640B parameters ought to be enough for anybody" or that people won't pay more for high-end dedicated hardware.
> Now maybe every computer will come with an integrated NPU and it won't be made by Nvidia, although so far integrated GPUs haven't supplanted discrete ones.
Depends on what form factor you are looking at. The majority of computers these days are smartphones, and they are dominated by systems-on-a-chip.
That's also what AVX is, just with a conservative number of lanes. If you really understand your problem, I don't see why you would need 32 threads over much smaller data sizes, or why you would want that compute sitting far away from your CPU.
Whether your new coprocessor or instructions look more like a GPU or like something else doesn't really matter once we are done squinting, calling these graphics-like problems, and/or claiming they need a lot more than a middle-class PC.
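To make the AVX comparison concrete, here is a minimal sketch (the function name and sizes are hypothetical): the same data-parallel loop a GPU would spread across a 32-thread warp, one element per thread, done with 8-float AVX lanes on a single CPU core.

```c
#include <immintrin.h>
#include <stdio.h>

/* Scale-and-add over a buffer - the kind of loop a GPU warp would run with
   32 threads, one element each. One AVX instruction handles 8 floats. */
void saxpy_avx(float a, const float *x, float *y, int n) {
    __m256 va = _mm256_set1_ps(a);              /* broadcast a into 8 lanes */
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);     /* load 8 floats */
        __m256 vy = _mm256_loadu_ps(y + i);
        vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy); /* y = a*x + y */
        _mm256_storeu_ps(y + i, vy);
    }
}

int main(void) {
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8] = {0};
    saxpy_avx(2.0f, x, y, 8);
    for (int i = 0; i < 8; i++) printf("%.0f ", y[i]); /* 2 4 6 ... 16 */
    printf("\n");
    return 0;
}
```

Compile with `cc -mavx`. The programming model is the same squint-worthy SIMD either way, just with fewer lanes and no trip across the PCIe bus.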