
CUDA isn't all that and a bag of chips. It's just the Facebook/Twitter of the data science and, by extension, LLM space. There are tensor processors and other ASICs built for specific compute functions that can give Nvidia a challenge, but every gamer knows there has always been a performance difference between Nvidia and AMD/ATI.

Ok, point made, Nvidia. Kudos.

ATI had their moment in the sun before ASICs ate their cryptocurrency lunch. So both still had/have relevance outside gaming. But I see Intel is starting to take the GPU space seriously, and they shouldn't be ruled out.

And as mentioned elsewhere in the comments, there is Vulkan. There is also the idea of virtualized GPUs, because the bottleneck isn't the CPU anymore... it's the GPU. As I mentioned, there are tensor processors, and Moore's Law thresholds are coming back again as manufacturing approaches 1 nanometer; at some point we will hit a wall with current chips and see a change in technology - again.
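To make the Vulkan point concrete: the kernels themselves are tiny and portable in principle. Here's a minimal sketch (my own toy example, not anything from TFA) of the shape of thing CUDA actually runs, a saxpy-style vector add:

    // Hedged sketch: a toy saxpy kernel, y[i] = a * x[i] + y[i].
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory, for brevity
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 256 threads per block
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x); cudaFree(y);
    }

The equivalent Vulkan/GLSL compute shader is about as long. What isn't portable is everything around it: cuBLAS, cuDNN, the profilers, and a decade of answered questions. That ecosystem, not the kernel language, is the network effect I'm comparing to Facebook/Twitter.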

So while Nvidia is living the life - unless they have a crystal ball for where tensor hardware is going, so they can steer CUDA toward it, a "co-processor" future is coming, and with it the next step toward NPUs will be taken. This is where Apple is aligning itself because, after all, they had the money and just said, "Nope, we'll license this round out..."

AMD isn't out yet. They, along with Intel and others, just need to figure out where the next bottlenecks are and build those toll bridges.


