
> I like back-foot, underdog NVIDIA. Ascendant AMD hasn't drawn my ire yet, let's hope power corrupts slowly.

That "back-foot" "underdog" nVidia has the edge in the video market still... and 3x the market cap of AMD.



It's fair to extrapolate because their strategic decisions will be based on extrapolations.

NVIDIA had to overclock and hustle the current generation of cards and it's looking even worse for the next generation. Software was a moat when AMD was heavily resource constrained, but now they can afford the headcount to give chase. Between the chip shortage and crypto, there was plenty of noise on top of fundamentals, but one doesn't make strategic plans based on noise.

This is all speculative, of course. I'm sure if asked they would say it was a total coincidence. Just like AMD and Intel switching places on their stance towards overclocking. Complete coincidence that it matches the optimal strategy for their market position -- "milk it" vs "give chase." Somehow it always seems to match, though, and speculation is fun :)


NVIDIA is well, well ahead of AMD.

NVIDIA's cards were faster than AMD's despite the huge gap in transistor density that came from using the Samsung fab.

Don't get excited for the AMD graphics division up in Canada.


>NVIDIA's cards were faster than AMD's despite the huge gap in transistor density that came from using the Samsung fab.

They are roughly at par. AMD does better at lower resolutions because of their cache setup.

With the refreshed cards, AMD is slightly ahead.


Keep in mind that is at a particular price point.

NVIDIA's top-of-the-range chip is ahead of AMD's, and the 3080 SKU sits at a lower binning point on the bell curve than the 6950's.

Hence NVIDIA would be able to maintain a performance per watt crown at the 6950's price point if it sold its highest bins cheaper.

Given the gap in transistor density, that is an exorbitant architectural delta.


I wish my company were in the same desperate situation as Nvidia. One where we’d be faster than the competition with similar perf/W while using a much inferior silicon process…


APUs are eating into novideo's market; see e.g. the performance of the M1 iGPU.


Are APUs different from what we used to call integrated graphics cards?


The difference is getting blurry. APUs generally have better communication, lower latency, and more shared resources with the CPU. The ultimate ideal of an APU is to have memory unified with the CPU, which is the case in e.g. the PS3/PS4. Despite progress in heterogeneous computing (the neglected HSA), in SoCs, 3D interposers, high-bandwidth interconnects, and 3D memory such as HBM, the PC platform has yet to see a proper APU. In fact the M1 is probably the closest thing to an ideal APU on the market. But yes, the more time passes, the more the term iGPU denotes an APU. AMD bought ATI because of the Fusion vision: the idea that sharing silicon, resources, and memory between the CPU and the GPU would be the future of computing.
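To make that concrete, here is a minimal sketch of what split CPU/GPU memory costs today (CUDA purely as illustration; the kernel and buffer size are made up): every pass over the data pays for an explicit copy to the GPU and back, which is exactly the round-trip a true unified-memory APU is supposed to remove.

    // Classic split-memory flow: the data crosses the bus twice per GPU pass.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *host = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        float *dev = nullptr;
        cudaMalloc(&dev, bytes);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // copy #1: CPU -> GPU

        scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   // copy #2: GPU -> CPU

        printf("host[0] = %f\n", host[0]);
        cudaFree(dev);
        free(host);
        return 0;
    }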

An unrelated but very underrated thing is the eGPU. eGPUs are external to the PC, unlike a dGPU. So you can buy a thin laptop, connect it via Thunderbolt to an RTX 3080, and enjoy faster GPU performance than any laptop on the market allows, and enjoy a thin, lightweight, silent laptop the rest of the time. Disclaimer: Thunderbolt is still a moderate limiting factor in reaching peak performance.


> the PC platform has yet to see a proper APU

Wat. AMD literally invented the term 'APU' and has been shipping them since 2011. Fully unified CPU+GPU memory since 2014's Kaveri. That's a fully cache-coherent CPU & GPU, along with the GPU using the same shared virtual pageable memory as the CPU.

The M1 didn't add anything new to the mix.


It's a spectrum. I don't think that cache coherency was usable by developers/compilers. The only two ways I know of (HMM and HSA) are niche, used by nobody. GPGPU compute would GREATLY benefit from programs that can share memory between CPU and GPU without having to do needless high-latency round-trips and copies. So they failed in practice. They never did a CPU-addressable HBM interposer (despite having invented HBM), unlike what I believe the M1 does.
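For contrast with the copy-based sketch further up the thread, this is roughly what the shared path looks like from the programmer's side (again CUDA's managed memory purely as an illustration of the programming model; fine-grained HSA/HMM sharing on an APU is after the same thing): one pointer, no explicit round-trips.

    // Minimal sketch, assuming a CUDA GPU with managed-memory support:
    // the same pointer is dereferenced by the CPU and the GPU, and the
    // runtime/driver handles residency instead of explicit cudaMemcpy calls.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;
        cudaMallocManaged(&data, n * sizeof(float));    // one allocation, visible to both

        for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes it directly

        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU works on the same memory
        cudaDeviceSynchronize();                        // only a sync, no copy-back

        printf("data[0] = %f\n", data[0]);              // CPU reads the result in place
        cudaFree(data);
        return 0;
    }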


> An unrelated but very underrated thing is the eGPU. eGPUs are external to the PC, unlike a dGPU. So you can buy a thin laptop, connect it via Thunderbolt to an RTX 3080, and enjoy faster GPU performance than any laptop on the market allows, and enjoy a thin, lightweight, silent laptop the rest of the time. Disclaimer: Thunderbolt is still a moderate limiting factor in reaching peak performance.

Not just for laptops: this also sounds a bit like what the Switch dock could have been.

(And in some sense, it reminds me of Super FX chip for the SNES.)


APUs are AMD-speak for a CPU and GPU on the same die (Intel has similar parts but doesn't call them that). Integrated graphics cards (a misnomer since there is no card -- IGP or iGPU is probably more accurate) may or may not be on the same die (they could instead be on the motherboard, particularly in the chipset). That design is pretty rare/antiquated at this point though. Being on the same die means higher bandwidth, lower latency, etc.


I think Intel calls them XPUs.


Integrated video cards were integrated onto the motherboard. APUs/iGPUs are integrated into the CPU.



