I stacked the deck in AMD's favor by comparing against a 3-year-old chip on an older node.
Why is AMD using 3.6x more power than M1 to get just 32% higher performance while having 17% more cores? Why are AMD's cores nearly 2x the size despite being on a better node and having 3 more years to work on them?
Why are Apple's scores the same on battery while AMD's scores drop dramatically?
Apple does have a reason not to run at 120w -- it doesn't need to.
Meanwhile, if AMD used the same 33w, nobody would buy their chips because performance would be so incredibly bad.
You should try not to talk so confidently about things you don't know about -- this statement
> if AMD used the same 33w, nobody would buy their chips because performance would be so incredibly bad
is completely incorrect, as another commenter (and I think the notebookcheck article?) points out. Around 30w is the sweet spot for these processors, and the reason that 110w laptop looks so inefficient is that it gives the APU an 80w TDP, which is a bit silly since it performs only marginally better than it would at, say, 30 watts. It's not a good idea to treat that example as a benchmark of the APU's efficiency; efficiency varies with how much TDP you give the processor, and 80w is not a good TDP for these chips.
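FWIW the diminishing returns aren't mysterious: with the usual P ≈ C·f·V² approximation and voltage roughly tracking frequency near the top of the curve, power grows roughly with the cube of the clock, so throughput grows only with roughly the cube root of power once you're past the sweet spot. A toy sketch with illustrative numbers (not measurements from any review):

```python
# Toy illustration (not measured data): near the top of the V/f curve,
# dynamic power is roughly P ~ f * V^2 and V roughly tracks f, so P ~ f^3.
# Inverting: relative throughput ~ (relative power) ** (1/3).

def rel_perf(power_w, ref_power_w=30):
    """Throughput relative to the same chip at ref_power_w, under the toy f^3 model."""
    return (power_w / ref_power_w) ** (1 / 3)

for w in (15, 30, 45, 80):
    print(f"{w:>3}w -> {rel_perf(w):.2f}x the 30w throughput")
```

Under that (very rough) model, 80w is ~2.7x the power of 30w but only ~1.4x the throughput, which lines up with "only performs marginally better".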
Halo products with high scores sell chips. This isn’t a new idea.
So you lower the wattage. Now you’re at M1 Pro levels of performance with 17% more cores and nearly double the die area, barely competing with a chip three years older while also being on a newer, more expensive node.
That’s not selling me on your product (and that’s without mentioning the worst core-to-core latency I’ve seen in years when moving between P and C cores).
> if AMD used the same 33w, nobody would buy their chips because performance would be so incredibly bad
I’m writing this comment on an HP ProBook 445 G8 laptop. I believe I bought it in early 2022, so it's a relatively old model. The laptop has a Ryzen 5 5600U processor, which uses ≤ 25W. I’m quite happy with both the performance and the battery life.
It's well known that performance doesn't scale linearly with power.
Benchmarking incentives on PC have long pushed x86 vendors to drive their CPUs to points of the power/performance curve that make their chips look less efficient than they really are. Laptop benchmarking has inherited that culture from desktop PC benchmarking to some extent. This is slowly changing, but Apple has never been subject to the same benchmarking pressures in the first place.
You'll see in reviews that Zen5 can be very efficient when operated in the right power range.
But then you would see an AMD CPU with a lower TDP getting higher benchmark results.
> Why is AMD using 3.6x more power than M1 to get just 32% higher performance while having 17% more cores?
Getting 32% higher performance from 17% more cores implies higher performance per core.
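Just to spell out the arithmetic on the numbers quoted above:

```python
# Arithmetic on the figures quoted upthread (no new measurements).
perf_ratio = 1.32   # ~32% higher overall score
core_ratio = 1.17   # with 17% more cores

print(f"per-core performance ratio: {perf_ratio / core_ratio:.2f}x")  # ~1.13x, i.e. ~13% faster per core
```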
The power measurements that site uses are from the plug, which is highly variable to the point of uselessness because it takes into account every other component the OEM puts into the machine and random other factors like screen brightness, the thermal solution, and temperature targets (which affect fan speed, which affects fan power consumption), etc. If you measure the wall power of a system with a discrete GPU that by itself has a TDP >100W and the system is drawing >100W, this tells you nothing about the efficiency of the CPU.
AMD's CPUs have internal power monitors and configurable power targets. At full load there is very little daylight between the configured TDP and what they actually use. This is basically required because the CPU has to be able to operate in a system that can't dissipate more heat than that, or one that can't supply more power.
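For what it's worth, on Linux those internal counters are typically exposed through the powercap sysfs interface (the driver is still called intel-rapl even on AMD parts that report RAPL energy). A minimal sketch, assuming that interface exists on your kernel/CPU and is readable (usually needs root); the path below is the common default, not guaranteed:

```python
# Read the CPU package energy counter via the Linux powercap interface and
# report average watts over an interval. Assumes /sys/class/powercap/intel-rapl:0
# is the package domain on this machine; adjust the path if your layout differs.
import time

RAPL = "/sys/class/powercap/intel-rapl:0"

def read_uj(path):
    with open(path) as f:
        return int(f.read())

def avg_package_watts(seconds=5.0):
    max_uj = read_uj(f"{RAPL}/max_energy_range_uj")
    start = read_uj(f"{RAPL}/energy_uj")
    time.sleep(seconds)
    end = read_uj(f"{RAPL}/energy_uj")
    delta_uj = (end - start) % max_uj   # handle counter wraparound
    return delta_uj / 1e6 / seconds     # microjoules -> watts

if __name__ == "__main__":
    print(f"average package power: {avg_package_watts():.1f} W")
```

Run something like that alongside a benchmark and you get the number that actually matters for the CPU, independent of the GPU, screen, fans and PSU losses that wall measurements pick up.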
> Meanwhile, if AMD used the same 33w, nobody would buy their chips because performance would be so incredibly bad.
33W is approximately what their mobile CPUs actually use. Also, models with even lower configured TDPs exist, and they're not that much slower: e.g. the 7840U has a base TDP of 15W vs. 35W for the 7840HS, and the difference is a base clock of 3.3GHz instead of 3.8GHz.
> Getting 32% higher performance from 17% more cores implies higher performance per core.
I don't disagree that it is higher perf/core. It is simply MUCH worse perf/watt because they are forced to clock so high to achieve those results.
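Taking the at-the-wall figures from upthread at face value (the replies below dispute how meaningful they are), the perf/watt gap works out roughly like this:

```python
# Arithmetic on the upthread figures (wall power, which is disputed below).
perf_ratio = 1.32    # ~32% faster overall
power_ratio = 3.6    # at ~3.6x the measured power

ppw = perf_ratio / power_ratio
print(f"perf/watt vs M1: {ppw:.2f}x ({1 / ppw:.1f}x worse)")   # ~0.37x, i.e. roughly 2.7x worse
```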
> The power measurements that site uses are from the plug, which is highly variable to the point of uselessness
They measure the HX370 using 119w with the screen off (using an external monitor). What on that motherboard would be using the remaining 85+W of power?
TDP is a suggestion, not a hard limit. Before thermal throttling, they will often exceed the TDP by a factor of 2x or more.
As to these specific benchmarks, the R9 7945HX3D you linked to used 187w while the M2 Max used 78w for CB R15. As to perf/watt, Cinebench before 2024 wasn't using NEON properly on ARM, but was using Intel's hyper-optimized libraries for x86. You should be looking at benchmarks without such a massive bias.
> I don't disagree that it is higher perf/core. It is simply MUCH worse perf/watt because they are forced to clock so high to achieve those results.
The base clock for that CPU is nominally 2 GHz.
> They measure the HX370 using 119w with the screen off (using an external monitor). What on that motherboard would be using the remaining 85+W of power?
For the Asus ProArt P16 H7606WI? Probably the 115W RTX 4070.
> TDP is a suggestion, not a hard limit. Before thermal throttling, they will often exceed the TDP by a factor of 2x or more.
TDP is not really a suggestion. There are systems that can't dissipate more than a specific amount of heat and producing more than that could fry other components in the system even if the CPU itself isn't over-temperature yet, e.g. because the other components have a lower heat tolerance. There are also systems that can't supply more than a specific amount of power and if the CPU tried to non-trivially exceed that limit the system would crash.
The TDP is, however, configurable, including different values for boost. So if the OEM sets the value to the higher end of the range even though their cooling solution can't handle it, the CPU will start out there and gradually lower its power use as it becomes thermally limited. This is not the same as "TDP is a suggestion"; it's just not quite as simple as a single number.
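A crude way to picture that "start at the boost limit, settle toward the sustained limit" behavior (the names and numbers below are purely illustrative, not AMD's actual firmware parameters, which are OEM-specific):

```python
import math

# Toy model only: the real behavior is governed by firmware/EC settings
# (sustained and boost power limits, skin/junction temperature targets)
# that vary from laptop to laptop.
SUSTAINED_W = 35   # hypothetical long-term limit set by the OEM
BOOST_W = 54       # hypothetical short-term boost limit
TAU_S = 30         # hypothetical time constant for thermal headroom draining

def allowed_package_power(t_seconds):
    """Allowed package power t seconds into a sustained all-core load."""
    return SUSTAINED_W + (BOOST_W - SUSTAINED_W) * math.exp(-t_seconds / TAU_S)

for t in (0, 15, 30, 60, 120):
    print(f"t={t:>3}s  limit ~ {allowed_package_power(t):.0f} W")
```

The short-lived spike at the start is what a naive "it exceeds TDP by 2x" reading picks up; the sustained value is what the cooling and power delivery were actually designed around.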
> As to these specific benchmarks, the R9 7945HX3D you linked to used 187w while the M2 Max used 78w for CB R15.
Which is the same site measuring power consumption at the plug on an arbitrary system with arbitrary other components drawing power. Are they even measuring it through the power brick and adding its conversion losses?
These CPUs have internal power meters. Doing it the way they're doing it is meaningless and unnecessary.
> You should be looking at benchmarks without such a massive bias.
Do you have one that compares the same CPUs on some representative set of tests and actually measures the power consumption of the CPU itself? Diligently conducted benchmarks are unfortunately rare.
Note however that the same link shows the 7945HX3D also ahead in Blender, Geekbench ST and MT, Kraken, Octane, etc. It's consistently faster on the same process, and has a lower TDP.
It's the only one where they measured the power use. I don't get to decide which tests they run. But if their method of measuring power use is going to be meaningless then the associated benchmark result might as well be too, right?
> Geekbench 6 is perfectly fine for that stuff. But that still shows Apple tying in MT and beating the pants off x86 in 1T efficiency.
It shows Apple behind by 8% in ST and 12% in MT, with no power measurement for that test at all, but with an Apple CPU that has a higher TDP. Meanwhile, the claim was that AMD hadn't even caught up on the same process, which isn't true.
> x86 1T boosts being silly is where the real problem comes from. But if they don’t throw 30-35w at a single thread they lose horribly.
They don't use 30-35W for a single thread on mobile CPUs. The average for the HX 370 from a set of mostly-threaded benchmarks was 20W when you actually measure the power consumption of the CPU:
34W was the max across all tests, presumably the configured TDP for that system, reached in tests like compiling LLVM that max out arbitrarily many cores.