
AMD marketing learned their lesson. There was controversy last generation over the advertised "up to" boost clocks, where many chips couldn't reach their advertised numbers at launch. This time, with an advertised boost of 4800 MHz, AnandTech got up to 4950 MHz and Gamers Nexus hit 5050 MHz, though these are probably golden review samples.

It's really impressive to me how big of an improvement AMD has made under the exact same constraints as last generation. Same process (even down to the same PDK), though generally higher yields probably let them choose higher bins. Same exact chiplet size to mount to the same substrate. Same power availability. Same IO die with same interfaces and memory controller. Yet they achieve +19% IPC and higher frequency with just design changes. I wish there were more detailed information available about what day-to-day engineering work goes into these design changes. Speculating:

- Designing better workload and electrical simulations to make better decisions when evaluating changes.
- Running lots of simulations to determine the optimal size and arrangement of each cache (a toy version of this kind of sweep is sketched below).
- Improving CPU binning processes and data, improving on-die sensors, and improving boost algorithms to reliably run closer to the edge.
- Tweaking parameters of automated layout algorithms to reduce the area of specific functional units.
- Improving the algorithms implemented in logic for various processes.
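
To make the cache-sizing item concrete, here's a minimal sketch of what "sweep a parameter against a workload trace" can look like. Everything in it (the synthetic trace, the fully-associative LRU model, the candidate sizes) is invented for illustration; it is not how AMD evaluates designs, which involves far more detailed simulators.

    // Toy sweep: try candidate L3 capacities against a synthetic access
    // trace and see which one captures the working set. Purely illustrative.
    #include <cstdint>
    #include <cstdio>
    #include <list>
    #include <random>
    #include <unordered_map>
    #include <vector>

    // Minimal fully-associative LRU cache model, keyed by line address.
    struct LruCache {
        size_t capacity;                     // in 64-byte lines
        std::list<uint64_t> order;           // most recently used at the front
        std::unordered_map<uint64_t, std::list<uint64_t>::iterator> pos;
        explicit LruCache(size_t cap) : capacity(cap) {}
        bool access(uint64_t line) {
            auto it = pos.find(line);
            if (it != pos.end()) {           // hit: move to front
                order.splice(order.begin(), order, it->second);
                return true;
            }
            if (pos.size() == capacity) {    // full: evict the LRU line
                pos.erase(order.back());
                order.pop_back();
            }
            order.push_front(line);
            pos[line] = order.begin();
            return false;
        }
    };

    int main() {
        // Synthetic trace: 90% of accesses land in a ~24 MiB hot set,
        // 10% stream through a ~64 MiB cold region (made-up workload).
        std::mt19937_64 rng(42);
        std::uniform_int_distribution<uint64_t> hot(0, 393215);    // 24 MiB of lines
        std::uniform_int_distribution<uint64_t> cold(0, 1048575);  // 64 MiB of lines
        std::bernoulli_distribution is_hot(0.9);
        std::vector<uint64_t> trace(2000000);
        for (auto& line : trace)
            line = is_hot(rng) ? hot(rng) : (uint64_t(1) << 24) + cold(rng);

        // Sweep candidate capacities and report the hit rate for each.
        for (size_t mib : {8, 16, 32, 64}) {
            LruCache cache(mib * 1024 * 1024 / 64);
            size_t hits = 0;
            for (uint64_t line : trace) hits += cache.access(line);
            std::printf("%2zu MiB model: hit rate %.1f%%\n",
                        mib, 100.0 * hits / trace.size());
        }
        return 0;
    }

With this made-up trace the hit rate jumps once the modeled capacity covers the ~24 MiB hot set, which is the kind of knee a real sweep would be looking for.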



Apart from the performance increase, one of the impressive feats is the TDP of around 142 W, where Intel chips use from 200 W up to 260 W. Source: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...

And the Linux performance is looking great too: https://www.phoronix.com/scan.php?page=article&item=ryzen-59...


Is it valid to compare TDP numbers between vendors?


Not really. As the AnandTech article states, TDP is not the same as peak power for modern Intel and AMD CPUs.

The difference between TDP and peak power for AMD is around 35 W, while for Intel it is up to 140 W.

When choosing a PSU, that difference can have a real impact on system stability.
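
As a rough, illustrative sizing example (ballpark figures, not measurements): a 105 W TDP Ryzen that peaks around 140 W, plus a ~320 W GPU and ~50 W for the rest of the system, is roughly 510 W of worst-case draw; a 125 W TDP Intel part that can spike to ~250 W in the same build pushes that to roughly 620 W. Budgeting off the TDP number alone would understate the second build's peak by well over 100 W.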


No, but the linked data is actual power usage, not TDP.


Yeah, wow, and here I just bought a new setup with multiple home servers all on the 3600, thinking the improvement wouldn't be that significant and the prices would be much higher initially anyway.

The 5600X looks fantastic at a 65 W TDP. Add the unavoidable price drop of Zen 2(+) on top of that. Buyer's remorse is real.


shrug

I bought a Ryzen 7 2700X a few months before the release of the Zen 2 / 3700X. And yeah, it's faster for nearly the same price (though due to timing I paid ~$230, so it's not insignificant compared to the launch-day pricing).

There's "always" more, more, more performance for the dollar in tech, and there is no perfect price/performance product. The Ryzen 5 3600 was and still is a great price/performance product and that doesn't change because a more expensive, higher performing chip is released.

You can make decisions at a point in time, or if time is irrelevant, you can keep waiting for new milestones... and then make a decision in that point of time instead.

For me, I waited nearly 10 years to replace a 3+ GHz quad-core CPU, so the 4+ GHz octa-core was a nice upgrade (albeit still not earth shattering) and I am very glad to have it. I love the recent increase in performance growth over time, but it will likely be quite a few years before I upgrade my desktop again. Of course, I'm just one guy, and I'm glad many enthusiasts and computer-using professionals will benefit from all this advancement!


This has always been the case, too. In school I once read somewhere that if you knew you had a computation to do that was 100 MFLOPS (or whatever seemed like a large number at the time), and you had to complete it sometime in the next 18 months, a solid strategy is to wait 9 months and buy more-modern hardware.

You'd end up with more 'residual' computing power (when your computation is done, you have the hardware lying around, and now it's more powerful than if you had bought on day 0), and it would probably cost less in energy as well.
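
To put toy numbers on that: assume performance doubles every 18 months (purely an assumption for illustration) and the job would take 9 months on day-0 hardware. A small sketch of the trade-off:

    // Toy model of the "wait, then buy faster hardware" strategy.
    // The doubling period, job length, and deadline are all made up.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double doubling_months = 18.0;      // assumed perf doubling period
        const double work_months_day0 = 9.0;      // job length on day-0 hardware
        const double deadline = 18.0;             // months

        for (double buy = 0.0; buy <= 15.0; buy += 3.0) {
            double speedup = std::pow(2.0, buy / doubling_months);
            double finish = buy + work_months_day0 / speedup;
            std::printf("buy at month %4.1f: %.2fx hardware, finish at month %5.2f%s\n",
                        buy, speedup, finish,
                        finish <= deadline ? "" : "  (misses deadline)");
        }
        return 0;
    }

With these made-up numbers, waiting 9 months still finishes well inside the 18-month window and leaves you with roughly 1.4x faster (and typically more efficient) hardware afterwards; wait much past a year and the finish date slips out of the window.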


We probably used the same book, haha. I also remember questions like this from my architecture class. I suspect the book was written at a time when inter-generational scaling was a bit more dramatic.


Same here, but my son was too excited about the 5950X and wanted it as a birthday/Xmas gift, so there you go. I'm geeking out for the first time since I did my own upgrades from Apple ][ -> PC 286 -> PC 386SX -> Cyrix 6x86. I wanted to skip Threadripper, as he normally works in a single app - whether it's a game (Fortnite, COD, Roblox, etc.) or video editing (Premiere, After Effects, others). The only issue is that we have an older B450 motherboard, but we were notified that a BIOS update would be coming!


> I wanted to skip Threadripper, as he normally works in a single app

I thought those types of applications would actually be able to take good advantage of all those cores.


We've hit a performance problem with multi-socket setups (NUMA) - basically one core accessing memory attached to the other socket, hence slowdowns. And it's amplified by the use of atomics (shared_ptr, etc.). What we did was simply disable one of the two CPUs (it was a dual-socket machine). I guess for servers executing many (mostly) single-threaded jobs (a.k.a. "borg", "kubernetes", "docker") it's okay, but when you have an app using all threads (TBB-style) it becomes bottlenecked.

Sysinternals' https://docs.microsoft.com/en-us/sysinternals/downloads/core... actually captures this detail (there are probably better apps, but it was nice for inspecting machines quickly). The tool measures the relative slowdown of cross-node access.
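
For anyone curious about the mechanism, here's a minimal sketch (not our production code, and the numbers are machine-dependent): many threads hammering one shared atomic counter versus each thread bumping its own padded counter. On a dual-socket box the shared case gets far worse because the cache line ping-pongs across the interconnect - the same traffic pattern shared_ptr refcounts generate. The numactl invocation in the comment assumes Linux.

    // Toy illustration of cache-line contention from shared atomics.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <memory>
    #include <thread>
    #include <vector>

    struct alignas(64) Padded { std::atomic<long> v{0}; };  // one cache line each

    static double run(bool use_shared, int nthreads, long iters) {
        std::atomic<long> shared{0};
        std::unique_ptr<Padded[]> local(new Padded[nthreads]);
        std::vector<std::thread> workers;
        auto t0 = std::chrono::steady_clock::now();
        for (int t = 0; t < nthreads; ++t) {
            workers.emplace_back([&, t] {
                // Either everyone hits the same line, or each thread its own.
                std::atomic<long>& c = use_shared ? shared : local[t].v;
                for (long i = 0; i < iters; ++i)
                    c.fetch_add(1, std::memory_order_relaxed);
            });
        }
        for (auto& w : workers) w.join();
        return std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - t0).count();
    }

    int main() {
        int n = std::thread::hardware_concurrency();
        if (n == 0) n = 8;
        long iters = 2000000;
        std::printf("shared counter:      %.3f s\n", run(true, n, iters));
        std::printf("per-thread (padded): %.3f s\n", run(false, n, iters));
        // To see the NUMA effect specifically, compare a run pinned to one
        // node (e.g. "numactl --cpunodebind=0 ./a.out") with an unpinned run
        // that spreads threads across both sockets.
        return 0;
    }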


16 cores ought to be plenty for the next few years.


in my experience the ideal time to buy is usually two or three months after the most recent launch. at this point any major kinks will be worked out in a new stepping, and you don't have to fight over the last few units in stock. plus you get to see if the competition responds with price cuts and/or new skus. of course at this point, the next launch is six months away, give or take. I find that having a deliberate strategy like this reduces buyer's remorse a lot.


The 3600 is an incredible chip that will be relevant for a long time. Buyer's remorse is real because of marketing. Do you really need that extra 20% performance?


The 5600X is also as expensive as the 3700X is now. The 3600 is a lot cheaper. It's still a good option and a good deal.


This is why I almost always wait for the second gen of a new kind of hardware. E.g., my current CPU is a Sandy Bridge: when the Core i series came out I wanted one badly, but I waited and bought the second gen in its first week of launch.

The same with the upcoming MacBooks set to be demoed on the 10th. If I want one, I'll wait for at least the second gen.

Regarding AMD CPUs this is an unusual situation, because the 3rd or 4th gen may be optimal, which is highly uncommon. This is because the next gen will run on faster RAM. The bottleneck on the Zen 2 processors was cache delays; the bottleneck on the Zen 3 processors is RAM delay. This shows me there are more improvements to come. However, unfortunately, this also means cost will go up quite a bit in newer generations, with smaller gains making them not as worth it. I don't game much and I do data science work, so processing is done in the cloud, and I have virtually no reason to upgrade my nearly 10-year-old CPU, as odd as that may sound.


It looks like AMD actually has two teams working on Zen. So Zen 1 was from the first team, Zen 2 was a refinement of Zen 1, and Zen 3 is the first gen from the second team. So if you follow your rule, you might want to wait for Zen 4, although it isn't clear to what degree the teams work together (if team 2 works pretty closely with team 1, then this could be thought of as a big refinement with an 'outside' point of view looking at it).


Zen was refined into Zen+, then later Zen2 was completely different.

Zen and Zen+ did NOT have an I/O die yet. Zen2 added the I/O die, and now Zen3 improves the compute die (without changing the I/O die).


It's arbitrary. If you count the Core i series from Nehalem, Westmere is 2nd, so Sandy Bridge is 3rd. If you count the *Bridge line, Sandy is 1st. Additionally, Zen+ is 2nd and it's the least-improved generation in that span.

My general preference: buy the 'tock' generation (but Intel recently has had no 'ticks' for desktop, and AMD did tick and tock together with Zen 2).


I know, right? That was the silver lining of the Intel stagnation era. You could buy a CPU and not feel like you're missing out for the next 3-4 generations.


There's buyer's remorse and there are generational shifts. We're going through a huge transformative moment right now. With the addition of Zen 3 and the new GPUs this is a huge bang/buck improvement across the board. We haven't seen such a shift in many years.

Most of the time I would advise against waiting but right now there's a clear advantage to being on the other side of this shift.


I had a twinge of that. But for the same price as my i7-3770K, I've gotten a 3600 and double the memory I had, with even faster clocks. Oh, and more cores, a higher clock speed, cooler running, and less power hungry.

Just waiting on the Radeon reviews before I decide on a new GPU to get into 1440p gaming.


I still use my 2012 desktop with an i7-3770K. Except for NVMe upgrades and an RTX 2060 Super (originally a 670, then a 980), it's still the original build.

I will be going for a 3900X for my next machine. Probably waiting for the RTX 3080 Ti I now see being leaked, or a regular 3080 (not interested in the 3090).


Also still running my 3770k desktop. Also the original build with the exception of replacing my existing GTX680 with a 980 a year or two in.

Amusingly, my thoughts of upgrading were initially triggered by wanting new storage. I still have the original 250 GB SATA SSD I put in, along with a handful of hard drives from various older systems. The motherboard doesn't have an M.2 slot, and I was thinking, hmmm... I bet there are a lot of things I can't upgrade at this point.

Waiting on these new Zen 3 CPUs to be available, but not in a huge rush since I also can't get my intended 3080, so we will see which comes into stock first. It will be a nice bump to a new gen CPU, DDR4, and a few gens of GPU once I can actually buy the dang parts. If they were around I would've built the thing a month ago.


I'm even willing to pay the premium for a prebuilt PC this time but it will be a while before I can find that Ryzen 5900X / RTX 3080 (Ti?) combo anywhere.


I have a 3770K at home too. How did you upgrade to NVMe? There's no M.2 port on my motherboard and I can't boot the OS from the NVMe / PCI Express drive I bought.



Thanks, I saw this forum a while ago, but there's no way I'm running a hacked BIOS on a machine where I value my data!


Its "hacked" by adding NVME bootrom module. You can also run Clover EFI from HDD/pendrive without modding bios https://www.win-raid.com/t2375f50-Guide-NVMe-boot-without-mo...


My OS drive is still the original 100 GB SSD from 2012. I replaced my 2 TB HDD with a 2 TB Samsung 970 Evo Plus on an M.2 PCIe 3.0 x4 riser card.

Unfortunately, when I switched from the GTX 980 to the RTX 2060 Super, all PCIe 3.0 lanes were being used (ASRock Z77 Extreme 6), so I had to put the riser card into a PCIe 2.0 slot, which means I am not getting the full performance.


> wouldn't be that significant and the prices would be much higher initially anyway

I mean, the 5600X is about double the price of the 3600, depending on when you got the latter.


I went from a 1600 to a 3700X and it was quite a jump. The 5800X looks great, but I think I'm staying with the 3700X this gen, especially since I would need a new mobo (still on a B350 that I bought with the 1600!).


My problem is that I'm building a new rig and trying really hard to not go all out and get a 5900X instead of a 3700X.

I keep my desktop for a long time, so it seems to be "okay" to do. For reference, I'm upgrading from a launch-year 2500K. But still...


Anecdotally I upgraded from an (overclocked) 2500K to a mere 3600 and am perfectly happy with the performance I'm getting, even running virtual machines or compiling large code bases.

Would a 3700X have been even nicer? Probably, but when I bought my new CPU last year it was almost twice the price for two extra cores. So I do feel your temptation, but I don't think you would regret the 3700X.


I have a different problem. I can't find a rational justification for getting one. My work is not significantly impacted by CPU speed and 32 gigabytes of RAM still feel sufficient.

Oh well...


   > It's really impressive to me how big of an improvement AMD has made under the exact same constraints as last generation.
It's 8-core chiplets now, but yeah same die size according to marketing.


It was 8-core chiplets last generation, too. It's now 8-core CCXs instead of 4-core CCXs, but the same total number of cores per chiplet: last gen it was 2 CCXs per chiplet, and this gen it's 1 CCX per chiplet.

This image shows the change more clearly: https://images.anandtech.com/doci/16214/Zen3_arch_19.jpg

You can see it's the same overall amount of "stuff" in the chiplet. It went from 8c / 32MB L3 per chiplet to... 8c / 32MB L3 per chiplet. But it's now not divided into 2 smaller chunks as it was.
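
The practical difference is what a single core sees: on Zen 2 a core could only allocate into its own CCX's 16 MB slice of L3, and traffic to the other CCX went over the fabric; on Zen 3 every core can use the full 32 MB and all eight cores share one L3 domain.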


Yield must be very good for AMD to not put out a cheap 5800 with two defective 4c CCXs


Defective 4c parts will go into the 3300X replacement (or an eventual 3100 replacement if things get really bad).


I think they put two 6c chiplets into the 5900X.


It seems like evidence of incredible discipline, forethought and prioritization to achieve this. They must have really been thinking about how to build a solid foundation with a clear path for iteration. To achieve it without discovering some bottleneck they hadn't accounted for along the way is really impressive.


I still find the 4000 series for laptops impressive. I just ordered a 4800HS system with 8C/16T in 14” screen format. That’s astounding to me.


A Zephyrus G14?

Make sure to read the subreddit, if it happens to be that one. The CPU is fine, but dGPU integration is, uh, interesting.


Good guess and thanks for the tip. All I could turn up from searching Reddit is that I’d need to disable the AMD iGPU to use the dGPU for games. This doesn’t seem a problem for me since I will use the dGPU for CUDA work only, and I’d expect that should work ok or I’ll return it.


The only issue I've seen with mine is there was a glitch that prevented the dGPU from going 100 percent to sleep when not playing games, which limited battery life to only 4 hours. Otherwise it works like a dream. Impressive CPU.


You absolutely don't need to do that. It'll automatically enable the dGPU when needed.

The biggest issue I've had is with making sure it turns off, and /r/zephyrusg14 is somewhat helpful there. On Linux it's more straightforward, but currently needs a few kernel patches.



