Spoiler, but the answer is basically that old hardware rules the day because it lasts longer and is more reliable over timespans of decades.
DDR5 32GB is currently going for ~$330 on Amazon
DDR4 32GB is currently going for ~$130 on Amazon
DDR3 32GB is currently going for ~$50 on Amazon (4x8GB)
For anyone for whom cost is a concern, older hardware seems like a particularly easy choice, especially if you're comfortable with a Linux environment, since the massive droves of recently retired hardware that can't upgrade past Windows 10 work great with your Linux distro of choice.
Currently trying to source a large amount of DDR4 to upgrade a 3 year old fleet of servers at a very unfortunate time.
It's very difficult to source in quantity, and is going up in price more or less daily at this point. Vendor quotes are good for hours, not days when you can find it.
Have you considered dropping by in person to your nearest computer recycler/refurbisher? As a teen I worked at one, and the boxes and boxes of RAM sticks pulled from scrapped machines (usually scrapped due to bad main boards) made a strong impression. They tend to not even put the highest-spec stuff they pull into anything, as refurbished machines are mostly sold/donated in quantity (to schools and the like) and big customers want standardized SKUs with interchangeable specs for repairability more than they want performance. Workers at these places are often invited to build personal machines to take home out of these “too good” parts — and yet there’s so much that they can’t possibly use it all up that way. If someone showed up asking if they could have some DDR3, they might literally just let you dig through the box and then sell it to you by weight!
> Workers at these places are often invited to build personal machines to take home out of these “too good” parts — and yet there’s so much that they can’t possibly use it all up that way.
I work in the refurb division of an e-waste recycling company. Those days are over for now. We're listing RAM that will sell for more than scrap value (about $2 per stick), which is at least 4 GB DDR3. And we got a list of people who will buy all we got.
I think it's not about one's own money versus someone else's.
Hardware is usually a small piece of the financial puzzle (unless you're building a billion dollar AI datacenter I guess) and even when the hardware price quadruples, it's still a small piece and delivery time is much more important than optimizing hardware costs.
The price of the hardware, even with inflated prices, is probably equal or less than the combined price of all the software licenses that go on that machine.
At some point that’s true, but don’t they run the n-1 or 2 generation production lines for years after the next generation launches? There’s a significant capital investment there and my understanding is that the cost goes down significantly over the lifetime as they dial in the process so even though the price is lower it’s still profitable.
This is only true as long as there's enough of a market left. You tend to end up with oversupply and excessive inventory during the transition, and that pushes profit margins negative and removes all the supply pretty quickly.
Undoubtedly the cost would go up, but nobody is building out datacenters full of DDR4, either, so I don't figure it would go up nearly as much as DDR5 is right now.
One possible outcome is that the remaining memory manufacturers dedicate all their capacity to AI, and when the bubble pops, they lose their customers and go out of business too.
I wouldn't be too surprised to find at least some of the big ram foundries are deeply bought into the fake money circles where everybody is "paying" each other with unrealised equity in OpenAI/Anthropic/whoever, resulting in a few trillion dollars worth of on-paper "money" vanishing overnight, at which stage a whole bunch of actual-money loans will get called in and billion dollar companies get gutted by asset strippers.
Maybe larger makerspaces and companies like Adafruit, RasPi, and Pine should start stockpiling (real) money, and pick themselves up an entire fab full of gear at firesale prices so they can start making their own ram...
Unfortunately, older RAM also means an older motherboard, which also means older socket and older CPUs. It works, but it's not usually a drop in replacement.
Well, they're entirely different, not just the slot. Intel 12th/13th/14th gen CPUs all support either DDR4 or DDR5, but any given motherboard can only support one or the other. I don't think there's a single AMD CPU that supports both?
I bought 384 GB of DDR5-4800 RDIMM a few months back for a Zen 4c system with lots of I/O like dual 25G NIC's and ten MCIO x8 ports... So far it has been the best value for money compared to any memory before it. The bandwidth is nuts. Power consumption went DOWN compared to DDR4. Doesn't matter much if you got two sticks, but as soon as you get into 6+ territory, it does matter a lot. The same goes for NVMe's.
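For a sense of where that bandwidth comes from, here's a rough back-of-the-envelope; the 6-channel count is an assumption about a typical Zen 4c server board, not a quoted spec:

    # Peak theoretical DDR bandwidth = channels x bus width (bytes) x transfer rate
    channels = 6             # assumed memory channel count for this class of board
    bytes_per_transfer = 8   # 64-bit channel
    mt_per_s = 4800          # DDR5-4800
    peak_gb_s = channels * bytes_per_transfer * mt_per_s / 1000
    print(peak_gb_s)         # ~230 GB/s; dual-channel DDR4-3200 gives ~51 GB/s by the same formula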
I’m really happy I padded out one of my home servers with 128GB of DDR4 a couple of years back. I am, however, quite sad that I missed the opportunity to do the same to a newer one…
I wondered how much of this is inflation -- after adjusting for CPI, $160 in 2020 is worth about $200 in today's dollars, so the price of that DDR4 kit is roughly 10% higher in real terms.
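Spelling the arithmetic out (the $220 current price is an assumed illustrative figure, since the kit's exact current price isn't quoted here):

    price_2020 = 160             # what the kit cost in 2020
    cpi_factor = 200 / 160       # $160 in 2020 is ~$200 today per the CPI adjustment
    price_today = 220            # assumed current price of the same kit, for illustration
    real_change = price_today / (price_2020 * cpi_factor) - 1
    print(f"{real_change:.0%}")  # ~10% higher in real terms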
The other comment already covered why comparing CAS latency is misleading. CAS latency is measured in clock cycles. Multiply by the length of a clock cycle to get the CAS delay.
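A quick sketch of that conversion; the kits below are just common example configurations, not anything specific from this thread:

    # CAS delay (ns) = CAS latency (cycles) x clock period.
    # The clock runs at half the transfer rate (it's DDR), so period_ns = 2000 / MT_per_s.
    def cas_ns(mt_per_s, cl):
        return cl * 2000 / mt_per_s

    print(cas_ns(1600, 9))    # DDR3-1600 CL9  -> ~11.3 ns
    print(cas_ns(3200, 16))   # DDR4-3200 CL16 -> 10.0 ns
    print(cas_ns(6000, 30))   # DDR5-6000 CL30 -> 10.0 ns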
So? If the net result is more reliable memory, it doesn't matter.
Many things in electrical engineering use ECC on top of less reliable processes to produce a net result that is more reliable on the whole. Everything from hard drives to wireless communication. It's normal.
Just like increasing the structure size "only" decreases the likelihood of bit flips. Correcting physical unreliability with more logic may feel flimsy, but in the end, probabilities are probabilities.
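As a toy illustration of the principle, here's a minimal Hamming(7,4) sketch: real DIMMs use beefier SECDED-class codes, but the idea of spending extra bits to buy back reliability is the same:

    # Encode 4 data bits with 3 parity bits, flip one bit, and recover it.
    def encode(d):                      # d = [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

    def correct(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        pos = s1 + 2 * s2 + 4 * s3      # 0 = clean, otherwise the 1-based error position
        if pos:
            c[pos - 1] ^= 1
        return c

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                        # simulate a single bit flip
    assert correct(word) == encode([1, 0, 1, 1])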
CAS latency is specified in cycles and clock rates are increasing, so despite the number getting bigger there's actually been a small improvement in latency with each generation.
While it has nothing to do with how responsive your mouse feels, as that is measured in milliseconds while CAS latency is measured in nanoseconds, there has indeed been a small regression with DDR5 memory compared to the 3 previous generations. The best DDR2-4 configurations could fetch 1 word in about 6-7 ns while the best DDR5 configurations take about 9-10 ns.
RAM latency doesn't affect mouse response in any perceptible way. The fastest gaming mice I know of run at 8000Hz, so that's 125000ns between samples, much bigger than any CAS latency. And most mice run substantially slower.
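Putting numbers on that gap (using ~10 ns as a rough DDR5 CAS delay from the comments above):

    polling_hz = 8000
    interval_ns = 1e9 / polling_hz     # 125,000 ns between mouse samples
    cas_delay_ns = 10                  # rough DDR5 CAS delay
    print(interval_ns / cas_delay_ns)  # ~12,500x difference; CAS latency is noise at this scale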
Maybe your old PC used lower-latency GUI software, e.g. uncomposited Xorg instead of Wayland.
I only felt it on Windows, maybe that is due to the special USB mouse drivers Microsoft made? Still, motion-to-photon latency really is lower on my DDR3 PCs; would be cool to know why.
Dan Luu actually measured the latency of older computers (terminal, input latency) and compared it to modern computers. It shows that older computers (and I mean old as in previous-century old) have lower input latency. This is much more interesting than 'feelings', especially when discussing with other people.
Also, that SSD example is wildly untrue, especially in the context of available capacity at the time. You CAN get modern SSDs with mind-boggling write endurance per cell, AND multitudes more cells, resulting in vastly more durable media than what was available pre-2015. The one caveat to modern stuff being better than older stuff is Optane (the enterprise stuff like the 905P or P5800X, not that memory-and-SSD combo shitshow that Intel was shoveling out the consumer door). We still haven't reached parity with the 3D XPoint stuff, and it's a damn shame Intel hurt itself in its confusion and cancelled it, because boy would they and Micron be printing money hand over fist right now if they were still making them. Still, point being: not everything is a TLC/QLC 0.3 DWPD disposable drive like has become standard in the consumer space. If you want write endurance, capacity, and/or performance, you have more and better options today than ever before (Optane/3D XPoint excepted).
Regarding CPUs, they still follow that durability pattern if you unfuck what Intel and AMD are doing with boosting behavior and limit them to perform with the margins that they used to "back in the day". This is more of a problem on the consumer side (Core/Ryzen) than the enterprise side (Epyc/Xeon). It's also part of why the OC market is dying (save for maybe the XOC market that is having fun with LN2): those CPUs (especially consumer ones) come from the factory with much less margin for pushing things, because they're already close to their limit without exceedingly robust cooling.
I have no idea what the relative durability of RAM is, tbh; it's been pretty bulletproof in my experience over the years, or at least bulletproof enough for my use cases that I haven't really noticed a difference. The notable exception is what I see in GPUs, but that is largely heat-death related and often a result of poor QA by the AIB that made it (e.g., thermal pads not making contact with the GDDR modules).
What if you overprovision the newer SSD to a point where it can run the entirety of the drive in pseudo-SLC ("caching") mode? (You'd need to store no more than 25% of the nominal capacity, since QLC has four bits per cell.) That should have fairly good endurance, though still a lot less than Optane/XPoint persistent memory.
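Roughly what that costs in capacity (the 4 TB drive size is just an example):

    # Usable capacity if a QLC drive is kept entirely in pseudo-SLC mode
    nominal_tb = 4                    # assumed QLC drive size, for illustration
    qlc_bits_per_cell, pslc_bits_per_cell = 4, 1
    usable_tb = nominal_tb * pslc_bits_per_cell / qlc_bits_per_cell
    print(usable_tb)                  # 1.0 TB usable out of a nominal 4 TB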
Which tells me your experience is incredibly limited.
Intel was very good, and when they partnered with Micron, made objectively the best SSDs ever made (3D XPoint Optanes). I lament that they sold their storage business unit, though of all the potential buyers, SK was probably the best-case scenario (they've since rebranded it as Solidigm).
The Intel X25-E was a great drive, but it is not great by modern standards, and in any write-focused workload it is an objectively, provably bad drive by any standard these days. Let's compare it to a Samsung 9100 Pro 8TB, which is a premium consumer drive and a quasi mid-level enterprise drive (depends on use case; it's lacking a lot of important enterprise features such as PLP) that's still a far cry from the cream of the crop, but has an MSRP comparable to the X25-E's at launch:
X25-E 64GB vs 9100 Pro 8TB:
MSRP: ~$900 ($14/GB) vs ~$900 ($0.11/GB)
Random Read (IOPS): 35.0k vs 2,200k
Random Write (IOPS): 3.3k vs 2,600k
Sustained/Seq Read (MBps): 250 vs 14,800
Sustained/Seq Write (MBps): 170 vs 13,400
Endurance: >=2 PB writes vs >=4.8 PB writes
In other words, it loses very badly in every metric, including performance and endurance per dollar (in fact, it loses so badly on performance that it still isn't close even if we assume the X25-E is only $50), and we're not even into the high end of what's possible with SSDs/NAND flash today. Hell, the X25-E can't even compare to a Crucial MX500 SATA SSD except on endurance, which it only barely beats (2 PB for the X25-E vs 1.4 PB for the 4TB). The X25-E's incredibly limited capacity (64GB max) also makes it a non-starter for many people no matter how good the performance might be (but isn't).
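To put the per-dollar point in numbers (random read IOPS and rated endurance taken from the comparison above):

    x25e_price, x25e_iops, x25e_pb = 900, 35_000, 2.0
    pro_price,  pro_iops,  pro_pb  = 900, 2_200_000, 4.8

    print(x25e_iops / x25e_price, x25e_pb * 1000 / x25e_price)  # ~39 read IOPS/$, ~2.2 TB of writes/$
    print(pro_iops / pro_price, pro_pb * 1000 / pro_price)      # ~2,444 read IOPS/$, ~5.3 TB of writes/$
    print(x25e_iops / 50)                                       # even at $50, the X25-E manages only ~700 read IOPS/$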
Yes, per cell the X25-E is far more durable than an MX500 or 9100 Pro, yielding a Drive Writes Per Day endurance of about 17 DWPD, which is very good. An Intel P4800X, however (almost a 10-year-old drive itself), had 60 DWPD, or more than 3x the endurance when normalized for capacity, while also blowing it - and nearly every other SSD ever made until very recently - out of the water on the performance front as well. And let's not forget, not only can you supplement per-cell endurance by having more cells (aka more capacity), but the X25-E's maximum capacity of 64GB makes it a non-starter for the vast majority of use cases right out of the gate, even if you try to stack them in an array.
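For reference, that ~17 DWPD figure falls out of the rated endurance and capacity, assuming the usual 5-year warranty window:

    endurance_gb = 2_000_000      # ~2 PB of rated writes for the 64GB X25-E
    capacity_gb = 64
    warranty_days = 5 * 365       # assumed 5-year warranty window
    dwpd = endurance_gb / capacity_gb / warranty_days
    print(round(dwpd, 1))         # ~17.1 drive writes per day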
For truly high end drives, look at what the Intel P5800X, Micron 9650 MAX, or Solidigm D7-5810 are capable of for example.
Oh, and btw, a lot of those high end drives have SLC as their Transition Flash Layer, sometimes in capacities greater than the X25-E was available in. So the assertion that they don't make SLC isn't true either, we just got better about designing these devices so that we aren't paying over $10/GB anymore.
So no. By today's standards the X25-E is not "the diamond peak". It's the bottom of the barrel and, in most cases, non-viable.
Yes, we've already established your experience is incredibly limited and not indicative of the state of the market. Stop buying bad drives and blaming the industry for your uninformed purchasing decisions.
Hell, as you admitted that your experience is limited to Intel, I'd wager at least one of those drives that failed was probably a 660p, no? Intel was not immune from making trash either, even if they did also make some good stuff (which, for their top-tier stuff, was technically mostly Micron's doing).
I've deployed countless thousands of solid state drives - hell, well over a thousand all-flash arrays - that in aggregate probably now exceed an exabyte of raw capacity. This is my job. I've deployed individual systems with more SSDs than you've owned in total, from the sound of it. And part of why it's hard to kill those old drives is that they are literal orders of magnitude slower, meaning it takes literal orders of magnitude more time to write the same amount of data. That doesn't make them good drives; it makes them near-worthless even when they work, especially considering the capacity limitations that come with it.
I'm not claiming bad drives don't exist; they most certainly do, and I would consider over 50% of what's available in the consumer market to fit that bill, but I also have vastly higher standards than most, because if I fuck something up, the cost to fix it is often astronomical. Modern SSDs aren't inherently bad: they can be, but not necessarily so. Just like they aren't inherently phenomenal: they can be, but not necessarily so. But good ones do exist, at a variety of price points and use cases.
TL;DR Making uninformed purchasing decisions often leads to bad outcomes.
CAS latency doesn't matter so much as the total random-access latency in ns and the raw speed of the individual RAM cells. If you are accessing the same cell repeatedly, RAM hasn't gotten meaningfully faster in years (since around DDR2, IIRC).
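A sketch of what total random-access latency looks like, modeling a row miss as tRP + tRCD + CL, with example timings for two common kits:

    def random_access_ns(mt_per_s, trp, trcd, cl):
        return (trp + trcd + cl) * 2000 / mt_per_s   # cycles x clock period

    print(random_access_ns(3200, 18, 18, 16))  # DDR4-3200 CL16-18-18 -> ~32.5 ns
    print(random_access_ns(6000, 38, 38, 30))  # DDR5-6000 CL30-38-38 -> ~35.3 ns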
Old machines use a lot more power (older process nodes), and DDR5 has a built-in equivalent to ECC, while previously you had to specifically buy ECC RAM, and it wouldn't work on cheaper Intel hardware (the bulk of old hardware is going to be Intel).
The on-chip ECC in DDR5 is there to account for lower reliability of the chips themselves at the higher speeds. It does NOT replace dedicated ECC chips which cover a whole lot more.