
This reminds me of the recent LaurieWired video presenting the hypothetical "what if we stopped making CPUs?": https://www.youtube.com/watch?v=L2OJFqs8bUk

Spoiler, but the answer is basically that old hardware rules the day, because it lasts longer and is more reliable over timespans of decades.

DDR5 32GB is currently going for ~$330 on Amazon

DDR4 32GB is currently going for ~$130 on Amazon

DDR3 32GB is currently going for ~$50 on Amazon (4x8GB)

For anyone for whom cost is a concern, using older hardware seems like a particularly easy choice, especially if you're comfortable with a Linux environment, since the massive droves of recently retired, Windows 11-incompatible hardware work great with your Linux distro of choice.





If everyone went for DDR4 and DDR3, surely the cost would go up. There is no additional supply there, as they are no longer being made.

Currently trying to source a large amount of DDR4 to upgrade a 3 year old fleet of servers at a very unfortunate time.

It's very difficult to source in quantity, and is going up in price more or less daily at this point. Vendor quotes are good for hours, not days when you can find it.


Have you considered dropping by in person to your nearest computer recycler/refurbisher? As a teen I worked at one, and the boxes and boxes of RAM sticks pulled from scrapped machines (usually scrapped due to bad main boards) made a strong impression. They tend to not even put the highest-spec stuff they pull into anything, as refurbished machines are mostly sold/donated in quantity (to schools and the like) and big customers want standardized SKUs with interchangeable specs for repairability more than they want performance. Workers at these places are often invited to build personal machines to take home out of these “too good” parts — and yet there’s so much that they can’t possibly use it all up that way. If someone showed up asking if they could have some DDR3, they might literally just let you dig through the box and then sell it to you by weight!

> Workers at these places are often invited to build personal machines to take home out of these “too good” parts — and yet there’s so much that they can’t possibly use it all up that way.

I work in the refurb division of an e-waste recycling company. Those days are over for now. We're listing any RAM that will sell for more than scrap value (about $2 per stick), which means anything at least 4 GB DDR3. And we've got a list of people who will buy all we've got.


We're at the point where used DDR4 is being bought up at scale, reballed, and resold as refurb chips, because new supply is so scarce.

We're looking to buy 1TB of DDR5 for a large database server. I'm pretty sure that we're just going to pay the premium and move on.

And that’s why everybody’s B2B these days. The decision-making people at companies are not spending their own money.

I think it's not about own versus someone else's money.

Hardware is usually a small piece of the financial puzzle (unless you're building a billion dollar AI datacenter I guess) and even when the hardware price quadruples, it's still a small piece and delivery time is much more important than optimizing hardware costs.


The price of the hardware, even with inflated prices, is probably equal or less than the combined price of all the software licenses that go on that machine.

At some point that’s true, but don’t they run the n-1 or 2 generation production lines for years after the next generation launches? There’s a significant capital investment there and my understanding is that the cost goes down significantly over the lifetime as they dial in the process so even though the price is lower it’s still profitable.

Unless plans have changed, the foundries making DDR4 are winding down, with the last shipments going out as we speak.

The specialty DRAM vendors (Nanya, notably) will keep making DDR4 and earlier.

Isn’t that mostly because Chinese manufacturers are flooding the market with cheaper products?

This is only true as long as there's enough of a market left. You tend to end up with oversupply and excessive inventory during the transition, and that pushes profit margins negative and removes all the supply pretty quickly.

Undoubtedly the cost would go up, but nobody is building out datacenters full of DDR4 either, so I don't figure it would go up nearly as much as DDR5 is right now.

https://pcpartpicker.com/trends/price/memory/

You can see the cost rise of DDR4 here.


That's the average price for new DDR4, which has dwindling supply. Meanwhile, used DDR4 is being retired from both desktops and data centers.

DDR4 production is winding down or done. Only "new old stock" will remain, or used DDR4 modules. Good luck buying that in quantity.

Awesome charts, thanks! I think it bears out that the multiplier for older hardware isn't as extreme as for the newer hardware, right?

~2.8x for DDR4, ~3.6x for DDR5. DDR5 is still being made, though, so it will be interesting to see how it changes in the future.

Either way, it's going to be a long few years at the least.


Unless the AI bubble pops.

One possible outcome is the remaining memory manufacturers have dedicated all their capacity for AI and when the bubble pops, they lose their customer and they go out of business too.

I wouldn't be too surprised to find at least some of the big RAM foundries are deeply bought into the fake-money circles where everybody is "paying" each other with unrealised equity in OpenAI/Anthropic/whoever, resulting in a few trillion dollars of on-paper "money" vanishing overnight. At that stage a whole bunch of actual-money loans will get called in, and billion-dollar companies will get gutted by asset strippers.

Maybe larger makerspaces and companies like Adafruit, RasPi, and Pine should start stockpiling (real) money, and pick themselves up an entire fab full of gear at firesale prices so they can start making their own ram...


And then the RAM factories... get sold to another business, if that's more profitable than dismantling them.

Unfortunately, older RAM also means an older motherboard, which also means an older socket and older CPUs. It works, but it's not usually a drop-in replacement.

Can't you use DDR3 in a DDR5-compatible board?

No, the different generations have fundamentally incompatible interfaces, not just mechanically, but in terms of voltage and signaling.

Unfortunately not, each version of RAM uses a different physical slot.

Well, they're entirely different, not just the slot. Intel 12th/13th/14th gens all support DDR4 or DDR5; however, any given motherboard supports only one or the other. I don't think there's a single AMD CPU that supports both?

I bought 384 GB of DDR5-4800 RDIMM a few months back for a Zen 4c system with lots of I/O, like dual 25G NICs and ten MCIO x8 ports... So far it has been the best value for money compared to any memory before it. The bandwidth is nuts. Power consumption went DOWN compared to DDR4. It doesn't matter much if you've got two sticks, but as soon as you get into 6+ territory, it matters a lot. The same goes for NVMe drives.

A friend built a new rig and went with DDR4 and a 5800X3D just because of this, as he needed a lot of RAM.

I have exactly this and was planning to upgrade to AM5 soon :'(

Looks like AM4 will live on for many more years in my flat


I’m really happy I padded out one of my home servers with 128GB of DDR4 a couple of years back. I am, however, quite sad that I missed the opportunity to do the same to a newer one…

Nice. My current PC uses DDR4. Time to dust off my 2012 PC and put Linux on it.

> works great with your Linux distro of choice.

...or you could go with FreeBSD. There's even a brand new release that just came out!

https://www.freebsd.org/releases/15.0R/announce/


A 2x16GB DDR4 kit I bought in 2020 for $160 is now $220. Older memory is relatively cheap, but not cheaper than before at all.

I wondered how much of this is inflation -- after adjusting for CPI inflation, $160 in 2020 is worth $200 in today's dollars [$], so the price of that DDR4 kit is 10% higher in real terms.

[$] https://www.usinflationcalculator.com/
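For the curious, the real-terms math is just the nominal prices deflated by CPI; a quick sketch in Python (the ~1.25 factor is taken from the calculator linked above):

    # real-terms price change of the kit, 2020 -> today
    nominal_2020 = 160.0
    nominal_now = 220.0
    cpi_factor = 1.25                        # 2020 -> today, per usinflationcalculator.com
    real_2020 = nominal_2020 * cpi_factor    # ~$200 in today's dollars
    print(nominal_now / real_2020 - 1)       # ~0.10, i.e. ~10% higher in real terms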


USD is also weaker than it was in the past.

Yes, DDR3 has the lowest CAS latency and lasts a LOT longer.

Just like SSDs from 2010 have 100,000 writes per bit instead of below 10,000.

CPUs might even follow the same durability pattern but that remains to be seen.

Keep your old machines alive and backed up!


> Yes, DDR3 has the lowest CAS latency and lasts a LOT longer.

DDR5 is more reliable. Where are you getting this info that DDR3 lasts longer?

DDR5 runs at lower voltages, uses modern processes, and has on-die ECC.

This is already showing up in reduced failure rates for DDR5 fleets: https://ieeexplore.ieee.org/document/11068349

The other comment already covered why comparing CAS latency is misleading. CAS latency is measured in clock cycles. Multiply by the length of a clock cycle to get the CAS delay.
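A minimal sketch of that conversion (the speed/CL pairings below are typical retail examples, not exhaustive):

    # CAS delay (ns) = CAS latency (cycles) / I/O clock (GHz)
    # The I/O clock is half the transfer rate, since DDR moves data on both clock edges.
    def cas_delay_ns(transfer_rate_mts, cl):
        io_clock_ghz = transfer_rate_mts / 2 / 1000
        return cl / io_clock_ghz

    print(cas_delay_ns(1600, 9))   # DDR3-1600 CL9  -> 11.25 ns
    print(cas_delay_ns(3200, 16))  # DDR4-3200 CL16 -> 10.0 ns
    print(cas_delay_ns(6000, 30))  # DDR5-6000 CL30 -> 10.0 ns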


It has on-die ECC _because_ it is so unreliable. The ECC is there to fix its terribleness from the factory.

So? If the net result is more reliable memory, it doesn't matter.

Many things in electrical engineering use ECC on top of less reliable processes to produce a net result that is more reliable on the whole. Everything from hard drives to wireless communication. It's normal.


ECC doesn't completely fix it; it only masks problems for the most common use patterns. Rowhammer is a huge problem.

Just like increasing the structure size "only" decreases the likelihood of bit flips. Correcting physical unreliability with more logic may feel flimsy, but in the end, probabilities are probabilities.

CAS latency is specified in cycles and clock rates are increasing, so despite the number getting bigger there's actually been a small improvement in latency with each generation.

Not for small amounts of data.

Bandwidth increases, but if you only need a few bytes, DDR3 is faster.

Also slower speed means less heat and longer life.

You can feel the speed advantage by just moving the mouse on a DDR3 PC...


While it has nothing to do with how responsive your mouse feels, as that is measured in milliseconds while CAS latency is measured in nanoseconds, there has indeed been a small regression with DDR5 memory compared to the 3 previous generations. The best DDR2-4 configurations could fetch 1 word in about 6-7 ns while the best DDR5 configurations take about 9-10 ns.

https://en.wikipedia.org/wiki/CAS_latency#Memory_timing_exam...


RAM latency doesn't affect mouse response in any perceptible way. The fastest gaming mice I know of run at 8000Hz, so that's 125000ns between samples, much bigger than any CAS latency. And most mice run substantially slower.

Maybe your old PC used lower-latency GUI software, e.g. uncomposited Xorg instead of Wayland.
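To put the two timescales side by side (8000 Hz is the fastest polling rate mentioned above; ~10 ns is a ballpark CAS delay for recent DIMMs):

    # interval between mouse samples vs. one CAS delay
    polling_hz = 8000
    sample_interval_ns = 1e9 / polling_hz    # 125,000 ns between samples
    cas_ns = 10                              # ballpark CAS delay
    print(sample_interval_ns / cas_ns)       # ~12,500x, so CAS latency is noise here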


I only felt it on Windows, maybe that is due to the special USB mouse drivers Microsoft made? Still, motion-to-photon latency really is lower on my DDR3 PCs; it would be cool to know why.

You are conflating two things that have nothing to do with each other. Computers have had mice since the 80s.

> Still, motion-to-photon latency really is lower on my DDR3 PCs; it would be cool to know why.

No it isn't. Your computer is doing tons of stuff, and the cursor on Windows is a hardware feature of the graphics card.

Should I even ask why you think memory bandwidth is the cause of mouse latency?


Dan Luu actually measured the latency of older computers (terminal and input latency) and compared it to modern computers. It shows that older computers (and I mean previous-century old) have lower input latency. This is much more interesting than "feelings", especially when discussing with other people.

> 100,000 writes per bit

per cell*

Also, that SSD example is wildly untrue, especially in the context of available capacity at the time. You CAN get modern SSDs with mind-boggling write endurance per cell, AND with multitudes more cells, resulting in vastly more durable media than what was available pre-2015. The one caveat to modern stuff being better than older stuff is Optane (the enterprise stuff like the 905P or P5800X, not that memory-and-SSD combo shitshow that Intel was shoveling out the consumer door). We still haven't reached parity with the 3D XPoint stuff, and it's a damn shame Intel hurt itself in its confusion and cancelled it, because boy would they and Micron be printing money hand over fist right now if they were still making them.

Still, point being: not everything is a TLC/QLC 0.3 DWPD disposable drive like what has become standard in the consumer space. If you want write endurance, capacity, and/or performance, you have more and better options today than ever before (Optane/3D XPoint excepted).

Regarding CPUs, they still follow that durability pattern if you unfuck what Intel and AMD are doing with boosting behavior and limit them to the margins they used to run with "back in the day". This is more of a problem on the consumer side (Core/Ryzen) than the enterprise side (Epyc/Xeon). It's also part of why the OC market is dying (save for maybe the XOC crowd having fun with LN2): those CPUs (especially consumer ones) come from the factory with much less margin for pushing things, because they're already close to their limit without exceedingly robust cooling.

I have no idea what the relative durability of RAM is, tbh; it's been pretty bulletproof in my experience over the years, or at least bulletproof enough for my use cases that I haven't really noticed a difference. A notable exception is what I see in GPUs, but that is largely heat-death related and often a result of poor QA by the AIB that made the card (e.g., thermal pads not making contact with the GDDR modules).


Maybe, but in my experience a good old <100GB SSD from 2010-14 will completely demolish any >100GB drive from 2014+ in longevity.

Some say they have the opposite experience; mine is ONLY with Intel drives, maybe that is why.

The X25-E is the diamond peak of SSDs, probably forever, since the machines to make 45nm SLC are gone.


What if you overprovision the newer SSD to a point where it can run the entirety of the drive in pseudo-SLC ("caching") mode? (You'd need to store no more than 25% of the nominal capacity, since QLC has four bits per cell.) That should have fairly good endurance, though still a lot less than Optane/XPoint persistent memory.
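Rough math for that, assuming the firmware really can run the whole drive as pseudo-SLC (the 8 TB figure is a hypothetical example; real firmware may reserve less):

    # QLC stores 4 bits per cell; pSLC stores 1, so usable capacity drops 4x
    nominal_tb = 8.0            # hypothetical 8 TB QLC drive
    pslc_tb = nominal_tb / 4    # ~2 TB usable if everything runs as pSLC
    print(pslc_tb)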

Which tells me your experience is incredibly limited.

Intel was very good, and when they partnered with Micron, they made objectively the best SSDs ever produced (the 3D XPoint Optanes). I lament that they sold their storage business unit, though of all the potential buyers, SK was probably the best-case scenario (they've since rebranded it into Solidigm).

The Intel X25-E was a great drive, but it is not great by modern standards, and in any write-focused workload it is an objectively, provably bad drive by today's measures. Let's compare it to a Samsung 9100 Pro 8TB, a premium consumer drive and quasi mid-level enterprise drive (depending on use case; it lacks a lot of important enterprise features such as PLP). It's still a far cry from the cream of the crop, but it has an MSRP comparable to the X25-E's at launch.

X25-E 64GB vs 9100 Pro 8TB:

MSRP: ~$900 ($14/GB) vs ~$900 ($0.11/GB)

Random Read (IOPS): 35.0k vs 2,200k

Random Write (IOPS): 3.3k vs 2,600k

Sustained/Seq Read (MBps): 250 vs 14,800

Sustained/Seq Write (MBps): 170 vs 13,400

Endurance: >=2PB writes vs >= 4.8 PB writes

In other words, it loses very badly on every metric, including performance and endurance per dollar (in fact, it loses so badly on performance that it still isn't close even if we assume the X25-E is only $50), and we're not even into the high end of what's possible with SSDs/NAND flash today. Hell, the X25-E can't even compare to a Crucial MX500 SATA SSD except on endurance, which it only barely beats (2PB for the X25-E vs 1.4PB for the 4TB MX500). The X25-E's incredibly limited capacity (64GB max) also makes it a non-starter for many people no matter how good the performance might be (but isn't).

Yes, per cell the X25-E is far more durable than an MX500 or 9100 Pro, yielding a Drive Writes Per Day endurance of about 17 DWPD, which is very good. An Intel P4800X, however (almost a 10-year-old drive itself), had 60 DWPD, or more than 3x the endurance when normalized for capacity, while also blowing it - and nearly every other SSD ever made until very recently - out of the water on the performance front as well. And let's not forget: not only can you supplement per-cell endurance by having more cells (aka more capacity), but the X25-E's maximum capacity of 64GB makes it a non-starter for the vast majority of use cases right out of the gate, even if you try to stack them in an array.
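For reference, DWPD is just rated endurance divided by capacity and warranty period; a quick check of that ~17 figure, assuming the X25-E's 5-year warranty window:

    # DWPD = total rated writes / (capacity * warranty days)
    endurance_pb = 2.0
    capacity_gb = 64
    warranty_days = 5 * 365
    full_drive_writes = endurance_pb * 1e6 / capacity_gb   # ~31,250 full-drive writes
    print(full_drive_writes / warranty_days)               # ~17.1 DWPD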

For truly high end drives, look at what the Intel P5800X, Micron 9650 MAX, or Solidigm D7-5810 are capable of for example.

Oh, and btw, a lot of those high-end drives have SLC as their transition flash layer, sometimes in capacities greater than the X25-E was ever available in. So the assertion that they don't make SLC isn't true either; we just got better at designing these devices so that we aren't paying over $10/GB anymore.

So no. By today's standards the X25-E is not "the diamond peak". It's the bottom of the barrel and, in most cases, non-viable.


My experience is 10 drives from 2009-2012 that still work and 10 drives from 2014 that have failed.

Yes, we've already established your experience is incredibly limited and not indicative of the state of the market. Stop buying bad drives and blaming the industry for your uninformed purchasing decisions.

Hell, as you admitted that your experience is limited to Intel, I'd wager at least one of the drives that failed was probably a 660P, no? Intel was not immune from making trash either, even if they did also make some good stuff (which, for their top-tier drives, was technically mostly Micron's doing).

I've deployed countless thousands of solid state drives - hell, well over a thousand all-flash arrays - that in aggregate probably now exceed an exabyte of raw capacity. This is my job. I've deployed individual systems with more SSDs than you've owned in total, from the sound of it. And part of why it's hard to kill those old drives is that they are literal orders of magnitude slower, meaning it takes literal orders of magnitude more time to write the same amount of data. That doesn't make them good drives; it makes them near-worthless even when they work, especially considering the capacity limitations that come with them.

I'm not claiming bad drives don't exist; they most certainly do, and I would consider over 50% of what's available in the consumer market to fit that bill. But I also have vastly higher standards than most, because if I fuck something up, the cost to fix it is often astronomical. Modern SSDs aren't inherently bad: they can be, but not necessarily so. Just like they aren't inherently phenomenal: they can be, but not necessarily so. But good ones do exist, at a variety of price points and for a variety of use cases.

TL;DR Making uninformed purchasing decisions often leads to bad outcomes.


CAS latency doesn't matter so much as total random-access latency in ns and the raw clock speed of the individual RAM cells. If you are accessing the same cell repeatedly, RAM hasn't gotten faster in years (since around DDR2, IIRC).

Old machines use a lot more power (older process nodes), and DDR5 has an ECC equivalent built in, while previously you had to specifically buy ECC RAM, and it wouldn't work on cheaper Intel hardware (and the bulk of old hardware is going to be Intel).

The on-die ECC in DDR5 is there to account for the lower reliability of the chips themselves at higher speeds. It does NOT replace dedicated ECC chips, which cover a whole lot more (including the path between the DIMM and the memory controller).


