Not if you actually understand the technical constraints vs the business applications of each product in their specific market segments.

A Ford F-150 can tow more weight than a Lamborghini despite having less power and being significantly cheaper, but each is geared toward a different use case, so direct comparisons are just splitting hairs.



That's also a good point. EPYC supports 2TB or 4TB of DDR4 RAM.

That being said: it's amusing to me to see the x86 market move into the "server-like" arguments of the 90s. x86 back then was the "little guy" and all the big server folks talked about how much bigger the DEC Alpha was and how that changed assumptions.

It seems like "standard" servers / systems have grown to outstanding sizes. The big question in my mind is whether 64GB of RAM is large enough.

The Moana scene (https://www.disneyanimation.com/resources/moana-island-scene...) takes 131GB of RAM, for example. It literally wouldn't fit on the M1 Max. And that's a 2016-era movie; more modern movies will use even more space. The amount of RAM modern 3D artists need is ridiculous!!

Instinctively, I feel like 64GB is enough for power users, but not, in fact, for the digital artists who have primarily been the "Pro"-level customers of Apple.


> The Moana scene (https://www.disneyanimation.com/resources/moana-island-scene...) takes 131GB of RAM, for example. It literally wouldn't fit on the M1 Max. And that's a 2016-era movie; more modern movies will use even more space. The amount of RAM modern 3D artists need is ridiculous!!

I doubt there was any laptop available in 2016 that could be loaded with enough RAM to handle those Moana scenes. I doubt such beasts exist in 2021.

It seems the M1s are showing that Apple can simply increase the number of cores and memory interfaces to beef up performance. While there are obviously practical limits to such horizontal scaling, a theoretical M1 Pro Max Plus for a Mac Pro could double the memory interfaces again (over the Max) or add an interconnect for multi-socket configurations.

That's all just horizontal scaling before new cores or a smaller process node becomes available. A 3nm process could fit roughly double the current M1 Max circuitry into the same footprint as today's Max.


> That's all just horizontal scaling before new cores or a smaller process node becomes available. A 3nm process could fit roughly double the current M1 Max circuitry into the same footprint as today's Max.

I/O / off-chip SERDES doesn't scale very easily.

If you need more pins, you need to go to advanced packaging like HBM or whatnot. A 512-bit bus means 512 data pins on the CPU, and that's a lot of pins. Doubling to 16 channels (a 1024-bit bus) means even more pins.

You'll run out of pins on your chip without the micro-bumps that HBM uses. That's why HBM can be 1024-bit or 4096-bit: it uses advanced packaging / micro-bumps to communicate across a substrate.
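
To put rough numbers on it, here's a back-of-envelope sketch. I'm assuming LPDDR5 at 6400MT/s, which is what the M1 Max reportedly uses, and the pin counts ignore command/address/power:

    # Peak bandwidth vs. bus width (and thus data pin count).
    # Assumes LPDDR5 at 6400 MT/s (the M1 Max's reported config).
    transfers_per_s = 6400e6

    for bus_bits in (128, 256, 512, 1024):
        bytes_per_transfer = bus_bits / 8
        gb_per_s = bytes_per_transfer * transfers_per_s / 1e9
        print(f"{bus_bits}-bit bus: ~{gb_per_s:.0f} GB/s, ~{bus_bits} data pins")

The 512-bit row lands right around the M1 Max's advertised ~400GB/s, and the 1024-bit row shows why doubling the bus makes the packaging problem so much worse.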


I am waiting for the photography pros of YouTube to weigh in on that last bit, but the Disney bit about 131GB of RAM usage is intense. Surely a speedy disk can page, but likely not enough to make the 64GB of RAM a bottleneck. Maybe things like Optane or SSDs will get so much quicker that we'll see a further fusion of I/O down to disks, and one day we'll really have a chip that treats all of its storage as RAM and doesn't distinguish between the two. Sure, it's unlikely SSDs will get to 400GB/s, but if they could get to 10GB/s or more sustained, that latency could probably be handled by smart software.
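
For a sense of scale, a rough sketch using the thread's own numbers (sequential streaming only; the random access patterns of an actual render would be far worse on any SSD):

    # Time to stream a 93GB working set (the Moana per-frame data)
    # at various sustained bandwidths.
    working_set_gb = 93

    for name, gb_per_s in [
        ("fast NVMe SSD today", 7),
        ("hypothetical sustained SSD", 10),
        ("M1 Max unified memory", 400),
    ]:
        print(f"{name} at {gb_per_s} GB/s: {working_set_gb / gb_per_s:.1f} s")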

I think for that cohort, any future iMac Pro or Mac Pro with future revisions of these chips will surely increase the RAM to 128GB, maybe even 256GB or more.

I am super curious how Apple will tackle the folks who want to put 1TB of RAM or more into their systems. Will they do an SoC with some on-package RAM, plus extra slotted RAM as another layer?
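
If they did go that two-layer route, the software side might look something like the NUMA-style memory tiering OSes already do. A purely hypothetical sketch of the idea (nothing Apple has announced; names and sizes made up):

    # Hypothetical two-tier allocator: fast on-package RAM first,
    # spilling to slower slotted RAM once the fast tier fills up.
    class TieredMemory:
        def __init__(self, fast_gb, slotted_gb):
            self.free = {"on-package": fast_gb, "slotted": slotted_gb}

        def alloc(self, size_gb):
            for tier in ("on-package", "slotted"):  # prefer the fast tier
                if self.free[tier] >= size_gb:
                    self.free[tier] -= size_gb
                    return tier
            raise MemoryError("out of RAM in both tiers")

    mem = TieredMemory(fast_gb=128, slotted_gb=1024)
    print(mem.alloc(93))   # -> 'on-package' (hot scene data stays fast)
    print(mem.alloc(131))  # -> 'slotted' (only 35GB left in the fast tier)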


> Sure, it's unlikely SSDs will get to 400GB/s, but if they could get to 10GB/s or more sustained, that latency could probably be handled by smart software.

Yeah, I'm not part of the field, but the 3D guys always point to "Moana" to show off how much RAM they're using on their workstations. Especially since Disney has given away the Moana scene as a free download, anyone can analyze it.

The 131GB is the animation data (i.e., trees swaying in the wind). Roughly 93GB is needed per frame. The 131GB can effectively be paged, since it takes several seconds (or much, much longer) to render a single frame. So really, over 220GB of data is needed for the whole scene.

In practice, a computer would generate the wind and calculate its effects on the geometry. So the 131GB of animation data could very well be procedural and "not even stored".
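
As a toy illustration of "procedural and not even stored" (the wind function here is completely made up; a real pipeline would use curl noise or a fluid sim, not a couple of sines):

    import math

    # Toy procedural wind: a vertex's displacement is a pure function
    # of its position and the time, computed on demand per frame
    # instead of being stored as 131GB of per-frame animation data.
    def wind_offset(x, y, z, t):
        gust = math.sin(0.7 * x + 1.3 * t) * math.cos(0.4 * z + t)
        return (0.05 * gust, 0.0, 0.02 * gust)

    # Displace one vertex for frame 240 at 24fps -- nothing cached.
    print(wind_offset(12.0, 3.5, -8.0, 240 / 24.0))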

The 93GB of "single frame" data, however, is where all the rays are bouncing (!!!), and it likely needs to be entirely in RAM.

That's the thing: that water wave over there will reflect your rays basically anywhere in the scene. Your rays are, in practice, bouncing around randomly in that 93GB of scene data. Someone managed to make an out-of-core GPU raytracer using 8GB of GPU VRAM (they were using a very cheap GPU) to cache where the rays were going, but it still required keeping all 93GB of scene data in CPU RAM.
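
Conceptually, that out-of-core trick is just a small cache in front of a big store. A sketch of the general idea (not that specific renderer; all names and sizes are illustrative):

    from collections import OrderedDict

    # The full scene lives in CPU RAM; the GPU holds only a small LRU
    # cache of the geometry chunks that rays are currently hitting.
    class VramChunkCache:
        def __init__(self, budget_chunks):
            self.budget = budget_chunks
            self.cache = OrderedDict()  # chunk_id -> chunk data

        def fetch(self, chunk_id, cpu_ram):
            if chunk_id in self.cache:
                self.cache.move_to_end(chunk_id)    # mark as hot
            else:
                if len(self.cache) >= self.budget:
                    self.cache.popitem(last=False)  # evict the coldest
                self.cache[chunk_id] = cpu_ram[chunk_id]  # PCIe upload
            return self.cache[chunk_id]

    cpu_ram = {i: f"chunk {i}" for i in range(1000)}  # stand-in for 93GB
    vram = VramChunkCache(budget_chunks=8)            # stand-in for 8GB
    for chunk in (3, 7, 3, 42, 7):                    # ray traversal order
        vram.fetch(chunk, cpu_ram)
    print(list(vram.cache))  # recently hit chunks: [3, 42, 7]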


> Instinctively, I feel like 64GB is enough for power users, but not, in fact, for the digital artists who have primarily been the "Pro"-level customers of Apple.

I'd assume the Mac Pro will have _significantly_ more RAM. Artists who need >64GB were _never_ particularly well-served by the MacBook Pro (or just about any other laptop).



