humanfromearth9's comments | Hacker News

Yes, but the comment is targeting those people who would usually say about themselves that they embrace agile, while actually fighting everything that changes their little schemes...

So should we be talking about how "those democratic people" put everyone who disagrees with them in concentration camps because that's what the Democratic People's Republic of Korea does?

There are, of course, extreme-right-wing reactionaries who make exactly that argument, but I don't think their example is a good one to follow.


Isn't a spectrum limited to a single dimension? If so, that doesn't sound like autism disorders (Asperger's, ADHD, verbal, non-verbal, violence, heightened sensitivity, social abilities...). They all suggest that there are multiple, more or less independent/orthogonal dimensions, and everyone scores differently on the combination of these dimensions, which puts us at different coordinates in a vector space. Is this still a partition?

I do think the word spectrum is most usefully applied to something that can vary only in a single dimension.

The partition I'm talking about is a set of sets of behaviors. I think the vector space you're talking about is a set of people (each person being a vector on the basis of the sets of behaviors).

So I think we're on the same page, just referring to different parts of the construction. I.e. everybody is somewhere on the verbal/nonverbal spectrum, and somewhere else on the sensitive/tolerant stimulus spectrum and so on for each dimension.


>Isn't a spectrum limited to a single dimension?

Typically no, in English-language usage.


There's nothing like direnv + nix + a flake with the appropriate dev shell config... And seriously, any LLM can write the .envrc, nix.conf, flake.nix files if it's too complicated.
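For anyone curious, here's a minimal sketch of what that can look like (the package list and the hard-coded x86_64-linux system are just placeholders, and nix.conf needs experimental-features = nix-command flakes if flakes aren't already enabled):

    # .envrc (needs direnv + nix-direnv)
    use flake

    # flake.nix
    {
      description = "minimal dev shell";
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
          devShells.x86_64-linux.default = pkgs.mkShell {
            packages = [ pkgs.git pkgs.python3 ];  # placeholder tools
          };
        };
    }
With that in place, cd-ing into the directory drops you into the dev shell automatically.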

Are these new prices here to stay?


If the AI bubble exploded today, we'd probably still see prices at that level for a year.

If it doesn't, expect years, until enough new capacity is built.


With AI generation of code or text, I have found that quality-improvement passes have to be run multiple times, successively, until the result reaches my expectations. Prompts must also be refined before letting it run those multiple passes.


That strictly depends on your ability to direct it precisely with accurate prompts.


Last night, while writing a LaTeX article, with Ollama running for other purposes, Firefox with its hundreds of tabs, and multiple PDF files open, my laptop's memory usage spiked to 80GB... And I was happy to have 128GB. The spike was probably due to some process stuck in an effing loop, but the process consuming more and more RAM didn't have any impact on the system's responsiveness, and I could calmly quit VSCode and restart it with all the serenity I could have in the middle of the night. Is there even a case where more RAM is not really better, except for its cost?


> Is there even a case where more RAM is not really better, except for its cost?

It depends. It takes more energy, which can be undesirable in battery-powered devices like laptops and phones. Higher-end memory can also generate more heat, which can be an issue.

But otherwise more RAM is usually better. Many OSes will dynamically use otherwise unused RAM to cache filesystem reads, making subsequent reads faster, and many databases will prefetch data into memory if it is available, too.
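A quick way to see that page cache at work on Linux (rough sketch; big.iso stands in for any large file, and dropping caches needs root):

    $ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # flush cached data (illustrative)
    $ time cat big.iso > /dev/null    # cold read: limited by disk throughput
    $ time cat big.iso > /dev/null    # warm read: served from RAM, much faster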


Firefox is particularly good at having lots of tabs open and not using tons of memory.

    # count the open tabs recorded in the Firefox session store (recovery.jsonlz4)
    $ ~/dev/mozlz4-tool/target/release/mozlz4-tool \
        "$(find ~/Library/Application\ Support/Firefox/Profiles/ -name recovery.jsonlz4 | head -1)" | \
        jq -r '[.windows[].tabs | length] | add'
    5524
Activity Monitor claims Firefox is using 3.1GB of RAM.

    Real memory size:      2.43 GB
    Virtual memory size: 408.30 GB
    Shared memory size:  746.5  MB
    Private memory size: 377.3  MB
That said, I wholeheartedly agree that "more RAM, less problems". The only case I can think of where it's not strictly better to have more is during hibernation (cf. sleep), when the system has to write 128GB of RAM to disk.
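Back-of-envelope for that hibernation cost, assuming a ~3 GB/s sequential-write NVMe SSD and the full 128GB actually resident (in practice the image is usually smaller and compressed):

    128 GB / 3 GB/s ≈ 43 s to write the image, and roughly the same again to read it back on resume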


In my experience Firefox is "pretty good" about having lots of tabs and windows open, if you don't mind it crashing every week or two.


I've not had a crash on Firefox in like a decade, basically since the Quantum update in like 2016.


Try living like I do. I currently have 1,838 tabs open across 9 different windows. On second thought, maybe don't live like I do...


I've got ~5k+ tabs, and I've also seen basically zero crashes in the last decade. I'm on macOS, with not very many extensions, though one of them is Sidebery (and before that Tree Style Tabs), which seems to slow things down quite a lot.

Why do you need all of these tabs open? How do you find what you need?


I likely don't need all the tabs. Some were opened only because they might be useful or interesting. Others get opened because they cover something I want to dig into further later on, but in this case it's the buildup of multiple crash>restore cycles. Eventually I'll get to each tab and close it or save the URL separately until it's back to 0, but even in that process new tabs/windows get opened so it can take time.

On consumer chips, the more memory modules you have, the slower they all run. I.e. if you have a single module of DDR5 it might run at 5600 MT/s, but if you have four of them they all get throttled to something like 3800 MT/s.


Mainboards have two memory channels, so you should be able to reach 5600 MT/s on both, and dual-slot mainboards have better routing than quad-slot ones. This means the practical limit for consumer RAM is 2x48GB modules.
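To put rough numbers on that trade-off (back-of-envelope, two 64-bit channels, 8 bytes per transfer):

    2 channels x 5600 MT/s x 8 B ≈ 89.6 GB/s peak
    2 channels x 3800 MT/s x 8 B ≈ 60.8 GB/s peak (the four-module speed mentioned above)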


Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around 2020 this was suddenly limited to two channels starting with the 12th generation (AMD's consumer processors have always had two channels, with the exception of Threadripper?).

However, this does not make sense: for more than a decade processors have mostly grown by increasing the number of threads, so two channels sounds like a negligent, deliberately imposed bottleneck on memory access if one uses all those threads (say 3D rendering, video post-production, games, and so on).

And if one wants four channels to get past that imposed bottleneck, the mainboards that have four channels nowadays are not aimed at consumer use, so they come with one or two USB connectors and three or four LAN connectors, at prohibitive prices.

We are talking about consumer quad-channel DDR4 machines that are ten years old, widely spread, and still competent compared with current consumer ones, if not better. It is as if everything had been frozen all these years (and it remains to be seen whether the pattern continues).

Now it is rumoured that AMD may opt for four channels for its consumer lines thanks to the increased pin count (good news if true).

What the industry is doing to customers is a bad joke.


> Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around 2020 this was suddenly limited to two channels starting with the 12th generation (AMD's consumer processors have always had two channels, with the exception of Threadripper?).

You need to re-check your sources. When AMD started doing integrated memory controllers in 2003, they had Socket 754 (single channel / 64-bit wide) for low-end consumer CPUs and Socket 940 (dual channel / 128-bit wide) for server and enthusiast desktop CPUs, but less than a year later they introduced Socket 939 (128-bit) and since then their mainstream desktop CPU sockets have all had a 128-bit wide memory interface. When Intel later also moved their memory controller from the motherboard to the CPU, they also used a 128-bit wide memory bus (starting with LGA 1156 in 2008).

There's never been a desktop CPU socket with a memory bus wider than 128 bits that wasn't a high-end/workstation/server counterpart to a mainstream consumer platform that used only a 128-bit wide memory bus. As far as I can tell, the CPU sockets supporting integrated graphics have all used a 128-bit wide memory bus. Pretty much all of the growth of desktop CPU core counts from dual core up to today's 16+ core parts has been working with the same bus width, and increased DRAM bandwidth to feed those extra cores has been entirely from running at higher speeds over the same number of wires.

What has regressed is that the enthusiast-oriented high-end desktop CPUs derived from server/workstation parts are much more expensive and less frequently updated than they used to be. Intel hasn't done a consumer-branded variant of their workstation CPUs in several generations; they've only been selling those parts under the Xeon branding. AMD's Threadripper line got split into Threadripper and Threadripper PRO, but the non-PRO parts have a higher starting price than early Threadripper generations, and the Zen 3 generation didn't get non-PRO Threadrippers.


At some point the best "enthusiast-oriented HEDT" CPUs will be older-gen Xeon and EPYC parts, competing fairly in price, performance, and overall feature set with top-of-the-line consumer setups.


Based on historical trends, that's never going to happen for any workloads where single-thread performance or power efficiency matter. If you're doing something where latency doesn't matter but throughput does, then old server processors with high core counts are often a reasonable option, if you can tolerate them being hot and loud. But once we reached the point where HEDT processors could no longer offer any benefits for gaming, the HEDT market shrank drastically and there isn't much left to distinguish the HEDT customer base from the traditional workstation customers.


I'm not going to disagree outright, but you're going to pay quite a bit for such a combination of single-thread peak performance and high power efficiency. It's not clear why we should be regarding that as our "default" of sorts, given that practical workloads increasingly benefit from good multicore performance. Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts) than CPU.


I said "single-thread performance or power efficiency", not "single-thread performance and power efficiency". Though at the moment, the best single-thread performance does happen to go along with the best power efficiency. Old server CPUs offer neither.

> Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts)

A gaming GPU doesn't need all of the bandwidth available from a single PCIe x16 slot. Mid-range GPUs and lower don't even have x16 connectivity, because it's not worth the die space to put down more than 8 lanes of PHYs for that level of performance. The extra PCIe connectivity on server platforms could only matter for workloads that can effectively use several GPUs. Gaming isn't that kind of workload; attempts to use two GPUs for gaming proved futile and unsustainable.
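For rough context, PCIe 4.0 moves about 2 GB/s per lane per direction:

    x16 ≈ 31.5 GB/s per direction
    x8  ≈ 15.8 GB/s per direction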


You have a processor with more than eight threads; at the same bus bandwidth, what do you choose, a dual-channel or a quad-channel processor?

That number of threads will hit a bottleneck accessing memory through only two channels.

I don't understand why you brought up the topic of single-threading in your response to the user, given that processors reached a frequency limit of 4 GHz, and 5 GHz with overclocking, a decade ago. This is why they increased the number of threads, but if they reduce the number of memory channels for consumer/desktop...


What is the best single-thread performance possible right now? With overclocked fast RAM.


But you can easily have 128GB and still be on 2 modules.


Larger capacity is usually slower, though. The fastest RAM modules are typically 16GB or 32GB.

The OP is talking about a specific niche of boosting single-thread performance. It's common with gaming PCs, since most games are single-thread bottlenecked. A 5% difference may seem small, but people are spending hundreds or thousands for smaller gains… so buying the fastest RAM can make sense there.


> Is there even a case where more RAM is not really better, except for its cost?

RAM uses power.


It also consumes more physical space. /s


Not really /s, since physical space is a limited resource in e.g. laptops.



Unwavering discipline?


Looks blurry on my phone.


The site uses bitmap images, not web fonts.

