nickdothutton's comments | Hacker News

I strongly encourage all HN readers to seek out any of his work online, including the meta-level material, where he talks about being hired for the job and the approach he took to communicating. Although some of it will look a little dated, the messages are timeless.

Back in the late 90s a senior Microsoft exec explained this to me: they had acquired staff and continued to operate entire divisions that he described as "ballast". In the future, once stock price growth slowed, they would be heaved over the edge of the balloon basket so that it could continue to rise. I often think about that.

Old sysadmin trick: create a large file on a disk, and in a dire situation when the DB runs out of space, delete it.
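A minimal sketch of the trick; the path and size here are placeholders (in production you'd put a multi-GB file on the database volume):

```shell
# Hypothetical paths/sizes: /tmp/ballast and 64 MiB stand in for a
# multi-GB file on the DB partition.
BALLAST=/tmp/ballast

# Write real zeroed blocks (not a sparse file) so the space is
# genuinely reserved on disk.
dd if=/dev/zero of="$BALLAST" bs=1M count=64 status=none

# ...time passes; the DB partition fills up...

# Emergency valve: deleting the ballast frees space immediately,
# buying time to fix the real problem.
rm -f "$BALLAST"
```

Using `dd` from `/dev/zero` rather than `truncate`/`fallocate` matters on filesystems that would otherwise create a sparse file, which reserves nothing.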

Genuine Kubernetes scaling strategy: add a do-nothing container that runs at a lower priority than your real workloads and requests half a machine's worth of CPU (in millicores).

When you deploy a new container and all your nodes are fully allocated, that low-priority container will get evicted, and your container will immediately get scheduled in its place. Then k8s will try to find somewhere to put that half-machine container. If it finds somewhere it fits, it'll schedule it. If not, it'll trigger your cluster autoscaler to add a new node where that task can run, making sure the next container you want to deploy has some readily available capacity to drop onto.

Basically the same sysadmin strategy, automated.
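A minimal sketch of the placeholder described above. All names are assumptions, the priority value just needs to be below your real workloads (pods default to priority 0), and `2000m` assumes 4-CPU nodes; `pause` is the standard do-nothing image:

```yaml
# Evictable "ballast" capacity: anything at default priority preempts it.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: ballast
value: -10
globalDefault: false
description: "Placeholder capacity, evicted by real workloads"
---
# A do-nothing pod requesting ~half a node (2000m on a 4-CPU node).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ballast
spec:
  replicas: 1
  selector:
    matchLabels: {app: ballast}
  template:
    metadata:
      labels: {app: ballast}
    spec:
      priorityClassName: ballast
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "2000m"
```

This is the same idea as the "cluster overprovisioning" pattern: when the ballast pod can't be rescheduled after eviction, it sits Pending and the cluster autoscaler adds a node for it.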


Or on Amazon Elastic File System (EFS)... create giant files just to ensure you're in the right performance class for the files you do need, since burst throughput scaled with the amount of data stored (that was the official way of doing it for a while!).
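A sketch of the padding, for the old EFS bursting mode where baseline/burst throughput scaled with bytes stored. The mount point and size are placeholders:

```shell
# Hypothetical padding for old-style EFS bursting mode, where throughput
# scaled with bytes stored. Mount point and size are placeholders.
EFS_MOUNT="${EFS_MOUNT:-/tmp/efs}"   # in production: your actual EFS mount
mkdir -p "$EFS_MOUNT"

# Write real (non-sparse) data so it counts toward stored bytes.
dd if=/dev/zero of="$EFS_MOUNT/padding.bin" bs=1M count=32 status=none
```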


Old defence against an unreasonably demanding manager: add deliberate pockets of slow processing as insurance, so that when the heat over performance gets too high, you unclog a few of them to placate management.

Zero it first.

No need to study for Cloudflare certifications[*], just have your agent do it all.

[*] Joke, there are no certifications.


Remember kids. Don't believe in anything. Don't join anything. Don't give even a small part of yourself up to anything. Don't be part of anything bigger than yourself.

Don't be part of anything bigger than yourself that treats you as expendable human oil.

Stop and reflect for a moment. Then continue as usual (quite likely).

I had to check your other comments, and now I get that you still regard flags as having some sacred meaning from the great national past; to me they have always been about gathering as many expendable humans as possible underneath them.

Sure, those bloodbaths of the past might have generated enough sacred reverence.


> you still regard flags as having some sacred meaning

I would like to disagree on this point.


Sorry if I got you wrong!

You forgot to add:

... that blinds you to any alternative; that indoctrinates distrust in different perspectives; that elevates the humanity of fellow believers above others.


Much sounder advice than you think…

My terminal has not been ensh*ttified. I use the Internet for work and knowledge more than I use it for entertainment. One of the reasons I like TUIs.

Every now and then I look for a VT320 like the ones from my university days. I still miss the smell of hot dust on CRT electron guns.

I used old terminals like this to directly interface with the COM ports of older electronic instruments, well into the 2000s.

By that point the most common failure due to age was from cobwebs that had formed internally between the high-voltage CRT circuitry and the PCB containing the low-voltage logic.

For anybody reusing or restoring vintage CRT units, I would blow them out with compressed air to get rid of stuff like this.

Otherwise, in a flash, with a final scream and a slightly different smell than normal, it's an instant cadaver :(


As an aside, I always wondered why GitHub had a web interface. Admittedly I’m a pre-web SCCS/RCS “old timer” but I wouldn't have put a web interface on it at all.

Managing just about any complex service is far easier in a GUI.

It was targeted at the masses from the beginning.

It's used by non-technical people too: for documentation, dashboards, and bug tracking.

Viewing all this data is far easier in a GUI than a TUI.


Casio/G-SHOCK is one of the few brands that I think could plausibly stretch into more tech areas than it currently does: wearables, re-entering the market for ruggedised Android phones, etc.

They are certainly well positioned for wearables, but a phone play would be much too risky. In practice they would end up re-casing an existing phone, which would never feel G-SHOCK.

You should Google their G-SHOCK ring. It's a mini Casio watch, same style. Hilarious.

How can I be sure this article isn’t sponsored by Big Toast?


I would never.


That's what I'd expect Big Toast to say!


Switched to local models after quality dropped off a cliff and token consumption seemed to double. I'm having some success with Qwen+Crush and have been more productive.


Would love some more info on how you got any local model working with Crush. Love charmbracelet but the docs are all over the place on linking into arbitrary APIs.


Assuming you have a locally running llama-server or llama-swap, just drop this into your crush.json with your setup details/local addresses etc.:

Edit: i forgot HN doesn't do code fences. See https://pastebin.com/2rQg0r2L

Obviously the context window settings are going to depend on what you've got set on the llama-server/llama-swap side. Multiple models on the same server like I have in the config snippet above is mostly only relevant if you're using llama-swap.

TL;DR is you need to set up a provider for your local LLM server, then register at least one model on that provider, then point the "large" and "small" models that Crush actually uses to respond to prompts at that provider/model combo. Pretty straightforward, but I agree that their docs could be better for local LLM setups in particular.
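A hypothetical crush.json along those lines. The key names (`providers`, `models`, `large`/`small`) and the OpenAI-compatible provider type are from memory and may not match the current Crush schema exactly; the pastebin upthread has a known-working config:

```json
{
  "providers": {
    "llama-server": {
      "type": "openai",
      "base_url": "http://localhost:8080/v1",
      "api_key": "none",
      "models": [
        { "id": "qwen3-coder", "context_window": 32768, "default_max_tokens": 4096 }
      ]
    }
  },
  "models": {
    "large": { "provider": "llama-server", "model": "qwen3-coder" },
    "small": { "provider": "llama-server", "model": "qwen3-coder" }
  }
}
```

The model `id` must match whatever name llama-server/llama-swap exposes; the `api_key` is a dummy value since a local server typically doesn't check it.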

For me, I've got llama-swap running and set up on my tailnet as a [tailscale service](https://tailscale.com/docs/features/tailscale-services) so I'm able to use my local LLMs anywhere I would use a cloud-hosted one, and I just set the provider baseurl in crush.json to my tailscale service URL and it works great.

