Yeah, but for $6/mo you can get a tiny Linode or Digital Ocean droplet, and not worry about hardware failing. It's true that a laptop probably has more resources than the smallest VMs, but it has no remote management interface and can't scale if you suddenly had a surge of traffic.
> Yeah but for $6/mo you can get a tiny linode or digital ocean droplet
That gets you, what, 1 "vCPU" with maybe a gig of ram and a couple of dozen gig of disk.
If you (or a friend) work for a company of any size, there's probably a cupboard full of laptops that won't upgrade to Win11 sitting there doing nothing that you could get for free just by asking the right person. It'll have 4 or 8 cores, each of which is more powerful than the "vCPU" in that droplet. It'll have 8 or maybe 16 gig of RAM and at least half a TB of disk, and depending on the laptop it can quite likely be configured with half a TB of fast NVMe storage plus a few TB of slower spinning-rust storage.
If you want 8vCPUs/cores, 16GB of ram, and 500GB of SSD, all of a sudden Digital Ocean looks more like $250/month.
If you are somewhere in that grey area where you need more than 1 vCPU and 1GB of memory, grabbing the laptop out of the cupboard that your PM or one of the admin staff upgraded from last year and shipping it off to a datacenter with your flavour of Linux installed seems like it's worth considering.
Hell, get together with a friend and have two laptops hosted for 14 Euro/month between you, and be each other's "failing hardware" backup plan...
I bet colos will plug a KVM into your hardware and give you remote access to that KVM. I also bet rachelbythebay has at least one article that talks about the topic.
> ...can't scale if you suddenly had a surge of traffic.
1) If your public server serves entirely or nearly-entirely static data, you're going to saturate your network before you saturate the CPU resources on that laptop.
2) Even if it isn't, computers are way faster than folks give them credit for when you're not weighing them down with Kubernetes and/or running swarms of VMs. [0]
Yeah. I got bored a couple of hours after I posted that speculation and found several other colo facilities that mentioned that they'd do remote KVM. I'd figured that it was a common thing (a fair chunk of hardware you might want to colo either doesn't have IPMI or doesn't have IPMI that's worth a damn), but wasn't sure.
You (the person paying to co-locate hardware) don't buy the KVM that the colo facility uses. The colo facility hooks up the KVM that they own to your hardware and configures it so that you can access it. Once you stop paying to colo your hardware, you take your hardware back (or maybe pay them to dispose of it, I guess) and they keep the KVM, because it's theirs.
k8s doesn't really weigh you down, especially if tuned for the low end use case (k1s). It encourages some dumb decisions that do, such as using Prometheus stack with default settings, but by itself it just eats a lot of ram.
Now using CPU limits in k8s with cgroups v1 does hurt performance. But doing that would hurt performance without k8s too.
> k8s doesn't really weigh you down, especially if tuned for the low end use case (k1s).
Sorry, what "k1s" are you referring to? The only projects with that name that I see are either cut-down management CLIs and GUIs that only work with a preexisting Kubernetes cluster, or years-old abandoned proof-of-concept/alpha-grade Kubernetes reimplementations that are completely unsuitable as a Kubernetes replacement.
The only actually-functional cut-down Kubernetes I'm aware of is 'minikube'. Minikube requires -at minimum- 2 CPUs, 2GB of RAM and 20GB of disk space. [0] That doesn't fit on either of the machines 'nrdvana' was talking about... not the "tiny ... digital ocean droplet", with its 1CPU, 1GB of RAM, and 25GB of disk space [1], nor the "tiny linode" (which has roughly the same specs [2]).
Given that it's not at all uncommon for discarded laptops to have 4 CPUs, 8GB of RAM, and like 250GB of disk, eating 1/4th of the RAM, (intermittently) half of the CPU power, and roughly a tenth of the disk space just for Kubernetes housekeeping kinda sucks. That's pretty damn "heavy", in my judgment. So. Do you have a link to this 'k1s' thing you were talking about? Does it use less than 2 CPUs, 2GB of RAM, and 20GB of disk?
There are a couple of things here: first off, minikube isn't tuned for low resources, it's a full k8s packaged for developers. Second, it doesn't burn much CPU at all unless you blew it up and it's churning pods. Third, try to remember that you're running a tool that provides most of what you need to run a service, it's not a bare VM, it comes with reverse proxy, tons of tools, and cluster management.
Basically those minimal specs let you actually run quite a lot of stuff on that minikube, they're not just for the management system.
k0s needs roughly 500MB RAM and 1.5GB drive to run a controller+worker. Can probably pare k3s down to that as well.
And I repeat, this gets you single pane cluster management across all your laptops, reverse proxy, DNS, resource shaping, namespaces, etc. Only big problem with it is that adding distributed storage is quite heavy and unstable (longhorn) or really heavy (ceph), so storage management might need to be manual which is a pain in k8s.
You noticed that I said "(intermittently) half of the CPU power", yeah?
> Third, try to remember that you're running a tool that provides most of what you need to run a service...
I already get everything that I need to run a service with old-ass systems like OpenRC+netifrc or -hack, gag- systemd and its swarm of dependencies. "Run a service on a *nix box" is a thing that we've had pretty well nailed down for decades now. It is -after all- how the services that run Kubernetes get run. Do note that you're talking to someone who does Linux system administration both as a hobby and professionally.
> Sorry I meant k0s. Off by one error at 3 am.
Sure, no problem. Shit happens.
So, k0s? Compared to minikube, the official minimum spec tables [0] indicate that -if you colocate the controller and worker- it cuts the CPU and RAM needs in half, and cuts the disk space by a factor of ten. That's nice, but that's still an eighth of the RAM and (intermittently) a quarter of the CPU of our hypothetical-but-plausible castoff laptop. That's still a lot of resources. And compared to what it costs you, you don't get much. If we were talking about some big, bad, mutually-untrusted-multitenant situation, it could be worth the cost, but -despite what folks like the CNCF might like you to believe- that's not the only scenario out there.
Also Mirantis is responsible for k0s? [1][3] After their rugpull with their Openstack distro way back when, I don't trust that they'll keep maintaining and providing complex stuff that's free to use for long enough to make it worth making a critical part of one's business. (Yes, this has absolutely nothing to do with its resource usage. I'm absolutely not bringing it up to support that argument in any way. I "just" thought it important to mention that I don't trust that Mirantis's free stuff will continue to be free for long enough to safely build (more of?) your business on.)
[1] From [2] "Mirantis offers technical support, professional services and training for k0s. The support subscriptions include, for example, prioritized support (Phone, Web, Email) and access to verified extensions on top of your k0s cluster."
[3] As additional evidence, the only two humans on the first page of commits to the k0s repo are Tom Wieczorek, whose Github profile indicates his affiliation with Mirantis [4], and Jussi Nummelin who is very, very obviously a Mirantis employee. [5] I tried to look at the complete Github contributor list for the k0s repo, but it simply wouldn't load. [6] But, I'd be shocked if this wasn't Mirantis's totally-functional-but-still-pet project that intends to more than make up the cost of development and maintenance with support contracts.
Hey, I'm not arguing that you in particular shouldn't use old tech for fun. Heck, serve your Emacs-written website via nc from your Amiga for all I care.
I'm just pointing out that it's easy and sufficiently efficient to run k8s on old computers, which makes running homelabs quite easy and also allows projects such as the OP to really shine. You seem to really enjoy telling me how bad it is that k8s needs 0.05 CPU and 500MB of RAM to run but the thing is it will scale horizontally a lot, and also presents the APIs you'll be expected to know in a devops job in 2026.
Maybe at rendering menus and documents, but Flash had graphics routines written in optimized assembly that simply weren't possible in JavaScript on that era of hardware.
I feel like people are talking past each other a bit here. ActionScript was never very fast, and rendering a document as a giant collection of bezier curves was not fast, but the people doing animations with it were getting the equivalent of modern-day CSS3 animations + SVG, and it ran nicely on hardware two orders of magnitude slower than what we need for CSS3+SVG.
Maybe the large number of standard library functions that operate on globals and require you to remember the "_r" variant of that function exists, or the mess with handling signals, or the fact that Win32 and Posix use significantly different primitives for synchronization? Or maybe just the fact that most libraries for C/++ won't have built-in threading support and you need to synchronize at each call site?
Unless I'm writing Java, I avoid multithreading whenever possible. I hear it's also nice in Go.
The third mitigating feature the article forgot to mention is that tmpfs can get paged out to the swap partition. If you drop a large file there and forget it, it will all end up in the swap partition if applications are demanding more memory.
The Linux OOM killer is kinda sketchy to rely on. It likes to freeze up your system for long periods of time as it works out how to resolve the issue. Then it starts killing processes to try to reclaim RAM, like a system-wide game of Russian roulette.
It's especially janky when you don't have swap. I've found adding a small swap file of ~500 MB makes it work so much better, even for systems with half a terabyte of RAM this helps reduce the freezing issues.
Yeah. I always disable overcommit (notwithstanding that Linux cannot provide perfectly accurate strict memory accounting), and I'd prefer not to use swap, but Linux VM maintainers have consistently stated that they've designed and tuned the VM subsystem with swap in mind. Is swap necessary in the abstract? No. Is swap necessary on Linux? No. But don't be surprised if Linux doesn't do what you'd expect in the absence of swap, and don't expect Linux to put much if any effort into improving performance in the absence of swap.
I've never run into trouble on my personal servers, but I've worked at places that have, especially when running applications that tax the VM subsystem, e.g. the JVM and big Java apps. If one wonders why swap would be useful even if applications never allocate, even in the aggregate, more anonymous memory than system RAM, one of the reasons is the interaction with the buffer cache and eviction under pressure.
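For reference, the overcommit policy mentioned above is controlled by `/proc/sys/vm/overcommit_memory` (0 = heuristic, 1 = always overcommit, 2 = strict accounting, i.e. "overcommit disabled"). A minimal sketch for checking the current mode, assuming a Linux system and falling back to `None` elsewhere:

```python
from pathlib import Path

# Modes of /proc/sys/vm/overcommit_memory, per the kernel docs:
#   0 = heuristic overcommit (the default)
#   1 = always overcommit, never refuse an allocation
#   2 = strict accounting, bounded by CommitLimit
#       (swap + RAM * overcommit_ratio / 100)
OVERCOMMIT_MODES = {0: "heuristic", 1: "always", 2: "strict"}

def overcommit_mode():
    """Return the current overcommit mode name, or None on non-Linux."""
    path = Path("/proc/sys/vm/overcommit_memory")
    if not path.exists():
        return None
    return OVERCOMMIT_MODES[int(path.read_text().strip())]

if __name__ == "__main__":
    print(overcommit_mode())
```

"Disabling overcommit" in the comment above corresponds to writing `2` to that file (e.g. via `sysctl vm.overcommit_memory=2`).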
Install earlyoom or one of its near-equivalents. That mostly solves the problem of it freezing up the system for long periods of time.
I haven't personally seen the OOM killer kill unproductively - usually it kills either a runaway culprit or something that will actually free up enough space to help.
For your "even for systems with half a terabyte of RAM", it is logical that the larger the system, the worse this behaviour is, because when things go sideways there is a lot more stuff to sort out and that takes longer. My work server has 1.5TB of RAM, and an OOM event before I installed earlyoom was not pretty at all.
> For your "even for systems with half a terabyte of RAM", it is logical that the larger the system, the worse this behaviour is, because when things go sideways there is a lot more stuff to sort out and that takes longer. My work server has 1.5TB of RAM, and an OOM event before I installed earlyoom was not pretty at all.
I meant it more in the sense that it doesn't have to be more than a few hundred MB, even with a lot of RAM. It's not the size of the swap file that makes the difference, but its presence, and the old advice to size swap proportionally to RAM is largely outdated.
nohang also has been a good one for desktops, with friendly notifications under memory stress and sane defaults.
Aside from these complementary tools, the number of systemd traps (OOM adjustment score defaults & restrictions, tmux user sessions killed by default, etc. etc.) associated with OOM has really been taking a toll on my nerves over the years... And kernel progress on this has also been underwhelming.
Also, why has Firefox switched off automatic tab unloading when memory is low ONLY FOR LINUX? Much better UX since I turned on browser.tabs.unloadOnLowMemory...
OOMKiller, as far as I understand it, will just pick a random page, figure out who owns it, and then kill that process, repeating until enough memory is available. This will bias toward processes with larger memory allocations, but may kill any process.
> If it ever becomes necessary for the OOM Killer to kill processes, the decision of which processes to kill will be made based on something called the OOM score. Each process has an OOM score associated with it.
> Every running process in Linux has an OOM score. The operating system calculates the OOM score for a process, based on several criteria - the criteria are mainly influenced by the amount of memory the process is using. Typically, the OOM score varies between -1000 and 1000. When the OOM Killer needs to kill a process, again, due to the system running low on memory, the process with the highest OOM score will be killed
Swapping still occurs regardless. If there is no swap space the kernel swaps out code pages instead. So, running programs. The code pages then need to be loaded again from disk when the corresponding process is next scheduled and needs them.
This is not very efficient and is why a bit of actual swap space is generally recommended.
Unlike swapping, freeing code pages does no writing to HDD/SSD, but it only needs to reload the pages when they are needed again in the future, therefore it is more efficient than swapping.
I stopped using swap on all my Linux servers, desktops and laptops more than 20 years ago. At the time it was a great improvement, and since then it has never caused any problems. However, I have been generous with the amount of RAM I install: for many years now, any computer of at least NUC size gets no less than 32 GB, and for new computers I do not intend to use less than 64 GB.
With recent enough Linux kernels, using tmpfs for /tmp is perfectly fine. Nevertheless, for decades using tmpfs for /tmp had been dangerous, because copying a file through /tmp would lose metadata, e.g. by truncating file timestamps and by stripping the extended file attributes.
Copying files through /tmp was common among the users of multi-user computers, where there was no other directory in which all users had write access, and the former behavior of Linux tmpfs was very surprising to them.
Using Desktop mode on the Steam Deck before they increased the swap was fun. Launch a game, everything freezes, go for an hour-long walk, see that the game has finally been killed, make and drink coffee while the system becomes usable again.
> The place for small temporary files. This directory is usually mounted as a tmpfs instance, and should hence not be used for larger files. (Use /var/tmp/ for larger files.) This directory is usually flushed at boot-up. Also, files that are not accessed within a certain time may be automatically deleted.
Trivia: CIS Guidelines (security tasks applied to a server to pass an enhanced security audit to be compliant with a standard, in a soundbite) has an item requiring /var/tmp to be a bind mount to /tmp (as well as setting specific security options on /tmp). A server attempting to pass CIS audits (very common in my work-related experience w/Enterprises) may well not have a unique /var/tmp.
> I thought /var/tmp is for applications while /tmp is for the user.
/tmp is for stuff that is 'absolutely' temporary, in that on many/most systems it is nuked between reboots. /var/tmp is 'relatively' temporary in that applications can put stuff there that they're working on, but if there is a crash, the contents are not deleted and can be recovered across reboots.
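In application code, that distinction maps naturally onto the `dir` parameter of Python's `tempfile` module. A sketch, with the caveat that the `/var/tmp` and `/tmp` paths are assumptions about the target system:

```python
import os
import tempfile

def scratch_file(large=False, suffix=""):
    """Create a temp file in /tmp for small, short-lived data, or in
    /var/tmp for larger files that should survive a reboot. Falls back
    to the platform default if the chosen directory doesn't exist."""
    directory = "/var/tmp" if large else "/tmp"
    if not os.path.isdir(directory):
        directory = None  # let tempfile pick the platform default
    return tempfile.NamedTemporaryFile(
        dir=directory, suffix=suffix, delete=False
    )

f = scratch_file(large=True, suffix=".blob")  # lands in /var/tmp
```

Note that on systems where /tmp is tmpfs, the small/short-lived case stays in RAM (or swap), while the large case hits real disk.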
Note though that if you don't have swap now, and enable it, you introduce the risk of thrashing [1]
If you have swap already it doesn't matter, but I've encountered enough thrashing that I now disable swap on almost all servers I work with.
It's rare but when it happens the server usually becomes completely unresponsive, so you have to hard reset it.
I'd rather that the application trying to use too much memory is killed by the oom manager and I can ssh in and fix that.
That's not true. Without swap, you already have the risk of thrashing. This is because Linux views all segments of code which your processes are running as clean and evictable from the cache, and therefore basically equivalent to swap, even when you have no swap. Under low-memory conditions, Linux will happily evict all clean pages, including the ones that the next process to be scheduled needs to execute from, causing thrashing. You can still get an unresponsive server under low memory conditions due to thrashing with no swap.
Setting swappiness to zero doesn't fix this. Disabling swap doesn't fix this. Disabling overcommit does fix this, but that might have unacceptable disadvantages if some of the processes you are running allocate much more RAM than they use. Installing earlyoom to prevent real low memory conditions does fix this, and is probably the best solution.
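earlyoom's basic trigger can be sketched in a few lines: poll `MemAvailable` and `SwapFree` in `/proc/meminfo` and act before the kernel OOM killer has to. The 10% thresholds below mirror earlyoom's documented defaults; the real tool then SIGTERMs (and eventually SIGKILLs) the process with the highest oom_score, which this sketch omits:

```python
from pathlib import Path

def meminfo():
    """Parse /proc/meminfo into a dict of kB values (None on non-Linux)."""
    path = Path("/proc/meminfo")
    if not path.exists():
        return None
    fields = {}
    for line in path.read_text().splitlines():
        key, _, rest = line.partition(":")
        fields[key] = int(rest.split()[0])  # first token is the kB value
    return fields

def memory_pressure(threshold=0.10):
    """True when available RAM (and swap, if present) are both below
    `threshold` of their totals -- roughly earlyoom's default trigger."""
    info = meminfo()
    if info is None:
        return False
    ram_low = info["MemAvailable"] < threshold * info["MemTotal"]
    swap_total = info.get("SwapTotal", 0)
    swap_low = swap_total == 0 or info["SwapFree"] < threshold * swap_total
    return ram_low and swap_low
```

The point of acting on `MemAvailable` rather than waiting for allocation failures is exactly the thrashing scenario above: you kill something while the system is still responsive enough to do it.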
Disabling swap on servers is de-facto standard for serious deployments.
The swap story needs a serious upgrade. I think /tmp in memory is a great idea, but I also think that particular /tmp needs swap backing (ideally with compression, via zswap), just not the main system.
> Disabling swap on servers is de-facto standard for serious deployments.
I guess I have not been deploying seriously over the last couple of decades because the (hardware) systems that I deploy all had some swap, even if it was only a file.
Pretty much all the guidelines about swap partitions out there reference old allocator behaviour from way over a decade ago - where you'd indeed typically run into weird issues without having a swap partition, even if you had enough RAM.
Short (and inaccurate) summary was that it'd try to use some swap even if it didn't need it yet, which made sense in the world of enough memory being too expensive, and got fixed at the cost of making the allocator way more complicated when we started having enough memory in most cases.
Nowadays typically you don't need swap unless you work on a product with some constraints, in which case you'd hand tune low memory performance anyway. Just don't buy anything with less than 32GB, and you should be good.
Yeah, pretty much, plus configuring memory limits everywhere apps allow it. Some software also handles malloc failures relatively gracefully, which helps a whole lot (thank you, Postgres devs).
I've spent the last day thinking about that, and I really can't see any big negative side effects. The only issue I'd have is being notified of OOM conditions, and that would just be a syslog regex match. Great plan.
But then you can just ask it to write that missing library! Some day in the future you can probably ask it to author the whole package and publish it itself.
"Oh sorry, that package doesn't exist yet, but it ought to. One moment... Ok, try installing it now."
I've run into this a few times and did just that. It hallucinates a js or python or micropython package, I get annoyed trying to find or use it because it lacks features I explicitly stated I needed, or it just doesn't exist, and then make it write the whole thing for me from scratch. I don't use ChatGPT anymore (the model they have on the free tier has become terrible in the past few months), but Gemini 2.5 Pro Preview free through AI Studio (ex-API) is generally up to the task here.
Most recently when this happened, I made it write an SF2 loader/parser/player and a MIDI parser/"relay" library compatible with it for javascript to use in a WebGL game. It's familiar enough with ABC notation that you can have it write a song and then write a converter from modified ABC notation to MIDI, too. It can generate coordinates for a xylophone model with individual keys in WebGL with no fuss and wire it up to the SF2 module to play notes based on which key was struck. We can do things like switch out instruments on tracks, or change percussion tracks, or whatever, based on user interactions without fuss.
It's not worth setting up a whole repo for and documenting, because when I make something with it, I inherently prove making it is trivial.
Okay, but then I need to ask what kind of use case doesn't mind the extra latency from ethernet but does care about the difference between 40Gbps and 70Gbps.
In most of the video clips I saw, he was saying "I don't know anything about that", which could be entirely true. Often I see hints that he's attempting to play the Aes Sedai game of "speak no word that is untrue" but he's too dumb to do it well. Anyway, as an extension, both comments can be true, that Trump himself has no plan and is an idiot, but that his administration is enacting Project 2025.
If they cared about fiscal responsibility they wouldn't have passed the 2017 tax cuts, or be trying to renew them now. IMHO taxes were fine in the 2010-2017 era and if we'd just kept that, we'd be close to a balanced budget. Instead, they cut taxes (the popular part) without cutting spending (the unpopular part) and let the debt run wild under the assumption "well the economy will grow so much it will pay for itself". Well, they were half right - it caused so much inflation that the debt is effectively 30% less because dollars are worth 30% less. They want to pile on new spending (deportation) while keeping the tax cuts that they still can't find enough spending cuts to justify. The fiscal arguments are basically a joke at this point.