> there's no hope of getting a world-wide, free, uncensored, unlimited IP4/6 network back.
What do you mean "back"? It was never free, as in zero-cost. It was not very unlimited either; I remember times when I had to pay not only for modem time online but also for the kilobytes transferred. Uncensored, yes, but only because basically nobody cared and the number of users was minuscule.
The utopia was never in the past, and it remains in the future. I still think that staying irrelevant for large crowds and big money is key.
Hmm, doesn't this work equally well with a wad of $10 and $20 notes? I mean, yes, notes could be clandestinely marked. But aren't bitcoins also traceable after the first transaction?
(1) Security. An always-on, externally accessible device will always be a target for break-ins. You want the device to be bulletproof, with defense in depth, so that breaking into one service does not affect anything else. Something like Proxmox that runs on low-end hardware and is as easy to administer as a mobile phone would do; we are still some way from that. A very limited thing like a static site can be made both easy and bulletproof, though.
(2) Connectivity. Providers would have to allow it: most home routers don't get a static IP, or even a globally routable IPv4 at all, or even a stable IPv6. This complicates the DNS setup, and without DNS such resources are basically invisible.
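The no-static-IP problem is usually worked around with a dynamic-DNS updater run from cron. A minimal sketch, assuming a Cloudflare-managed zone (their `PUT /zones/{zone}/dns_records/{id}` endpoint); `ZONE_ID`, `RECORD_ID`, and the token are placeholders you would supply:

```python
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def current_public_ip() -> str:
    # api.ipify.org returns the caller's public IPv4 as plain text
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def build_update(name: str, ip: str) -> bytes:
    # JSON payload for updating an A record; short TTL so changes propagate fast
    return json.dumps({"type": "A", "name": name, "content": ip,
                       "ttl": 300, "proxied": False}).encode()

def update_record(zone_id: str, record_id: str, token: str,
                  name: str, ip: str) -> None:
    # Overwrite the existing A record with the currently observed public IP
    req = urllib.request.Request(
        f"{API}/zones/{zone_id}/dns_records/{record_id}",
        data=build_update(name, ip),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT")
    urllib.request.urlopen(req)
```

Any DNS provider with an API works the same way; the only moving parts are "discover my IP" and "rewrite the record when it changes".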
From the pure resilience POV, it seems more important to keep control of your domain, and have an automated way to deploy your site / app on whatever new host, which is regularly tested. Then use free or cheap DNS and VM hosting of convenience. It takes some technical chops, but can likely be simplified and made relatively error-proof with a concerted effort.
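The "automated, regularly tested deploy" part can be as small as a mirror-and-smoke-test script. A sketch for a static site; the host, paths, and URL are hypothetical placeholders:

```python
import subprocess
import urllib.request

def build_rsync_cmd(src: str, host: str, dest: str) -> list[str]:
    # -a preserves permissions/times, -z compresses, --delete mirrors removals
    return ["rsync", "-az", "--delete", src, f"{host}:{dest}"]

def deploy_and_check(src: str, host: str, dest: str, url: str) -> bool:
    # Push the site, then verify the public URL actually serves it
    subprocess.run(build_rsync_cmd(src, host, dest), check=True)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status == 200
```

Run it from cron against the live host and you get the "regularly tested" property for free; pointing it at a fresh VM is the disaster-recovery drill.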
Both of those are solved by having a tunnel and a cache hosted in the cloud. Something like Tailscale or Cloudflare provides this pretty much out of the box, but WireGuard + nginx on a cheap VPS would accomplish much the same if you are serious about avoiding the big guys.
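The VPS variant is two small config fragments: a WireGuard tunnel from the VPS to the home box, and nginx on the VPS proxying into it. A sketch with made-up keys and addresses:

```ini
# /etc/wireguard/wg0.conf on the VPS (keys and IPs are placeholders)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32

# /etc/nginx/sites-enabled/site on the VPS:
# server {
#     listen 80;
#     server_name example.com;
#     location / {
#         proxy_pass http://10.0.0.2:8080;   # home box over the tunnel
#         proxy_set_header Host $host;
#     }
# }
```

The home server dials out to the VPS (so no inbound ports or stable IP needed at home), and only the VPS's address ever appears in DNS.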
True, but as these products have been designed and coded by LLMs from the ground up in 2025+, they generally use modern (even typed) languages and the latest versions of third-party libraries, usually have documentation of sorts... sometimes they even have test suites.
As such, they can often be improved as easily as one can prompt, which is much faster and easier than before. Notably in the FOSS world, where one had to ask the maintainer, get ghosted for a year, and then get a "close: wontfix (too tedious)".
Better languages do not necessarily mean better architectural decisions, or even better performance, unless the humans pressure for that and burn tokens on that. With no engineer in the room, more technical issues will be left unnoticed and unaddressed.
Compare it to visual arts. With guidance from an artist, AI tools can help create wonderful pictures. Without such guidance, or at least expert prompting, a typical one-shot image from Gemini is... well, at best recognizable as such.
Thanks cap'n-py. Yeah, I love Sandstorm. My goal is to be more portable, lighter, and a 'download binary and run' kind of tool. There are also other attempts around what I call the 'packaging with Docker' approach (Coolify, etc.), which are more attempts at packaging existing apps. But my approach—the platform—gives a bunch of stuff you can use to make apps faster, but you have to bend to its idiosyncrasies. In turn, you do not need a beefy home lab to run it (not everyone is a tinkerer). It's more focused, so it will be easier for the end user running it than for the developer.
Specifically the M series from Apple have a very wide, very fast interface to DRAM, which is connected to DRAM chips soldered basically next to the CPU. That makes it possible to use the entire unified RAM as the GPU RAM, and reasonably run decent ML models (for code, text, audio, pictures) locally. No CUDA, no kilowatt power supplies. This is the real differentiator.
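The arithmetic behind this is simple: weight storage scales with parameter count times bits per weight. A back-of-the-envelope sketch (illustrative numbers, weights only, ignoring KV cache and activations):

```python
def model_weights_gib(params_billion: float, bits_per_weight: int) -> float:
    # Weight memory: N parameters * (bits / 8) bytes each, reported in GiB
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# A 70B model quantized to 4 bits needs roughly 33 GiB for weights alone:
# too big for most discrete GPUs, comfortable in 64 GB of unified RAM.
```

That is why a large unified memory pool, rather than raw compute, is the bottleneck for running such models locally.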
> That makes it possible to use the entire unified RAM as the GPU RAM, and reasonably run decent ML models (for code, text, audio, pictures) locally. No CUDA, no kilowatt power supplies. This is the real differentiator.
That might be relevant and a differentiator in your circles; it is entirely irrelevant in mine. Plain basic integer performance wins here.
No, it is a suite of tools to handle TypeScript (and JavaScript as its subset). So far it's a parser, a tool that strips TypeScript declarations and produces JS (like SWC), a linter, and a set of code transformation tools / interfaces, as far as I can tell.
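The type-stripping idea is just "remove annotations, leave runtime-equivalent JS". A deliberately naive toy to show the concept; a real stripper (SWC, or the suite above) parses the full grammar rather than pattern-matching:

```python
import re

def strip_simple_annotations(line: str) -> str:
    # Toy illustration only: delete ": Type" annotations in trivial positions.
    # This regex would mangle object literals like {x: 1}, string contents,
    # and generics containing spaces -- hence real tools use a parser.
    return re.sub(r":\s*[A-Za-z_][\w\[\]<>]*", "", line)

# strip_simple_annotations("function add(a: number, b: number): number {")
# yields "function add(a, b) {" -- the same code, minus the types.
```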