Larger storage structures are easier to (thermally) insulate. Because geometry.
But going with larger structures probably means aggregation (fewer of them are built, and further apart). Assuming homes to be heated are staying where they are, that requires longer pipes. Which are harder to insulate. Because geometry.
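A rough sketch of that scaling argument (spherical tank assumed, radii made up): heat loss tracks surface area while stored energy tracks volume, so losses per stored joule shrink as the tank grows; a pipe, by contrast, adds surface area in direct proportion to the heat it has to carry.

    # Surface-to-volume ratio of a spherical storage tank at a few
    # illustrative radii (metres). Standing losses ~ area, stored heat ~ volume.
    import math

    for r in (1.0, 5.0, 25.0):
        area = 4 * math.pi * r ** 2          # m^2
        volume = (4 / 3) * math.pi * r ** 3  # m^3
        print(f"r = {r:>4} m, area/volume = {area / volume:.2f} 1/m")
    # Prints 3.00, 0.60, 0.12 -- losses per unit of stored heat fall as 1/r.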
I can't help but wonder how the efficiency compares to generating electricity, running that over wires, and having that run heat pumps.
The conversion to electricity loses energy, but I assume the loss is negligible in transmission, and then modern heat pumps themselves are much more efficient.
And the average high and low in February are 26°F and 14°F according to Google, while modern heat pumps are more energy-efficient than resistive heating above around 0°F. So even at 14–26°F, the coefficient of performance should still be 2–3.
So, in your scenario (heat->electricity conversion, then transmission, then electricity->heat conversion), overall efficiency is going to be 50% * 50% = 25%, assuming no transmission losses and state-of-the-art conversion on both ends.
25% efficiency (a.k.a. 75% losses) is a pretty generous budget to work with. I guess one can cover a small town or a city's district with heat pipes and come out on top in terms of efficiency.
We've got lots of heating districts around the world to use as examples. They only make sense in really dense areas. The thermal losses and expense of maintaining them make them economically impractical for most areas other than a few core districts in urban centers... Unless you have an excess of energy that you can't sell on the grid.
Geothermal heat is also not that practical in cities: you'd need so many wells so close together that you'd most likely cool the ground down enough over the winter that your efficiency tanks.
I don't understand, what am I missing? The heat pump increases efficiency by having a COP of 2–4, right? Assuming air-to-air and being in, say, Denmark.
Heat (above 100C, say, burning garbage) to electricity: 50% (theoretical best case)
Electricity to heat (around 40C): 200%-400%
Net win?
The surplus energy comes from the ambient air or the ground.
Yes, you cannot heat back up to the temperature you started with, but for underfloor heating 40C is plenty. And you can get a COP of 2 up to 60C shower water as well.
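A back-of-the-envelope version of that argument, with every number assumed rather than measured (50% heat-to-electricity conversion, lossless transmission, COP 3 at the receiving end):

    # Hypothetical round trip: stored high-temperature heat -> electricity ->
    # heat pump -> ~40 C heat for underfloor heating. All figures assumed.
    stored_heat_kwh = 100      # drawn from the thermal store
    gen_efficiency = 0.50      # optimistic heat-to-electricity conversion
    cop = 3.0                  # assumed heat pump COP in mild conditions

    electricity_kwh = stored_heat_kwh * gen_efficiency   # 50 kWh
    delivered_heat_kwh = electricity_kwh * cop            # 150 kWh
    print(delivered_heat_kwh)
    # More low-grade heat out than high-grade heat in; the extra 50 kWh
    # is pumped up from the ambient air or ground.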
If the heat is stored at high temperature, but the demand (for heating buildings, say) is at lower temperature, it could make sense to generate power, then use that power to drive heat pumps. You could end up with more useful heat energy than you started with, possibly even if you didn't use the waste heat from the initial power generation cycle.
Alternatively, if you are going to deliver the heat at low temperature to a district heating system, you might as well use a topping cycle to extract some of the stored energy as work and use the waste heat, rather than taking the second-law loss of just directly downgrading the high-temperature heat to a lower temperature.
High temperature storage increases the energy stored per unit of storage mass. If the heating is resistive, you might as well store at as high a temperature as is practical.
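For sensible heat storage that's just Q = m*c*dT; a quick illustration with water (numbers assumed, and anything much above 100 C needs pressurization or a different medium):

    # Heat recoverable per kg of water, measured against a 40 C delivery
    # temperature, at two storage temperatures. Illustrative only.
    c_water = 4.18      # kJ/(kg*K), specific heat of water
    t_delivery = 40     # C

    for t_store in (90, 180):
        q = c_water * (t_store - t_delivery)
        print(f"{t_store} C storage: ~{q:.0f} kJ/kg usable")
    # ~209 kJ/kg at 90 C vs ~585 kJ/kg at 180 C -- hotter storage packs
    # substantially more energy into the same mass.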
Gas-fired heat pumps have been investigated for heating buildings; they'd have a COP > 1.
I'm interested in whether there are any cheap, small-scale external combustion engines available (steam? Stirling? ORC?).
It can be anything between easy and impossible depending on the temperature difference. 200 C steam is easy with a commercially available turbine, but 50 C is really hard. There are things like Stirling engines that can capture waste heat, but they've never really been commercially viable.
There's no way around it: We have to respect entropy.
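Concretely, that's the Carnot limit, eta = 1 - T_cold/T_hot with temperatures in kelvin; rough numbers, assuming heat is rejected at about 25 C:

    # Upper bound on heat-to-work conversion at two source temperatures;
    # real engines only achieve a fraction of this.
    t_cold = 25 + 273.15    # K, assumed heat-rejection temperature

    for t_hot_c in (200, 50):
        t_hot = t_hot_c + 273.15
        eta = 1 - t_cold / t_hot
        print(f"{t_hot_c} C source: Carnot limit ~ {eta:.0%}")
    # ~37% at 200 C but only ~8% at 50 C, which is why low-grade waste
    # heat recovery rarely pays off.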
Isn't that the unfortunate status quo? At least the hard requirement for JS, that is.
Google's homepage started requiring this recently. The Linux kernel's git, openwrt, esp32.com, and many, many others now require it too, via the dreaded "Making sure you're not a bot" thing.
"Wiring", which constitutes Arduino's primary API surface, was taken wholesale from Hernando Barragán's 2003 master's thesis project. It was a fork of processing for microcontrollers and was not written by the Arduino team: Massimo Banzi, David Cuartielles, David Mellis, Gianluca Martino, and Tom Igo.
Yeah, the software side is basically only an IDE, a build system, a package manager, and another system API (basically an alternative to libc). Which is useful for C++, but far from irreplaceable.
...except the current peak in demand is mostly driven by the build-out of AI capacity.
Both inference and training workloads are often bottlenecked on RAM speed, and trying to shoehorn older/slower memory tech in there would require a non-trivial amount of R&D to go into widening the memory bus on CPUs/GPUs/NPUs, which is unlikely to happen - those are in very high demand already.
Even if AI stuff really does need DDR5, there must be lots of other applications that would ideally use DDR5 but can make do with DDR3/4 if there's a big difference in price.
I mean, AI is currently hyped, so the most natural and logical assumption is that AI is the primary driver of these price increases. We need compensation from those AI corporations. They cost us too much.
Most cell OEMs will specify a safe discharge (low-threshold) voltage in the datasheet. 2.75V is quite common [1].
That being said, a system designer might choose a higher cut-off point, since:
1) the charge/discharge curve is S-shaped; there is very little energy in those last few millivolts;
2) the battery (protection) circuit and/or the battery itself probably has some small leakage current. However minuscule, over months or years on a shelf even a few nanoamps of leakage will add up (rough numbers sketched below). If you want the device to survive that, you have to factor this in, so that the resting cell voltage still stays above the safety threshold even after storage.
Also, "Li-ion" is quite a wide category. Don't use arbitrary voltage as a fast rule. Look up datasheet, or characterize actual cell you use. For some[2], disconnecting at 3.6V would mean leaving 50% of capacity unused. For other[3], that would be a reasonable, if somewhat conservative threshold.
Netfilter is plenty fast when configured sensibly. You'd probably want the script to populate a "hash:net" ipset instead, and have just one iptables rule:
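(A sketch; the set name "blocklist" and the example networks are placeholders.)

    # create the set once; the script then just adds/removes entries
    ipset create blocklist hash:net
    ipset add blocklist 203.0.113.0/24
    ipset add blocklist 198.51.100.0/24

    # a single rule matches everything in the set
    iptables -I INPUT -m set --match-set blocklist src -j DROP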
I never thought I’d live to see the day a link to Action got posted on HN but alas, it has arrived! Show those Dollar General losers across the pond how it’s really done
Action is awesome. Shopping there you quickly realize that almost everyone (except Action) is selling junk from China that they bought for pennies at huge markups.
lol. Available languages include 4 kinds of Dutch/German, 3 kinds of French, 2 kinds of Netherlands, 2 kinds of Swiss, 1 kind of Spanish, and no English. Really defined their market, I suppose.
Seems to cover most EU languages, so that seems to be the market, though Switzerland is not part of the EU. But the lack of an English option is weird as hell, as English is more and more the lingua franca in Europe.
(Swiss is not its own language, btw, but Italian, German, and French.)
How does that compare with Rust? You don't happen to have an example of a binary that's underway moving to Rust in Ubuntu-land as well? Curious to see, as I honestly don't know whether Rust is as nimble as C or not.
My impression is that Rust fares a bit better on RAM footprint, and about as badly on on-disk binary size. It's darn hard to compare apples to apples, though, given that it's a different language, so everything is a rewrite. One example:
Ubuntu 25.10's rust "coreutils" multicall binary: 10828088 bytes on disk, 7396 KB in RAM while doing "sleep".
Alpine 3.22's GNU "coreutils" multicall binary: 1057280 bytes on disk, 2320 KB in RAM while doing "sleep".
I don't have numbers, but Rust is also terrible for binary size. Large Rust binaries can be improved with various efforts, but it's not friendly by default. Rust focuses on runtime performance, high-level programming, and compile-time guarantees, but compile times and binary sizes are the drawbacks. Notably, Rust prefers static linking.
> "containers" broadly, including things like pipx, venv, or uv.
This statement makes no sense. First off, those are three separate tools, which do entirely different things.
The sort of "container" you seem to have in mind is a virtual environment. The standard library `venv` module provides the base-line support to create them. But there is really hardly anything to them. The required components are literally a symlink to Python, a brief folder hierarchy, and a five-or-so-line config file. Pipx and uv are (among other things) managers for these environments (which manage them for different use cases; pipx is essentially an end-user tool).
Virtual environments are nowhere near a proper "container" in terms of either complexity or overhead. There are people out there effectively simulating a whole new OS installation (and more) just to run some code (granted this is often important for security reasons, since some of the code running might not be fully trusted). A virtual environment is... just a place to install dependencies (and they do after all have to go somewhere), and a scheme for selecting which of the dependencies on local storage should be visible to the current process (and for allowing the process to find them).
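For instance, the whole thing can be created from the standard library (the ".venv" path is arbitrary; layout described is the POSIX one):

    # Create a bare virtual environment; `python -m venv .venv` does the
    # same from the command line.
    import venv

    venv.create(".venv", with_pip=True)
    # Result: .venv/bin/python pointing at the interpreter,
    # .venv/lib/pythonX.Y/site-packages/ for installed dependencies,
    # and a small pyvenv.cfg -- little more than what's described above.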
> This statement makes no sense. First off, those are three separate tools, which do entirely different things.
They are all various attempts at solving the same fundamental problem, which I broadly referred to as containerization (dependency isolation between applications). I avoided using the term "virtual environment" because I was not referring to venv exclusively.