
Larger storage structures are easier to (thermally) insulate. Because geometry.

But going with larger structures probably means aggregation (fewer of them are built, and further apart). Assuming homes to be heated are staying where they are, that requires longer pipes. Which are harder to insulate. Because geometry.
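
A rough sketch of the geometry argument (spherical stores assumed; the radii are just illustrative):

  import math

  def surface_per_stored_unit(radius_m):
      # Heat loss scales with surface area, stored energy with volume,
      # so relative loss goes as area/volume = 3/r for a sphere.
      area = 4 * math.pi * radius_m ** 2
      volume = (4 / 3) * math.pi * radius_m ** 3
      return area / volume

  for r in (2, 20):  # household-sized tank vs. a seasonal-scale store
      print(f"r = {r:2d} m -> heat-losing surface per unit stored: {surface_per_stored_unit(r):.2f}")
  # 10x the radius -> 10x less surface area per unit of stored heat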


I can't help but wonder how the efficiency compares to generating electricity, running that over wires, and having that run heat pumps.

The conversion to electricity loses energy, but I assume transmission losses are negligible, and modern heat pumps themselves are much more efficient.

And the average high and low in February are 26°F and 14°F according to Google, while modern heat pumps are more energy-efficient than resistive heating above roughly 0°F. So even at 14–26°F, the coefficient of performance should still be 2–3.
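
A rough sanity check on that range, taking the ideal (Carnot) COP and scaling it down; the 35 C supply temperature and the 0.4 real-world fraction are my assumptions, not measured data:

  # Carnot COP ceiling, scaled by an assumed real-world fraction.
  def est_cop(outdoor_f, supply_c=35.0, carnot_fraction=0.4):
      t_cold = (outdoor_f - 32) * 5 / 9 + 273.15  # outdoor air, in kelvin
      t_hot = supply_c + 273.15                   # temperature the heat is delivered at
      return carnot_fraction * t_hot / (t_hot - t_cold)

  for f in (14, 26):
      print(f"{f} F outdoors -> COP ~ {est_cop(f):.1f}")   # roughly 2.7 and 3.2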


> heat pumps themselves are much more efficient.

For electricity-to-heat conversion, heat pumps are indeed much more efficient than resistive heating, yes. About 4 times more efficient.

In absolute terms, though, that is still only about 50% of the ideal "Carnot cycle" efficiency.

https://en.wikipedia.org/wiki/Coefficient_of_performance

Similarly, heat-to-electricity conversion is about 50% efficient in the best case:

https://en.wikipedia.org/wiki/Thermal_efficiency

So, in your scenario (heat->electricity conversion, then transmission, then electricity->heat conversion), overall efficiency is going to be 50% * 50% = 25%, assuming no transmission losses and state-of-the-art conversion on both ends.

25% efficiency (a.k.a. 75% losses) is a pretty generous budget to work with. I guess one can cover a small town, or a city district, with heat pipes and still come out on top in terms of efficiency.


We've got lots of district heating systems around the world to use as examples. They only make sense in really dense areas: the thermal losses and the expense of maintaining them make them economically impractical anywhere other than a few core districts in urban centers... unless you have an excess of energy that you can't sell on the grid.


Geothermal heat is also not that practical in cities: you'd need so many wells so close together that you'd most likely cool the ground down enough over the winter that your efficiency tanks.


I don't understand; what am I missing? The heat pump increases efficiency by having a COP of 2-4, right? Assuming air-to-air, and being in, say, Denmark.

Heat (above 100C, say, burning garbage) to electricity: 50% (theoretical best case)

Electricity to heat (around 40C): 200%-400%

Net win?

The surplus energy comes from the air or the ground.

Yes, you cannot heat back up to the temperature you started with, but for underfloor heating 40C is plenty. And you can get a COP of 2 even for 60C shower water.
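
Putting rough numbers on that round trip (the 50% heat-to-electricity figure and the COP values are just the ballpark numbers from this thread):

  # Round trip: stored heat -> electricity -> low-grade heat via a heat pump.
  def delivered_heat(stored_kwh, heat_to_elec=0.5, cop=3.0):
      electricity = stored_kwh * heat_to_elec
      return electricity * cop   # the heat pump pulls the rest from the air or ground

  for cop in (2, 3, 4):
      print(f"COP {cop}: 1 kWh of stored heat -> {delivered_heat(1.0, cop=cop):.1f} kWh at ~40 C")
  # COP 2 breaks even; anything above that is a net win, with the extra
  # energy drawn from the outside air or ground.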


Yes, this is exactly why I asked. You need to include the COP in the calculations.


If the heat is stored at high temperature, but the demand (for heating buildings, say) is at lower temperature, it could make sense to generate power, then use that power to drive heat pumps. You could end up with more useful heat energy than you started with, possibly even if you didn't use the waste heat from the initial power generation cycle.

Alternatively, if you are going to deliver the heat at low temperature to a district heating system, you might as well use a topping cycle to extract some of the stored energy as work and use the waste heat, rather than taking the second-law loss of just directly downgrading the high-temperature heat to a lower temperature.

High temperature storage increases the energy stored per unit of storage mass. If the heating is resistive, you might as well store at as high a temperature as is practical.

Gas-fired heat pumps have been investigated for heating buildings; they'd have a COP > 1.

I'd be interested to know whether there are any cheap, small-scale external combustion engines available (steam? Stirling? ORC?).
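
A hedged sketch of that comparison (directly downgrading the stored heat vs. running a topping cycle whose work output drives a heat pump); the 35% topping-cycle efficiency and the COP of 3 are illustrative guesses:

  # Per kWh of stored high-temperature heat:
  eta_topping = 0.35   # assumed heat-to-work efficiency of the topping cycle
  cop = 3.0            # assumed heat-pump COP at district-loop temperature

  direct = 1.0                                     # just downgrade the heat
  topped = (1 - eta_topping) + eta_topping * cop   # waste heat + pumped heat

  print(f"direct downgrade: {direct:.2f} kWh of useful heat")
  print(f"topping cycle:    {topped:.2f} kWh of useful heat")   # ~1.70 kWh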


It can be anywhere between easy and impossible depending on the temperature difference. 200 C steam is easy with a commercially available turbine, but 50 C is really hard. There are things like Stirling engines that can capture waste heat, but they've never really been commercially viable.

There's no way around it: We have to respect entropy.


I think the big cost difference is the geothermal generators needed to convert the heat back into electricity. More of a cost issue than an efficiency one.


Existing district heating systems can be large.

I live in Denmark; the power plant that heats my home is about 30 km away. There are old power plants in between that can be fired up in an emergency.

Yes, building district heating systems that large is difficult and expensive. It wasn't built yesterday; it took more like 50 years of policy.


> bloated, buggy JavaScript framework

Isn't that the unfortunate status quo? At least the hard requirement for JS, that is.

Google's homepage started requiring this recently. The Linux kernel's git, OpenWrt, esp32.com, and many, many others now require it too, via the dreaded "Making sure you're not a bot" thing:

https://news.ycombinator.com/item?id=44962529

If anything, GitHub is (thankfully) behind the curve here - at least some basics still work without JS.


> So now I am wondering what will be available once the AI investment implodes.

Memory/RAM. See also:

https://news.ycombinator.com/item?id=45934619


Probably some usable GPUs too.


> Probably some usable GPUs too.

They will find some other bullshit to use them for. Just like the crypto-to-AI transition.


Also, didn't early Arduino heavily borrow from another open-source project, "Processing"?

Processing was/is graphics-centered, so that's where Arduino's term "sketch" comes from, if you ever wondered.

https://en.wikipedia.org/wiki/File:Processing_screen_shot.pn...

https://en.wikipedia.org/wiki/File:Arduino_IDE_-_Blink.png


"Wiring", which constitutes Arduino's primary API surface, was taken wholesale from Hernando Barragán's 2003 master's thesis project. It was a fork of processing for microcontrollers and was not written by the Arduino team: Massimo Banzi, David Cuartielles, David Mellis, Gianluca Martino, and Tom Igo.


I'll have to dig around; I think I still have one of the original Wiring boards, from around 2006 (maybe)?


Yeah, the software side is basically just an IDE, a build system, a package manager, and another system API (essentially an alternative to libc). Which is useful for C++, but far from irreplaceable.


> produce more mature technology ... DDR3/4

...except the current peak in demand is mostly driven by the build-out of AI capacity.

Both inference and training workloads are often bottlenecked on RAM speed, and trying to shoehorn older/slower memory tech in there would require a non-trivial amount of R&D to go into widening the memory bus on CPUs/GPUs/NPUs, which is unlikely to happen - those are in very high demand already.


Even if AI stuff really does need DDR5, there must be lots of other applications that would ideally use DDR5 but can make do with DDR3/4 if there's a big difference in price.


I mean, AI is currently hyped, so the most natural and logical assumption is that AI drives these prices up primarily. We need compensation from those AI corporations. They cost us too much.


It is still an assumption.


Most cell OEMs will specify a safe discharge cut-off (low-threshold) voltage in the datasheet. 2.75V is quite common [1].

That being said, a system designer might choose a higher cut-off point, since:

1) the charge/discharge curve is S-shaped, and there is very little energy in those last few millivolts;

2) the battery (protection) circuit, and/or the battery itself, probably has some small leakage current. However minuscule, over months or years on a shelf even a few nanoamps of leakage will add up. If you want the device to survive that, you have to factor it in, so that the resting cell voltage still stays above the safety threshold even after storage (rough numbers sketched below).

Also, "Li-ion" is quite a wide category. Don't use arbitrary voltage as a fast rule. Look up datasheet, or characterize actual cell you use. For some[2], disconnecting at 3.6V would mean leaving 50% of capacity unused. For other[3], that would be a reasonable, if somewhat conservative threshold.

[1] https://docs.rs-online.com/080b/A700000007848112.pdf

[2] https://www.murata.com/-/media/webrenewal/products/batteries...

[3] https://ntrs.nasa.gov/api/citations/20140005830/downloads/20... (page 4)
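
A back-of-envelope sketch of point 2 (all the numbers here are made-up examples, not taken from the datasheets above):

  # How much charge a small standby/leakage current eats during shelf storage.
  def shelf_drain_mah(leakage_ua, months):
      hours = months * 30 * 24
      return leakage_ua * hours / 1000.0   # uA * h -> mAh

  capacity_mah = 500                        # hypothetical small cell
  for leakage_ua in (0.1, 1.0, 5.0):        # 100 nA .. 5 uA of standby draw
      lost = shelf_drain_mah(leakage_ua, months=12)
      print(f"{leakage_ua:4.1f} uA for a year: {lost:5.1f} mAh "
            f"(~{100 * lost / capacity_mah:.1f}% of a {capacity_mah} mAh cell)")
  # Even sub-microamp draws add up over a year on the shelf, hence the
  # extra margin above the absolute safety threshold.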


> eBPF & XDP would be much faster than netfilter.

Netfilter is plenty fast when configured sensibly. You'd probably want the script to populate a "hash:net" ipset instead, and have just one iptables rule:

  ipset create geoblock hash:net
  iptables -A INPUT \
    -m set --match-set geoblock src \
    -j DROP
(where "geoblock" is the aforementioned set, which the script creates and fills with CIDR blocks)


> 2$ and 15$

That estimate is way too high. More like 90 eurocents (~$1) for the whole thing, assembled. And that's the retail price:

https://www.action.com/de-de/search/?q=lesebrillen


I never thought I’d live to see the day a link to Action got posted on HN but alas, it has arrived! Show those Dollar General losers across the pond how it’s really done


Action is awesome. Shopping there, you quickly realize that almost everyone (except Action) is selling junk from China that they bought for pennies, at huge markups.


lol. Available languages include 4 kinds of Dutch/German, 3 kinds of French, 2 kinds of Netherlands, 2 kinds of Swiss, 1 kind of Spanish, and no English. Really defined their market, I suppose.


Seems to cover most EU languages, so that seems to be the market - though Switzerland is not part of the EU. But having no English option is weird as hell, as English is more and more the lingua franca in Europe.

(Swiss is not its own language, btw - it's Italian, German, and French.)


> disadvantages are speed, and garbage collection.

And size. About a 10x increase, both on disk and in memory:

  $  stat -c '%s %n' {/opt/fil,}/bin/bash
  15299472 /opt/fil/bin/bash
   1446024 /bin/bash

  $ ps -eo rss,cmd | grep /bash
  34772 /opt/fil/bin/bash
   4256 /bin/bash


How does that compare with Rust? You don't happen to have an example of a binary being moved to Rust in Ubuntu-land as well? Curious to see, as I honestly don't know whether Rust is as nimble as C or not.


My impression is that Rust fares a bit better on RAM footprint, and about as badly on on-disk binary size. It's darn hard to compare apples to apples, though, given it's a different language, so everything is a rewrite. One example:

Ubuntu 25.10's rust "coreutils" multicall binary: 10828088 bytes on disk, 7396 KB in RAM while doing "sleep".

Alpine 3.22's GNU "coreutils" multicall binary: 1057280 bytes on disk, 2320 KB in RAM while doing "sleep".


I don't have numbers, but Rust is also terrible for binary size. Large Rust binaries can be improved with various efforts, but it's not friendly by default. Rust focuses on runtime performance, high-level programming, and compile-time guarantees, but compile times and binary sizes are the drawback. Notably, Rust prefers static linking.


> Please show me a project where you believe you "effectively require containers" just to run the code

I guess GP meant "containers" broadly, including things like pipx, venv, or uv. Those are, effectively, required since PEP 668:

https://stackoverflow.com/questions/75608323/how-do-i-solve-...


> "containers" broadly, including things like pipx, venv, or uv.

This statement makes no sense. First off, those are three separate tools, which do entirely different things.

The sort of "container" you seem to have in mind is a virtual environment. The standard library `venv` module provides the base-line support to create them. But there is really hardly anything to them. The required components are literally a symlink to Python, a brief folder hierarchy, and a five-or-so-line config file. Pipx and uv are (among other things) managers for these environments (which manage them for different use cases; pipx is essentially an end-user tool).

Virtual environments are nowhere near a proper "container" in terms of either complexity or overhead. There are people out there effectively simulating a whole new OS installation (and more) just to run some code (granted this is often important for security reasons, since some of the code running might not be fully trusted). A virtual environment is... just a place to install dependencies (and they do after all have to go somewhere), and a scheme for selecting which of the dependencies on local storage should be visible to the current process (and for allowing the process to find them).
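
For what it's worth, a minimal sketch of just how little there is to one, using only the stdlib module mentioned above (the directory name is an arbitrary example):

  import venv
  from pathlib import Path

  env_dir = Path("demo-env")
  venv.create(env_dir, with_pip=True)           # create the environment

  print((env_dir / "pyvenv.cfg").read_text())   # the five-or-so-line config file
  for entry in sorted(env_dir.iterdir()):
      print(entry)                              # bin/ (Scripts/ on Windows), include/, lib/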


> This statement makes no sense. First off, those are three separate tools, which do entirely different things.

They are all various attempts at solving the same fundamental problem, which I broadly referred to as containerization (dependency isolation between applications). I avoided using the term "virtual environment" because I was not referring to venv exclusively.

