Hacker News | axlee's comments

Can't they point these DNS records to working servers in the meantime to avoid degradation?


My understanding is that people who connect specifically to the NIST ensemble in Boulder (often via a direct fiber hookup rather than over the internet) are doing so because they are running a scientific experiment that relies on that specific clock. When your use case is sensitive enough, it's not directly interchangeable with other clocks.

Everyone else is already connecting to load-balanced services that rotate through many servers, or has set up their own load balancing / fallbacks. The mistakenly hardcoded configurations should probably be shaken loose anyway.


If you use a general-purpose hostname like time.nist.gov, that should resolve to an operational server, and it makes sense to adjust it during an incident. If you use a specific server hostname like time-a-b.nist.gov, that should resolve to that specific server and you're expected to have multiple hosts specified; it doesn't make sense to adjust it during an incident, IMHO. You wanted Boulder, you're getting Boulder, faults and all.
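
For the common, non-Boulder-specific case, a client config along these lines spreads the risk across several hosts instead of pinning everything to one machine. This is only a sketch assuming a chrony-based client; the third hostname is a placeholder, and real names should come from NIST's published server list:

  # /etc/chrony/chrony.conf (sketch only)
  server time.nist.gov iburst          # general-purpose name, resolved across NIST servers
  server time-a-b.nist.gov iburst      # a specific server, as discussed above
  server time-x-y.example.gov iburst   # placeholder for another specific host

With several sources listed, one server (or one site) going dark degrades redundancy rather than taking the client's timekeeping down with it.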


>It becomes a pain point when the IT team never heard of docker

Where do you work? Is that even possible in 2025?


'Corp IT' in a huge org is typically all outsourced MCSEs who are seemingly ignorant of every piece of technology outside of Azure.

Or so it seems to me whenever I have to deal with them. We even ended up with Microsoft Defender on our corp Macs... :|


It's absolutely possible. We've also had them unaware of GitHub, and had them label Amazon S3 as a risk specifically because it wasn't Microsoft.

There is no bottom to the barrel, and incompetence and insensitivity can rise quite high in some cases.


I work at a cool place now that is well aware of it, but in 2023 I worked at a very large insurance company with over a thousand people in IT. Some of the gatekeepers were not aware of Docker. Luckily another team had set up OpenShift, but then the approval process for using it was a nightmare.


Apparently they work in the past...


I'd recommend changing the name; Nitro is already a semi-popular server engine for Node.js: https://nitro.build/


Any well-known generic word is very likely to already have been used by a bunch of projects, some of them already prominent. By now, the best project name is a pronounceable but unique string, for ease of search engine use. Ironically, "systemd" is a good name in this regard, as are "runit" or even "s6".


I use tiny init systems regularly in AWS Nitro Enclaves. Having the enclave and init system both named nitro is not ideal.


Dinit, runit, tini -- all avoid the name clash :)


> Any well-known generic word is very likely to already have been used by a bunch of projects,

Are you sure? There are lots of words, and not so many projects that use words like these as their names.

Of the 118,179 packages I see on this Ubuntu 18.04 system, I can roughly ask how many have names that are dictionary (wamerican) words:

  comm -12 <(apt-cache dumpavail | awk -F': ' '/^Package:/{sub(/^lib/,"",$2); print $2}' | sort -u) /usr/share/dict/words | wc -l
This gives 820 (or about 1000 if you allow uppercase). Not so scientific, but I think a reasonable starting point.


nitronit, obviously


I think I would have gone with nitr0.


Wouldn't n1tro make more sense if it's going to run as PID 1?


One of the hard things in computer science: naming things.


Using this for testing instead of regular Playwright must multiply the cost (and runtime) by 10000x, doesn't it? At which point do the benefits outweigh the costs?


I think it depends a lot on how much you value your own time, since it's quite time-consuming to write and update Playwright scripts. It's going to save you developer hours to write automations using natural language rather than messing around with and fixing selectors. It's also able to handle tasks that Playwright wouldn't be able to do at all - like extracting structured data from a messy/ambiguous DOM and adapting automatically to changing situations.

You can also use cheaper models depending on your needs, for example Qwen 2.5 VL 72B is pretty affordable and works pretty well for most situations.


But we can use an LLM to write that script and give the agent access to a browser to find DOM selectors etc. Then we have a stable script where, if needed, we can manually fix any LLM bugs just once…? I'm sure there are use cases with messy selectors as you say, but to me it feels like most cases are better covered by generating scripts.


Yeah, we've thought about this approach a lot - but the problem is that if your final program is a brittle script, you're going to need a way to fix it again often - and then you're still depending on recurrently using LLMs/agents. So we think it's better to have the program itself be resilient to change instead of you/your LLM assistant having to constantly ensure the program is working.


I wonder if a nice middle ground would be:

- recording the Playwright script behind the scenes and storing it
- trying that as a "happy path" first attempt to see if it passes
- if it doesn't pass, rebuilding it with the AI and vision models

Best of both worlds. The Playwright script is more of a cache than a test.


I think the difficulty with this approach is (1) you want a good "lookup" mechanism - given a task, how do you know what cache should be loaded? you can do a simple string lookup based on the task content, but when the task might include parameters or data, or be a part of a bigger workflow, it gets trickier. (2) you need a good way to detect when to adapt / fall back to the LLM. When the cache is only a playwright script, it can be difficult to know when it falls out of the existing trajectory. You can check for selector timeouts and things, but you might be missing a lot of false negatives.
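
For what it's worth, here's a minimal sketch of that shape in Python; run_task, llm_agent and save_script are hypothetical names, not any real framework's API. It uses a naive string lookup for (1) and treats Playwright timeouts/errors as the (admittedly crude) staleness signal for (2):

  # Sketch only: cached deterministic script first, LLM agent as fallback.
  from playwright.sync_api import sync_playwright, Error as PWError, TimeoutError as PWTimeout

  def run_task(task_key, cache, llm_agent, save_script):
      with sync_playwright() as p:
          browser = p.chromium.launch()
          page = browser.new_page()
          try:
              script = cache.get(task_key)              # (1) naive lookup by task string
              if script is not None:
                  try:
                      return script(page)               # fast, deterministic happy path
                  except (PWTimeout, PWError):
                      pass                              # (2) assume the cached script is stale
              result, new_script = llm_agent(page, task_key)  # slow, adaptive path
              save_script(task_key, new_script)         # refresh the cache for next time
              return result
          finally:
              browser.close()

As noted, the false-negative problem remains: a selector can still match the wrong element without raising, so the cached path can "pass" while doing the wrong thing.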


Are you sure? Couldn't you just go back to the LLM if the script breaks? Pages change, but not that often in general.

It seems like a hybrid approach would scale better and be significantly cheaper.


We do believe in a hybrid approach where a fast/deterministic representation is saved - but we think there is a more seamless way where the framework itself is high level and manages these details by caching the underlying actions so they can be re-run.


I think you are overstating. Just use Playwright codegen. No need for manual test writing, or at least 90% can be generated. Still 10x faster and cheaper.


Especially back then, in the era of vinyl and cassettes (browsing music wasn't exactly as easy as pressing "play"), it shows the amazingly deep musical culture of these artists. The samples they use are from all over the place, and their songs are often built around a handful of seconds from obscure B-sides.


Not about Daft Punk, but...

> The samples they use are from all over the place

> built around a handful of seconds

Have you seen/heard this?

https://www.youtube.com/results?search_query=mondovision

The original was at www.giovannisample.com, which has since disappeared...


What's your stack? I have the complete opposite experience. LLMs are amazing at writing idiomatic code, less so at dealing with esoteric use cases.

And very often, if the LLM produces a poopoo, asking it to fix it again works just well enough.


> asking it to fix it again works just well enough.

I've yet to encounter any LLM, from ChatGPT to Cursor, that doesn't choke and start to repeat itself, claim it changed code when it didn't, or get stuck changing something back and forth repeatedly inside of 10-20 minutes. Like just a handful of exchanges and it's worthless. Are people who make this workflow effective summarizing and creating a fresh prompt every 5 minutes or something?


One of the most important skills to develop when using LLMs is learning how to manage your context. If an LLM starts misbehaving or making repeated mistakes, start a fresh conversation and paste in just the working pieces that are needed to continue.

I estimate a sizable portion of my successful LLM coding sessions included at least a few resets of this nature.


> using LLMs is learning how to manage your context.

This is the most important thing in my opinion. This is why I switched to showing tokens in my chat app.

https://beta.gitsense.com/?chat=b8c4b221-55e5-4ed6-860e-12f0...

I treat tokens like the tachometer for a car's engine. The higher you go, the more gas you will consume, and the greater the chance you will blow up your engine. Different LLMs will have different redlines and the more tokens you have, the more costly every conversation will become and the greater the chance it will just start spitting gibberish.

So far, my redline for all models is 25,000 tokens, but I really do not want to go above 20,000. If I hit 16,000 tokens, I will start to think about summarizing the conversation and starting a new one based on the summary.

The initial token count is also important in my opinion. If you are trying to solve a complex problem that is not well known by the LLM and you are only starting with 1,000 or fewer tokens, you will almost certainly not get a good answer. I personally think 7,000 to 16,000 is the sweet spot. For most problems, I won't have the LLM generate any code until I reach about 7,000, since that means it has enough files in context to properly take a shot at producing code.
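
A rough sketch of that kind of token tachometer in Python; the thresholds are the ones above, and tiktoken's cl100k_base is just one example tokenizer rather than every model's:

  # Sketch: warn/redline thresholds for conversation size, in tokens.
  import tiktoken

  WARN_AT = 16_000    # start planning a summary + fresh conversation
  REDLINE = 25_000    # don't go past this

  enc = tiktoken.get_encoding("cl100k_base")

  def conversation_tokens(messages):
      # messages: list of {"role": ..., "content": ...} dicts
      return sum(len(enc.encode(m["content"])) for m in messages)

  def check(messages):
      n = conversation_tokens(messages)
      if n >= REDLINE:
          return "redline: summarize and restart now"
      if n >= WARN_AT:
          return "warning: plan a summary soon"
      return "ok"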


I'm doing ok using the latest Gemini which is (apparently) ok with 1 million tokens.


All that fiddling and copy-pasting takes me longer than just writing the code most of the time.


And for any project that's been around long enough, you find yourself mostly copy-pasting or searching for the one line you have to edit.


Exactly, while not learning anything along the way.


Only if you assume one is blindly copy/pasting without reading anything, or is already a domain expert. Otherwise you’ve absolutely got the ability to learn from the process, but it’s an active process you’ve got to engage with. Hell, ask questions along the way that interest you, as you would any other teacher. Just verify the important bits of course.


No, learning means failing, scratching your head, banging your head against the wall.

Learning takes time.


I’d agree that’s one definition of learning, but there exists entire subsets of learning that don’t require you to be stuck on a problem. You can pick up simple, and related concepts without first needing to struggle with them. Incrementally building on those moments is as true a form of learning as any other I’d argue. I’d go as far as saying you can also have the moments you’re describing while using an LLM, again with intentionality, not passively.


Hm, I use LLMs almost daily, and I've never had it say it changed code and not do it. If anything, they will sometimes try to "improve" parts of the code I didn't ask them to modify. Most times I don't mind, and if I do, it's usually a quick edit to say "leave that bit alone" and resubmit.

> Are people who make this workflow effective summarizing and creating a fresh prompt every 5 minutes or something?

I work on one small problem at a time, only following up if I need an update or change on the same block of code (or something very relevant). Most conversations are fewer than five prompt/response pairs, usually one to three. If the LLM gets something wrong, I edit my prompt to explain what I want better, or to tell it not to take a specific approach, rather than correcting it in a reply. It gets a little messy otherwise, and the AI starts to trip up on its own past mistakes.

If I move on to a different (sub)task, I start a new conversation. I have a brief overview of my project in the README or some other file and include that in the prompt for more context, along with a tree view of the repository and the file I want edited.

I am not a software engineer and I often need things explained, which I tell the LLM in a custom system prompt. I also include a few additional instructions that suit my workflow, like asking it to tell me if it needs another file or documentation, if it doesn't know something, etc.


Creating a new prompt. Sometimes it can go for a while without, but the first response (with crafted context) is generally the best. Having context from the earlier conversation has its uses though.


The LLM you choose to work with in Cursor makes a big difference, too. I'm a fan of Claude 3.5 Sonnet.


In my experience you have to tell it what to fix. Sometimes how as well.


Thus, a problem in search of a solution. Who has asked for that, exactly? It's been more than a decade and still no one has found an actually useful real-world application for blockchain besides gambling and money laundering.


I'm just a newb passing by, but a self-sustained, decentralized 24/7 logic+network layer could be nice for stupid computational tasks like tracking physical items or updating multi-source (different services, different companies) data easily, which are often done at the human level today (even if 99% of the time it's not necessary and very wasteful). Again, I'm not knowledgeable, but when thinking of blockchains I always have this in mind.


It can't track physical items, because you can always just lie about the data. Company A says "I put the jewels in box 1352", and Company B opens the box to receive a bunch of dish towels. Just because it's cryptographically verified, doesn't mean it has any semantic value.

If Company A and B already trust each other not to put bad data on the chain, then they also trust each other to just send emails back and forth, and you don't need a cryptographically verified blockchain. It secures the least important part of the process.


But how would you do that with ~all companies? It would be standardization and heterogeneity hell.


Why would the addition of blockchains help standardize things?


They standardize transport, verification, and encryption at least, and consensus too, IIUC.


No, they don't do any of that. There are 100 different blockchain implementations at least, and none of them are standardized across any of those metrics. Everything from the consensus mechanism to the packet format to the bytecode is different. The standard way to interop between the Tezos blockchain and the Polygon blockchain is to create a token on both sides and attach a tag saying "don't touch me I'm actually somewhere else".

The only way to standardize ~all companies would be to have them all run the same exact ledger system, and if it was easy enough to make them do that, it would be easy enough to have them standardize a different reporting system for the same purpose.


I meant each company using a single chain to host events and logic, not a multichain setup (yeah, per your input, that would be brittle at best).


IBM tried to do that (asset tracking for marine shipping) with some rather neat technology.

It sank like the Titanic. Nobody was interested in it; turns out that companies are fine with emailing each other Excel sheets.


Yeah, I think I remember stories about that. That said, a failed attempt doesn't mean the future won't be different (see the evolution of programming languages as a comparison point). Nor does it mean that a decentralized event chain store is the solution... just saying it kinda felt like an interesting "small logic, global network" application layer, which felt way more efficient than the litany of different B2B apps.


Now that's the million-dollar question. Virtually no one.


Note: while this is a 2019 article, the Getty Center has not burned during the 2025 Palisades fire.


The Getty Villa was far more threatened by the Palisades Fire than the Center was.


The fire is still burning, is only partially contained, has gotten close to the Center, and the winds are forecast to pick up again tomorrow. So there's still a chance it will be at risk.

Now, they've had days to prepare for this, and apparently have plenty of contingencies in place, but this is still relevant because the fire could get there.


No, but it's been inside the evacuation area for a while:

https://www.google.com/maps/@34.0876669,-118.5930521,12z/dat...

It's also relevant because the Getty Center has been rather smug about how awesome their fire protection is.


> It's also relevant because the Getty Center has been rather smug about how awesome their fire protection is.

I think your "smug" comment is unwarranted. They put a ton of solid engineering thought, money and planning into protecting the center from fire. Nothing is 100% but I think their confidence is warranted.

Related: the Getty Villa, right in the middle of the Palisades, also put a lot of thought, planning, and money into fire prevention, and despite being directly in the path of the Palisades firestorm, no structures on the Villa burned.


They are being really smug, talking about designs and systems that mean nothing when you've got temperatures outside the building hot enough to melt aluminum engine blocks, infrared radiation intense enough to set fire to things hundreds of feet away - as well as very low oxygen and very high CO/CO2 levels along with dozens of different toxic gases - none of which HEPA filtration will do squat about.

A "stone facade" doesn't stop +1200 degree temperatures, especially when everything on the outside will undergo thermal expansion and at the least open up gaps. Steel's linear expansion is about 0.0012% per degree C, so heating to 300 C means roughly 0.3-0.4% expansion - small as a percentage, but over a long facade that's enough to open joints. And then there are the huge expanses of windows, which will shatter or pop out - and even if they don't, the intense IR radiation will by and large go through them.

People don't realize just how insanely hot wildfires get. Go look at the pictures of neighborhoods that have burnt - they're leveled, with the exception of some chimneys, steel girders for houses that have them (most these days don't; builders have been using wood-composite beams), iron fences, and car bodies. Everything else is burned or melted.

There isn't a building in the world that will stop the megawatts of heat energy per square meter wildfires can generate in IR radiation.


>They are being really smug...

Just out of sheer curiosity: what kind of personal/professional background/experience do you have that would qualify you to certify their emergency systems as functionally ineffective and their messaging as "smug"?


Yes, wildfires get incredibly hot. But the fires essentially always travel by embers or direct contact with fire - your comments about IR radiation seem to imply that IR alone will cause ignition, which is rarely if ever the case.

Here is a story about a bunch of people who survived the Camp fire in Paradise, CA, surrounded by the raging inferno, by staying in the middle of a parking lot: https://www.firehouse.com/operations-training/wildland/news/...


It is rarely the case, indeed.

However, in incidents like e.g. the Fort McMurray fire (Alberta, 2016), this is precisely what happens. One property with a heavy fuel load fanned by strong winds (i.e. plentiful O2 supply) gets hot enough that it causes ignition in a neighboring exposure.

In Ft. McMurray, there were documented cases of an entire 4+ bedroom house being reduced to ash in roughly 5 minutes. The heat generated by that process is easily sufficient to cause ignition in buildings a typical suburban lot apart.


Even in that case I'm sure a huge part of the heat transfer is convection, especially with the high winds.

The comment I was replying to was talking about IR igniting things by shining through windows, which I believe is mostly bullshit.


Stone doesn't burn, and neither does concrete. Glass melts. Steel evidently didn't burn at the temperatures these fires got to. So it makes sense that a building made of concrete and steel with stone facades and fiberglass insulation would survive the fire, especially after clearing out and hydrating the surrounding landscape so it wouldn't have the density or flammability of a forest. The Getty Center may have gotten lucky, but they might have also earned their "luck" through investment and planning.


Never have I seen a better case of projection on HN. You come off as so fucking smug yourself.


Everyone with a fire-hardened house should be feeling good. If all Pacific Palisades houses were fire-hardened, the fire would have burned vegetation but few houses.

Even modest fire hardening would help. If a wood-frame house burns, it is a danger to all nearby houses. Hardening reduces the chain reaction potential.


There's just not enough talk about this. This is the actual failure of government: not focusing enough on surviving fires.


How is it a failure of government? The people of these areas reasonably have not wanted to spend large amounts of their money to prevent unusual disasters like this one. Do you spend that in your community?


The cost would be enormous. It's in an earthquake zone and it needs to withstand that. There are already way too few houses for people at a reasonable cost. They've already evacuated, and only a handful of lives have been lost among the people that stayed behind. These aren't normal Santa Ana winds; they're the hardest winds I've seen here in decades... probably 100 years or more. People are the important thing here.


How effective is fire-hardening in these conditions? And how much would it cost? And finally, does anyone know how fire-hardened structures have actually performed?


The difference between this concept and, let's say, Descartes' evil demon is not the philosophical skepticism but its explicit grounding in physics and thermodynamics. It basically attempts to answer the question "Where would that evil demon come from?". It materializes Descartes' thought experiment and shows that it could actually happen within the confines of our scientific knowledge, unlike malicious demons.


I think it's the other way around.

Descartes/Hume are saying that to even bootstrap our understanding of reality, we have a hard dependency on sensory perception. (I mention Hume because he points out that even Descartes' singular ground-truth can't lead anywhere else without linking sensory perception back into the mix.) And when I say "nearly anything" it includes our notions about the laws of physics. (Which, btw, cannot be derived from Descartes' singular ground-truth.)

At best, BB is a restatement of what I wrote with the philosophically irrelevant detail that the BB hypothesis relies on all the same laws of physics we have in common with our universe. But I imagine it's really meant commonly as a weaker claim-- one which takes the laws of physics as epistemological ground-truth to derive an ambiguity about the nature of our reality within that universe.

My speculation is that science-minded people think BB is the most potent thought experiment for the same reason non-musicians might think Pachelbel's Canon in D is the best ever-- they've heard it a lot at places filled with people they admire.


As someone familiar with both Descartes and Boltzmann, I will chime in and say that you're approaching this from an angle of contempt and defensiveness, imagining the Boltzmann brain as an inferior subset of or analogy to long-studied philosophical and metaphysical issues such as ground truth or the evil demon. Instead, I implore you to give benefit of the doubt and attempt to understand the differences.

The Boltzmann brain is not making some grand statement on ground truth or perception. It's not about intrinsics or perception at all. Boltzmann discussed how the universe, even in a state of 100% thermodynamic equilibrium, may spontaneously end up in a state of non-equilibrium, reducing entropy. The Boltzmann brain was a concept developed by others in response to this theory.

In fact, many theories are such that a Boltzmann brain actually has a higher chance of occurring than all of the billions of years of coincidences which led up to me typing this message out to you.

It's purely an argument of entropy and spontaneous symmetry breaking. The sensory and perceptive states described by the Boltzmann brain only serve to illustrate the point, and are not the main subject of the problem.

Don't forget that philosophy was the first science, and viewing people as "science-minded" (and therefore not philosophy-minded) hurts the scientific legitimacy of philosophy, and also only serves to exclude. Many scientists also have deep philosophical grounding. Many also have deep musical grounding. You yourself are exhibiting a lack of domain knowledge regarding the Boltzmann brain, filtering it through your "philosophy-minded" perspective, so maybe we can dispense with these kinds of judgements and focus on the core argument.

The world is not so black and white, and there is no false dichotomy between people who are "science-minded" and "philosophy-minded". Both follow the same exact scientific method of inquiry.

Additionally, "the same reason non-musicians might think Pachelbel's Canon in D is the best ever" comes across as a strawman. Some people might prefer that piece overall, but it's not a crime for someone to enjoy it. But relatively few probably consider it "the best ever".

But also, who cares? Why judge? I have a lot of favorite modern pieces which are technically inferior to most classical pieces. But as a musician, not just an engineer, I consider sensory evocation to be equally as important as technicality.


Sounds about right, except Pachelbel's Canon actually is the best song ever for non-musicians.

It showcases harmony and contrasting lines in the simplest, punchiest, most pleasing way. It's fundamentally "the good stuff", and untrained ears slurp it up like babies with pats of butter.


Probably no physicist thinks that Boltzmann brains are a potent thought experiment. BBs are shorthand for a problem in combining cosmology and statistical mechanics in a way in which there is a hierarchy of vastly improbable configurations fluctuating into existence out of thermal equilibrium.

Discounting Brain-in-a-Vat (because it's cognitively useless), the problem in a nutshell is that we inhabit a universe which appears (a) to have had a hot dense phase in approximate thermal equilibrium, (b) a future sparse phase in approximate thermal equilibrium, and (c) a whole bunch of structure in between those. Is the structure a fluctuation in (a)? Could (a) be a fluctuation in (b)? These are reasonable questions about which one can ask: is there astrophysical or laboratory evidence available to determine the answers?

One problem is that if (a) (early conditions) is a fluctuation in (b) (late conditions), wherein (a) simply evolves into (c) (complex structure with galaxies and so on) and then (b), what mechanisms could suppress simpler configurations than (a)?

A huge huge huge number of low-entropy Boltzmann brains fluctuating into existence is vastly more likely (on Boltzmann entropy grounds) than an early very-very-very-very-very-low-entropy universe compatible with the standard model of particle physics and the cosmic microwave background and galaxies all over the sky, in which there is a nonzero chance of human brains arising via evolutionary processes.

A tiny change in a Boltzmann brain as it fluctuates into existence could lead to a significant loss of false memory; a tiny change in a maximally-hot maximally-dense phase in the early universe could lead to completely different chemical elements (or none at all).

So Boltzmann brains highlight some metaphysical ratholes one can fall into with respect to the fine-tuning of the (a) state, and have provoked work on how (a) could be so generic an outcome that the evolution of (a)->(c) is "unsurprising". The hard part is coming up with observables which usefully compare a given hypothetical solution and our own sky.


I would question the idea of bootstrapping; rather, sensory perception is a QC/QA function that confirms the brain's construct of reality.

The only reason we can generally agree on the nature of any object is our common evolution and generally the same sensory ability. (but that isn't universal and differs widely across species)


Yes, and the idea of Boltzmann brains depends for its credence on those physical principles used to derive it. And those principles in turn depend on the reality (or at least reasonable reliability) of our memories and the whole history of experimental and theoretical development leading to them. So trying to use it as an argument for philosophical scepticism, or to argue that it's a probable scenario, would be self-defeating, denying its own evidence.

It does give a technically detailed construction for how such a scenario might come about though, as you say, so it can be interesting to think about.


Sure. But as I said to another respondent, it's a much weaker claim than those which already exist in the philosophical literature.

That doesn't matter if BB is just a bridge for physicists to a deeper understanding of philosophy. But I have a sneaking suspicion that BB is part of a basket of ideas in a kind of bubble category of "Philosophy for the Scientist." Similar to those "101 Jokes for Golfers" books - I mean, fine, but if those are the only jokes you know you're probably insufferable at parties.


Well, the BB isn’t necessarily a serious proposal.

Rather, when you do the math on all the billion/trillion-to-one shots that are definitely happening, every second of every day, in physics - and look around at the universe as it appears to exist now and how many of those shots had to play out in a specific way - and then do the math on the probability of a BB spontaneously existing, then it's really absurd that we aren't somehow BBs.


I don't know, I've always seen Boltzmann brains like I see Schröedinger's cat. I think it was intended to show the absurdity of certain philosophical interpretations of the data available, but somehow got misapprehended as an argument defending those interpretations. Namely, that randomness could somehow spontaneously lead to ordered complexity.


You seem to have it exactly backwards: we have 200 years of practical evidence in favour of modern fluctuation theory thanks to things like steam turbines. Structure does spontaneously appear in a gas in thermal equilibrium; one can show this in a classroom experiment.

Landau and Lifshitz vol 5 <https://en.wikipedia.org/wiki/Course_of_Theoretical_Physics#...> is the standard textbook. There's an older copy on the Internet Archive <https://archive.org/details/landau-and-lifshitz-physics-text...>. (Having the background of L&L vol 9 makes a classroom demonstration even easier: a small handful of electronic parts for a resistive-capacitive low-voltage DC electric dipole, a decent oscilloscope or other apparatus to measure and record fluctuating voltage, and a thermometer.)

The probability of an out-of-equilibrium structure spontaneously fluctuating (briefly!) into existence depends on the complexity of the structure, and Boltzmann brains are much much much less complex than the whole Earth, solar system, Milky Way, or the early universe in which these structures' precursors originated. So therefore any theory compatible with statistical physics in which the early low-entropy state of the universe is a fluctuation in a higher-entropy "gas" is imperiled.

For the philosopher or two who wrote comments above in this thread, https://plato.stanford.edu/entries/statphys-Boltzmann/ is probably of interest.


Yes I am familiar with the physics behind what I said. Quoting a university textbook on a philosophical argument is an interesting choice.

I have studied and reflected on the subject and I really think Boltzman brains and Schröedinger cat are thought experiments that go way over the head of their pop sci/undergrad classroom interpretations.


> Schröedinger [sic]

Schrödinger or Schroedinger: it is just you on HN <https://duckduckgo.com/?q=%22Schr%C3%B6edinger%22+site%3Anew...> who has used "öe", multiple times.

Boltzmann brains have nothing to do with Schrödinger: statistical mechanics works fully classically, and nobody treats Boltzmann brains in a quantum mechanical way because a Boltzmann brain is an unembodied ephemeral human brain. Natural human brains have a history of being warm and electrically noisy (any quantum features decohere faster than thought), while fluctuated-out-of-equilibrium human brains aren't around long enough to have their temperature measured, nor to produce much electrical noise.

A quantum-mechanical Boltzmann brain emerging from a gas of photons in (cold) equilibrium in which fluctuations take photons out of equilibrium and into the farrrr UV is going to be on the short-lived end of Boltzmann brains: things like annihilations and complicated decay chains will dissolve them away quickly. Which is the point. Boltzmann brains, unlike the bowl of petunias in The Hitchhiker's Guide to the Galaxy, should not have time to compose poetry.

Very hot thermal radiation is what destroys classical Boltzmann brains. They sort-of are a cool spot in an extremely violently hot bubble in an otherwise cold (colder than brains!) gas of almost everywhere uniform temperature.

> Yes I am familiar with the physics ...

> Quoting a university [physics] textbook on a philosophical argument

... which is philosophizing about actual physics ...

> is an interesting choice.

Well, what textbooks or other sources do you rely upon when philosophizing about fluctuation theory? What have you read or taught from?

> pop sci/undergrad classroom interpretations

?


I understand the concepts, thanks for explaining them again. They have things in common because they are both thought experiments that I think most people miss the point of.

Undergrad textbooks are not exactly cutting edge philosophy-wise, and giving me a "?" for calling the use of one in a philosophical argument a pop sci/undergrad classroom interpretation makes me doubt your reading comprehension entirely.


Could be a Swede who meant "GDP" (BNP = BruttoNationalProdukten = Gross Domestic Product).


More plausible than "British National Party".

Does it raise GDP, though? I would have thought a more accurate thing to say is it raises the global temperature.


>More plausible than "British National Party".

Maybe they were using Grok :-)


Two units of work have been carried out, so yes it raises GDP.


GDP isn't about units of work, it's about the final product. So using two units of work instead of one to produce the same thing shouldn't affect GDP. But perhaps you are implying GDP is not correctly calculated?


That depends on how you're calculating GDP. If you are summing up all expenditures or incomes, the paid work done by each AI would be counted. If you're counting production, meaning the value added, it probably wouldn't count since digging a hole only to fill it back in created no value.
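
A toy illustration with invented numbers, just to make the two readings concrete:

  # Two AIs are each paid 50 for the dig/fill job (numbers purely illustrative).
  wage_dig, wage_fill = 50, 50

  # Expenditure/income view: the payments for the labour are counted,
  # regardless of what that labour ended up producing.
  gdp_expenditure = wage_dig + wage_fill   # +100

  # Strict value-added view: the final output (a filled-in hole) is worth 0,
  # so nothing is added even though money changed hands.
  gdp_value_added = 0

  print(gdp_expenditure, gdp_value_added)  # 100 0

Which reading is right is exactly what the rest of this thread argues about.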


GDP calculates the market value delivered - in this case, of the labour. If someone is paying for the labour, GDP will be affected by the value of that labour. If the net output in terms of a product at the end is zero, that does not erase the labour.

The only case where digging and filling a hole does not increase GDP is if the labour is not paid for.

EDIT: Basically, the two methods you list are the income or expenditure ways of calculating GDP, but in both cases consumption by employers is a factor, and so the payment for the labour increases the GDP irrespective of whether they also increase the final output.


I'm not so sure calculating GDP by production would capture this. It should, mind you, as all GDP calculations should get the same answer, but a silly/stupid example of two LLMs digging and filling in a hole may not fit in a production calculation.

> the production approach estimates the total value of economic output and deducts the cost of intermediate goods that are consumed in the process (like those of materials and services)[1]

This is a very rough definition of it, but roll with it. There is no economic value, since the hole was dug only to be filled back in. There was a service paid for on each end of the project, but those are services that could fall into the category of intermediate goods consumed, which are actually deducted. The transaction could actually have a negative GDP when using the production calculation approach.

[1] the production approach estimates the total value of economic output and deducts the cost of intermediate goods that are consumed in the process (like those of materials and services)


In both income and expenditure-based GDP calculations income or consumption by households are part of the calculation (which means the calculations will not give the same result).

You can make an argument that if the hypothetical workers are salaried they're not technically paid for any given task, while I'd argue that there was an opportunity cost (they could have done other work than digging/filling it in), so there's some subjectivity to it.

My stance is that if it was done as part of paid work, they were paid to carry out the task as there's at least in aggregate if not per every one individual event an opportunity cost in having them do that work instead of something else, and so part of their consumption was paid for by the labour for those tasks, and hence they affect GDP.

That the output does not add value for the procurer of that labour does not nullify the expenditure on that labour. Whether you're calculating GDP on income or expenditure, those add to GDP either as income for the workers or an expenditure for the employer.


Sounds like you prefer expenditure or income calculations over production. That makes sense and I think I'd agree.

I'm not sold on tying it back to opportunity cost though. That may require knowing the potential value of work that could have been done instead. It also means that we could view GDP as the potential economic value if everything is optimized, regardless of what is actually produced. That feels wrong to me at first glance but I'd have to really dig into it further to have a more clear argument why.


Production will only change things there if both tasks are carried out as part of the same service, charged as one. Otherwise, there will still be two outputs that nullify each other but both cause GDP to increase. But even then, if you charge someone for a service to dig and fill in holes, that there is no tangible product at the end does not mean there isn't an output that has a price, and that so increases GDP, just the same as, say, performing a dance does not leave a tangible product at the end, but the service still has a price and a value, and paying for it still increases GDP.

With respect to the opportunity cost, the point is not being able to quantify it, but that whether or not the task is productive, it has a cost because it takes time.


> even then, if you charge someone for a service to dig and fill in holes, that there is no tangible product at the end does not mean there isn't an output that has a price, and that so increases GDP, just the same as, say, performing a dance does not leave a tangible product at the end, but the service still has a price and a value, and paying for it still increases GDP.

That blurs the line between the different calculation methods though, doesn't it? If nothing is produced then the production method of calculating wouldn't account for the transaction.

This method would also open the possibility for fraud. If the government wanted to boost GDP, for example, they could hire a bunch of people to dig a hole and fill it in all year. Would they? Probably not; they have easier ways to waste money and game GDP. But they could, and that seems like a problem.

> because it takes time, it has a cost.

I don't know of any economic metrics that quantify the cost of time like this though. People like to point to unpaid labor as a huge blindspot for GDP precisely because of that - when your day is spent taking care of your home, children, or elderly parents the time is spent but GDP isn't impacted.


GDP is about products or services. If someone is paid for digging a hole, then that is a finished, delivered service. Filling it in is the same. If you dig and fill a hole without anyone paying you for either, sure, it won't affect GDP, but if someone pays you, the fact that the net result is no change does not alter the fact that you have been paid to dig a hole and to fill a hole.

The method used to calculate the investment can affect whether the income produced increases the GDP or whether only the consumption generated by that increased income is counted, but in a real-world scenario either alternative will increase the GDP.

> But perhaps you are implying GDP is not correctly calculated?

That GDP doesn't accurately reflect productive, useful effort for this reason has been a core part of the criticism of GDP since it was first formulated.

