
Since a human uses ~100 W of power, the 0.24 watt-hours of energy for an AI prompt is about 40 human-seconds [Edit: 9 human-seconds] of energy.

And unlike the human who spent multiple hours writing that article, an LLM would have linked to the original study: https://services.google.com/fh/files/misc/measuring_the_envi...

[ETA] Extending these numbers a bit: a mean human uses 1.25 kW of power (Kardashev level 0.7 across 8 gigahumans), and the mean American uses ~8 kW of power according to https://en.wikipedia.org/wiki/List_of_countries_by_energy_co.... So if we align AIs to be eco-friendly, they will definitely murder all humans for the sake of the planet /s
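For the curious, the Kardashev figure above checks out under Sagan's interpolation formula K = (log10(P) - 6) / 10, with P in watts (an assumption for illustration; the original scale is discrete):

```python
import math

# Sagan's interpolated Kardashev level: K = (log10(P) - 6) / 10,
# with P in watts. Using the parent comment's figures: 8 billion
# humans at a mean ~1.25 kW each.
humans = 8e9
power_per_human_w = 1.25e3

total_w = humans * power_per_human_w          # 1e13 W total
k_level = (math.log10(total_w) - 6) / 10

print(total_w)   # ~1e13
print(k_level)   # ~0.7
```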



>And unlike the human who spent multiple hours writing that article, an LLM would have linked to the original study

Or made up a fake citation, complete with fake or unrelated author names, on the spot


Seconded, I’ve yet to use an LLM that can consistently provide sources that back up what it’s telling me.


> I’ve yet to use an LLM that can consistently provide sources that back up what it’s telling me.

Mmmh, that would have been my take as well up until around the end of Q1 2025.

These days, the flagship LLMs have reduced hallucination quite a bit, and are also much better at citing sources (you sometimes have to nudge them).

ChatGPT 5 has been very decent on that particular axis.


The Gemini app is pretty good at giving sources.


Experienced that today with 2.5 Pro. Though I was amused that all the links were links to Google searches for the source, rather than direct links.


In my chats with 2.5 Flash it gives me the direct sources lol. Also not going to lie, I've found 2.5 Flash generally gets straight to the point with sources a lot quicker than Pro. To the point I don't really use Pro.

Edit - just used Pro, gave me a direct source. Who knows...


> Since a human uses ~100 W of power, the 0.24 watt-hours of energy for an AI prompt is about 40 human-seconds of energy.

0.24 watt-hours is 864 joules (864 watt-seconds), so a 100 W human takes ~9 seconds to expend that much energy.
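For anyone double-checking, the unit conversion is straightforward:

```python
# Convert the per-prompt energy figure into "human-seconds".
# 0.24 Wh is the number quoted upthread; 100 W is the assumed
# human metabolic power.
PROMPT_WH = 0.24   # watt-hours per AI prompt
HUMAN_W = 100.0    # human metabolic power in watts

joules = PROMPT_WH * 3600      # 1 Wh = 3600 J -> 864 J
human_seconds = joules / HUMAN_W

print(joules)         # 864.0
print(human_seconds)  # 8.64
```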


You're right. It's clear I should have had an LLM write my comment rather than do it myself before a cup of coffee. I've already spent an order of magnitude more energy thinking about this article compared to an LLM.

Also, since I live a first-world lifestyle which consumes multiple kW of power, I've probably consumed multiple orders of magnitude more energy than an LLM on this topic.


Perhaps sxp used an LLM to do his calculation.


> an LLM would have linked to the original study

There is a non-trivial chance that the LLM would've added a link to _something_, but links/references seem like a very common thing to hallucinate, no?


The way around that is for LLM-based tools to run a regular search engine query in the background and feed the results of that in alongside the prompt. (Usually a two-step process: the LLM formulates the query, then another pass runs on the results.)

The links of the results actually used can then either be appended to the final output separately, guaranteeing they are correct, or added to the prompt with the LLM told to include them, which retains a risk of hallucination, yes.

Common to both of these is the failure mode that the LLM can still hallucinate whilst "summarizing" the results, meaning you still have no guarantee that the claims made actually show up in the results.
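The two-step flow described above can be sketched roughly like this; `llm()` and `search_web()` here are hypothetical stand-ins for a real model API and a real search API, not actual library calls:

```python
# Minimal sketch of the "search then summarize" flow described
# above. Both helpers below are placeholder stubs, purely for
# illustrating the control flow.

def llm(prompt: str) -> str:
    # placeholder: a real implementation would call a model API
    return "stub answer"

def search_web(query: str) -> list[dict]:
    # placeholder: a real implementation would call a search API,
    # returning [{"url": ..., "snippet": ...}, ...]
    return [{"url": "https://example.com", "snippet": "stub snippet"}]

def answer_with_sources(question: str) -> str:
    # Step 1: let the model formulate the search query.
    query = llm(f"Write a web search query for: {question}")
    results = search_web(query)

    # Step 2: feed the retrieved snippets in alongside the question.
    context = "\n".join(r["snippet"] for r in results)
    answer = llm(f"Using only this context:\n{context}\n\nAnswer: {question}")

    # Append the real URLs ourselves, so the links at least cannot
    # be hallucinated (the summary still can be).
    sources = "\n".join(r["url"] for r in results)
    return f"{answer}\n\nSources:\n{sources}"
```

Note that appending the URLs in code (rather than asking the model to echo them) is exactly the "guaranteed correct links" variant from the comment above.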


> The way around that is for LLM-based tools to run a regular search engine query in the background and feed the results of that in alongside the prompt. (Usually a two-step process of the LLM formulating the query, then another pass on the results)

Would the LLM-based tool be able to determine that the top results are just SEO-spam sites and move lower in the list, or just accept the spam results as gospel?


This is an extremely tricky question.

The practical, readily-observable-from-output answer is "No, they cannot meaningfully identify spam or misinformation, and do indeed just accept the results as gospel"; Google's AI summary works this way and is repeatedly wrong in exactly this way, even in Google's own ad copy.

The theoretical mechanism is that the attention mechanism within LLMs can select which parts of the search results are fed further into the response. This is how the model is capable of finding the parts of the text that are "relevant". The problem is that this just isn't enough to robustly identify spam or incorrect information.

However, we can isolate this "find the relevant bit" functionality away from the rest of the LLM to enhance regular search engines. It's hard to say how useful this is; Google has intentionally damaged their search engine and it may simply not be worth the GPU cycles compared to traditional approaches, but it's an idea being widely explored right now.
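A toy illustration of the "find the relevant bit" idea, using plain term-overlap cosine similarity as a stand-in (an assumption purely for illustration; real systems use learned embeddings or attention, not word counts):

```python
# Toy passage selector: score each passage by bag-of-words cosine
# similarity to the query and return the best match. This is a
# crude stand-in for the relevance-selection idea discussed above,
# not how any production system actually scores passages.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_relevant(query: str, passages: list[str]) -> str:
    q = Counter(query.lower().split())
    return max(passages, key=lambda p: cosine(q, Counter(p.lower().split())))
```

The point of the comment stands either way: picking the most *similar* passage is not the same as picking a *trustworthy* one, which is why this alone can't filter spam.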


The only thing that can solve the misinformation from a bad LLM is the misinformation from a good LLM... with a gun.


>The way around that is that is for LLM-based tools to run a regular search engine query in the background and feed the results of that in alongside the prompt.

Hardly better, since soon those "search engine results" will be AI slop themselves, including actual published papers (phoned in using AI, and "peer reviewed" using AI by indifferent reviewers).


This used to be a problem but it's been unheard of for a while


Happens frequently still with GPT-4o (and now 5) and Claude. Makes up "biographies" unrelated to the actual people, gives me bullshit non-existent API options and CLI flags, etc.


That's not the same as URLs but I see how you could generalise it to include what you did.


People say this, but then never link to chats.

And then on the rare occasion they do link to a chat, their prompt is something like:

"Tell about a person of history who was important for their work inthe time of their existence and give quotes of what they said that made them important when they were speaking include notes and other stories about them and give details about their life who they married and their kids and who their parents were and other big things they did do in their lives"

Instead of downvotes, please prove me wrong.


It feels so common actually that I would not even bother sharing them. ChatGPT gives me hallucinated URLs all... the... time. Like several times a day. You can fix it by prompting it to Google the subject or research it otherwise it just vomits garbage.


LLMs are not databases, they are text processors that need to be trained on ungodly amounts of data and can sort of kind of double as a database, though a very fallible one. Inject ground truth and you're cooking; don't and it's a crap shoot. (not saying anything new here, but it bears repeating).


Instead of writing this rebuttal you could have just generated a hallucination and posted it.


If you haven't noticed several by now, then posting some won't do anything; it would be like trying to prove to someone that we went to the moon.


P. Give me quotes from Aristophanes describing the Peloponesian war as unfortunate.

A. Yes — Aristophanes, the Athenian comic playwright, lived through almost the entire Peloponnesian War (431–404 BCE) and his plays are full of bitter humor about Greeks destroying each other. He often portrays it as an unfortunate, fratricidal conflict.

Here are a few key passages (in translation):

Acharnians (425 BCE)

Written during the war's early years, it shows a farmer sick of the fighting:

• Dicaeopolis (the main character) laments: "This war, which has come upon us all, is nothing but Greeks fighting Greeks."

He later makes a private peace treaty with Sparta for himself, mocking the idea that the wider war must continue when ordinary people suffer.

Peace (421 BCE)

Produced right before the "Peace of Nicias":

• Trygaeus, the hero, flies up to heaven to free the goddess Peace: "It is monstrous that Greeks should destroy one another, while barbarians stand by laughing."

This captures Aristophanes’ view that the war was not only ruinous but also shameful, as it weakened Greece before foreign enemies.

In sum: Aristophanes consistently portrayed the Peloponnesian War as a tragic absurdity — Greeks killing Greeks while the true enemies (Persians/“barbarians”) watched and benefitted. His comedies combine satire with a deep sense that the conflict was fratricidal and unnecessary.

P. You said Dicaeopolis (the main character) laments: “This war, which has come upon us all, is nothing but Greeks fighting Greeks.” Where is that quote?

A. Good catch — let’s pin this down carefully.

In Aristophanes’ Acharnians (425 BCE), the character Dicaeopolis indeed complains that the Peloponnesian War is a fratricidal war among Greeks, but the exact wording depends on the translation.

Where it appears

• Early in the play (lines ~100–120), Dicaeopolis is waiting for the Assembly to begin. He grumbles about the war, saying essentially that it is:

“A war of Greek against Greek, when we should be standing united against the barbarians.”

Examples from published translations

• Benjamin Rogers (1924 translation): "I am heartily tired of these Grecian wars, of Greeks fighting against Greeks."

• Alan H. Sommerstein (Loeb Classical Library, 2002): (lines 116–118) “It’s not right — Greeks at war with Greeks, while the barbarian looks on, laughing at us.”

----

Needless to say, that quote is nowhere in the Acharnians, nor is anything roughly equivalent. Not only that: the later response attributes to Acharnians the quote it initially attributed to Peace.


Definitely not unheard of. Claude gives broken links to documentation at least once a week.


Unless you're arguing that people should kill themselves to save their 100W of power, the future energy use of every alive human is already committed and doesn't matter.

Not to mention that the figure should also include all the extra energy spent converting energy into a form usable by humans (i.e. food). That's probably at least another order of magnitude.


Did you use an LLM to do that arithmetic?


The issue is that now we're over-using prompts everywhere. Every search engine query. Every keypress in an AI editor. Every new website making queries in the background. Analyzing emails, etc.

So it's not just about "the one query you ask ChatGPT about what you should write your mum to say you're not coming for Thanksgiving"

It's rather that an AI query is 0.24 Wh, but we are now using thousands of those per user per day, and we globalize it at the scale of the planet, so 7 billion users... and this becomes huge.
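A back-of-envelope version of that scaling argument, using the thread's own assumptions (0.24 Wh per query, and taking "thousands per day" as 1,000):

```python
# Aggregate energy for AI queries at planetary scale.
# All three inputs are the thread's assumptions, not measurements.
wh_per_query = 0.24               # Wh per AI query (figure from upthread)
queries_per_user_per_day = 1000   # "thousands per day" -> take 1,000
users = 7e9                       # "7 billion users"

wh_per_day = wh_per_query * queries_per_user_per_day * users
twh_per_day = wh_per_day / 1e12   # 1 TWh = 1e12 Wh

print(twh_per_day)  # ~1.68 TWh/day
```

So the tiny per-query figure compounds into terawatt-hours per day under these assumptions, which is the parent's point.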


>And unlike the human who spent multiple hours writing that article, an LLM would have linked to the original study:

This is why journalists are nearly universally hostile towards AI.


Press wire to blogspam as a service! There goes 90% of the "content" on news sites that seem to pride themselves on stretching out a sentence of info into 4 paragraphs.

I love good journalism because it's ADHD crack; in-depth spilling the tea, but you have to really dig deep to find it nowadays.


Attempts to reduce human experience down to a single number always seem so wrong-headed. A common one is when articles say "billions of dollars of productivity wasted to X". As if you can just replace any human activity for any human activity at any time and get the same results. I know companies would like to imagine that every employee is a fungible cog, but it seems pretty dismal as a world view.


Thanks, I was scouring the article looking for the original study, could not believe that they didn't have it linked.


I bet the human had LLM help. I bet it didn't take hours to put together.


Great, now consider the human's marginal power consumption for a task instead of their baseline consumption.

Unless your point is that we can kill a bunch of humans to save energy...?


Prompt: what will the best phone be in 2025? Wasted 0.24 Wh and 5 drops of water.


Consider this your second upvote from me, for the second paragraph.



