
This is a great example of why LLMs are terrible at summaries.

Statistically, LLMs learn that summaries are related to the words of the original text. Where they fail is emphasis.

This article lays out a thesis about 'What is Truth?' near the beginning (the Epistemic Trust section), relates it philosophically to what LLMs do, and finishes with example experiments showing how LLMs and 'Truth' (per one philosophical definition) differ.

-------

ChatGPT was seemingly unable to see that this is the core of the argument, and so failed to summarize the crux of the article.

If anything, ChatGPT is at best a crowdsourced kind of summarizer (which does relate to some definition of truth). But even at that job it's quite shitty.

------

This summary from ChatGPT is... hallucination. I don't see how it's relevant to the original article at all.

Now, per the original article's argument: if you see ChatGPT as a crowdsourced mechanism and imagine how the average internet argument would go, then yeah, it's a summary of that. Alas, that's not what people want in a summary!

We don't want a hallucinated summary of an imaginary ChatGPT argument. We want an actual summary of the new discussion points this article brought forward. Apparently that's too much for today's LLMs to do.


