> The empirical argument

> We can ask a question: how long (in nanoseconds) does it take to access a type of memory of which an average laptop has N bytes? Here's GPT's answer:

"Here's what GPT says" is not an empirical argument. If you can't do better than that (run a benchmark, cite some literature), why should I bother to read what you wrote?



The empirical argument actually shows that memory access is O(√N).

https://www.ilikebigbits.com/2014_04_21_myth_of_ram_1/3_fit.... "The blue line is O(√N)."

This has been rehashed many times before, and the best blog post on this topic is here: https://www.ilikebigbits.com/2014_04_21_myth_of_ram_1.html
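
If you want to check the fit yourself rather than eyeball the blue line, a one-parameter least-squares fit of t = a·√N has a closed form: a = Σ tᵢ√Nᵢ / Σ Nᵢ. A minimal sketch that reads whitespace-separated "bytes nanoseconds" pairs from stdin (an input format I'm assuming here, not anything from the linked post):

    // Least-squares fit of t = a*sqrt(N) to "N t" pairs read from stdin.
    // Minimizing sum (t_i - a*sqrt(N_i))^2 over a gives the closed form
    //   a = sum(t_i * sqrt(N_i)) / sum(N_i).
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double n, t, num = 0.0, den = 0.0;
        while (scanf("%lf %lf", &n, &t) == 2) {
            num += t * sqrt(n);
            den += n;
        }
        if (den > 0.0)
            printf("best fit: t(N) ~= %.4f * sqrt(N) ns\n", num / den);
        return 0;
    }

Compile with -lm; piping benchmark output like the sketch above through something like awk '{print $1, $3}' strips the labels first.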


Thanks for the good links. I think we have generally become so accustomed to the scaled-up von Neumann strategy that we don't see how much efficiency and performance we leave on the table by not building much smaller memory hierarchies.

Shameless plug: here I explore possible gains in efficiency, performance, and security by scaling out rather than up (no subscription required): https://anderscj.substack.com/p/liberal-democracies-needs-a-...


That's really neat. I hadn't seen the black hole argument before; it's quite cute.
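
For anyone else who hadn't seen it, the shape of the argument, as I read the linked series, is: the holographic/Bekenstein-style bound caps the information inside a sphere by its surface area, so storing N bits forces a minimum radius, and signalling at light speed then costs Ω(√N) per access. Roughly:

    % Holographic/Bekenstein-style bound, as I read the linked series:
    % information inside a sphere is capped by its surface area,
    % with k a physical bits-per-area constant.
    N \le k \cdot 4\pi r^2
    \;\Longrightarrow\;
    r \ge \sqrt{\frac{N}{4\pi k}}
    \;\Longrightarrow\;
    t_{\mathrm{access}} \ge \frac{2r}{c} = \Omega(\sqrt{N})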


Empirical arguments can't establish asymptotics anyway: any finite set of measurements is consistent with infinitely many asymptotic growth rates.


The article started really well, and I was looking forward to the empirical argument.

Truly mind-boggling times, when "here is the empirical proof" means "here is what ChatGPT says" to some people.


It's Vitalik, what do you expect? Do you think Bernie Madoff spoke objectively when talking to his potential clients?


VB is a genius, no question (are you familiar with his work?). Madoff isn't in the same league at all, and it's disingenuous to imply otherwise.


Still better than "according to Google" (pre-AI), which I saw cited too many times.

I have a feeling that people who have such absolute trust in AI models have never hit regenerate and seen how much the "truth" can vary.


In no way is it better than "according to Google".


Maybe back when Google actually did searches. A coworker today was unable to find a very straightforward quoted text on Google; on DuckDuckGo the first few hits were exactly what we were looking for.


> why should I bother to read what you wrote?

The better question is: Why should you bother to read what the author didn't bother to write?


The cool thing about "here's what GPT says" is that you can make GPT say whatever you want!

https://chatgpt.com/share/68e6eeba-8284-800e-b399-338e6c4783...

https://chatgpt.com/share/68e6ef4a-bdd0-800e-877a-b3d5d4dc51...


Why wouldn't it run with the hypothesis you provided? It even added an explicit hint that this is a hypothetical scaling exercise and that real hardware does not scale like that.

But generally, sure, you can make LLMs say many false things, sometimes even by just asking them a question in good faith, and it certainly casts some doubt on a blog post quoting an LLM as a source.



