Hacker News

This is either GPT or a very bad take.


Given a sufficiently advanced AI, I don’t see why it wouldn’t be able to dogfood itself. Humans do it all the time: we build knowledge and then learn from it.


Humans have to act on the knowledge they generate and live (or not) with the consequences. If an AI never has to test any of the knowledge, it gets no feedback on what's good or bad.

If you read online that you can eat a Tide Pod, you'll find out pretty quickly whether that information was correct or not, and you'll write your own report (or maybe your surviving family will) about how that worked out.

AI will only scrape and generate random iterations with no testing. If it reads 50 posts saying to eat Tide Pods, and 50 posts saying not to, then it will randomly say to eat them or not, depending on how the RNG falls. It will never be able to randomly generate enough posts to create accurate information without any way to test it.


Humans need not act on every bit of knowledge or information they generate. Our brains often generate scenarios for us and let us play through them, e.g. shower thoughts and shower arguments. These are all arguments and ideas that may never play out in reality but that allow us to refine and process our thoughts.

We also generate knowledge that is utterly irrelevant to our capacity to survive, such as when we create art, play games, read books for pleasure and so on.

We also generate knowledge that has no application until one is found, e.g. Riemannian geometry, abstract algebra and so on.

Mathematics in particular allows a system (be it machine or human) to iteratively extend its knowledge toward every computable proof by simple application of inference rules, up to undecidability and everything incompleteness entails.
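To make the "simple application of inference rules" point concrete, here is a toy sketch (my own illustration, with made-up facts and a single modus-ponens-style rule): repeatedly applying the rule to a knowledge base enumerates every derivable fact, with no external feedback required.

```python
def forward_chain(facts, rules):
    """Enumerate everything derivable from `facts` via the rules.

    facts: set of atoms; rules: list of (premise, conclusion) pairs,
    read as "premise implies conclusion" (a toy modus ponens).
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical knowledge base: from "A" alone, "B" and "C" follow,
# but "E" is unreachable because "D" was never established.
print(forward_chain({"A"}, [("A", "B"), ("B", "C"), ("D", "E")]))
# → {'A', 'B', 'C'}
```

The loop terminates because each pass either adds a new fact or stops, which is the sense in which a purely formal system can grow its knowledge without ever "testing" anything in the world.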

The assertion you are making about a model’s capacity to understand is rather naive. A system with a forward predictive model and sufficient knowledge will figure out that consuming Tide Pods is bad.

I don’t need to eat Tide Pods, or to have somebody tell me not to, precisely because I am advanced and can dogfood my own knowledge.

Your view of what an AI might be capable of is very limited.

A system with a sufficiently good inference system and some kind of curiosity will generate knowledge and evaluate it, be it logically or based on its model. That’s not unique to humans and humans are not logical either.



