
I would encourage you to read the code it produced. It's not like a simple "solve maze" function. There are plenty of "smart" choices in there to achieve the goal given my very vague instructions, many of them the result of it analyzing why it failed at first and then adjusting.




I don't know how else to get my point across: what I am trying to say is that there is nothing "smart" about an automaton that needs to resort to an A* implementation to "solve" a problem that any 4-year-old child can solve just by looking at it.
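For context, the kind of code being referred to is just classic A* search over a grid. Below is a minimal, hypothetical sketch (not the actual code produced in the experiment discussed here), assuming a 4-connected grid with '#' walls and a Manhattan-distance heuristic:

  import heapq

  def solve_maze(grid, start, goal):
      """Return a list of (row, col) cells from start to goal, or None."""
      rows, cols = len(grid), len(grid[0])

      def h(cell):
          # Manhattan-distance heuristic, admissible on a 4-connected grid.
          return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

      open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
      came_from = {start: None}
      g_score = {start: 0}

      while open_heap:
          _, g, cell = heapq.heappop(open_heap)
          if cell == goal:
              # Reconstruct the path by walking parent links back to the start.
              path = []
              while cell is not None:
                  path.append(cell)
                  cell = came_from[cell]
              return path[::-1]
          r, c = cell
          for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
              if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                  ng = g + 1
                  if ng < g_score.get((nr, nc), float('inf')):
                      g_score[(nr, nc)] = ng
                      came_from[(nr, nc)] = cell
                      heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
      return None

  if __name__ == "__main__":
      maze = ["S..#",
              ".#.#",
              ".#.G",
              "...."]
      print(solve_maze(maze, (0, 0), (2, 3)))

The point being: this is a well-trodden textbook algorithm, which is exactly why I don't find reproducing it impressive as evidence of "intelligence".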

Where you are seeing "intelligence" and "an existential crisis", I see "a huge pattern-matching system with an ever-increasing vocabulary".

LLMs are useful. They will certainly cause a lot of disruption by automating all types of white-collar work. They will definitely lead to all sorts of economic and social disruptions (good and bad). I'm definitely not ignoring them as just another fad... but none of that depends on LLMs being "intelligent" in any way.



