Hacker News

The fact is that no neural network can solve sudoku puzzles. I think it's hilarious that AI proponents/detractors keep worrying about existential risk when not a single one of these systems can solve logic puzzles.


I didn't say anything about existential risk, and I'm going to assume you meant LLM, since training a NN to solve sudoku puzzles has been an intro-to-ML project going back years: https://arxiv.org/abs/1711.08028
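As a hedged illustration of why that intro project is tractable: the usual first step is just encoding the puzzle as a tensor the network can consume. The representation below (one-hot per cell, a blank channel plus nine digit channels) is a common choice, not the specific setup of the linked paper.

```python
import numpy as np

def encode_puzzle(puzzle: str) -> np.ndarray:
    """Encode an 81-char sudoku string ('0' or '.' = blank) as a 9x9x10 one-hot array.

    Channel 0 marks blanks; channels 1-9 mark given digits. This is the kind
    of input an intro-level sudoku network typically consumes before
    predicting a digit distribution for each of the 81 cells.
    """
    assert len(puzzle) == 81, "expected a flattened 9x9 grid"
    grid = np.zeros((9, 9, 10), dtype=np.float32)
    for i, ch in enumerate(puzzle):
        digit = 0 if ch in "0." else int(ch)
        grid[i // 9, i % 9, digit] = 1.0
    return grid

# An all-blank puzzle lights up only channel 0 in every cell.
empty = encode_puzzle("0" * 81)
```

From there, the training loop is ordinary supervised learning: pairs of (puzzle, solution) grids and a per-cell cross-entropy loss.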

To me the existential risks are pretty boring, and current LLMs already pose them: train on some biased data, people embed LLMs in a ton of places, and the result is bias spreading through a black box where introspection is significantly harder.

In some ways it mirrors the original stochastic parrot warning, except "parrot" is a significantly less loaded term in this context.


Then I don't know what you're arguing about. If you think LLMs are useful continue using them.




