This is what I mean when I say the "inverse-anthropomorphization" crowd is increasingly emotion over facts.
My reply to you was grounded in centuries of scientific study of creativity. Your knee-jerk response is to proclaim it's bad at Sudoku while going out of your way to place artificial constraints on it.
Touting its inability to solve Sudoku in-context feels like a slightly ham-fisted way of saying it's a probability-based model operating on tokens, but as I said before, plenty of us already understand that.
We also realize that you can find arbitrary gaps in any sufficiently complex system. You didn't even need such a specific example; you could have pointed to any number of common logic-puzzle variants that they fall flat on their faces over.
Gaps aren't damning until you tie them to what you want out of the system. The LLM can be bad at Sudoku and capable of creativity in some domain. It's more useful to explore unexpected properties of a complex system than it is to parade things that the system is already expected to be bad at.
The fact is that no neural network can solve Sudoku puzzles. I think it's hilarious that AI proponents/detractors keep worrying about existential risk when not a single one of these systems can solve logic puzzles.
I didn't say anything about existential risk, and I'm going to assume you meant LLM, since training an NN to solve Sudoku puzzles has been a standard intro-to-ML project for years now: https://arxiv.org/abs/1711.08028
To me the existential risks are pretty boring, and current LLMs already realize them: train on biased data, embed the model in a ton of places, and the result is bias spreading through a black box where introspection is significantly harder.
In some ways it mirrors the original stochastic parrot warning, except "parrot" is a significantly less loaded term in this context.