How much of it doesn't? Are we deterministic? (If anything, aren't we less deterministic than LLMs?) Is all of our training auditable? (There's a wealth of unknown experience in each person writing code, to say nothing of the unknown and irrelevant experience in our evolutionary background.)
Maybe you can argue we don't use statistical completion and prediction as a heavy underpinning of our own reasoning, but that's hardly settled.
Nah-- you'll have to try harder to make an argument that really focuses on how LLMs are different from the alternative. The point is that any argument seeking to dunk on LLMs needs to be one that doesn't apply just as well to the alternative (humans).