> I never thought a computer would pass the turing test in our lifetime
Are we talking about the single non-peer-reviewed study showing that a random person correctly guesses only about a third of the time that a GPT-4.5 text was written by a computer rather than a human?
Learning to recognize the artifacts, style, and logical nonsense of an LLM is a skill. People are slowly acquiring it, and as they do, those Turing test results will naturally drop — which strongly implies a major flaw in how we measure Turing test performance.