
"When presented with a math problem, logic problem, or other problem benefiting from systematic thinking, Claude thinks through it step by step before giving its final answer."

... do AI makers believe this works? Like, do they think Claude is a conscious thing that can be instructed to "think through" a problem?

All of these prompts (from Anthropic and elsewhere) have a weird level of anthropomorphizing going on. Are AI companies praying to the idols they've made?



They believe it works because it does work!

"Chain of thought" prompting is a well-established method to get better output from LLMs.


LLMs predict the next token. Imagine someone said to you, "It takes a musician 10 minutes to play a song; how long will it take 5 musicians to play it? I will work through the problem step by step."

What are they more likely to say next? The reasoning behind their answer? Or a number of minutes?

People rarely say, "Let me describe my reasoning step by step. The answer is 10 minutes."



