
Yes! Prompts are super finicky.

You have to craft a prompt (really, a function) that, for a wide set of inputs, generates a token sequence that keeps expanding in a way that corresponds to an externally observed truth.

Way too often it feels like you have to shove a universal decoding sequence into a prompt.

“Talk your steps, list your clues, etc.”

You're just trying to luck into a prompt that keeps decompressing the model in the right direction, so that each next token it generates is one that holds true.*
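For concreteness, here's roughly what that kind of "decoding sequence" wrapper looks like. This is just an illustrative sketch; the wording and the helper name are made up, not any particular library's API:

    # Illustrative chain-of-thought-style prompt wrapper.
    # The instructions below are a guess at what "talk your steps,
    # list your clues" might look like in practice; tune the wording
    # for whatever model you're actually calling.
    def build_prompt(question: str) -> str:
        return (
            "Answer the question below.\n"
            "First, list the clues you are relying on.\n"
            "Then, talk through your steps one at a time.\n"
            "Finally, give the answer on its own line, prefixed with 'Answer:'.\n\n"
            f"Question: {question}\n"
        )

    if __name__ == "__main__":
        print(build_prompt("Which weighs more: a kilogram of feathers or a kilogram of lead?"))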

I recall there was a paper with a relevant title recently: "Language Modeling Is Compression" (https://arxiv.org/abs/2309.10668)

Basically: LLMs don't reason, they regurgitate. Given the right training data and the right prompt, they can decompress that training data into something that can be validated as true.
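The compression framing is literal, by the way: any model that assigns probabilities to next tokens defines a code, and encoding a sequence costs the sum of -log2 p(token | context) bits, which is the arithmetic-coding connection that paper builds on. A toy sketch with a made-up unigram "model", just to show the arithmetic (a real LLM conditions each probability on the preceding context):

    import math

    # Toy stand-in for a language model: fixed unigram probabilities
    # over a tiny vocabulary. Purely illustrative.
    probs = {"the": 0.5, "cat": 0.2, "sat": 0.2, "zzz": 0.1}

    def code_length_bits(tokens):
        # Shannon code length: sequences the model finds likely
        # compress to few bits; unlikely ones cost many bits.
        return sum(-math.log2(probs[t]) for t in tokens)

    print(code_length_bits(["the", "cat", "sat"]))  # ~5.6 bits
    print(code_length_bits(["zzz", "zzz", "zzz"]))  # ~10.0 bits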

——-

* Also, all of this has to happen in a limited context window; there is no long-term memory, and there is no real underlying model of thought.


