
> The OpenAI chat completion endpoint encourages the second-person prompting you describe, so that could be why you see it a lot.

You're talking about system prompts specifically, right? And I'm assuming the "encouragement" you're referring to comes from the conventions used in their examples rather than an explicit instruction to use the second person?

Or does second person improve responses to user messages as well?



There is an essay, "An Ethical AI Never Says "I"", that explains the problems with first-person answers:

* https://news.ycombinator.com/item?id=35318224 / https://livepaola.substack.com/p/an-ethical-ai-never-says-i


Thanks - this gets to some of the same things I’m trying to understand in this thread.


For the most part. It’s the system prompt + user/assistant structure that encourages second-person system prompts. You could write a prompt that’s like

System: Complete transcripts you are given.

User: Here’s a transcript of X

But that, to me, seems like a bit of a hack.
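To make the hack concrete, here's a sketch of that transcript-completion framing as a chat completions request payload. The model name and transcript text are placeholders; only the message structure matters.

```python
# Build a chat completions request that frames the task as
# transcript completion rather than second-person instructions
# to an "assistant" persona. Payload only -- no API call made here.
payload = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        # System prompt describes the task itself, not "you are an assistant..."
        {"role": "system", "content": "Complete transcripts you are given."},
        # User message carries the partial transcript to be continued.
        {"role": "user", "content": "Here's a transcript of X: ..."},
    ],
}
```

The model's reply then arrives as an assistant message, which is part of why this feels like working against the grain of the role structure rather than with it.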

One related behavior I’ve noticed with the OpenAI chat completions endpoint is that it is very trigger-happy about completing messages that seem incomplete. It seems nearly impossible to mitigate this behavior through the system prompt.



