Hacker News

Would it help to prompt accordingly (and adapt the system prompts for the coding assistants)? Like:

> Do not assume the person writing the code knows what they are doing. Also do not assume the code base follows best practices or sensible defaults. Always check for better solutions/optimizations where it makes sense and check the validity of data structures.

Just a quick draft. It would probably need waaaaaay more refinement. But might this help at least mitigate the perceived issue a bit?
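A minimal sketch of what "adapting the system prompt" could look like in practice, assuming the common chat-completions message format ({"role", "content"} dicts); the directive text is the draft above, and the model/client you'd send this to is left out:

```python
# Sketch: prepend a "skeptical reviewer" directive as a system message.
# The message format here is the widely used chat-completions shape;
# swap in whatever client and model you actually use.

SKEPTICAL_DIRECTIVE = (
    "Do not assume the person writing the code knows what they are doing. "
    "Also do not assume the code base follows best practices or sensible "
    "defaults. Always check for better solutions/optimizations where it "
    "makes sense and check the validity of data structures."
)

def build_messages(user_prompt: str) -> list:
    """Wrap a user prompt with the skeptical system directive."""
    return [
        {"role": "system", "content": SKEPTICAL_DIRECTIVE},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Refactor utils.py to remove the global cache.")
print(messages[0]["role"])  # the directive rides along as the system message
```

This only moves the directive into every request; as noted below, it doesn't make the desired level of pushback context-sensitive.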

I always think of AI as an overeager junior dev. So I tend to treat it that way when giving instructions, but even then...

... well, let's say the results are sometimes interesting.



Yeah, that's what I do now -- and some coworkers have noted that prompting it to bias toward design-system components often helps -- but the challenge here is that the level of pushback I want from the AI depends on several factors that aren't encodable into rules. Sometimes the area of the code is exactly the way it should be, and sometimes I know exactly what to do! Or it's somewhere in between. And it's a lot of work to maintain a set of rules that plays well here.


A central issue is that specifying what you require is difficult. It's hard for non-programmers to specify what they want to programmers, it's hard for people (programmers or not) to specify what they want to AIs, it's hard to specify exactly what requirements your system has to a model checker, etc. Specifying requirements isn't always the hardest part of making software, but it often is and it's not something with purely technical solutions.


Prompting an AI (and learning to better specify what I actually want) really helped me learn to better "prompt" humans.

I think the biggest thing I've learned from using AI is to think and communicate more clearly about what I need/want/desire, and how to put it into enough context that the other party can form a better understanding themselves.

Not that it always works - but I feel I am getting better.


Yes. The biggest issue with LLMs is their tunnel vision and general lack of awareness. They lack the ability to go meta, or to "take a step back" on their own, which given their construction isn't surprising. Adjusting the prompts is only a hack and doesn't solve the fundamental issue.




