Negative instructions do not work as well as positive ones. If you tell the LLM "don't do this", you only put the idea of doing that into its context. (Surprisingly, the same goes for human toddlers... the AI is just in its toddler phase.)
Not to mention that context length is limited, so if you told it something "earlier", that statement has probably already dropped off the end of the context window.
What works better is to prompt with positive instructions of intent like:
"Working exclusively in file(s) ____ and ____ implement ____ in a manner similar to how it is done in example file ______".
I start a fresh chat for each prompt, with fresh context, and try to keep all instructions embedded within a single prompt rather than relying on fragile past state that may or may not have dropped off the end of the context window. If there is something I want it to always consider, like "don't touch these core files" or "work exclusively in folder X", I add it as a system prompt or global rule file (this ensures the instruction gets included automatically on every prompt).
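To make the "fresh chat, everything in one prompt" idea concrete, here is a minimal sketch using the OpenAI Python client; the model name, folder names, and rule text are placeholders, and the same shape applies to any tool that lets you pin a system prompt or rules file:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Standing rules that should ride along with every request,
# playing the role of a global rule file / system prompt.
# (Folder names here are hypothetical examples.)
SYSTEM_RULES = (
    "Work exclusively inside the src/feature_x/ folder. "
    "Follow the existing patterns in src/feature_x/example.py."
)

def run_task(task_prompt: str) -> str:
    # Each call is a brand-new conversation: the standing rules and the
    # full task description travel together in a single request, so
    # nothing depends on earlier turns that may have scrolled out of
    # the context window.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_RULES},
            {"role": "user", "content": task_prompt},
        ],
    )
    return response.choices[0].message.content
```

The point is the structure, not the specific API: positive intent plus the relevant context in one self-contained prompt, with the always-on constraints injected automatically rather than remembered from chat history.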
And don't get me wrong, I get frustrated with AI sometimes too, but the frustration has declined dramatically as I've learned how to prompt it: appropriate task sizes, positive statements rather than negative ones, gathering the right context to steer behavior and output, and so on.
I realized I was doing it wrong when Cloudflare launched their own prompt spec for their Workers implementation. Their proposed pattern is slightly different, though: "You need to do this, but you did that. Please fix (with this [optional])". I might try a hybrid approach next time.