I think the author's definition of "creating" is just too narrow. A gardener can get tremendous satisfaction from watching their plants grow from the bed of soil that they prepared, even if there is not as much weeding or watering to do later on in the growth cycle. A parent can get tremendous satisfaction from watching their child continue to grow and develop, even after the child is no longer an infant who requires constant care and attention.
In my opinion, having spent about a year and a half working on various coding projects using AI, there are phases to the AI coding lifecycle.
1) Coding projects start out like infants: you need to write a lot of code by hand at first to set the right template and patterns you want the AI to follow going forward.
2) Coding projects continue to develop kind of like garden beds: you have to guide the structure and provide the right "nutrients" for the project, so that the AI can keep adding features based on what you have supplied to it.
3) Coding projects mature kind of like children growing up to become adults. A well-configured AI agent, starting from a clean, structured code repo, might be mostly autonomous, but just as your adult kid might still need to phone home to Mom and Dad for advice or help, you as the "parent" of the project are still going to be involved when the AI gets stuck and needs help.
Personally, while I can get some joy and satisfaction from manually typing lines of code, most of those lines are things I've typed literally hundreds of times over my decades-long journey as a developer. There isn't much joy in typing out the same things again and again, but there is joy in the longer-term steering and shaping of a project so that it stays sane, clean, and scalable. I get a similar sense of joy from gently steering AI towards success in my projects as I get from gently steering my own child towards success. There is something incredible about providing the right environment and the right pushes in the right direction, and then seeing something grow and develop mostly on its own (but with your support backing it up).
Cue me, cursing the AI with a choice selection of names, when my AI code writer of choice decides to change core files that I had explicitly told it not to touch earlier in the chat.
Negative instructions do not work as well as positive ones. If you tell the LLM "don't do this", you only put the idea of doing that into its context. (Surprisingly, the same goes for human toddlers... the AI is just in its toddler phase.)
Not to mention that context length is limited, so if you told it something "earlier" then your statement has probably already dropped off the end of the context window.
What works better is to prompt with positive instructions of intent like:
"Working exclusively in file(s) ____ and ____ implement ____ in a manner similar to how it is done in example file ______".
I start a fresh chat for each prompt, with fresh context, and try to keep all instructions embedded within a single prompt rather than relying on fragile past state that may or may not have dropped off the end of the context window. If there is something like "don't touch these core files" or "work exclusively in folder X" that I want it to always consider, I add it as a system prompt or global rule file, which ensures the instruction gets included automatically on every prompt.
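To make that concrete, a global rule file might look roughly like this (the paths, folder layout, and test command here are made up for illustration; the exact file name depends on your tool, e.g. a Cursor rules file or a CLAUDE.md):

  Do not modify anything under src/core/ or db/migrations/; if a task seems to require it, stop and ask first.
  New features go in src/features/<feature-name>/, one folder per feature.
  Use src/features/auth/ as the reference implementation for structure and naming.
  Run the test suite and report the result before declaring a task complete.

Because these lines ride along with every prompt, they never fall off the end of the context window the way a "don't touch these files" remark from earlier in a chat does.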
And don't get me wrong, I get frustrated with AI sometimes too, but the frustration has declined dramatically as I've learned how to prompt it: appropriate task sizes, how to use positive statements rather than negative ones, how to gather the appropriate context to steer its behavior and output, etc.
I realized I was doing it wrong when Cloudflare launched their own prompt spec for their Workers implementation. Their proposed pattern is slightly different though: "You need to do this, but you did that. Please fix (with this [optional])". I might try a hybrid approach next time.
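For what it's worth, a filled-in (entirely made-up) instance of that pattern would read something like: "You need to keep all database access inside the repository layer, but you added raw SQL to the route handler. Please fix (move the queries into the existing repository module)." It names the expectation, the deviation, and optionally the remedy, rather than just saying "don't write raw SQL here".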