
> precise prompt

If there were such a thing, you would just check your prompts into your repo, and CI would build your final application from the prompts and deploy it.

So it follows that if you are accepting 95% of the random output being given to you, you are either doing something really mundane and straightforward, or you don't care much about the shape of the output (not to be confused with quality).

Like in this case you were also the Product Owner who had the final say about what's acceptable.



The above is saying more precise, not completely precise. The overall point they're making is that you are still responsible for the code you commit.

If they are saying the code in this project was in line with what they would have written, I lean towards trusting their assessment.


I am not doubting the 95% acceptance rate at all. I've pure-vibecoded many toy projects myself.

> in line with what they would have written,

The point I am making is that they didn't know what they would have written. They had a rough overall idea, but details were being accepted on the fly. They were trying out a bunch of things and seeing what looked good based on a rough idea of what the output should be.

In a real-world project you are not both product owner and coder.


To be clear I did not have a 95% acceptance rate. I'm saying that in the final published repo, 95% of the lines of code were written by AI, not by me. I discarded and refactored code along the way many times, but I did that by also using the AI. My end goal was to keep my hands off the code as much as possible and get better at describing exactly what I wanted from the AI.


> if you are accepting 95% of what random output is being given to you

I am not, and don't expect to be able to do that for many years yet. The models aren't that good yet.

I would estimate that I accepted perhaps 25% of the initial code output from the LLM. The other 75% of the output I wasn't satisfied with, so I just unapplied it and retried with a different prompt, or I refactored or mutated it using a followup prompt.

In the final project, 95% of the committed lines of code in the published version were written by AI; however, there was probably 4x as much AI-generated code discarded along the way. Often the first take wasn't good enough, so I modified or refactored it, also using AI. Over the course of the project I got better at providing more precise prompts that generated good code the first time, but I rarely accepted the first draft of code back from Kiro without making followup prompts.

A lot of people mistakenly believe that using AI means you just accept the first draft the AI returns. That's not the case. You absolutely should be reading the code and iterating on it with followup prompts.



