I don't understand the argument. I haven't said humans act like that; what I said is how you have to treat LLMs if you want to use them for things like that.
If you're somehow under the belief that LLMs will (or should) magically replace a person, I think you've built the wrong understanding of what LLMs are and what they can do.
I interact with tools and with people. With people, there's a shared understanding of the goal and the context (alignment, as some people like to call it). With tools, no such context is needed. Instead I need reproducible results and clear output, and, if it's something I can automate, that they follow my instructions closely.
LLMs are obviously tools, but their parameter space is so huge that it's difficult to provide enough context to ensure reliable results. With prompting, we get unreliable answers; with agents, we get actions taken upon those unreliable answers. We had that before with people copying and pasting from LLM output, but now the same action is being automated. And then there's the feedback loop, where the agent takes input from the very thing it has (often wrongly) altered.
So it goes like this: ambiguous query -> unreliable information -> agent acting -> unreliable result -> unreliable validation -> final review (which is often skipped). And then the loop repeats.
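To make the shape of that loop concrete, here's a made-up toy in Python (every name in it is hypothetical, it's not any real agent framework): the next input is whatever the agent just changed, so an early mistake keeps feeding itself forward.

    # Toy sketch of the agent feedback loop (hypothetical, not a real framework)
    def llm(context):
        # stand-in for the model call; imagine it sometimes misreads the context
        return "edit based on: " + context

    def act_on(answer):
        # stand-in for the agent applying its answer (writing files, running commands, ...)
        return answer.upper()

    context = "ambiguous query"
    for _ in range(3):
        answer = llm(context)    # unreliable answer
        result = act_on(answer)  # action taken on that unreliable answer
        context = result         # next input is the very thing the agent just altered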
While with normal tools it's: ambiguous requirement -> detailed specs -> formal code -> validation -> report of divergence -> review (which can be skipped). There are issues in the process (which give us bugs), but we can pinpoint where we went wrong and fix the issue.
I'm sorry, I'm very lost here. Are you responding to the wrong comment or something? Because I don't see how any of that is connected to the conversation from here on up.
>>> But you have to consider the task just like if you handed it over to a person with absolutely zero previous context, and explain what you need from the "requirements gathering", and how it should handle that
The most similar thing is software, which is a list of instructions we give to a computer alongside the data that forms the context for this particular run. It processes that data and gives us a result. The basic premise is that these instructions need to be formal so that they become context-free: the whole context is the input to the code, and you can reuse the code whenever you want.
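A minimal sketch of what I mean (the function and the numbers are just made up for illustration): the whole context travels in as arguments, so the same call gives the same result no matter who runs it or when.

    def discounted_total(prices, discount_rate):
        # all context is explicit input; nothing depends on who runs this or when
        return sum(prices) * (1 - discount_rate)

    discounted_total([10.0, 25.0], 0.10)  # 31.5, every single time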
Natural language is context-dependent, and the final result depends on the participants. So what you want is a shared understanding, so that instructions are interpreted the same way by every participant. Someone (or an LLM) coming in with zero context is already a failure scenario. But even with the context baked into every participant, misunderstandings will occur.
So what you want is a formal notation, which removes ambiguity. It's not as flexible or as expressive as natural language, but it's very good at sharing instructions and information.
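For example (a toy I'm making up, but it shows the point): "give big orders the usual discount" depends on what "big" and "usual" mean to whoever reads it, while the formal version leaves nothing to interpret.

    BIG_ORDER_THRESHOLD = 100.0  # "big" means subtotal >= 100
    USUAL_DISCOUNT = 0.10        # "the usual discount" means 10%

    def order_total(subtotal):
        if subtotal >= BIG_ORDER_THRESHOLD:
            return subtotal * (1 - USUAL_DISCOUNT)
        return subtotal

    order_total(250.0)  # 225.0
    order_total(40.0)   # 40.0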