I consider it a tool. A tool multiplies performance. Though from research it appears the multiplier is nonlinear, ranging from "a complete greenhorn builds an app that would otherwise take them weeks just to learn the skill", through "low double-digit improvements just from saving time on boilerplate and looking up common problems in libs", all the way to "the time wasted trying to make the LLM do it exceeds just doing it yourself".
Yeah, the article is ridiculous. I'm not trying to defend it, but rather to extrapolate, in particular from the "bro, you are not working by chatting with ChatGPT" point.
If we consider it a tool, then why doesn't using it count as work?
And to be clear, I'm not even sure what I think. I'm throwing the question out there because I'm curious what other devs think.
If you can't use your tools properly (i.e. in this case, have backups) you will hurt yourself. And trying to blame it on tools that have NO guarantee in the first place
> However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind.
is frankly unprofessional.