>The code ChatGPT generates is often bad in ways that are hard to detect. If you are not an experienced software engineer, the defects could be impossible to spot until you/ChatGPT have gone and exposed all your customers to bad actors, or the code crashes at runtime, or it does something terribly incorrect.
I wonder about this a lot, because there's a future here where a decent amount of software engineering is offloaded to these AIs and we reach a point, in the near future, where no one really knows or understands what's going on. That seems bad. Put another way, suppose that your primary care doctor is really just using MedAI to diagnose and recommend treatment for whatever it is you went in to see him about. Over time, these sorts of shortcuts metastasize and the doctor ends up not really knowing anything about you, or the other patients, or what he's really doing as a doctor ... it's just MedAI (with whatever wrongness rate is tolerable for the insurance adjusters). Again, seems bad. There's a palpable loss of human knowledge here that's enabled by a "tool" that's allegedly going to make us all better off.