I mean, that is true for any interpreted language. That's why we have type checkers, LSPs, tests, and so on. Still not bulletproof, but also not the complete time bomb some commenters make it out to be. Hallucinations aren't an issue in my day to day; stupid architecture decisions and overly defensive coding practices are the bigger problem.
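To make the "overly defensive" part concrete, here's a toy Python sketch of the pattern I mean (the function and field names are made up for illustration): a broad try/except plus redundant None checks wrapped around logic the caller already guarantees.

```python
def get_username(user: dict) -> str:
    # The overly defensive version: every failure mode swallowed,
    # even ones the calling code has already ruled out.
    try:
        if user is not None and "name" in user and user["name"] is not None:
            return str(user["name"])
        return ""
    except Exception:
        return ""


def get_username_direct(user: dict) -> str:
    # The same logic without the ceremony: if the key is missing,
    # fail loudly where the bug actually is.
    return user["name"]
```

The defensive version doesn't just add noise; it turns a crash at the bug site into a silent empty string that surfaces as a confusing failure somewhere else.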
Right, that's why good language design is still relevant in 2025. E.g., type checking only saves you if the language design and ecosystem are amenable to type checking. If the LLM can leverage typing information to yield better results, then languages with more type annotations throughout the code and ecosystem will extract more value from LLMs in the long term.
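As a minimal sketch of what that buys you (the class and attribute here are invented for the example): in an annotated codebase, a plausible-looking hallucinated attribute gets flagged by the type checker before the code ever runs, whereas untyped code would only blow up at runtime.

```python
from dataclasses import dataclass


@dataclass
class User:
    name: str
    email: str


def greet(user: User) -> str:
    # A typical LLM-style hallucination: `display_name` looks plausible
    # but doesn't exist on User. Because the parameter is annotated,
    # a checker like mypy or pyright reports something along the lines of
    #   "User" has no attribute "display_name"
    # at check time instead of an AttributeError in production.
    return f"Hello, {user.display_name}"
```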
Can you elaborate a bit here? In my experience, most code I come into contact with isn't nearly defensive enough. Is AI-generated code more defensive than the median?