You say that as if it's a justification, not an observation.
For one, the world doesn't need to be that way, i.e. we don't need to "leave behind" everyone who doesn't immediately adopt every single piece of new technology. That's simple callousness, and it doesn't need to be ruthlessly obeyed.
And for two, it's provably false. What is "the future"? VR? The metaverse? Blockchain? NFTs? Hydrogen cells? Driverless cars? There has been exactly ZERO penalty for not embracing any of these, all sold to us by hucksters as "the future".
We're going to have to keep using a classic piece of technology for a while yet, the Mark 1 Human Brain, to properly evaluate new technology and its place in our society, and we oughtn't be reliant on profound-seeming but overly simplistic quotes like that.
Be a little more discerning, and think for yourself before you lose the ability to.
Do you have kids? Outside of discipline (and even there), I want to have a positive relationship with my sons.
My oldest knows that I am not a writer. There are a ton of areas where I can give legit good advice, but this isn't one of them. I can have a fun conversation about his stories, but I have no qualifications to tell him what he might want to change. I can say what I like, but my likes/dislikes are not what an editor offers. I actually stay away from voicing dislikes about his writing, because who cares what I don't like.
I would rather encourage him to write, write more, and get some level of feedback even if I don’t think my feedback is valuable.
LLMs have likely been trained on all published books; in that sense an LLM IS more qualified than me.
If he continues to write and gets good enough, should he seek out a human editor? Sure.
But I never want to be the reason he backs away from something because my feedback was wrong. It is easier for people to take critical feedback from a computer than from their parents. Kids want to please, and I don't want him writing stuff just because he thinks it will be up my alley.
There is something deeply disturbing about your attitude towards making mistakes.
You think you shouldn’t give advice because your feedback is not valuable and may even cause your son to give up writing, but you have so far given no reason why AI wouldn’t do the same. From the whole ChatGPT “glazing” incident, one can equally argue that AI can give bad feedback too. Heck, most mainstream models are fine-tuned to sound like a secretary that never says no.
Sorry if this sounds rude, but it feels like the real reason you ask your son to get AI feedback is to avoid being personally responsible for mistakes. You are not using AI as a tool; you are using it as a scapegoat in case anything goes wrong.
Having something else help doesn’t preclude reading with them - and it may also give better advice. Very rarely is anyone suggesting an all-or-nothing approach when talking about adding a tool.