
I’m with you. I think you did a good job of summarizing all the places where LLMs are super practical/useful, but agreed that for prose (as someone who considers themselves a proficient writer), it just never seems to contribute anything useful. For those who are not proficient writers, I’m sure it can be helpful, but it certainly doesn’t contribute any new ideas if you’re not providing them.


I am not a writer. My oldest son, 16, started writing short stories. He did not use AI in any aspect of the words on the page. I did, however, recommend that he feed his stories to an LLM and ask for feedback on things that are confusing, unclear, or holes in the plot.

Not to take any words it gives, but to read what it says and decide whether those things are true, and if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.


Have you looked for:

- Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.

- School (sometimes library) writing workshops. This helps students develop bonds with their peers and helps both students: the ones giving feedback are learning to be better editors.

Both of these offer a lot of value in terms of community building and also getting feedback from people vested in the craft of writing.


Good feedback. We live a somewhat unusual lifestyle: we are digital nomads living on a sailboat. I think some of that is possible, and I will recommend he look for some online writing groups, but the places we generally sail to are countries where schools/libraries aren’t going to have those types of things. It is challenging enough flying him back to the US to take AP exams.


The open question is whether someone who learns this way will actually develop taste and mastery. I think the answer is mixed: some will use it as a crutch, but it can also give them a little bit of insight beyond what they could learn by reading, and inquisitive minds will be able to grow discerning.


Large Language Model, not Large Fact Model.


This is very sad.


Why? Seems like a good idea, relying on the LLM to write for you won’t develop your skills, but using it as an editor is a good middle ground. Also there’s no shame in saying an LLM is “better” than you at a task.


Creative expression is also about relationships with other people and connecting with an audience. Treating it like product optimization seems hollow and lonely. There's friction to asking another person to read and give feedback on something you wrote, but it's the kind of friction that helps you grow.


Art is fundamentally a human activity. No amount of artistic work can be delegated to a machine, or else the art is dehumanised.


This seems like it would ban drawing tablets, musical instruments, and a lot of other things which seem silly to ban.


In this particular instance the medium is not the message, or the art.


It's not sad, it's using modern tools to learn. People who don't embrace the future get left behind.


You say that as if it's a justification, not an observation.

For one, the world doesn't need to be that way, i.e. we don't need to "leave behind" anyone who doesn't immediately adopt every single piece of new technology. That's simple callousness and doesn't need to be ruthlessly obeyed.

And for two, it's provably false. What is "the future?" VR? The metaverse? Blockchain? NFTs? Hydrogen cells? Driverless cars? There has been exactly ZERO penalty for not embracing any of these, all sold to us by hucksters as "the future".

We're going to have to keep using a classic piece of technology for a while yet, the Mark 1 Human Brain, to properly evaluate new technology and what its place in our society is, and we oughtn't be reliant on profound-seeming but overly simplistic quotes like that.

Be a little more discerning, and think for yourself before you lose the ability to.


Dan,

Do you have kids? Outside of discipline, and even there, I want to have a positive relationship with my sons.

My oldest knows that I am not a writer. There are a ton of areas where I can give legit good advice, and I can actually have a fun conversation about his stories, but I have no qualifications to tell him what he might want to change. I can say what I like, but my likes/dislikes are not what an editor offers. I actually stay away from dislikes on his writing, because who cares what I don’t like.

I would rather encourage him to write, write more, and get some level of feedback even if I don’t think my feedback is valuable.

LLMs have likely been trained on all published books; it IS more qualified than me.

If he continues to write and gets good enough, should he seek a human editor? Sure.

But I never want to be the reason he backs away from something because my feedback was wrong. It is easier for people to take critical feedback from a computer than from their parents. Kids want to please, and I don’t want him writing stuff because he thinks it will be up my alley.


There is something deeply disturbing about your attitude towards making mistakes.

You think you shouldn’t give advice because your feedback is not valuable and may even cause your son to give up writing, but you have so far given no reason why AI wouldn’t do the same. From the entire ChatGPT “glazing” incident, I can also argue that AI can give bad feedback. Heck, most mainstream models are fine-tuned to sound like a secretary that never says no.

Sorry if this sounds rude, but it feels like the real reason you ask your son to get AI feedback is to avoid being personally responsible for mistakes. You are not using AI as a tool, you are using it as a scapegoat in case anything goes wrong.


> LLMs have been trained on likely all published books, it IS more qualified than me.

It has also been trained on worthless comments on the internet, so that’s not a great indicator.


> But I never want me to be a reason he backs away from something because my feedback was wrong.

Do you want an LLM to be the reason? You can explain that your feedback is opinionated or biased. And you know him better than any machine ever will.


Exactly. I would rather read his stories and discuss them with him. My advice on anything outside of pure opinion is invalid.


Having something else help doesn’t preclude reading with them - it also may have better advice. Rarely is anyone suggesting an all-or-nothing approach when talking about adding a tool.

