What will hurt artists is when, in 10 years, all publishers demand that the vividness score (TM) be at least 95% “because that’s what drives sales”.
Which is exactly what will happen if authors don’t proactively stop it. Look at how the music industry has evolved over time.
How is this different from all the vampire novels that hit the shelves after the success of Twilight? Publishers have always preferred the money makers; only the measure has changed.
Nowadays writers can at least publish their books without needing a publisher, and I think some appreciate the "bad Silicon Valley stuff" that has made writing, publishing, and interacting with readers easier.
I'm on your side if it's about automatic content creation and style copying, but text analysis is not the real danger, especially when the usefulness of such statistics isn't even established.
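To be concrete about how shallow such a statistic can be: here's a minimal, purely hypothetical sketch of a "vividness" score as bare adjective density. The function name and word list are mine for illustration; nothing here reflects how the actual project computes anything.

```python
# Hypothetical sketch of a naive "vividness" statistic: adjective
# density against a tiny hand-picked word list. Not the project's method.
ADJECTIVES = {"vivid", "luminous", "restless", "ancient", "pale", "crimson"}

def naive_vividness(text: str) -> float:
    """Fraction of words that appear in a fixed adjective list."""
    words = [w.strip(".,;:!?\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in ADJECTIVES for w in words) / len(words)

print(naive_vividness("The luminous, restless sea broke on the ancient cliffs."))
# -> 0.333...  (3 of 9 words match; says nothing about whether the prose is any good)
```

A number like this is trivial to compute and just as trivial to game, which is exactly why optimizing for it would be meaningless.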
Or it could help me find the terser books I like. People will still have preferences, and if an author chooses to pander only to the largest market segment, I'd argue that's on them.
> How is this different to the current process, other than feedback is slower (if forthcoming at all) and less specific?
Let me rephrase your question: "how is it different to the current process, other than <the fact that it is different>?" :-). I would say that the answer lies in the question.
Sounds as though your view of the AI is purely positive, in that case. That's fair enough. The answer for other people may well not lie in the question (e.g. for all the people who don't like this development), but it did for you!
My point was that it is different: when humans read a book, they don't train a machine learning model. They can't read as many books as a machine, at the same speed, and they can't remember nearly as much as a machine can.
Humans and computers are fundamentally different, and it matters. You can't conclude that because something works for one, it will work for the other.
Right. Yeah I did not express myself clearly, sorry :). You were saying "how is it different other than X and Y?", and I wanted to say that X and Y are already enough for me to consider them different.
I am actually on the side that LLMs are a big problem for copyright, and I don't want my code and blog posts to be used in their training dataset without my consent. To me, at this scale, it's not fair use. IMO it's a bit like if Facebook said that it is fair use to leverage metadata about their users, because "someone who sees you in a public space talking to a friend knows that you are talking with that person, and it is the same for Facebook on social media". My problem is not that Facebook knows that I sent a message to a friend now, but rather that they know who writes to whom and when, at scale.
Similarly my problem is not that somebody could read my blog post, learn from it, and write another blog post. My problem is that LLMs automatically train on all written material they want on the Internet, at scale, and without acknowledging that all that material has a lot of value (and is copyrighted).
I think fair use should somehow consider the scale.
the difference is that a machine analysis is necessarily limited and can't account for all the factors that make a text interesting. so it is possible that such an analysis rejects texts that a human would not reject.
it is objective but potentially biased. and it could even be discriminatory if the input to this tool isn't diverse enough. but these are issues that can arise with any use of technology, and we have seen many examples of that happening. however i don't think it is problematic if writers use it to analyse their own texts for comparison. it is, however, a serious issue if publishers use it to decide what to accept.
Again, I don't particularly care about whether this is allowed to exist; I'm just here to laugh at the mindset that led to it being created. But sure, I can see this being used in harmful ways.
> It seems the project was about analyzing books, not about producing new books. How is that hurting the authors?
"Vivid books are really in this year, we're gonna have to ask that you aim for a Vividness(tm) of 85 or above."
"US books have 15% more adjectives, clearly this is proof of our superior detail-oriented work ethic!"
"What does the rise in Emotion(tm) have to say about the decline of society?"
So if I understand you correctly, you're saying that we should not create "metrics" for anything because said metrics could be misused by clueless people?
Ok, so who decides what's OK to analyze or not? Is there some obvious moral line I fail to see, that everyone would immediately agree on?
It seems the project was about analyzing books, not about producing new books. How is that hurting the authors?