It's less about painting a picture yourself; arguably there is little to no value there. OpenAI et al. sell the product of creating pictures in the style of someone else's material. I see this as direct competition with Studio Ghibli's right to produce their own material with their own IP.
I agree with this. I don't know how to create artistic styles by hand or using any creative software for that matter. All the LLM tools out there gave me the "ability" and "talent" to create something "good enough" and, in some cases, pretty close to the original art.
I rarely use these tools (I'm not in marketing, game design, or any related field), but I can see the problems these tools are causing for artists and others.
Any LLM company offering these services needs to pay the piper.
My thought is that whilst LLM providers could say "Sorry, I don't know", there is little incentive to do so: it would expose the reality that they are not very accurate, nor can their accuracy be properly measured.
That said, there clearly are use cases where, if the LLM can't reach a certain level of confidence, it should defer to the user rather than guessing.
This is actively being worked on by pretty much every major provider. It was the subject of the recent OpenAI paper on hallucinations. The behaviour is mostly caused by benchmarks that reward correct answers but don't penalize wrong answers any more than simply not answering.
E.g. most current benchmarks use a scoring scheme of:

1 - Correct answer
0 - No answer or incorrect answer

Under that scheme a guess has non-negative expected value while abstaining scores nothing, so a model optimized against it is never rewarded for saying "I don't know".
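A minimal sketch of the difference in Python, assuming a simple flat penalty for wrong answers (the exact penalty value is my assumption, not something from the paper):

```python
def binary_score(answer, correct):
    """Typical benchmark scoring: abstaining is worth exactly
    as much as being wrong, so guessing never hurts."""
    if answer is None:  # model abstained ("I don't know")
        return 0.0
    return 1.0 if answer == correct else 0.0

def penalized_score(answer, correct, wrong_penalty=-1.0):
    """Alternative: wrong answers score below abstention,
    so guessing only pays off when confidence is high enough."""
    if answer is None:
        return 0.0
    return 1.0 if answer == correct else wrong_penalty

# With wrong_penalty=-1.0, a blind guess at p(correct)=0.3 has
# expected value 0.3 * 1.0 + 0.7 * (-1.0) = -0.4, so abstaining
# (score 0.0) is the better strategy.
```

With any negative penalty there is some confidence threshold below which abstaining becomes the better move, which is exactly the incentive the current 1/0 scheme lacks.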
I don't think users understand the risks. I'm broadly accepting of mechanisms that protect end users. People's entire lives are managed through these small devices. We need much better sandboxing, almost a separate 'VM' for critical apps such as banking and messaging.
The whole notion of "Vibe Coding" was to accept the output regardless and prompt forward. Anything else is moving the goalposts. If you can't accept the outputs and you need an in-depth knowledge of the code, then these LLMs are not ready for the task.
I run Gitea too - seeing what is happening over at GitHub solidifies my decision.
Not too concerned about my public-facing repos; Amazon and OpenAI seem to love 'em!
I have ultimate control over my private repos (nothing juicy). I can't say I trust Microsoft not to do something I don't like at some point in the future.
Edit: I should say I wish Phabricator had gotten more love; that was a great tool!
If you are trying to get facts out of an LLM, you are using it wrong. If you want a fact, the model should use a tool (e.g. web search, RAG, etc.) to fetch a document that contains the fact (a Wikipedia page, documentation, etc.), then parse that document for the fact and return it to you.
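A minimal sketch of that pattern in Python. The Wikipedia summary endpoint is a real public API; `llm` is a placeholder for whatever completion call you actually use, and the prompt wording is illustrative:

```python
import requests

def fetch_source(topic: str) -> str:
    """Retrieve a grounding document instead of trusting model memory.
    `topic` should be a URL-safe page title, e.g. "Alan_Turing"."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
    resp = requests.get(url, headers={"accept": "application/json"}, timeout=10)
    resp.raise_for_status()
    return resp.json()["extract"]  # plain-text summary of the page

def answer_with_source(question: str, topic: str, llm) -> str:
    """Ask the model to extract the answer from the fetched text only.
    `llm` is any callable taking a prompt string and returning a string."""
    document = fetch_source(topic)
    prompt = (
        "Answer the question using ONLY the document below. "
        "If the document does not contain the answer, say so.\n\n"
        f"Document:\n{document}\n\nQuestion: {question}"
    )
    return llm(prompt)
```

The design point is that the fact never comes from the model's weights; the model only extracts it from a retrieved source, which also gives you something to cite.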
These tools are literally being marketed as AI, yet they present false information as fact. 'Using it wrong' can't be an argument here. I would rather the tool were honest about its confidence levels and offered mechanisms to research further - then feed that fact back into the 'AI' for the next step.
Thing is, even with users who don't use their quota, these AI companies are still losing money. This isn't a case of the small users subsidizing the large ones.
The true costs of AI have yet to emerge.