Among non-programmers, you always hear about some fool who fell in love with an AI girlfriend or whatever, but you never hear about the people who opened ChatGPT once, tried some things with it, said to themselves "huh, that's kind of neat," and then lost interest a day or two later, having conceived of no further tasks AI could help them with.
> having conceived of no further items to which AI could provide assistance
For me, the issue isn't that I can't conceive of work AI could help with. It's that most of the work I currently need to be doing involves things AI is useless for.
I look forward to using it when I have an appropriate task. However, I don't actually have a lot of those, especially in my personal life. I suspect this is a fairly common experience.
I actually hear about this fairly often. In quite a few of my college classes, there's a large focus on AI (even outside the computer science department). I'm surprised by the number of non-technical people who don't even think to use it, or otherwise haven't interacted with it except when required.
I find it surprising how many non-technical friends and family constantly anthropomorphize LLMs, regularly bringing up instances where they "asked AI" about this or that and it "told them" whatever. I'm tired of trying to explain that they are merely statistical sequence generators, don't have a mind, are occasionally completely out to lunch, and ultimately cannot be trusted. This is usually a losing battle. The sheer bullshit that "AI tells them" is often astonishing or ridiculous, but a lot of the time it's given undue weight and trusted anyway. The future is bleak.
I agree. There's also something to be said about it being another level of abstraction, only linguistic instead of technical, but failing to understand that the output is "random" is a recipe for disaster.
There's a reputation building that I hear about in real-life conversations: that it just doesn't work, gives incorrect info, and gets in the way. Multiple people have told me they're forced to use it for work and wish they weren't, or worse, that coworkers blindly follow it when it's wrong and then have to be told that they're misinformed and the LLM made a mistake. I think the Google AI preview really poisoned the well; people cite that one specifically quite often.
To what end is business moving today? The incentives of business are already divorced from the incentives of our species. Climate change is a direct result of this.
Guns, swords, and bombs are weapons. The same things, attached to fancy computers that can use them autonomously, are weapon systems. At least that's how I've always heard the terms used.
I guess I'm more surprised by the intensity of the backlash this generates here. I agree with you that mandating (weak) OS APIs like this is the right approach, but that alone wouldn't warrant such a severe reaction, right?
A big chunk of the problem with this kind of legislation, for me, is that it inherently indicates a failure to govern. I disagree with the premise of the solution, but even more so, it tries to legislate a specific engineering solution for our current systems rather than setting financial incentives or objective guidance, or establishing reasonably actionable and enforceable consequences.
While laws that target engineering decisions are sometimes reasonable, they are always accompanied by specific guidance from a credible academic institution (e.g. mechanical and civil engineering use private licensing bodies that develop specific curricula and best practices).
The only time this law will ever be enforced is punitively, alongside other charges, against major actors who are extremely limited in number. It is unenforceable for Linux, and trivial for Apple, Microsoft, and Google to add to their OSes. It's presumably easy to spoof; the law describes the requirement as minimal, but once again, there isn't a specification, so who knows. Websites won't be liable; they're getting a sweetheart deal here.
In practice, what this law does is absolve abusive platforms of any responsibility. It adds meaningless extra work and overhead for legitimate adult platforms while opening them up to new potential legal challenges, and it ultimately doesn't replace the responsibility it's removing.
This doesn't make children safer. This doesn't make the internet safer. This kind of legislation makes it easier to abuse children online by removing responsibility from platforms that are known to be dangerous to them, yet profit the most from their presence.
It's considered offensive to the strongly freedom-loving FOSS community, and it's basically legally required tech debt, which is annoying to all maintainers.
Code is speech. Open source projects are an exercise in speaking publicly. This law mandates particular speech in your otherwise Free as in freedom code.
How are you not outraged? People are missing the above forest for the "oh but it's a tiny little easy API and I don't see any downsides" trees.
Seems pretty reasonable to get annoyed at a law that at best will be useless and at worst dangerous, while directly dictating features in the tools we all use every day. All for no gain for anyone except maybe Meta and some other big companies.
Exactly. It's a beachhead for corporations, from which they will pry out even more mandatory metadata to market to, silence, and control people.
Also, not every jurisdiction defines "adult" and/or "legally able to use social media" the same way. Parents need to parent at the level-0 social layer rather than push this off to technical layers and everyone else.
It's a moral panic ruse in legislative form for greed and power.
I don't. I pretty much don't like talking in general, especially if I'm alone. Accordingly, no voice assistants; I don't think I've ever triggered one except accidentally.
I've seen at least a couple of services that promise to do just that come through this site.
Some people think these just pass the info through to an LLM to ask it "where was this image taken" which reminds me a little of when the police ask psychics for help.
Of course you can tell. If someone suddenly submits a mountainous pile of code out of nowhere that claims to fix every problem, you can reasonably infer that the author used AI. It's then equally reasonable to suspect that said author didn't take the time to understand the scope of the problem in detail.
This is the basis of the argument - it doesn't matter if you use AI or not, but it does matter if you know what you're doing or not.