There's tons of utter garbage commercial software. There's commercial software with intentionally built-in backdoors and information stealing. Most of it faces zero accountability, and neither do the sites that distribute it nor the ad networks that find viewers for it.
Just like there's basically no reputational harm anymore for leaking all your users' details, at least for most leaks.
No, it's not whataboutism. You claimed that this was a problem unique to open source. Pointing out that the same results manifest in non-FOSS software isn't whataboutism, it's a direct contradiction of your claim.
That's simply not true. That's a very modern interpretation of the Second Amendment.
The /NRA/ lobbied for some of the first modern gun control laws in California, because the Black Panthers were open carrying. Even in the days of the 'Wild West' there were plenty of towns and cities with enforced gun control ordinances.
Just as we have defamation laws despite freedom of speech being a constitutionally protected right, you can have gun control laws despite carrying guns being a constitutionally protected right. I don't think that makes what I said untrue. Such laws (in both cases) are kept on a very short leash, both from a legal and a cultural perspective.
Many, many people think the tactics they're trained in are exactly the problem - a few hours at most of training in de-escalation, little to no training in recognising or dealing with the mentally unwell, and hundreds of hours of training in putting a bullet in somebody.
How do you get it ready for prime time without using it and finding the problems? This is exactly the sort of experiment that finds problems - low stakes, fun to tell stories about, and it gives engineers a whole lot of reproducible bugs that they can work on.
The people who lose their prod database to AI bugs, or the lawyers getting sanctioned for relying on OpenAI to write court documents? There's some good there too - their stories serve as warnings to other people about the risks.
The select few lawyers on the right cases will probably be the only ones coming out ahead on this once the dust has settled.
The issue is that unpaid average people are being used, or rather forced, to act as QA and beta testers for this mad dash into the AI space. Customer service was already a good example of negative perception by design, and AI is just being used to make it worse.
A production database being corrupted or deleted, causing a company to fail, sounds good on paper. But if that database breaks a bank account, a healthcare record, or something else life-altering for a person who had nothing to do with the decision to use it, the only chance they have of making it right is probably the legal system.
So unless AI advancement really does force the legal system to change, the only people I see coming out ahead from the mess we're starting to see are the lawyers who actually know what they're doing and can win cases against companies that screw up in their rush to adopt AI.
As we see these beta products get piloted in the real world... and fail spectacularly over and over... it argues for more time with the QA team. A few weeks ago Copilot couldn't tell you how many times the letter B appeared in the word "blueberry."
A pause wouldn't work for those goals, but I think we could maintain plenty of research and experimentation without the whole bubble thing. Maybe 10% of current money-funnel levels, plus or minus a factor of two.