Hacker News | notsydonia's comments

Exactly. I also wonder what the end game is. If creating content becomes a loss-making exercise, people will logically stop, and the LLMs will have less and less content to 'train on.' And as even large news corps increasingly deploy internal LLMs, the deadening, banal style of LLM output, A.I. overviews, etc. will inevitably drive readers away. I use Perplexity for search in place of Google and it surfaces good links most of the time. But what do tech and media companies - even Spotify - think they will do when the artists, reporters and creatives stop feeding them? Or when readers don't want to read banal summaries of everything?


Had such a bad experience with Sonoma, multiple micro-glitches that stole time and focus, that I dropped back to Ventura and have been avoiding updates ever since. The weird thing is that despite numerous bugs being logged all over the place, Apple's 'new features' updates didn't seem to address any of them. They were mostly gimmicky new add-ons that don't mean anything to a power user.

I might be interested in trying Tahoe if they'd undone whatever awful policy puts a tonne of unwanted apps, desktop pics, etc. onto your desktop that cannot be removed. I don't want Apple News, the clock in the menu bar or even AirPlay - I purchased the computer, so why can't I have what I want on it without compulsory apps from Apple?


You can remove the desktop widgets, no idea about turning off the clock though…


I tried some WooCommerce plugin to accept Bitcoin on my sites a few years ago and PayPal baulked at it, or made it incredibly complicated, to the point that I gave up. Stripe said to wait until their cryptocurrency was released.

The main issue at the time seemed to be that it couldn't really work as a medium of exchange. E.g. $100 paid on Monday might be $80 or $150 by the time it actually cleared. Maybe I was really inept, but I got sick of the complexity and bailed, even though I liked the idea of having these options.

I wouldn't try PayPal's offering as they don't have any kind of quality support for merchants.


Great piece. There was a point last decade where literally every person I encountered, upon hearing about my site, would start badgering me to "get an app." If asked what this hypothetical app could do that the site didn't, the answer was that it would just be good to have an app.

Now in 2025 my biggest app pain is being in the already-useless live support chat for a phone co or utility company while they keep insisting that I'll get actual support if I download their stupid app. Again, they can't cite a reason - it's just "better." For data-brokering, sure - for the user, barely ever.


It's also a huge danger, as the system FB uses to tag and categorize photos is clearly flawed. Example: Meta took a business page I ran, with over 150K followers, offline because of a photo that violated their 'strict anti-pornography' etc. etc. policies. The picture was of a planet - Saturn - and it took weeks of the most god-awful to and fro with (mostly) bots to get them to revoke the ban. Their argument was that the planet was 'flesh-toned' and that their A.I. could not tell that it was not actually skin. The image was from NASA via a stock library and labelled as such.


Google banned (years ago) my secondary Google a/c, which I used at best once every few months - never even browsed from a browser with that a/c logged in, never ever used it for anything other than Gmail - I doubt YT etc. was even activated on it. The reason given was a kind of porn that I can't bring myself to type the name of. I didn't even think of appealing - I was so fucking scared and ashamed without ever having indulged in that.

But that was when I bought my domain and mail hosting service, and a few months later I had moved my email to my domain almost everywhere.

Years later, Google also killed the Google Play a/c on my primary Gmail (i.e. what was my primary email earlier) for lack of use - true, I had never published an app - and didn't refund the $25 USD, even though I had finished all the tasks needed to keep the a/c alive three days before the deadline. I had also asked them, at least five times over a span of 40 days, to tell me "how to add the bank a/c" to get the refund - because they kept telling me "add the bank a/c for refund" without ever telling me "how," or sharing an article or page that explained it. I could never find out how.

They kept the $25 - not even appeals were allowed/entertained. I got "final.. no further response" and that was it, literally no further response on it.

I stop to think sometimes why... just why we gave these trillion-dollar companies this much power - the likes of Apple, Google, AMZN, Meta, MSFT... why? Now we literally can't fight them - not legally, not with anything else. It seems we just can't.


> They kept the $25 - not even appeals were allowed/entertained. I got "final.. no further response" and that was it, literally no further response on it.

It's the kind of thing I'd send to the small claims court out of spite.


That doesn't happen in my country, and the charge is from some time back. I kept looking for records of that transaction but could not find them.


Not even, I’d reverse the charge with my bank/credit card company.


Venus, in her naked glory, I could understand at a stretch, but Saturn?


Somebody liked Saturn enough to put a ring on it.


Some Saturn photos you can find on DuckDuckGo look like a woman's breast, if you're AI enough


If you won’t take up arms for Venus, when will you?



Looks like someone took arms to me.


The AI isn't even wrong, naked planets are bannable: https://en.wikipedia.org/wiki/Sailor_Saturn


Just don't show Uranus!


Can’t really see much nakedness through all those CO2 clouds.


Mystery is the font of desire.


Thank God you didn't use Uranus.


One reads completely ridiculous cases like the one you describe and shakes one's head at those who preach the notion of creating ever more thickets of AI-"powered" bots as the front-line interface for our social services, customer support and other institutional interactions.

Idiocies like this are why AI should absolutely never (at least at any present level of technology) be an inescapable means of filtering how a human is responded to with any complaint. Truly, fuck the mentality of those who want to cram this tendency down the public's throat. Though it sadly won't happen thanks to sheer corporate growth inertia, companies that do push such things should be punished into oblivion by the market.


I worked on a project where one of the services was a model that decided whether to pay a medical bill.

Before you start justified screams of horror, let me explain the simple honesty trick that ensured proper ethics, though I guess at a cost to profit unacceptable to some corporations:

The model could only decide between auto-approving a repayment or referring the bill to existing human staff. The entire idea was that the obvious cases would be auto-approved, and anything more complex would follow the existing practice.
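For the curious, that approve-or-escalate split can be sketched in a few lines. Everything here (field names, the confidence threshold) is a hypothetical illustration, not the actual system - the only point carried over from the comment is that the model can never deny, only approve or defer:

```python
# Sketch of an approve-or-escalate triage: the model may only auto-approve;
# anything it is unsure about goes to existing human staff. It can never deny.
from dataclasses import dataclass


@dataclass
class Claim:
    claim_id: str
    approve_confidence: float  # model's confidence that this claim is payable


AUTO_APPROVE_THRESHOLD = 0.95  # assumed cutoff; tuned in practice


def triage(claim: Claim) -> str:
    """Return 'auto_approve' or 'human_review' -- never 'deny'."""
    if claim.approve_confidence >= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"
    return "human_review"  # humans handle everything non-obvious


claims = [Claim("a", 0.99), Claim("b", 0.60), Claim("c", 0.97)]
decisions = {c.claim_id: triage(c) for c in claims}
print(decisions)  # {'a': 'auto_approve', 'b': 'human_review', 'c': 'auto_approve'}
```

The safety property is structural: the worst a bad model can do is send too much work to humans, which is just the status quo.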


Mmmmhm, which means the humans now understand that they should be callous and cold. If they're not rubber-stamping rejections all the time, then the AI isn't doing anything useful by making a feed of easy-to-reject applications.

The system will become evil even if it has humans in it because they have been given no power to resist the incentives


> humans now understand that they should be callous and cold

Were humans working on health insurance claims previously known for being warm and tending to err on the side of the patient?


> Were humans working on health insurance claims previously known for being warm and tending to err on the side of the patient?

I know that in the continuously audited FEP space, human claims processors were at 95%+ accuracy (vs audited correct results).

Often with sub-2 min per claim processing times.

The irony is that GP's system is exactly how you would want this deployed into production: fail safe, automate the happy path, HITL on everything else.

With the net result that those people can spend longer looking at the more difficult claims (for the same cost).


All you have to do is take an initial cost hit where you have multiple support staff review a case as a calibration phase: generate cohorts of, say, 3 reviewers, where 2 have the desired denial rate and 1 doesn't. Determine the performance of each cohort by how much they agree, then rotate out who's in training over time, and you'll achieve a target denial rate.

There will always be people who "try to do their best" and actually read the case and decide accordingly. But you can drown them out with malleable people who come to understand that if they deny 100 cases today, they're getting a cash bonus for alignment (with the other guy mashing deny 100 times).

Technology solves technological problems. It does not solve societal ones.


I am not disagreeing, and I am not arguing for AI.

I am just saying that the perverse incentives already exist and that in this case AI-assisted evaluation (which defers to a human when uncertain) is not going to make it any better, but it is not going to make it any worse.


Actually it may, even if only slightly. Because now, as the GP says, the humans know the only cases they're going to get are the ones the AI suspects are not worthy. They will look at them more skeptically.

I totally agree that the injustices at play here are already long baked in and this is not the harbinger of doom; medical billing already sucks immense amounts of ass and this isn't changing it much. But it is changing it, and worse, it's infusing the credibility of automation, even in a small way, into the system. "Our decisions are better because a computer made them," which doesn't deal at all with the fact that we don't fully understand how these systems work or what their reasoning is for any particular claim.

Insofar as we must have profit-generating investment funds masquerading as healthcare providers, I don't think it's asking a ton that they be made to continue employing people to handle claims, and customer service for that matter. They're already some of the most profitable corporations on the planet, are costs really needing cutting here?


>"Our decisions are better because a computer made them"

This is the root of the problem, and it is (relatively) easy to solve: make any decision taken by the computer directly attributable to the CEO. Let them have some skin in the game; it should be more than enough to align the risks and the rewards.


The bot should have let ~5% of auto-accepted claims through to the humans, and then tracked their decisions.
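That audit loop is easy to sketch (the 5% rate and routing labels are hypothetical): silently divert a random slice of would-be auto-approvals to human reviewers, so the humans' verdicts can be compared against the model's over time.

```python
# Sketch of auditing auto-approvals: randomly divert a small fraction of
# them to human review so model and human decisions can be compared.
import random

AUDIT_RATE = 0.05  # ~5% of auto-approvals re-routed for audit


def route(model_decision: str, rng: random.Random) -> str:
    """Pass the model's decision through, except for a sampled audit slice."""
    if model_decision == "auto_approve" and rng.random() < AUDIT_RATE:
        return "human_review_audit"  # human decides; disagreement is logged
    return model_decision


rng = random.Random(0)  # seeded so the sketch is reproducible
routed = [route("auto_approve", rng) for _ in range(10_000)]
audited = routed.count("human_review_audit")
print(audited)  # typically around 500 of 10,000
```

This also fixes a blind spot of the fail-safe design: without sampling, nobody ever checks whether the "obvious" auto-approvals were actually obvious.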


Actually, the real issue for the humans was that it could mean a reduction in employment, which is why the union blocked deployment for a time until a deal was brokered.

It helps, as you might suspect from the "union" comment, that it wasn't an American health insurance company.


How hard would it be to tweak that model so that it decides between auto-paying and sending the bill to a different bot that hallucinates reasons to deny the claim? Eventually some super-smart MBA will propose this innovative AI-first strategy to boost profits.


Funnily enough, the large AI companies run by CEOs with MBAs (Alphabet and MSFT) seem to be slow-playing AI. The ones promising the most (Meta, Tesla, OpenAI, Nvidia) are led by pure technologists.

Maybe it’s time to adjust your internal “MBAs are evil” bias for something more dynamic.


In what way is MSFT "slow-playing" AI?


They are slow-playing the promise of what AI can, should, and will accomplish for us.

Nadella said this yesterday at YC’s AI Startup School:

== “The real test of AI,” Nadella said, “is whether it can help solve everyday problems — like making healthcare, education, and paperwork faster and more efficient.”

“If you’re going to use energy, you better have social permission to use it,” he said. “We just can’t consume energy unless we are creating social and economic value.”==

https://www.thehansindia.com/tech/satya-nadella-urges-ai-to-...


Thanks. I agree w the things Nadella said there. But it rings pretty hollow, given how hard every MSFT product is pushing AI. What would it look like if they weren't "slow-playing" it?


That’s fair. I was looking more at the promises of what it can/will do than integrating it into products. The MBA CEOs seem more focused on solving business problems and the tech CEOs are more focused on changing the world.

This is all an aside from the original point, which was that I think it is unfair to pin the proliferation and promises made about AI on some cabal of MBAs somehow forcing it. The people building the tools are just as at fault, if not more.


Right, I can't sustain for a moment the idea that the guy who fumbled Recall like a stack of wet fish dipped in baby oil is actually a wise sage full of caution. I permit myself one foolish idea a day and that's not going to be the one for any day of the week.


Indeed, it is often the case that what powerful people say is very different from what they do.


Harder than just automatically rejecting every claim.


Funny you should mention it. I saw this piece today from Bloomberg, "Call Center Workers Are Tired of Being Mistaken for AI."

https://archive.ph/rB2Rg


They were probably using a bloom filter in the backend
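(For anyone who missed the joke: a Bloom filter is a probabilistic set that can answer "probably present" for items it never saw - it gives false positives but never false negatives, much like a classifier that flags Saturn as skin. A toy sketch, with hypothetical names throughout:)

```python
# Toy Bloom filter: a bit array indexed by k hashes of each item.
# Membership tests can false-positive, but never false-negative.
import hashlib


class BloomFilter:
    def __init__(self, size: int = 64, hashes: int = 3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item: str):
        # Derive k positions from salted SHA-256 digests of the item.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item: str) -> bool:
        # True means "probably added"; False is definitive.
        return all(self.bits >> p & 1 for p in self._positions(item))


bf = BloomFilter()
for banned in ["nsfw_photo_1", "nsfw_photo_2"]:
    bf.add(banned)

print(bf.might_contain("nsfw_photo_1"))  # True (genuinely added)
# "saturn.jpg" was never added, yet a small enough filter may still
# report True for it: that false positive is the joke.
```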


Do you have a link to the picture, unmodified?


I enjoyed reading this, but it also made me think I must be a bit weird. Depending on what I'm working on and where I'm at, I keep notes in Apple Notes or Obsidian, extended descriptions on bookmarks, physical sticky notes, an actual journal, and Pages files on the desktop. Barely any of it is tagged and I'd call it 'notes' rather than a second brain, but I go through it all every eight to 12 weeks, cull what now seems irrelevant and try to act on the rest. I should probably learn how to use Obsidian properly, but I still don't get the 'second brain' terminology.


Also, leaving aside my previous two points on this and speaking as a person who consumes the internet, I don't want the apparently outmoded 'list of blue links' to be replaced by one A.I. overview.

As is well documented, the overviews can 'hallucinate' and less well-documented, they're bland. I'd rather have my search query met with an array of links, offering a variety of takes that I can then sift through.

This is especially vital for research, which is why I now use Kagi and also Perplexity, as the latter provides quality links. I may be wrong, but I believe it was started by former Google execs and uses some of the natural-language-processing mechanisms that made legacy Google so good.


Google can link to whomever they want, but stealing content and then saying it is not theft because they mix it up with content from other sites they've stolen from is not fair competition. It's more like the logic of car thieves who say that they took apart the Porsche they stole and its parts are now used with parts from all the other cars they stole - so did they really take the now-unrecognizable Porsche? The court still deems it theft. If Google left creators' and site owners' content alone and created their own content for A.I. overviews - i.e. became publishers - it would be annoying and would spark innovation as sites de-Googled, but this is not that.


It IS incredibly unfair, and it's also unethical, given that they're already scraping publisher content to feed their own A.I. Unless I missed it, this article didn't have details on how the revenue - if any - would be handled, but I presume there would be some 'ticket-clipping' on Google's part there as well.

I.e.: sorry about acquiring your content and then no longer linking to you, but in case anyone ever does find your pathetic indie site, we're now offering an array of solutions so you can ditch your long-standing but no-longer-needed e-commerce providers. You have made... $177 in micropayments this week, minus transaction fees of $17.77, but you cannot claim it yet as you have not met your ad revenue threshold. Need help? Go around in circles with our chat bots until you die - we'll keep your money regardless.


The person who devised this was most likely one Illia Vitiuk, head of the SBU's cyber-security department. Before that position he was an MMA fighter. In a 2023 interview he said he was inspired by "James Bond films and a life of adventure..." He was also apparently stood down, then recently reinstated, over some unexplained transactions and a family finance situation.

https://www.npr.org/2023/09/06/1196975759/ukraine-cyber-war-...


Nobody cares. What a star.

