Hacker News

This is a great framing. I also appreciated this similar idea in Matthew Butterick's blog:

> Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t. You’ve heard of money laundering? This is human-behavior laundering. At last—plausible deniability for everything. [1]

While in a corporation there's still a person somewhere who could be held accountable, AI diffuses this even more.

[1]: https://matthewbutterick.com/chron/will-ai-obliterate-the-ru...



> Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t.

Here's Frontier Airlines announcing proudly to investors that they will do exactly that.[1] See page 44.

Today: Call center. Avenue for customer negotiation.

Tomorrow: Chatbot efficiently answers questions, reduces contacts and removes negotiation.

[1] https://ir.flyfrontier.com/static-files/c7e0a34d-3659-49cc-8...


This is the infuriating part of dealing with Amazon customer service these days. Anything that doesn’t fit into the box of what the chatbot will do is met with “I understand your frustration, …” and it’s like, no, there is no understanding here!

Just like call center trees have escapes to real service by pressing 0 ad nauseam or swearing loudly, these AI service agents will have ways to get to real people, and they’ll be documented in the usual places.


How many times have you read about someone locked out of their <insert major tech company> account and having no recourse except taking it to HN and hoping a human on the inside reaches out?

Imagine if that was every corporation, and some of them had zero humans on the inside.


It is not a good future, and at least in Amazon’s case [email protected] works as an escalation route.

My gmail account with the same username got banned around 2007, in the middle of college, so I definitely feel the pain of no recourse. I lost everything - calendars, todo lists, email - and had no way to get the underlying data or do anything about it.

My hope is that at the end of the day a corporation is made of people (legally, this is why they have First Amendment rights in the USA), and this will prevent the scenario you imagine, because the money of those people will be at risk.


Whilst it's interesting to imagine every company being run by the most aggressively rent seeking MBA-like AI, I don't think that will ever be the entire market.

I would guess it all comes down to risk. The future won't be uniform; it'll be unevenly diffused, just like the present.

People will make a value judgement on the types of corporations they want to deal with. The sort of company that has no employees will by necessity have to be very cheap compared to those that provide interactive human service.

Dealing with AI firms will be like dealing with an anonymous Chinese eBay seller. You'll effectively have no legal recourse as such and they won't discuss things but complaining might get you a replacement item or a full refund.

If it doesn't the product will have to be cheap enough to write off as a bad buy.

Let's hope cheap AI-powered corps don't suck all the profit out of the market and we lose all firms capable of providing real services.

Or we end up paying for human service that's been quietly outsourced to AI much like we do for many big name manufactured goods.


Why would "the markets" provide that option? AI corporations will be phenomenally successful predatory value-extraction machines. Shareholders will vastly prefer them over more humane corporate structures.

This already happens. It's why PayPal, YouTube, Amazon, and the rest can shut down small-fry business accounts with no comeback. They're monopolies and they don't care, because they don't need to.

The next stage will be shareholders using AI to make their investment choices. This will - automatically, with no recourse - drive money towards AI corporations while starving human-run businesses of investment.

Essentially AI is just an amplifier of existing economic and political trends - a programmable automated economic predator. Because many of those trends are dysfunctional and insane, we're going to have a lot to deal with.


> Why would "the markets" provide that option?

They'll provide it if enough people want something that's made exclusively by humans and not AI.

In a world of ubiquitous McDonalds some people still want artisanal food even if it costs more.


Depends on whether you view humane business practices as a cost-center or a value-center, though - in pure economic terms.

This is an unsolved question, because when you get down to it our businesses are still run by humans, and humans haven't changed much in the last 10,000 years from when we lived in tribes in Africa and were predated by large carnivores.

Look at the utterly insane culture which develops in places like LinkedIn and tell me that all executives are making calculated decisions with no ego, just efficiency. They're not - they're clearly not.

The fear of AI capitalism is the fear that it'll do to everyone what is currently done to the working class, given the opportunity. That's a realistic fear! But it's not guaranteed, because one of the significant arguments against abusing workers is that when you stop, they're more efficient and more productive. Consider the delta between what the data says about WFH and the complete conviction some managers have that, despite this, they've got to drag people back into the office (Blizzard Entertainment is a good example of manifestly creating problems by refusing to let a bad idea go on this front).


The Chinese eBay seller brought to mind the concept of the Chinese Room[0] and how it is a reflection of current LLMs. Perhaps I'm also influenced by a recent read of Blindsight.

[0] https://en.wikipedia.org/wiki/Chinese_room


> we will delegate decisions to AI systems because they can get away with everything that we can’t.

This is one of the main reasons I quit the Facial Recognition industry; it is being used to delegate decisions, to remove responsibility of those decisions from those that need to be held accountable.

I worked as principal engineer on one of the top 5 enterprise FR systems globally, and the number of end-users fraudulently abusing the software blew my mind. Case in point: police are called for a street crime, they ask the victims which celebrity the culprit looks like, then put images of that celebrity into their FR software to collect suspects - followed by ordinary innocents who happen to look like celebrities being pulled into lineups and harassed by the police. And this practice is widespread!

That is just one example of how stupidly the people using our software will use it, potentially harming large numbers of innocents.


Unfortunately, even having humans in charge doesn't mean those humans will be punished for malfeasance. When was the last time you saw an exec personally pay for their bad conduct?


"Did you ever expect a corporation to have a conscience, when it has no soul to be damned, and no body to be kicked at?"



