Hacker News
AI is entering an era of corporate control (theverge.com)
21 points by gmays on April 5, 2023 | 12 comments


I like how states are introducing more AI bills. Not all bills are created equal. Are those bills going to benefit society at large or cater to special and corporate interests?


At this stage I just assume most AI laws being passed are the equivalent of red flag laws: when cars were new, some places passed laws requiring a person to walk ahead of the car waving a flag to warn that it was coming.

In 20 years most of those laws coming into being now will likely be looked at as ridiculous.


Welcome to the "laboratory of the states". Where national and international organizations have to comply with dozens of different rule sets, serving thousands of different interests.

It sounds like a dumb idea, but when you really think about it, it's a moronic idea.


> Welcome to the "laboratory of the states". Where national and international organizations have to comply with dozens of different rule sets, serving thousands of different interests.

Are you advocating for one of the following: (or something else?)

1) no new laws are passed, so national and international organizations can operate with new technology unimpeded, even if laws are behind the times

2) national laws are passed instead of local laws, even though there might not be agreement on what these laws should be for a new and developing situation

I've often thought that the "laboratory of the states" is a good approach to new issues. Ideally consensus eventually develops, perhaps leading to federal laws (following agreed upon procedures).

Or should we go the other direction and have a one world government, because one size fits all? If states can't pass their own laws, why should nations be allowed to do so? We'll just have one group of leaders make decisions for everyone on the planet and we'll all be better off, especially large companies that don't want to bother trying to bribe so many different congresses and parliaments and statehouses and city councils, etc. /s

I admit to being dismayed by how often I hear that we should defer to the best interests of national and international organizations, as if bigger is automatically better.


Part of my objection to the "laboratory of the states" is that we don't learn anything from 50 bad laws. I suppose we learn 50 ways not to do it, but that's not really narrowing it down. Companies don't have the resources to bribe local legislators; they also lack the resources to educate them.

And part is that the Internet crosses state boundaries too efficiently. As you note, national boundaries aren't all that much less porous, but there is at least some national control even in countries without a Great Firewall.

States simply lack any tools for implementing Internet regulation. Either everybody ends up following the most restrictive rules, or everybody ignores all of the state rules until a national framework is established (as was done with sales taxes).

Of course all of that is just pointing out what's wrong, without suggesting a better solution. This particular comment was really about a much bigger issue: state laws being used abusively, to restrict individual freedom. This is a bigger political issue in the US, independent of "AI".

Setting aside my fist-shaking, you did ask if I had thoughts on the right thing to do. And unfortunately, the best I can come up with is this: any regulations will be outdated even if they could somehow be thoughtfully considered.

And I haven't yet seen anything that actively needs to be regulated. Regulations are written in blood, and there hasn't been any blood yet. There will be, I'm certain, but I'm equally certain that it will happen no matter what regulations are passed, at any level.

Actually, there has been blood. An AI talked somebody into killing themselves. But I don't think there is any regulation that is going to solve that shy of a permanent ban, and I don't think that's going to be considered.

So much as I hate to say this, I'm afraid we're going to have to white-knuckle our way through this for a while.


I appreciate your thoughtful reply.

> This particular comment was really about a much bigger issue: state laws being used abusively, to restrict individual freedom.

I agree that is a bad thing. Ideally the constitution serves to protect people's rights, but that depends on the courts.

Personally, I don't think it helps that corporations are considered people (whatever happened to limited liability being granted in return for a public good, rather than pure profit with no regard for the commons?), or that political money is considered free speech (media does a good job of using dollars to influence voters).

> thoughts on the right thing to do

It's a big topic, and there are considerations like the first amendment (is free speech for corporations a good thing?) and the interstate commerce clause.

I don't think it's unreasonable for states to outlaw things like selling self-driving cars to the public at this time (they need to be safe for other drivers and pedestrians, and a Tesla can't pass a driver's license test yet). Or outlawing facial recognition in public (stores and advertisers will survive). Or banning the use of facial recognition by police departments (or at least requiring warrants and reporting/oversight). Maybe we should make it illegal for an AI to masquerade as a human. Or maybe AI could be regulated as a munition, like encryption was a couple of decades ago (although AI is dangerous to people, as opposed to governments).

We seem to be handling so many issues at such a large scale: international companies, national laws and regulatory bodies, global communications. Some serious trust-busting might help, but our politicians seem to have other priorities.

> I'm afraid we're going to have to white-knuckle our way through this for a while.

Theoretically we can outlaw anything, as long as we collectively agree. We've outlawed plants, nudity, sleeping in the wrong spot, certain sexual acts between consenting adults in their bedrooms, etc. We could have laws whitelisting new things, instead of assuming the public won't be hurt. I like people having the ability to do their own, different things. I'm glad there are groups like the Amish out there, for example.

I do agree with your statement above. I don't assume that there is a magic fix for this, and most of us are going to be along for the ride.


We need less laws not more; these laws will hamper progress and favor special interests.


> We need less laws not more; these laws will hamper progress and favor special interests.

Who needs less laws? Do AI companies need less laws? Retail businesses? The police? Or the general public? How come?

By "less laws" do you mean we should pick one law for everyone, or do you mean we shouldn't have laws about this stuff at all?

What do you mean by progress? New technology? An increase in personal income? Gross national happiness? Stock market returns?

Who are the special interests being favored by these local AI laws? FAANG? People who don't want to be tracked? Stalkers? School systems?

Were you thinking of some specific local AI laws when you wrote your comment?

It's my impression that there are a variety of AI related local laws out there. It's a trendy new technology, ripe for profit. There's new power to grab. Many people are wary. Even more are unaware.

Without any detail I can't tell anything about your position on this: whether you are against local facial recognition bans, in favor of nationwide Minority Report-style tracking systems, etc.


Great! Firstly, it's a totally natural part of technological progress that long-term and questionable research topics live in academia as they develop and only become private enterprises when they're mature enough, so that seems fine.

Secondly, yes, lots of money is being spent to create these language models, but again, that's natural! This is like when it cost a million dollars to sequence a genome: that was fine, it was an emerging technology, and as we got better at it the cost dropped precipitously.

Finally, regarding large corporate giants and AI ethics, I think we're actually in a good position: the key players are established, and as a result they understand the reputational risk of putting out a product that does something terrible. People have actually learned from Facebook's mistakes. So all in all, I think we're in a pretty good position.

Oh and all the talk of AGI is bollocks and we shouldn't worry about it.


The growing dominance of industry players in AI development raises concerns about unchecked progress and misuse. As AI demands massive resources, academia has been left behind. This shift underscores the need for increased regulatory oversight and collaboration to address corporate self-regulation and to strike the right balance between AI innovation and responsible deployment.


Agreed, but right now they're only pushing papers, and what for? So companies can then move to the most favourable state?

This should be done at a federal level, and the US needs to be the global driving force behind it, ensuring all countries adopt and adhere to the same principles. We need the equivalent of the International Atomic Energy Agency to manage this going forward.

Could this be even more dangerous than a nuclear bomb?


Is this LLM generated? Sure looks like it. A bit stiff.



