Business models based on conflicts of interest, obfuscation of important aspects, manipulation, surreptitious use of information, etc. will always create damage.
Conflicts of interest generate damage.
That is what AI products and services need to avoid.
Facebook is a scrapbooking company that requires very few resources to host tiny posts, …
… wrapped in a megacorp that harnesses vast computing resources to manipulate its users into watching ads for hours and making as many unnecessary purchases as possible, instead of occasionally perusing their friends' and families' posts.
All the dysfunction, the constant emotional hooks, the rage addictions being fed, the casino aspects of trashy doom scroll content, etc. grow like weeds into an ever more complex jungle, because … conflicts of interest.
It is easy to forget that only 1% of the company’s activity, or maybe 1% of that, is actually needed to host our tiny posts. An extremely basic task.
I like the article's push for regulation, as if that weren't just an excuse for the existing big players to erect a moat around their technology to protect it from competition.
IMO we have already got it wrong and put the cart before the horse. Political theater and existing leaders in the space talking their book.
Regulate what exactly?
Does regulation include scikit-learn? If it is obvious that it does not include scikit-learn then what exactly makes it obvious?
Is the target GPU use, so we're regulating GPUs? Are we just regulating language models?
Is HN defined as social media?
How can we possibly get things "right" when we don't even bother to define what we are trying to regulate?
It seems obvious that whatever regulation gets passed will have loopholes big enough to drive a truck through, making it completely useless. Either that, or we go insane and try to regulate linear regression.
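To make that last point concrete, here is a minimal sketch (with toy data invented for illustration) of how little "AI" an overly broad definition would sweep in. A scikit-learn linear regression is, in the end, just a couple of fitted numbers:

```python
# A complete "AI model" under a sufficiently broad definition:
# ordinary least squares fit on four invented data points.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])  # toy inputs
y = np.array([1.1, 2.9, 5.2, 6.8])          # toy outputs, roughly y = 2x + 1

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # the whole "model": one slope, one intercept
print(model.predict([[4.0]]))         # roughly [8.85]
```

Any statutory definition loose enough to cover frontier language models risks also covering those ten lines; any definition tight enough to exclude them invites the truck-sized loopholes.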
This is the fundamental problem with most laws and regulations. Language is limited and attempting to clearly define exactly what the law covers is nearly impossible, especially when it has to cover hundreds of millions of people.
I wish lawmakers would first consider whether a proposed law is even complete and clear before deciding if they agree with it. If a law is full of vague descriptions, logical holes, or so much nonsense that the average person couldn't read and understand it, there's no point in considering the bill's goals or intent.
I'm inclined to think taxing AI used to displace human labor is likely to be far more effective. But I'm not aware of any bills experimenting with this right now.
The mistake was the existence of "social media" in the first place. The internet was supposed to connect all of our computers, and let anyone serve content. Internet "providers" selling "Internet Access" instead of connectivity made sense in the days of dialup, but that entrenched directionality removed most of the power of the internet from the hands of the people, turning us from internet Citizens to "users" or "consumers".
The best way to avoid a repeat of this history is to run your AI locally, and retain control.
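For anyone who wants to try that, here is one minimal sketch using the Hugging Face transformers library; "gpt2" is only an example, and any text-generation model whose weights you have downloaded works the same way:

```python
# Minimal local text generation with Hugging Face transformers.
# "gpt2" is just an example model; swap in any local text-generation model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # weights cached locally after first download
result = generator(
    "The internet was supposed to connect all of our computers,",
    max_new_tokens=40,
)
print(result[0]["generated_text"])  # inference runs on your machine, no API calls
```

Once the weights are cached, nothing leaves your machine, which is the "retain control" part.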
How? It started with all the wrong steps. Everything around how OpenAI promoted it is toxic. That company is worse than Facebook. If you want to get AI right, then put OpenAI out of its misery.
The FUD, the dumbing down to bad-sci-fi level, the spam everywhere for their product, the call for legislation in their own favour (largely aided by the bad sci-fi image they've created around AI) while trying to seal off entry for small players, the total disregard for privacy and copyright, and so on.
I think saying that we got social media "wrong" today is not unlike looking at 1920s and 1930s cars and saying that we got automobiles wrong. We've only barely figured out what components social media is even made up of at this point.
The counter I would offer is: what are the time-based consequences of getting something wrong? How quickly do the wheels fall off, so to speak?
With cars, which we can say came to be around the 1920s, we didn't start to care about safety or environmental concerns until maybe the 1960s/70s. In that sense, we did some damage and corrected our trajectory 40-50 years later and have been improving them further ever since.
I would say social media evolved much more quickly and the consequences accumulated faster. Specifically around protecting kids: from the beginning we should have had protections in place to keep kids off social media or in a protected version; likewise, parents shouldn't have been allowed to plaster pictures of their kids online, effectively monetizing them.
So social media went wrong, and it went wrong pretty quickly. The odds of us fixing it before a lot more damage is done are low, because social media companies also have a lot more power and lobby our government in ways the auto industry could only dream of.
The pace of AI adoption will likely match or exceed that of social media, and the consequences of "getting it wrong" will accumulate that much more quickly. Not saying we turn it off, but we should have protective frameworks in place early on, or we will do irreversible damage in excess of social media and at a quicker pace.
I'm afraid it is already moving in the wrong direction when you look at the plugins of ChatGPT Plus: lots of companies that want to sell their services :-(
Isn't AI more like a personal instructor, not social media? Weird comparison imo.
We didn't get search engines too wrong, and there are good search engines as well as corporate ones.