Banning everything because we don't like change isn't a great plan either.
Modern society is a product of the Industrial Revolution, which had many nasty side effects (it destroyed the traditional structure of society, poisoned the air and water, left large swaths of land damaged by mining, etc.).
Yet if, say, Prussia or France had decided to ban industry early on because of the very real harms observed in England, they would later have been conquered mercilessly by their foes.
Technological bans often come at the price of stagnation. Facebook is at least somewhat under Western control and can be regulated. Chinese AI labs won't be, and we will only see their results when smart missiles start raining on our heads.
Who said anything about banning everything though? That smells like a strawman.
"Fools rush in" is a warning against diving headfirst into doing whatever you feel like doing just because you can; it's not advocacy for "banning everything".
Look around, we are living in a veritable vetocracy.
There is no need for anyone to advocate banning everything. It is a result of aggregate efforts of a diverse set of activists each trying to ban something.
You dislike AI training. Someone dislikes high speed rail, or possibly any new rail at all. Someone dislikes Starship. Someone dislikes new housing or rezoning of their city. Someone dislikes new biotech. Someone dislikes new factories. Someone dislikes new wind turbines. Someone dislikes new nuclear power plants. And nowadays, the fear of harm and doing harmful things is in vogue, so the standard line of defense is to argue potential harms, and given that no activity in the world is completely harmless, the result is stasis and a paper war over everything.
It is as if there were a group of activists who each dislike individual letters. Even though Alice only hates H and Bob only hates Q, if enough people do this, the entire alphabet is going to be banned soon.
There is a certain irony in the fact that the Internet, a revolutionary technology in itself, has enabled various reactionary folks to organize themselves to stop their $Nemesis, for whatever value of $Nemesis. Where once there were only isolated naysayers, a pressure group is now easily put together.
> when they can’t even outline a real harm that the ban is intended to avoid.
Unsurprising when { something } (for various values of something) has never been done before.
The wise course of action is not to rush full bore into, say, handing out sticks of dynamite, dumping hundreds of thousands of gallons of waste, or allowing third parties in other countries to digitize citizens, etc., until time has been taken to look into potential harms.
This is very much in the eye of the beholder. In my opinion, we already suffer from an abundance of suffocating safetyism, which may be a consequence of aging of our societies.
"time has been taken to look into potential harms."
And I am also very skeptical of the idea that a theorizing committee can reliably predict the future, including the future harms and rewards of a technology that has barely started to develop.
Past experience tells us that we cannot know such things in advance.
"They also don't have to be jumped into head first w/out checking for snags and clearance either."
So new technologies should be at least put on hold until someone checks them for snags and clearance to an extent satisfying for ... whom? You? An average person? A committee? The parliament? Some regulatory body?
Again, what you think of as wisdom seems like obsessive safetyism to me. As someone else here argues, you shouldn't have the right to stop or delay random things "just in case" unless you can demonstrate some concrete harm.
The world shouldn't be a carefully padded kindergarten for adults. Heck, even current kindergartens for kids are too safetyist by far. Shout out to Lenore Skenazy and her free-range parenting.
Laws and regulations have downsides, too. And if you locally regulate some dual-use tech out of existence, it is quite possible that you will lose a subsequent war.
In this particular instance, we are talking about AI analyzing pictures, and looking at the Russo-Ukrainian war which is developing into a massive drone-fest, you cannot ignore the military ramifications thereof.
It is possible to jam a remotely controlled drone; it would be much harder to neutralize a drone that is capable of reliably detecting targets on its own.
> a drone that is capable of detecting targets reliably on its own
Yes, I guess in the future a lot of people will be killed by autonomous swarms of drones that just don't like the way they look, sometimes malfunction, give plausible deniability for war crimes, etc. ("Who knows what the drones did; there was no communication. Also, the drones cannot possibly kill anyone who isn't a terrorist. Also, surely it was the drones of the other side.")
That's actually a great example of something I'd like to see banned. It would require international cooperation and arms control agreements, which unfortunately does not currently seem likely to happen.
I don't know what part of the west you are pointing at, but that's pretty much how it works here in Australia.
That's why our farmers are getting Parkinson's, and cars are too big for the roads, and there is too much sugar in everything, and social media platforms can broadcast whatever they want with no repercussions.
I was just pointing out that's how the West works. Sheesh. I'll just go back to cooking my dinner in a Teflon pan, on my artificial stone benchtop, in my asbestos-clad kitchen, while smoking cigarettes, and something about lead paint.
How about we empower panels of experts in their fields to make decisions based on rational and informed consensus and then revise those decisions every so often, so that we don't have to personally choose to ban everything or nothing?
Western society today, but it didn't start this way.
The nanny state is really only a couple of decades old.
Government has realized it can take significant powers if it can convince a subset of the populace that it is the only way to protect them from some type of harm.
So I live in a nanny state, and I hate the fact that my freedom is restricted. Gun ownership of any kind is all but prohibited, for example.
However I really wish the discourse around this stuff would shift a little bit. I want my government to protect me from huge tech companies and the massive infringement on our privacy, community and culture. Like the government should serve the people, both by not restricting the freedoms of individuals too zealously, but also by protecting people from genuine threats that they can’t or won’t protect themselves from, such as social media companies exploiting network effects that make it basically impossible for users to make alternative choices.
If you think otherwise please explain it to me because I really, genuinely want to understand.
Most new inventions have good outcomes and bad outcomes.
The benefit of letting them play out for a while is that you can see how things ultimately turn out, keep the aspects that are worth keeping, decide which aspects are worth keeping with changes, and outlaw the aspects that are harmful.
It’s a natural experiment where you keep the wins, cancel the losses, then repeat the process, making societies iteratively better.
If you just pre-emptively ban everything before any significant real world harm is apparent, then you lose the ability to be selective and keep the wins.
This is Europe’s big mistake. They’re too eager to ban on gut feel, which means they ban too much, which places their industries at a disadvantage relative to other countries where iterative innovation can continue.
I don't disagree with what you’re saying in principle. However, the Facebook experiment has been going on for decades, with multiple examples of unethical behaviour and a clear lack of any respect for privacy. The same can be said about basically any large tech company.
I just can't get my head around how anyone can think that posting pictures to social media in the first place reflects a desire for privacy, or how using my photos as training data is a worse invasion of privacy than allowing millions of people to see them.
It is possible that the 'nanny state' is both good in some ways and bad in others. You and I benefit in uncountable ways every day from the protections granted by such a system, without realizing it.
Nuance is tough. It is easier to find a simple thing to blame and advocate against it. But nothing is simple -- governments are composed of people and those people have inside them conflicting motivations, each person's different than the other -- and most of them are not malicious. The same with business, and with the public.
Let's look at issues on their own and not try to categorize everything as part of a black and white choice.
Society benefits from allowing new inventions then banning those that turn out to be harmful, once the harm is clear.
The result is that you keep the good inventions and discard the harmful ones.
What people are proposing here is to preemptively ban new inventions even when there’s no evidence of harm. It’s not a good idea because while it will stop the bad inventions, it will also stop the good ones.
There is a concept called 'risk'. Making any choice, yea or nay, comes with risks, and they should be evaluated and acted upon according to the risk tolerance of people at the time. You are saying that we should disregard any rational evaluation of potential risks on one side and always choose only one option. Is that a logical position to hold?
No, I’m saying we shouldn’t be so quick to jump to banning things without a clear reason.
This thread is full of people suggesting Facebook be banned from behaving this way based on little more than gut feel. They aren’t even capable of giving a hypothesis for how this could be harmful.
And to think if Facebook just let people opt-in we wouldn't even be having this conversation. Harmful or not, did Facebook, who is bringing in billions in profit every three months, really have to do it this way?
That's dangerous, because mitigations after the fact can be very far-reaching. This is risky not only for users, but also for companies. So let's imagine for a moment that something like this needs to be mitigated:
Who used any resulting model to create anything at all? In which cases did the usage produce something too similar to a real person? How do we track all that down and delete it? Then we would need IT people coming over to the company in question to go through their systems and ensure that all remnants are deleted, which can only happen after looking at possibly millions of logs to know who got those too-similar results in the first place.
Making things right after the fact always has the potential to be magnitudes more work than not allowing someone to use some personal data in the first place.
> But public policy shouldn’t be based on what’s easiest, it should be based on what’s best for society, and that requires considering the trade offs.
Surely that is not simply grabbing people's personal data and using it for new purposes without consent.
> Right now there’s no evidence that anything Facebook are doing causes real world harm, so the trade off favours allowing it.
If I had been living under a rock, I might have claimed this. However, FB is complicit in genocide, so nah, I'm not so sure about that claim. I would rather we keep a very close eye on them.