
Fixing the stable door after the horse has bolted?

Yep. Great plan.



It’s a public company.

If it ever actually turns out to be a problem, we’ll just make a law that requires them to delete the data / model.


It’s a foreign public company.

If Australia makes a law requiring them to delete the data, Facebook will just use an offshore copy, since they've already exported it.

Facebook/Meta is not a company with a moral compass. If it can get away with something, regardless of the harm it causes, it will.


How is that any different from banning it upfront and Facebook continuing to do it anyway?

Australia’s leverage against a US company is the same whether they try to enact a ban now or later.


Delete it after it's already been publicly released, like Llama?


I guess f** affected people and kids, right?

https://www.bbc.com/news/articles/cpdlpj9zn9go.amp


Banning everything because we don't like change isn't a great plan either.

Modern society is a product of the Industrial Revolution, which had many nasty side effects (it destroyed the traditional structure of society, poisoned the air and the water, left large swaths of land damaged by mining, etc.)

Yet if, say, Prussia or France decided to ban industry early on because of the very real harms observed in England, they would have been conquered mercilessly by their foes later.

Technological bans often come at a price of stagnation. Facebook is at least a bit under Western control and can be regulated. Chinese AI labs won't be, and we will only see their results when smart missiles start raining on our heads.


Who said anything about banning everything though? That smells like a strawman.

"Fools rush in" is a warning against diving headfirst into doing whatever you feel like doing just because you can; it's not advocacy for "banning everything".


Look around, we are living in a veritable vetocracy.

There is no need for anyone to advocate banning everything. It is a result of aggregate efforts of a diverse set of activists each trying to ban something.

You dislike AI training. Someone dislikes high speed rail, or possibly any new rail at all. Someone dislikes Starship. Someone dislikes new housing or rezoning of their city. Someone dislikes new biotech. Someone dislikes new factories. Someone dislikes new wind turbines. Someone dislikes new nuclear power plants. And nowadays, the fear of harm and of doing harmful things is in vogue, so the standard line of defense is to argue potential harms; given that no activity in the world is completely harmless, the result is stasis and a paper war over everything.

It is as if there was a group of activists who dislike individual letters. Even though Alice only hates H and Bob only hates Q, if enough people do this, the entire alphabet is going to be banned soon.

There is a certain irony in the fact that the Internet, a revolutionary technology in itself, has enabled various reactionary folks to organize themselves to stop their $Nemesis, for whatever value of $Nemesis. Where once were isolated naysayers, a pressure group is now easily put together.


People are talking about banning something, when they can’t even outline a real harm that the ban is intended to avoid.

If that’s the threshold, then we are genuinely talking about banning everything by default.


> People are talking about banning something,

That'd be inglor_cz above.

> when they can’t even outline a real harm that the ban is intended to avoid.

Unsurprising when { something } (for various values of something) has never been done before.

The wise course of action is to not rush full bore into, say, handing out sticks of dynamite, dumping hundreds of thousands of gallons of waste, allowing third parties in other countries to digitize citizens, etc., until time has been taken to look into potential harms.


"wise"

This is very much in the eye of the beholder. In my opinion, we already suffer from an abundance of suffocating safetyism, which may be a consequence of aging of our societies.

"time has been taken to look into potential harms."

And I am also very skeptical of the idea that a theorizing committee can reliably predict the future, including the future harms and rewards of a technology that has barely started to develop.

The experience from the past is that we cannot tell such things in advance.


The experience from the past is that we cannot unfuck a goat.

New technologies don't need to be "banned" forever, what a daft idea.

They also don't have to be jumped into headfirst w/out checking for snags and clearance either.


"They also don't have to be jumped into head first w/out checking for snags and clearance either."

So new technologies should be at least put on hold until someone checks them for snags and clearance to an extent satisfying for ... whom? You? An average person? A committee? The parliament? Some regulatory body?

Again, what you think of as wisdom seems like obsessive safetyism to me. As someone else here argues, you shouldn't have the right to stop or delay random things just in case, unless you can demonstrate some concrete harm.

The world shouldn't be a carefully padded kindergarten for adults. Heck, even current kindergartens for kids are too safetyist by far. Shout out to Lenore Skenazy and her free-range parenting.


> we will only see their results when smart missiles start raining on our heads.

I'll remember to say that whenever anyone wants to apply any law or regulation to anything I'm doing


Laws and regulations have downsides, too. And if you locally regulate some dual-use tech out of existence, it is well possible that you may lose a subsequent war.

In this particular instance, we are talking about AI analyzing pictures, and looking at the Russo-Ukrainian war which is developing into a massive drone-fest, you cannot ignore the military ramifications thereof.

It is possible to jam a remotely controlled drone; it would be much harder to neutralize a drone that is capable of detecting targets reliably on its own.


> a drone that is capable of detecting targets reliably on its own

Yes, I guess in the future a lot of people will be killed by autonomous swarms of drones that just don't like the way they look, sometimes malfunction, give plausible deniability for war crimes, etc. ("Who knows what the drones did, there was no communication. Also, the drones cannot possibly kill anyone who isn't a terrorist. Also, surely it was the drones of the other side.")

That's actually a great example of something I'd like to see banned. It would require international cooperation and arms control agreements, which unfortunately don't currently seem likely to happen.


Nope, the required amount of mutual trust among nations would be unrealistically high.



