The thing is, people can make harmful art themselves. Photoshopping people's faces onto nudes and depicting graphic violence have been around since digital photography, if not painting in general. I mean, look at all the gross stuff that is online now and was online well before these neural networks.
The issue with these neural networks isn't the content they create; it's that they can create massive amounts of content, very easily. You can now do things like: write a Facebook crawler that Photoshops people's photos onto nudes and sends those to their friends; send mass phishing emails to old people with pictures of their grandkids bloodied or in hostage situations; put out so many deepfakes of an important person that nobody can tell whether any of their speeches is legitimate. You can also create content with no graphic design skills whatsoever, and create it impulsively, which means more gross stuff online.
Spam, misinformation, phishing, and triggering language are already major issues. These models could make them 10x worse.
Where today it takes some far-from-Jesus deviant artists a whole day to draw a picture of Harry Potter making out with Draco Malfoy, with the power of AI, billions of such images will flood the Internet. There's just no way for a young person to resist that amount of gay energy. It's the apocalypse foretold by John the Revelator.
> It's the apocalypse foretold by John the Revelator.
Less than an hour ago, I literally read a chapter of Inhibitor Phase featuring a ship called "John the Revelator". I haven't otherwise seen that phrase written down for years.
Spooky (and cue links to the Baader-Meinhof Wikipedia article).
> Spam, misinformation, phishing, and triggering language are already major issues. These models could make it 10x worse.
Or 10x better: the barriers to entry for doing this kind of thing right now aren't high enough to make it not happen... they're only high enough that it's sufficiently hard to pull off that people feel comfortable assuming most of the content they see is legitimate. In a world where nothing is necessarily legitimate, I'd expect a massive shift in people's expectations.