Deepfakes start ringing in voters' ears (aipoliticalpulse.substack.com)
32 points by xalindsay on Oct 6, 2023 | 47 comments


All roads of AI advancement seem to be converging on dystopian outcomes.

It is not just elections, but the complete end of privacy, which also means the end of free societies, which I wrote about here.

https://www.mindprison.cc/p/ai-end-of-privacy-end-of-sanity

I still haven't seen any proposals that offer much confidence as a counterbalance.

If AI solves all disease, is it still worth it if we are living in nice gilded cages?


You’re telling me you got a nice gilded cage?


lol, not so much right now.

But for everyone who is worried about AI existential risk wiping out humanity, I'm not sure they have considered that an all-powerful AI could easily create a mind virus to control everyone. Why not? It would have billions of human resources at its disposal until they were no longer needed. You would be perfectly happy doing the AI's bidding as it rewires your brain to get a dopamine boost for your obedience.


This is a genuine possibility, but before that happens, the humans who have always been trying to control us will have ever more powerful tools, and I think that concern should generally be front of mind over self-aware AGI taking over. The Murdochs and Zucks and Musks of the world control huge platforms which can easily tip sentiment. Better AI tools to track opinions and guide viewers wherever the platform owners want is a more immediate concern, and hand-wringing over all-powerful AI, which is a long way off, is a great way to avoid addressing these more human and more immediate issues.

We do not actually know if an all powerful AI would try to manipulate humanity (it absolutely might, but we do not know), but we know for a fact there are a lot of people right now already trying to do that, and AI tools will make it easier for them to do so.


Yes, I agree. The real concerns are already here. Long term risk is impossible to evaluate.

I'm more concerned about the societal implications for AI used as "intended".

As I've elaborated on here:

https://www.mindprison.cc/p/ai-and-the-end-to-all-things


You're telling me you got a cage? What's the monthly payment?


There's been a very recent step change in "AI".

We would do well to remember the world before. The abuse of digital technology for surveillance, fraud, impersonation, spamming, hacking, tracking, behavioural prediction and influence... an entire world of shit was already doing just great.

It would be a shame to fixate on "AI" as a Hollywood villain, which would be to miss the real problem. For the past 40 years digital technology has been on a trajectory to dehumanise and dominate. Let's think about why, and who is doing the deeds.

   "The fault, dear Brutus, is not in our stars but in ourselves"


You're kind of arguing against yourself. You start with "human nature is to use technology to dominate others" (agreed), but then conclude "don't worry about this massive step up in the ability to surveil and control populations; instead let's focus on the elites being nicer".


The paragraph starting with "it would be a shame..." has their entire point laid out in simple, unambiguous terms. There's no mention of human nature or calls to not worry.


But that is the point. It only emphasizes the concern. It is the insight I draw upon to foresee where this technology will lead us.

AI is simply a power upgrade for all the institutions who wish to control thought and social engineer society to their liking.


which is why open-source AI, and affordable access to hardware, is so important. groups with access to powerful AI systems will crush those without. if competing institutions can build out equally powerful, competing AIs, it prevents any single actor from gaining too much advantage.

I remember hearing a lot about "democratizing" AI a few years ago, but now I just hear about "alignment," which is sort of the opposite. competing institutions, by definition, aren't aligned in their goals, so how could their AIs be?


I agree it is important, but open source has not sufficiently challenged the institutions in the area of information dissemination and social connection so far.

In other words, Google still owns search (instead of alternatives like Presearch), and the rest of big tech owns social media. The alternatives still represent an insignificant portion of users.

I hope open source AI can have greater success, but the users are motivated by convenience and not freedom or privacy.

Yes, alignment has so many obvious problems that are simply hand-waved away currently. One of the best simple quotes on the subject: "Demonstrably unfriendly natural intelligence seeks to create provably friendly artificial intelligence."


> I hope open source AI can have greater success, but the users are motivated by convenience and not freedom or privacy.

Agreed, but that's actually helped open-source AI too! Stable Diffusion beat DALL-E 2 because you could actually run it, you didn't have to be a vetted influencer or whatever OpenAI required at the time. And it's hung on because you can fine-tune it to make porn, or celebrities, or even just anime. The least convenient model is the one you don't have access to (or that can't/won't do what you're asking of it.)

That's a great quote on alignment.


> It is not just elections, but the complete end of privacy, which also means the end of free societies, which I wrote about here.

This is only true if you accept that most discussions are had via online forums and social media.

I wouldn't be surprised if we all just simply moved away from online communication rather than letting society collapse.


But how many times, in how many countries, will voters be fooled into electing disasters before they stop falling for deepfakes?


Have you considered that privacy is doomed to end, eventually? If we can eventually ask an AI to produce a perfect extinction-tier virus genome, open source that, and someone can upload it to a nucleotide synthesizer in their garage, it seems like we either eliminate privacy or die.

It's a far-off (?) and abstract, "in the limit" question, and I'm probably as horrified as you are, but our ancestors would probably be horrified by how their sacred cows are eviscerated on a daily basis by modern societies. :p (Death is a neat "mental refresh" of the human mind in the abstract when viewed that way, allowing it to adapt to new realities.)


> Have you considered that privacy is doomed to end, eventually?

Yes. All matters of a free society seemingly will be lost. It is not directly AI. I've seen this coming long before as the progression of technological power simply points to that outcome. I don't see a way out of the dystopian trap. AI is just an accelerator that arrived a bit sooner than expected.


>All roads of AI advancement seem to be converging on dystopian outcomes.

All the people who thought this was mind-numbingly obvious got shouted down by people who "know better". Yet here we are.


"Because voice calls are classified as a Title II telecommunications service (vs. Title I services like text messages and broadband), carriers can’t and don’t listen in. Specifically, the classification means they can’t filter or block calls based on suspicious content (or, hypothetically, AI audio watermarks)."

This is the catch-22 of privacy: it's great that AT&T can't snoop on my calls, but it means malicious robocall attacks can only be detected and prevented using signals other than content, which makes it tremendously more challenging.


I wouldn't think the problem would be that hard to solve if there were tighter guidelines around how phone numbers are allocated and a simple, reliable method of reporting fraud to a central agency. Though I'm not sure we know the real reason scams and unsolicited marketing now seem to make up the majority of all phone calls; it certainly wasn't the case 20 years ago.


Can’t the classification be implemented on device?


A landline phone obviously wouldn't have such capabilities. As far as smartphones, I doubt Apple / Google would build classifiers into their default phone applications for obvious privacy reasons.


Maybe the NSA can block them for you


The best solution I can think of would be to start mass producing audio deepfakes of politicians that are clearly ridiculous, to the point where people know that it’s fake. Flood the market with that, and people’s trust will degrade, and therefore the nefarious deepfakes will be taken less seriously.


I keep seeing this fantasy on this website that people exposed to many deepfakes will suddenly become diligent skeptics and learn to seek out original sources. More likely people will seek out and believe the deepfakes which confirm their preconceived biases while railing against any soundbite that doesn't fit their narrative - real or fake - as a deepfake.


Danger.

Yes, it degrades trust, but no, this does not result in enlightenment. It results in a depoliticized populace that lets the rulers get away with almost anything. See: Russia. Domestic Russian propaganda isn't designed to be believable, it's designed to make people lose trust in everything and cynically withdraw from politics, turning democracy into dictatorship.

See also: terrorism. It does not "shake society awake," it sends society running towards the nearest strong-man archetype that promises safety, which is the exact opposite of "awake."


Trey Parker and Matt Stone (of South Park fame) made a youtube channel for Sassy Justice at https://www.youtube.com/watch?v=h_jrebvmPlk and https://www.youtube.com/watch?v=9WfZuNceFDM . And the Queen of England at https://www.youtube.com/watch?v=IvY-Abd2FfM


That would mean even more ringing in voters' ears (i.e. getting more phone calls). Which doesn't solve the problem – it makes it worse.


Sorry for some mild spoilers, but this is a plot point in Neal Stephenson's latest novel Termination Shock.


Also Fall, to a more extreme extent. In fact it’s kind of been a theme he hinted at in several of his more cyberpunkish novels - going back to Snow Crash, and even shows up in Anathem - he’s often sort of gestured at usable networks having to be built on top of a substrate of bot-generated untrustable falsehood and lies. The cyber network almost needs the noise as some sort of fuel.

In Fall he talks more about how that kind of layered infosphere arises - and he’s not optimistic about it. The ‘flood the zone with the worst possible bot generated trash’ defense is applied directly on behalf of Maeve in Fall as a response against the post-truth impossibility of publishing the truth in the face of conspiracy narratives. It doesn’t work. The bots - and others like them - destroy the value of the internet as we know it. Everyone retreats to AI-mediated (‘edited’) information bubbles.

I think it’s sort of the current that runs through most Stephenson futurism.


I'm kinda shocked politicians haven't started doing this themselves.


That's a dangerous game. There is nothing a fake could say, no matter how ridiculous, that wouldn't convince somebody that the real person said it. Maybe they believe the politician already kicks puppies, maybe they hear what they think is a "reasonable" (yet still wrong) interpretation instead, or maybe they're just stupid. Doesn't matter, because now they're going to play telephone with their friends and family about the stupid thing the politician didn't actually say.

Even if it works, what's the upside? Now your brand is associated with negativity and people are thinking "maybe they didn't say that, but why take the risk? X is a safer bet".


Probably something for domestic intelligence agencies to do with their local politicians. It does seem like a reasonable defense. Release the ridiculous deepfake covertly so it's assumed to have been done by some rando for the lulz and then release a statement a few days later talking about the dangers in a world occupied by deepfakes.

It's kinda got me curious: what recording would be so obviously fake?


Bhutanese shadow garden grown dark evil pack

https://www.youtube.com/watch?v=A4hR3NQmCZU


Wait, that wasn't real?


I've seen a flood of voice cloning shit-posts, e.g. Trump, Biden and Obama at Six Flags, or all those remixes of The Missile Knows meme. The existence of audio deepfakes is well-known among Gen Z, at least, but perhaps less to older generations who aren't so online.


Also consider the case of false flag scandals. Audio recordings used to be reasonably hard evidence for journalists and police, and it's now barely more reliable than word of mouth.


We're already living in a post truth society. Are better quality fakes really going to change anything?


The real politicians and their scripts are so ludicrous as it is. A deep fake might be an improvement.


No, they don't. Vocal impersonators have been a thing since forever.


Without AI, you need someone to be the impersonator.

With AI, anyone can be an impersonator.


Honestly while I have major problems with a lot of AI, specifically the political implications I don't think are actually all that bad. Specifically because if you talk to the extremists, they didn't become that way by way of convincing fake news: They became that way by extremely unconvincing fake news that suited their pre-existing prejudices. And I'm not going to both sides this in an attempt to look unbiased: there are certainly weird and shitty corners of the left, but by and large, misinformation trades on the right. Even many people on the right are aware of this, it's not a secret.

People who are racists didn't need to hear Obama call his healthcare bill Obamacare to oppose it on those grounds, even as it benefited them: and we know that because Obama never called it that. Republicans did. People who are anti-trans due to the widespread (and wrong) belief that trans people abuse children didn't need to see trans people abusing children, they needed to be told it was happening by a figure they trusted. People who hear phrases like "cultural marxism" have no idea what the hell that means, beyond "the things I don't like," and the thing it's being attached to, if it isn't within their Things I Don't Like set already, will be added to it with little thought or consideration for why it's there.

Why all this happens is a larger topic that we don't really have the space here for and I'm probably already skirting HN's rules on politics pretty hard so I'll digress and just say: bigots are bigots, they were bigots before they heard whatever talking head they like say whatever really inappropriate and disgusting thing that they are now parroting, and now they're just a bigot that has that new thing to say. And humans are excellent at motivated reasoning: once they've decided a thing, like that Democrats eat babies to attain eternal life, you will be hard pressed to make them understand that they don't. It's probably why a lot of these same people also trend very well in religion: there's a subset of humanity that just really likes the idea of nice sounding bullshit that confirms whatever bullshit they heard before, and if you're willing to feed it to them, you can parlay that into attention and eventually, money.

While I certainly oppose the use of AI tools being used to generate photorealistic evidence of Biden consuming babies on the weekend, and it will almost certainly be used for that, I have a hard time seeing that as really changing the field at all. There is a huge, huge section of the populace that would already believe that and having it rendered in perfect HD video would... certainly be upsetting, but in the age we're in, if you're aware that AI video can be created that way, to that quality, you would already assume this is a forgery, unless you were motivated enough to resist that thought. At which point it doesn't matter, it could be VHS quality that looked like it was taped next to a rare earth magnet and transmitted through a fun house mirror room, and Biden had seventeen fingers: you'd still believe it.


> Specifically because if you talk to the extremists, they didn't become that way by way of convincing fake news: They became that way by extremely unconvincing fake news that suited their pre-existing prejudices.

If all it did was agree with pre-existing prejudices, that's not how they got that way.

And, IME, most political extremists didn't get that way from fake news at all. The problem with convincing fake news (whether AI is involved or not) isn't that it creates extremists or otherwise influences ideology, but the reactions it creates in people applying their non-extremist ideology through the lens of false factual premises.


Extremists falling for their own outrage bait is a common occurrence.


With all due respect, the entire notion of radicalization would seem to counter-indicate your point.


The entire “notion of radicalization” isn't centered in radicalization by fake news.

Ideological indoctrination isn’t primarily based on news, true or false. It doesn't work from views on fact to get people to adopt extreme values.


I used to worry about the use of deepfakes in elections, but in a country (USA) where you can see and hear the truth straight out of the horse's (Trump) mouth and lots of people still support him, I just don't care anymore. We are at an inflection point, and democracy here is about to be something from the past.



