Odd; although I'm not really very familiar with the appearance of either of them, I somehow thought her face looked more fake than his did. I blame makeup, unless they used some kind of filter.
Give it a couple more years to iron out the kinks and throw in more resources, and even fewer people may be able to spot anything off (I already cannot - even knowing that it's fake, there's no way I'd ever think "wow this is a fake video").
This is either going to send a shockwave to society in the trust department (i.e. we'll have to absolutely distrust everything and everyone will have to adapt immediately), or we're in for a very, very rocky road where different people will walk around living in different realities where different things have happened (this is already the case, but more so when you can send a conspiracy theorist videos that -prove- everything they've been saying, and look completely real).
Or maybe people will just start requiring more evidence that something actually happened instead of believing everything they find on TikTok (or the wider internet), which would actually be a good thing.
What's in it for the proverbial me to do that? I can either see something and accept it if it aligns with my world view, or spend time and effort trying to disprove something that might actually be true - and how can I know the proof isn't the fake?
9/10 people will just accept it; probably 7 of those 9 won't have much other option, without any practical ability to authenticate the hundreds of things they see each day.
Before Photoshop, people believed in manipulated static images (as seen in all the fake UFO "photographs"). After Photoshop, it took "video evidence" to make people believe in UFOs again. Once it's possible to easily fake video material, the same thing will happen as with doctored static images: people will stop believing in video without additional evidence that the video is actually real.
PS: People being manipulated by propaganda is really old news. The only thing the internet has changed is that all the village idiots who easily fall for propaganda discovered that each village has its idiot, and they started to communicate and coordinate. But that's unrelated to deepfakes and was already a problem before.
OK, well, once you don't believe in video or photographic evidence, there is no evidence that could convince you of something short of actually experiencing it yourself. In which case people will just believe what they want to believe.
Usually there are other witnesses, other video sources. The more material exists, the harder it is to make the fake believable. You have to consider the sources. Who published the video, what are their motivations, who pays them.
Really, that's "internet user 101", I can't believe we're having such a discussion on HN :)
Or, more likely, people will require essentially no evidence for something that confirms their bias, and an insurmountable, unreasonable body of evidence for something that contradicts it.
Video evidence may not be trusted in court in the future. Will we require something more, perhaps? Or will we trust the word of an AI trained to spot that something's off?
Has forensic analysis ever been defeated by a deepfake? At least in the courts, cases won't depend on the jury being able to 'tell by the pixels'; experts will get called in instead. Whoever that ends up being, if they can do their job and authenticate a video at least as well as they already can with Photoshopped images, we should be fine.
Photoshop didn't cause society to collapse, and Photoshop-for-video won't either.
That's kind of encouraging. The lying experts and underfunded defense attorneys out there didn't bring about the end of days with the appearance of Photoshop, and it's unlikely that they will now either. Which isn't to say that we shouldn't try to fix those problems...
I'm already somewhat hopeful that, if deepfakes can be reliably detected, juries will be automatically skeptical of unauthenticated videos and prosecutors will view getting their videos authenticated as an easy way to strengthen their case. I already suspect that close to every picture I see has been edited or altered somehow.
Maybe we will only trust video from cameras with embedded cryptographic functionality, where cryptographic checks verify that the footage hasn't been tampered with?
(And even then sometimes wonder if a hardware hack was involved)
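For what it's worth, a minimal sketch of what such a check could look like, assuming a hypothetical camera that signs a hash of every frame with an embedded Ed25519 key (the device, key handling, and frame format here are all invented for illustration):

```python
# Hypothetical verification of "camera-direct" footage. Assumes the
# "cryptography" package; the signed-frame format is made up for this sketch.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def frames_untampered(camera_key: ed25519.Ed25519PublicKey,
                      frames: list[tuple[bytes, bytes]]) -> bool:
    """frames: (frame_bytes, signature) pairs as the imagined camera emits them."""
    for frame_bytes, signature in frames:
        digest = hashlib.sha256(frame_bytes).digest()
        try:
            # The camera is assumed to sign each frame's SHA-256 digest.
            camera_key.verify(signature, digest)
        except InvalidSignature:
            return False  # edited frame, or not from this camera at all
    return True
```

Of course, as the parenthetical above notes, this only relocates the trust into the hardware itself.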
Historically, the average person could verify very little. To this day I've never once seen China with my own eyes. Does it even exist? Probably... how do I know? We all decide on others we consider trustworthy enough for the context.
If some major news org said they contacted Tom Cruise and asked him if the video was real, and he said it wasn't him, then as an average person I'd probably believe them, because I have at least some degree of trust that they'll either tell the truth or get called out on the lie - and ultimately, what do I care either way?
It's extraordinary claims that require extraordinary evidence, and "Paris Hilton and Tom Cruise do rich people things on camera" doesn't meet that threshold. If someone posted a video of the president eating a live baby I'd probably be more skeptical of my sources.
With the distribution channels centralized, how do you call out a lie by a "major news org" if the government (Google, YouTube, FB, Twitter…) decides to play along and suppresses your message?
The EU Commissar of Truth, Vera Jourova, announced that "the era of the Wild West for free speech is over". She also cited a "gentlemen's agreement" with the "big boy" platforms where legislation is lacking for now. Chilling.
To the degree that the censorship machine fails, it fails for technical reasons, not for a lack of appetite.
A resigned shrug and "ultimately what do I care" is increasingly common. It might be a self-correcting problem, on the evolutionary scale.
> If someone posted a video of the president eating a live baby I'd probably be more skeptical of my sources.
Isn't that because you've lived a life grounded (mostly) in physical reality? With free access to (mostly) uncensored information?
We might be surprised by what's considered "extraordinary" with respect to a "sceptical threshold" in the future, especially once the pre-internet generation dies out and digital information, with digital influencers (AI or not!) beholden to a few centralized platforms, shapes the consensus reality.
You're spot on, I think. This is already happening, and it disintegrates society. There are countries where an analogue version of this is already implemented: fake parliament with fake opposition, fake government, fake media, fake prosecutions, fake health care system, fake education system, fake state functions. By fake I mean: it looks like a real one, but its real goals and purposes are very different from what is officially advertised.
It's scary to think about how AI will boost the already way too effective politics of "fake".
I live in Hungary, where I have this feeling of "fake state" getting stronger and stronger every year. I'm sure there are other similar countries.
One recent example: our education system has been neglected for a long time. Now that we have inflation of ~25% (food inflation is around 50%), teachers literally can't make ends meet. They started to fight for themselves, and instead of taking the problem seriously, the government fights back with its power (e.g. by firing or silencing teachers who demonstrate). Teachers are leaving for other jobs in huge numbers. The buildings of even some of the best schools in the country are in catastrophic shape, on the brink of causing major harm to those inside. All this because having good schools is not a real priority. It is only advertised as one, and that is a lie. The whole education system is gradually shifting into a mode of "babysitting" kids while the parents work.
Another example is the prosecution system. Interestingly, prosecutors are very quick and effective in investigating the smallest wrongdoing if it helps those in power. If an investigation would hurt those in power, they very quickly abandon it with flimsy and obviously fake justifications. Again: the prosecution system looks like a real one, but it's not. It has purposes different from what is officially advertised.
The prime minister's closest advisor openly said this week that "if you control the media, you control the thoughts of the people". This, sadly, seems to be true. It really seems that the point of the government is not to run the country decently, but only to fake doing so. And it works.
We will eventually find black-market "off-the-grid" data centers where you can run AI algorithms away from government-mandated watermarks, and generate untraceable content to topple politicians and such.
Faking identity is just one side of it. I actually think we are still relatively safe; when we get to a point where AI can also fake the location and the actions taking place to a level indistinguishable from the real thing, then I think we're in a spot of trouble.
Given what has happened to humanity in the past decades, I think this will completely shatter trust in everybody and everything. Once the first large news outlet mistakes a deepfake video for a real one and drops a big headline on it, it's only going to get worse.
Or maybe we'll get back to living like 50-70 years ago, when there was a minimal amount of video footage around and you trusted what you could see and touch.
A bunch of people will believe it for some amount of time, and given the very low cost of producing it, that'll be enough. If there is one thing we have learned from the recent rise of mainstream extremism fueled by social networks, it's that fake rumors don't have to be especially plausible to be effective; they just need to be generated and broadcast at a sufficient rate (selection will do the rest of the work). One interesting question: are you protected by freedom of speech when generating fake videos clearly intended to harm another person's image?
I dunno, I don’t think people need to be tricked into thinking certain things. It’s usually what they want. I don’t think promoting “the facts” or worrying about fake videos is going to change anything directionally.
I think that will be a problem of its own. E.g., in Atlanta, police recently shot and killed a protester; other protesters are saying it was murder, while the police say the protester shot first.
Right now, I hope the truth will be revealed when police release body camera footage. In the near future, that footage will satisfy nobody, and we'll all be left wondering what really happened.
Probably only in some corner cases. There's always context and a connection to real-world events that cameras capture, so a fake bank robbery, or a guy being in two places at once, or what have you, doesn't make any sense. In a way, that's already how we deal with potentially staged or doctored video evidence: you always need to figure out whether a piece of evidence corresponds to the rest of your evidence.
To pick the example from the other user: if video footage of a shooting matches neither ballistics, nor eyewitness accounts, nor other evidence on site, it'd be very easy to spot the fake without even technically analyzing the video itself.
It might be easier to invalidate celebrity/political deepfakes: there would be no independent corroborating witnesses, even though we'd expect there to be some if the event were real.
But deepfakes used by governments, police or citizens to frame innocent civilians would be really scary. There wouldn’t be any witnesses, but we wouldn’t necessarily expect there to be any either.
My dystopian techno-political prediction is that the 2024 presidential election in the US will have major impact from deepfakes.
With current tech, deepfakes can be really good. The linked one is pretty much there; it's super convincing. Knowing it's fake, something seems a bit off about how "Cruise's" head sits in relation to his body when he walks through the doorframe, but that's it. If I weren't actively looking for anything fake, I would not have registered that either.
Whoever runs for POTUS in '24, it's certain that both candidates will be well-known people with lots of video material for training a model. It's also certain that many groups and individuals will have a strong vested interest in the outcome of the election, and some of them will have the resources to produce deepfakes at least as convincing as "Tom Cruise and Paris Hilton" here - this is no longer something that requires hardware worth millions running for two months straight.
There will be fake videos of the candidates doing/saying highly scandalous stuff. Going on a racist rant, expressing corrupt intent, promising illegal things, etc. And those videos, if somewhat intelligently produced, will have a big impact once they air, no matter if they can be definitively proven to be fake later.
The neck is kinda off in the first second and again 25 seconds in, where it seems to spill over the collar unnaturally, and the head seems too big. But otherwise it's very convincing. If someone hadn't told me it was fake, I probably would not have noticed.
Yes, same here. Once I knew it was a fake, I started looking, and the head looks glued on... Without the tip-off, I might have unconsciously noticed something was off, but I wouldn't have been able to put my finger on it.
I'm not sure what point you're trying to make as nobody is arguing there isn't a (black) market for it.
The difference between the illegality of porn and of deepfaked porn is that the former is usually based on a purely moral argument (e.g. "porn is sinful" and allowing sinful behavior "corrupts people with sin", leading to more - and worse - sinful behavior) whereas the latter is based on a lack of consent.
Consent is the cornerstone of most societies (it's what allows for contract law and thus the "social contract" to begin with). Note that deepfaked porn is illegal in every country in which regular porn is illegal, so clearly the concern isn't with the porn aspect of it. It's categorically more similar to revenge "porn" or CSAM in that the subjects of it don't (or can't) consent to its distribution (or even creation). Also note that in countries where deepfaked porn is illegal but regular porn is not, deepfaked porn produced with the consent of the actors and those lending their faces would usually be legal. Consent matters.
It's a moral argument either way, just based on different values.
For example: you can imagine having sex with someone. Do you need their consent for that? How is producing a deepfake for your own consumption only so much different?
I personally agree that consent matters, but I see many ways to counter that argument.
It's ironic and revealing that nobody actually consents to the social contract, then. The closest most people come is voting, and perhaps that implicitly validates the laws. But that's not really consent.
So maybe this consent assumption is not so strong!
Oh, I agree that states aren't consensual in practice, neither is capitalism. That's why I'm an anarchist.
But philosophically, the justification for a state's existence is the mythical social contract, and most modern states require some level of consent for contracts within their legal systems to be legitimate (e.g. a contract can't be made under the legal definition of duress, which usually at least means it's invalid if the other party is literally holding a gun to your head).
I get the impression legality comes second to profits in the porn industry.
AI becomes commoditised so quickly. It doesn't feel like we are far from open-source text-to-video. Then all it takes is someone with a big stash of training data to train it, and voilà.
Deepfake porn seems like more of a cottage industry.
Looks great. I think it's definitely possible to tell that it's fake if you already know that it's not a real video, but I probably wouldn't have noticed anything out of the ordinary if I weren't already primed to spot subtle issues in their faces.
I find it hilarious that every time humankind invents a new, unexpected technology, the first use cases are always entertainment. An example other than deepfakes or ChatGPT: batteries, which were used at parties to run current through people for fun.
Or maybe that's just the first time other people (most people) hear about them? I'm quite sure the first use of batteries was not "at parties to run current through people for fun". Probably some boring scientists or engineers used them for other purposes long before that.
My bold prediction. Please hold me to this in 2 years. /s
Some video storage service or 'authentication service middleman' is going to make a mint with blockchain authenticity proofing. They'll provide a seal of authenticity that the feed being viewed came directly from a camera. The feed will be checked during viewing as well to verify its 'camera-direct' state.
Had the same idea a few years back. You don't need blockchain, though; good old-fashioned CAs will work. Use standard digital signatures on the keyframe data, but that necessitates a new video format standard. Good luck with that. You could probably DIY it by overlaying a QR-code signature on the video keyframe as a stopgap until an official format exists.
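As a rough illustration of that keyframe-signing idea (nothing here is an existing video format; the key stands in for an ordinary CA-certified one, and the QR step uses the common "qrcode" package):

```python
# Sketch: sign a keyframe's digest with an ordinary key pair and render the
# signature as a QR code to overlay on the frame, per the stopgap above.
import hashlib

import qrcode  # pip install qrcode
from cryptography.hazmat.primitives.asymmetric import ed25519

signing_key = ed25519.Ed25519PrivateKey.generate()  # stand-in for a CA-issued key

def sign_keyframe(keyframe: bytes) -> bytes:
    """Signature over the keyframe's SHA-256 digest."""
    return signing_key.sign(hashlib.sha256(keyframe).digest())

keyframe = b"...raw keyframe bytes..."
sig = sign_keyframe(keyframe)
qrcode.make(sig.hex()).save("keyframe_sig.png")  # overlay onto the keyframe

# A verifier recomputes the digest and checks it against the public key
# from the publisher's CA-issued certificate.
signing_key.public_key().verify(sig, hashlib.sha256(keyframe).digest())
```

Trust here is anchored in the CA's certificate rather than any chain.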
Why would they need a blockchain for that? In either case you’d have to trust the company that their inputs, which are external to the network itself, are authentic.
It's really impressive, and even scary, but I get some uncanny valley from Tom's head; at times it gives off a 3D impression, as if it's popping out of the screen.
As long as I immediately notice something's off, I am not going to be impressed. I noticed the neck was weird right away. A few seconds later I noticed the head is out of proportion in relation to the body. I'm not the best at noticing picture deepfakes, but this one is obvious.
I can't quite articulate what it is, but there's a certain "glimmer" when his head turns a few times; other than that, this is shockingly convincing. If I wasn't primed to expect a deepfake I'd probably not even notice or question it.
The implications of this really make me want to get into Deepfake-countering technology.
Can anyone recommend some reading or other resources? What companies are working on this?
https://www.hitc.com/en-gb/2022/10/21/paris-hilton-fans-conf...
Still super impressive, but it's not two deepfakes at the same time.