A makeup influencer I follow noticed that YouTube and Instagram are automatically applying filters to his face in his videos, without permission. If the content is about lip makeup, the filter makes his lips enormous; if it's about eye makeup, it makes his eyes gigantic. They appear to have AI detecting the type of content and automatically applying matching filters.
The video shown as evidence is full of compression artifacts. The influencer is non-technical and assumes it's an AI filter, but the output is obviously poor quality throughout.
To me, this clearly looks like a case of a very high compression ratio, with motion blocks swimming around on screen. They might have some detail enhancement in the loop to try to overcome the blockiness, which in this case produces the swimming effect.
It's strange to see these claims being taken at face value on a technical forum. It should be a dead giveaway that this is a compression issue because the entire video is obviously highly compressed and lacking detail.
There are some very clear examples elsewhere. It looks as if YouTube applied AI filters to make compression better by removing artifacts and smoothing colors.
This seems like such an easy thing for someone to document with screenshots and tests against the content they uploaded.
So why is the top-voted comment an Instagram reel of a non-technical person trying to interpret what's happening? If this is common, please share some examples (that aren't in Instagram-reel format from non-technical influencers).
You obviously didn't watch the video. The claims go beyond the scope of compression and include things like eye and mouth enlargement, and you can clearly see the filter glitching off in some frames.
This is an unfair analysis. They discuss compression artifacts. They highlight things like their eyes getting bigger, which is not what you usually expect from a compression artifact.
If your compression pipeline gives people anime eyes because it's doing "detail enhancement", your compression pipeline is also a filter. If you apply some transformation to a creator's content, and then their viewers perceive that as them disingenuously using a filter, and your response to their complaints is to "well actually" them about whether it is a filter or a compression artifact, you've lost the plot.
To be honest, calling someone "non-technical" and then "well actually"ing them about hair-splitting details when the outcome is the same is patronizing, and I really wish we wouldn't treat "normies" that way. Regardless of whether they are technical, they are living in a world increasingly intermediated by technology, and we should be listening to their feedback on it. They have to live with the consequences of our design decisions. If we believe them to be non-technical, we should extend a lot of generosity to them in their use of terminology, and address what they mean instead of nitpicking.
> To be honest, calling someone "non-technical" and then "well actually"ing them about hair-splitting details when the outcome is the same is patronizing, and I really wish we wouldn't treat "normies" that way.
I'm not critiquing their opinion that the result is bad. I also said the result was bad! I was critiquing the fact that someone on HN was presenting their non-technical analysis as a conclusive technical fact.
Non-technical is describing their background. It's not an insult.
I will be the first to admit I have no experience or knowledge in their domain, and I'm not going to try to interpret anything I see in their world.
It's a simple fact. This person is not qualified to be explaining what's happening, yet their analysis was being repeated as conclusive fact here on a technical forum.
From a technical standpoint it's interesting whether it's deliberate and whether it's compression, but it's not a fair criticism of this video, no. Dismissing someone's concerns over hair-splitting is textbook "well actually"ing. I wouldn't have taken issue with a comment discussing the difference from a perspective of technical curiosity.
This is going to be a huge legal fight, as the terms of service you agree to on their platform are essentially "they get to do whatever they want" (IANAL). Watch them try to spin this as a "user preference" that they just opted everyone into.
That’s the rude awakening creators get on these platforms. If you’re a writer or an artist or a musician, you own your work by default. But if you upload it to these platforms, they more or less own it. It’s there in the terms of service.
One of the comments on IG explains this perfectly:
"Meta has been doing this; when they auto-translate the audio of a video they are also adding an AI filter to make the mouth of who is speaking match the audio more closely. But doing this can also add a weird filter over all the face."
I don't know why you have to get into conspiracy theories about them applying different filters based on the video content. That would be such a weird micro-optimization; why would they bother with it?
Probably compression followed by regeneration during decompression. There's a brilliant technique called "Seam Carving" [1], invented two decades ago, that enables content-aware resizing of photos and can be sequentially applied to frames in a video stream. It's used everywhere nowadays. It wouldn't surprise me if arbitrary enlargements were artifacts produced by such techniques.
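For reference, the core of seam carving is a dynamic program over a per-pixel energy map: find the connected top-to-bottom path of minimum total energy and delete it, which shrinks the image by one column while preserving high-detail regions. A minimal numpy sketch of one seam removal (my own illustrative implementation, not a production one):

```python
import numpy as np

def carve_one_seam(img):
    """Remove one minimum-energy vertical seam (Avidan & Shamir, 2007).
    img: 2-D grayscale array; returns an array one column narrower."""
    # Energy map: gradient magnitude (high where there is detail).
    gy, gx = np.gradient(img.astype(float))
    energy = np.abs(gx) + np.abs(gy)

    # DP: cumulative minimum energy of any seam ending at each pixel.
    h, w = img.shape
    cost = energy.copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)

    # Backtrack the cheapest seam from bottom to top.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))

    # Delete one pixel per row along the seam.
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1)

img = np.tile(np.arange(16.0), (10, 1))
out = carve_one_seam(img)
print(out.shape)  # (10, 15)
```

Note that seam carving deletes or duplicates whole paths of pixels, which is one mechanism by which "resizing" can warp the relative proportions of features rather than just blurring them.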
What type of compression would change the relative scale of elements within an image? None that I'm aware of, and these platforms can't really make up new video codecs on the spot, since hardware-accelerated decoding is so essential for performance.
Excessive smoothing can be explained by compression, sure, but that's not the issue being raised there.
Neural compression wouldn't be like HEVC, operating on frames and pixels. Rather, these techniques can encode entire features and optical flow, which could explain the larger discrepancies: larger fingers, slightly misplaced items, etc.
Neural compression techniques reshape the image itself.
If you've ever input an image into `gpt-image-1` and asked it to output it again, you'll notice that it's 95% similar, but entire features might move around or average out toward the model's concept of what those items are.
The resources required for putting AI <something> inline in the input (upload) or output (download) chain would likely dwarf the resources needed for the non-AI approaches.
Maybe such a thing could exist in the future, but I don't think the idea that YouTube is already serving a secret neural video codec to clients is very plausible. There would be much clearer signs - dramatically higher CPU usage, and tools like yt-dlp running into bizarre undocumented streams that nothing is able to play.
If they were using this compression for storage on the cache layer, it could allow them to keep more videos closer to where they serve them, but they'd decode them back to WebM or whatever before sending them to the client.
I don't think that's actually what's up, but I don't think it's completely ruled out either.
That doesn't sound worth it: storage is cheap, encoding videos is expensive, and caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.
A new client-facing encoding scheme would break utilization of hardware decoders, which in turn slows down everyone's experience, chews through battery life, etc. They won't serve it that way - there's no support in the field for it.
It looks like they're compressing the data before it gets further processed with the traditional suite of video codecs. They're relying on the traditional codecs to serve, but running some internal first pass to further compress the data they have to store.
If any engineers think that's what they're doing, they should be fired. More likely it's product managers who barely know what's going on in their departments, except that the word "AI" is pinging around that's good for their KPIs and keeps them from getting fired.
> If any engineers think that's what they're doing they should be fired.
Seriously?
Then why is nobody in this thread suggesting what they're actually doing?
Everyone is accusing YouTube of "AI"ing the content with "AI".
What does that even mean?
Look at these people making these (at face value, hilarious, almost Kool-Aid levels of conspiratorial) accusations. All because "AI" is "evil" and "big corp" is "evil".
Use Occam's razor. Videos are expensive to store. Google gets 20 million videos a day.
I'm frankly shocked Google hasn't started deleting old garbage. They probably should start culling YouTube of cruft nobody watches.
Videos are expensive to store, but generative AI is expensive to run. That would cost them more than the storage allegedly saved.
To solve the problem of adding compute-heavy processing to video serving, they would need to cache the AI's output, which uses up the very storage you say they're saving.
I largely agree, I think that probably is all that it is. And it looks like shit.
Though there is a LOT of room to subtly train many kinds of lossy compression systems, which COULD still imply they're doing this intentionally. And it looks like shit.
As soon as people start paying Google for the 30,000 hours of video uploaded every hour (2022 figure), then they can dictate what forms of compression and lossiness Google uses to save money.
That doesn't include all of the transcoding and alternate formats stored, either.
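To put that upload figure in perspective, here's a back-of-envelope calculation. The 30,000 hours/hour comes from the 2022 figure cited above; the 5 Mbps average stored bitrate is purely my assumption for illustration, not a Google number:

```python
# Back-of-envelope: raw storage growth from uploads alone.
HOURS_UPLOADED_PER_HOUR = 30_000   # 2022 figure cited above
BITRATE_MBPS = 5                   # ASSUMED average stored bitrate
SECONDS_PER_HOUR = 3600

# Bytes to store one hour of video at the assumed bitrate.
bytes_per_video_hour = BITRATE_MBPS * 1e6 / 8 * SECONDS_PER_HOUR

# Daily growth across all uploads, in petabytes.
petabytes_per_day = (HOURS_UPLOADED_PER_HOUR * 24 * bytes_per_video_hour) / 1e15
print(f"{petabytes_per_day:.1f} PB/day")  # 1.6 PB/day -- before transcodes
```

And as the comment above notes, that's before multiplying by the transcodes and alternate formats stored for each upload.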
People signing up to YouTube agree to Google's ToS.
Google doesn't even say they'll keep your videos. They reserve the right to delete them, transcode them, degrade them, use them in AI training, etc.
https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...
The screenshots/videos of them doing it are pretty wild, and it's insane that they are editing creators' uploads without consent!