
The examples shown in the links are not filters for aesthetics. These are clearly experiments in data compression.

These people are having a moral crusade against an unannounced Google data compression test, thinking Google is using AI to "enhance their videos". (Did they ever stop to ask themselves why, or to what end?)

This level of AI paranoia is getting annoying. This is clearly just Google trying to save money. Not undermine reality or whatever vague Orwellian thing they're being accused of.


"My, what big eyes you have, Grandmother." "All the better to compress you with, my dear."

Why would data compression make his eyes bigger?

Because it's a neural technique, not one based on pixels or frames.

https://blog.metaphysic.ai/what-is-neural-compression/

Instead of artifacts in pixels, you'll see artifacts in larger features.

https://arxiv.org/abs/2412.11379

Look at figure 5 and beyond.


Like a visual version of psychoacoustic compression. Neat. Thanks for sharing.

Agreed. It looks like over-aggressive adaptive noise filtering, a smoothing filter, and some flavor of unsharp masking. You're correct that this is targeted at making video content compress better, which can cut streaming bandwidth costs for YT. Noise reduction targets high-frequency details, which can look similar to skin-smoothing filters.
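
For the curious, unsharp masking is literally "add back the difference from a blurred copy", so smoothing and sharpening are two sides of the same operation. A minimal sketch in Rust on a 1D signal (the 2D image case applies the same idea per row/column; the radius and amount here are arbitrary illustration values, not anything YouTube is known to use):

    // Unsharp mask: out = in + amount * (in - blurred)
    fn box_blur(signal: &[f32], radius: usize) -> Vec<f32> {
        (0..signal.len())
            .map(|i| {
                let lo = i.saturating_sub(radius);
                let hi = (i + radius + 1).min(signal.len());
                signal[lo..hi].iter().sum::<f32>() / (hi - lo) as f32
            })
            .collect()
    }

    fn unsharp_mask(signal: &[f32], radius: usize, amount: f32) -> Vec<f32> {
        let blurred = box_blur(signal, radius);
        signal.iter()
            .zip(&blurred)
            .map(|(s, b)| s + amount * (s - b))
            .collect()
    }

    fn main() {
        // A soft edge: blurring flattens it, the mask then exaggerates it.
        let signal = [0.0, 0.0, 0.2, 0.5, 0.8, 1.0, 1.0];
        println!("{:?}", unsharp_mask(&signal, 1, 0.8));
    }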

The people fixated on "...but it made eyes bigger" are missing the point. YouTube has zero motivation to automatically apply "photo flattery filters" to all videos. Even if a "flattery filter" looked better on one type of face, it would look worse on another type of face. Plus applying ANY kind of filter to a million videos an hour costs serious money.

I'm not saying YouTube is an angel. They absolutely deploy dark patterns and user manipulation at massive scale - but they always do it to make money. Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs. Improving compression would do all three. Less bandwidth reduces costs, smaller files means faster start times as viewers jump quickly from short to short and that increases revenue because more different shorts per viewer/minute = more ad avails to sell.


I agree; I don't really think there's anything here besides compression algos being tested. At the very least, I'd need to see far, far more evidence of filters being applied than what's been shared in the thread. But having worked in social media in the past, I must correct you on one thing:

>Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs.

You can't know this. Almost everything at YouTube is probably A/B tested heavily, and many times you get very surprising results. Applying a filter could very well increase views and time spent in the app enough to justify the cost.


Activism fatigue is a thing today.

Whatever the purpose, it's clearly surreptitious.

> This level of AI paranoia is getting annoying.

Let's be straight here: AI paranoia is near the top of the most-propagated subjects across all media right now, probably for the worse. If it's not "Will you ever have a job again!?" it's "Will your grandparents be robbed of their net worth!?" or even just "When will the bubble pop!? Should you be afraid!? YES!!!" And in places like Canada, where the economy is predictably crashing because of decades of failures, it's cast as both the cause of and the answer to macroeconomic decline. Ironically (or suspiciously), it's all the same rehashed, redundant takes from everyone from Hank Green to CNBC to every podcast ever, late-night shows, radio, everything.

So to me the target of one's annoyance should be the propaganda machine, not the targets of the machine. What are people supposed to feel, totally chill because they have tons of control?


This is an experiment in data compression.

What type of compression would change the relative scale of elements within an image? None that I'm aware of, and these platforms can't really make up new video codecs on the spot, since hardware-accelerated decoding is so essential for performance.

Excessive smoothing can be explained by compression, sure, but that's not the issue being raised there.


> What type of compression would change the relative scale of elements within an image?

Video compression operates on macroblocks and calculates motion vectors of those macroblocks between frames.

When you push it to the limit, the macroblocks can appear like they're swimming around on screen.

Some decoders attempt to smooth out the boundaries between macroblocks and restore sharpness.

The giveaway is that the entire video is extremely low quality. The compression ratio is extreme.
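
To make the macroblock/motion-vector point concrete, here's a toy full-search block matcher in Rust; the function names and the synthetic frames are mine, and real encoders (H.264/HEVC/AV1) use far smarter searches plus sub-pixel refinement, so treat this as a sketch of the idea only. The encoder stores the best offset plus a heavily quantized residual, and at extreme compression ratios those block-sized approximations are what you see "swimming":

    // Toy motion estimation: for a 16x16 macroblock in the current frame, find
    // the offset in the previous frame with the lowest sum of absolute
    // differences (SAD) inside a +/- `range` pixel search window.
    const BLOCK: usize = 16;

    fn sad(prev: &[u8], cur: &[u8], w: usize,
           (px, py): (usize, usize), (cx, cy): (usize, usize)) -> u32 {
        let mut total = 0;
        for dy in 0..BLOCK {
            for dx in 0..BLOCK {
                let p = prev[(py + dy) * w + (px + dx)] as i32;
                let c = cur[(cy + dy) * w + (cx + dx)] as i32;
                total += (p - c).unsigned_abs();
            }
        }
        total
    }

    fn motion_vector(prev: &[u8], cur: &[u8], w: usize, h: usize,
                     (bx, by): (usize, usize), range: usize) -> (isize, isize) {
        let (mut best, mut best_sad) = ((0, 0), u32::MAX);
        for py in by.saturating_sub(range)..=(by + range).min(h - BLOCK) {
            for px in bx.saturating_sub(range)..=(bx + range).min(w - BLOCK) {
                let s = sad(prev, cur, w, (px, py), (bx, by));
                if s < best_sad {
                    best_sad = s;
                    best = (px as isize - bx as isize, py as isize - by as isize);
                }
            }
        }
        best
    }

    fn main() {
        let (w, h) = (64, 64);
        let (mut prev, mut cur) = (vec![0u8; w * h], vec![0u8; w * h]);
        for y in 0..BLOCK {
            for x in 0..BLOCK {
                prev[(20 + y) * w + (20 + x)] = 200; // bright square in frame 1
                cur[(22 + y) * w + (23 + x)] = 200;  // same square, moved (+3, +2)
            }
        }
        // The block at (23, 22) in the current frame matches (20, 20) in the
        // previous frame, so the motion vector is (-3, -2).
        assert_eq!(motion_vector(&prev, &cur, w, h, (23, 22), 16), (-3, -2));
    }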


One that represented compressed videos as an embedding that gets reinflated by having gen AI interpret it back into image frames.

AI models are a form of compression.

Neural compression wouldn't be like HEVC, operating on frames and pixels. Rather, these techniques can encode entire features and optical flow, which can explain the larger discrepancies: larger fingers, slightly misplaced items, etc.

Neural compression techniques reshape the image itself.

If you've ever input an image into `gpt-image-1` and asked it to output it again, you'll notice that it's 95% similar, but entire features might move around or average out with the concept of what those items are.


Maybe such a thing could exist in the future, but I don't think the idea that YouTube is already serving a secret neural video codec to clients is very plausible. There would be much clearer signs - dramatically higher CPU usage, and tools like yt-dlp running into bizarre undocumented streams that nothing is able to play.

If they were using this compression for storage on the cache layer, it could let them keep more videos closer to where they serve them, but they'd decode them back to webm or whatever before sending them to the client.

I don't think that's actually what's up, but I don't think it's completely ruled out either.


That doesn't sound worth it. Storage is cheap and encoding videos is expensive; caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.

The law of entropy appears true of TikToks and Shorts. It would make sense to take advantage of this. That is to say, the content becomes so generic that it merges into one.

Storage gets less cheap for short-form tiktoks where the average rate of consumption is extremely high and the number of niches is extremely large.

A new client-facing encoding scheme would break utilization of hardware decoders, which in turn slows down everyone's experience, chews through battery life, etc. They won't serve it that way; there's no support in the field for it.

It looks like they're compressing the data before it gets further processed with the traditional suite of video codecs. They're relying on the traditional codecs to serve, but running some internal first pass to further compress the data they have to store.


The resources required for putting AI <something> inline in the input (upload) or output (download) chain would likely dwarf the resources needed for the non-AI approaches.

If any engineers think that's what they're doing they should be fired. More likely it's product managers who barely know what's going on in their departments except that there's a word "AI" pinging around that's good for their KPIs and keeps them from getting fired.

> If any engineers think that's what they're doing they should be fired.

Seriously?

Then why is nobody in this thread suggesting what they're actually doing?

Everyone is accusing YouTube of "AI"ing the content with "AI".

What does that even mean?

Look at these people making these (at face value, hilarious, almost "Kool-Aid" levels of conspiratorial) accusations. All because "AI" is "evil" and "big corp" is "evil".

Use Occam's razor. Videos are expensive to store. Google gets 20 million videos a day.

I'm frankly shocked Google hasn't started deleting old garbage. They probably should start culling YouTube of cruft nobody watches.


Videos are expensive to store, but generative AI is expensive to run. That will cost them more than the storage allegedly saved.

To solve this problem of adding compute heavy processing to serving videos, they will need to cache the output of the AI, which uses up the storage you say they are saving.


https://c3-neural-compression.github.io/

Google has already matched H.266. And this was over a year ago.

They've probably developed some really good models for this and are silently testing how people perceive them.


If you want insight into why they haven't deleted "old garbage", you might try The Age of Surveillance Capitalism by Zuboff. Pretty enlightening.

I'm pretty sure those 12 year olds uploading 24 hour long Sonic YouTube poops aren't creating value.

1000 years from now those will be very important. A bit like how we now wonder what horrible food average/poor people ate 1000 years ago.

I’m afraid to search… what exactly is a “24-hour-long Sonic YouTube poop”?

Totally. Unfortunately it's not lossless, and instead of just getting pixelated, it's changing the size of body parts lol

Probably compression followed by regeneration during decompression. There's a brilliant technique called "Seam Carving" [1], invented two decades ago, that enables content-aware resizing of photos and can be sequentially applied to frames in a video stream. It's used everywhere nowadays. It wouldn't surprise me if arbitrary enlargements were artifacts produced by such techniques.

[1] https://github.com/vivianhylee/seam-carving
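
For anyone who hasn't seen it, the core of seam carving is a short dynamic program: score every pixel with a gradient-magnitude "energy", find the connected vertical path of least total energy, and delete it. A minimal Rust sketch on a grayscale image, with my own toy function names and a made-up 4x4 "image" (see the linked repo for a real implementation):

    // Remove one lowest-energy vertical seam from a grayscale image
    // stored as rows of f64 brightness values.
    fn energy(img: &[Vec<f64>]) -> Vec<Vec<f64>> {
        let (h, w) = (img.len(), img[0].len());
        let mut e = vec![vec![0.0; w]; h];
        for y in 0..h {
            for x in 0..w {
                let dx = img[y][(x + 1).min(w - 1)] - img[y][x.saturating_sub(1)];
                let dy = img[(y + 1).min(h - 1)][x] - img[y.saturating_sub(1)][x];
                e[y][x] = (dx * dx + dy * dy).sqrt();
            }
        }
        e
    }

    fn find_seam(e: &[Vec<f64>]) -> Vec<usize> {
        let (h, w) = (e.len(), e[0].len());
        let mut cost: Vec<Vec<f64>> = e.to_vec();
        // DP: a pixel's cost is its energy plus the cheapest of its three
        // upper neighbors, so the bottom row ends up holding total seam costs.
        for y in 1..h {
            for x in 0..w {
                let lo = x.saturating_sub(1);
                let hi = (x + 1).min(w - 1);
                let best = cost[y - 1][lo..=hi].iter().cloned().fold(f64::INFINITY, f64::min);
                cost[y][x] += best;
            }
        }
        // Backtrack from the cheapest bottom pixel.
        let mut seam = vec![0; h];
        seam[h - 1] = (0..w).min_by(|&a, &b| cost[h - 1][a].total_cmp(&cost[h - 1][b])).unwrap();
        for y in (0..h - 1).rev() {
            let x = seam[y + 1];
            seam[y] = (x.saturating_sub(1)..=(x + 1).min(w - 1))
                .min_by(|&a, &b| cost[y][a].total_cmp(&cost[y][b]))
                .unwrap();
        }
        seam
    }

    fn main() {
        let mut img = vec![
            vec![0.0, 9.0, 8.0, 7.0],
            vec![1.0, 8.0, 9.0, 7.0],
            vec![0.0, 7.0, 8.0, 9.0],
            vec![1.0, 9.0, 7.0, 8.0],
        ];
        let seam = find_seam(&energy(&img));
        for (y, row) in img.iter_mut().enumerate() {
            row.remove(seam[y]); // drop one pixel per row along the seam
        }
        println!("removed seam at columns {:?}, new width {}", seam, img[0].len());
    }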


I largely agree, I think that probably is all that it is. And it looks like shit.

Though there is a LOT of room to subtly train many kinds of lossy compression systems, which COULD still imply they're doing this intentionally. And it looks like shit.


It could be, but if the compression were a new codec, new codecs usually get talked about on a blog.

> This is an experiment

A legal experiment for sure. Hope everyone involved can clear their schedules for hearings in multiple jurisdictions for a few years.


As soon as people start paying Google for the 30,000 hours of video uploaded every hour (2022 figure), then they can dictate what forms of compression and lossiness Google uses to save money.

That doesn't include all of the transcoding and alternate formats stored, either.

People signing up to YouTube agree to Google's ToS.

Google doesn't even say they'll keep your videos. They reserve the right to delete them, transcode them, degrade them, use them in AI training, etc.

It's a free service.


It's not the same when you publish something on my platform as when I publish something and put your name on it.

It is bad enough that we can deepfake anyone. If we also pretend it was uploaded by you, the sky is the limit.


That's the difference between the US and European countries. When you have SO MUCH POWER like Google, you can't just go around and say ItSaFReeSeRViCe in Europe. With great power comes great responsibility, to put it in American terms.

"They're free to do whatever they want with their own service" != "You can't criticize them for doing dumb things"

Yeah, it's such a strange and common take. Like, "if you don't like it, why complain?"

Holy shit, I'd love to see NaN as a proper sum type. That's the way to do it. That would fix everything.

I suspect that this would result in a lot of .unwrap() calls or equivalent, and people would treat them as line noise and find them annoying.

An approach that I think would have most of the same correctness benefits as a proper sum type while being more ergonomic: Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.

(A slightly more out-there extension of this idea: The finite-float type also can't represent negative zero. Any operation on finite-float-typed operands that would return negative zero returns positive zero instead. This means that finite floats obey the substitution property, and (as a minor added bonus) can be compared for equality by a simple bitwise comparison. It's possible that this idea is too weird, though, and there might be footguns in the case where you convert a finite float to an arbitrary one.)
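
A rough Rust sketch of what that could look like, with hypothetical names (`Finite`, `Finite::new`, `check` are all mine); a real version would implement the full operator set, and the `-0.0` normalization in `new` is the "out-there extension" from the last paragraph:

    use std::ops::{Add, Div};

    // Hypothetical "finite float" newtype: arithmetic panics if a result is
    // infinite or NaN, so non-finite values can never propagate silently.
    #[derive(Clone, Copy, Debug, PartialEq)]
    struct Finite(f64);

    impl Finite {
        fn new(x: f64) -> Option<Finite> {
            // Normalizing -0.0 to +0.0 gives Finite the substitution property.
            x.is_finite().then(|| Finite(if x == 0.0 { 0.0 } else { x }))
        }
    }

    fn check(x: f64) -> Finite {
        Finite::new(x).expect("operation on Finite produced a non-finite value")
    }

    impl Add for Finite {
        type Output = Finite;
        fn add(self, rhs: Finite) -> Finite { check(self.0 + rhs.0) }
    }

    impl Div for Finite {
        type Output = Finite;
        fn div(self, rhs: Finite) -> Finite { check(self.0 / rhs.0) }
    }

    fn main() {
        let a = Finite::new(1.0).unwrap();
        let b = Finite::new(0.0).unwrap();
        println!("{:?}", a + b); // Finite(1.0)
        println!("{:?}", a / b); // panics: 1.0 / 0.0 would be +inf
    }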


> Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.

I suppose there's precedent of sorts in signaling NaNs (and NaNs in general, since FPUs need to account for payloads), but I don't know how much software actually makes use of sNaNs/payloads, nor how those features work in GPUs/super-performance-sensitive code.

I also feel that as far as Rust goes, the NonZero<T> types would seem to point towards not using the described finite/arbitrary float scheme as the NonZero<T> types don't implement "regular" arithmetic operations that can result in 0 (there's unsafe unchecked operations and explicit checked operations, but no +/-/etc.).


Rust's NonZero basically exists only to enable layout optimizations (e.g., Option<NonZero<usize>> takes up only one word of memory, because the all-zero bit pattern represents None). It's not particularly aiming to be used pervasively to improve correctness.
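
(The layout optimization is easy to see directly; the second assertion reflects how today's compilers lay out `Option<usize>` on 64-bit targets rather than a language guarantee:)

    use std::mem::size_of;
    use std::num::NonZeroUsize;

    fn main() {
        // The all-zero bit pattern is free to mean None, so no tag is needed.
        assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());
        // A plain usize has no spare bit pattern, so Option adds a whole
        // extra word (one tag byte padded out to alignment).
        assert_eq!(size_of::<Option<usize>>(), 2 * size_of::<usize>());
    }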

The key disanalogy between NonZero and the "finite float" idea is that zero comes up all the time in basically every kind of math, so you can't just use NonZero everywhere in your code; you have to constantly deal with the seam converting between the two types, which is the most unwieldy part of the scheme. By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.


> By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.

I suppose that's a fair point. I guess a better analogy might be to operations on normal integer types, where overflow is considered an error but that is not reflected in default operator function signatures.

I do want to circle back a bit and say that my mention of signaling NaNs would probably have been better served by a discussion of floating point exceptions more generally. In particular, I feel like existing IEEE floating point technically supports something like what you propose via hardware floating point exceptions and/or sNaNs, but I don't know how well those capabilities are actually supported (e.g., from what I remember the C++ interface for dealing with that kind of thing was clunky at best). I want to say that lifting those semantics into programming languages might interfere with normally desirable optimizations as well (e.g., effectively adding a branch after floating point operations might interfere with vectorization), though I suppose Rust could always pull what it did with integer overflow and turn off checks in release mode, as much as I dislike that decision.
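
For reference, the integer-overflow precedent mentioned above looks like this in practice; a checked float type could plausibly mirror the same split between default operators and explicit escape hatches:

    fn main() {
        let x: u8 = 255;
        // `x + 1` panics in debug builds but wraps to 0 in release builds
        // (unless overflow-checks is turned on in the release profile).
        // The explicit methods behave identically in every build:
        assert_eq!(x.checked_add(1), None);   // overflow reported as None
        assert_eq!(x.wrapping_add(1), 0);     // overflow wraps explicitly
        assert_eq!(x.saturating_add(1), 255); // overflow clamps to u8::MAX
    }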


Most of the herpesvirus family have associations with neurodegenerative disorders, also including HSV.

A lack of oral hygiene and gum disease is associated with neurodegeneration.

Lots of metabolic diseases have associations with neurodegenerative disorders: insulin, kidney, and liver dysfunction.

The gut microbiome...

Putting immune or metabolic stress on the brain can cause it to go into this disease state death spiral.


> A lack of oral hygiene and gum disease is associated with neurodegeneration.

It's important to remember that association/correlation is not causality. People who brush their teeth reliably are probably more likely to exercise and do other healthy behaviors, too (avoid smoking, ...).


That's been studied and the evidence suggests that there is some causation. Bacteria that cause your gum disease can get into the bloodstream and reach the brain, where they release enzymes that cause inflammation and can damage cells.

In particular, this can seriously impair microglial cells, which is something you really don't want to have happen if you value maintaining a well-functioning brain.


There's a hypothesized mechanism, but again, no actual demonstration of causality. No one is RCTing brushing teeth, for obvious reasons.

Proposed mechanisms are better than statistical handwaving.

This gives researchers (the lab kind) something to investigate.

I respect this kind of science a lot more than statistical paper pushing.


It's also very possible to practice great oral hygiene and still have bad gum disease. Gum disease seems to have a potentially strong genetic risk factor.

As nice as this statistical thinking alone is, it can also slow things down.

There's a reason why this finding is valuable. It suggests a mechanistic hypothesis that bacteria are entering the bloodstream, heart, and passing the blood-brain barrier.

This is a very valuable line of investigation that can lead to a smoking gun for one class of causal mechanisms, and potentially to preventative care or treatment.

If we blindly follow just the statistics, we'd never get any real science done.

Correlation does not imply causation. But when it gives you something to investigate, don't sit on it.


We're too early.

This is AI's "dialup era" (pre-56k, maybe even the 2400 baud era).

We've got a bunch of models, but they don't fit into many products.

Companies and leadership were told to "adopt AI" and given crude tools with no instructions. Of course it failed.

Chat is an interesting UX, but it's primitive. We need better ways to connect domains, especially multi-dimensional ones.

Most products are "bolting on" AI. There are few products that really "get it". Adobe is one of the only companies I've seen with actually compelling AI + interface results, and even their experiments are just early demos [1-4]. (I've built open source versions of most of these.)

We're in for another 5 years of figuring this out. And we don't need monolithic AI models via APIs. We need access to the AI building blocks and sub-networks so we can adapt and fine-tune models to the actual control surfaces. That's when the real takeoff will happen.

[1] Relighting scenes: https://youtu.be/YqAAFX1XXY8?si=DG6ODYZXInb0Ckvc&t=211

[2] Image -> 3D editing: https://youtu.be/BLxFn_BFB5c?si=GJg12gU5gFU9ZpVc&t=185 (payoff is at 3:54)

[3] Image -> Gaussian -> Gaussian editing: https://youtu.be/z3lHAahgpRk?si=XwSouqEJUFhC44TP&t=285

[4] 3D -> image with semantic tags: https://youtu.be/z275i_6jDPc?si=2HaatjXOEk3lHeW-&t=443

edit: curious why I'm getting the flood of downvotes for saying we're too early. Care to offer a counter argument I can consider?


This is AI's Segway era. Perfectly functional device, but the early-2000s notion that it was going to become the primary mode of transportation was just an investor-fueled pipe dream.

Just add a stick and sharing, and you get scooters, which are quite successful.

I never said they weren't successful.

AI is going to be bigger than Segway / personal mobility.

I think dialup is the appropriate analogy because the world was building WebVan-type companies before the technology was sufficiently wide spread to support the economics.

In this case, the technology is too concentrated and there aren't enough ways to adapt models to problems. The models are too big, too slow, not granular enough, etc. They aren't built on a per-problem-domain basis, but rather as "one-size-fits-all" models.


You want to build a world where rolling back is the right thing to do 95% of the time. So that it almost always works and you don't even have to think about it.

During an incident, the incident lead should be able to say to your team's on-call: "Can you roll back? If so, roll back," and the on-call engineer should know if it's okay. By default it should be, if you're writing code mindfully.

Certain well-understood migrations are the only cases where roll back might not be acceptable.

Always keep your services in a "safe to roll back", "graceful fail", "fail open" state.

This requires tremendous engineering consciousness across the entire org. Every team must be a diligent custodian of this. And even then, it will sometimes break down.

Never make code changes you can't roll back from (service calls, data write formats, etc.) without good reason and without informing the team.

I've been in the line of billion dollar transaction value services for most of my career. And unfortunately I've been in billion dollar outages.


"Fail open" state would have been improper here, as the system being impacted was a security-critical system: firewall rules.

It is absolutely the wrong approach to "fail open" when you can't run security-critical operations.


Cloudflare is supposed to protect me from occasional ddos, not take my business offline entirely.

This can be architected in such a way that if one rules engine crashes, other systems are not impacted and other rules, cached rules, heuristics, global policies, etc. continue to function and provide shielding.

You can't ask for Cloudflare to turn on a dime and implement this in this manner. Their infra is probably very sensibly architected by great engineers. But there are always holes, especially when moving fast, migrating systems, etc. And there's probably room for more resiliency.


Appears to be fixed now. Just lost 30 minutes of work.

If this is unwrap() again, we need to have a talk about Rust panic safety.


Time to rewrite Rust’s unwrap() in Rust obviously.

Does it make it worse or better if I say it's RSC?

https://www.cloudflarestatus.com/incidents/lfrm31y6sw9q


Well, technically RSC was messed up, and then the hotfix for the messed up RSC was itself messed up. I guess there’s a lot of blame to go around.

Now multiply that 'just' by the number of people affected.

This probably isn't new behavior, simply something we're witnessing for the very first time.

We haven't observed orcas preying on moose or primates, but the former has plenty of supporting evidence and the latter has probably happened at some point.

In any case, zoonotic reservoirs likely slosh around a lot more than we think before spillover events.


2/3rds of all human pathogens originated from zoonotic spillover!

https://pmc.ncbi.nlm.nih.gov/articles/PMC8182890/


We also pass on diseases to animals so it is a never-ending cycle.

I agree. This has probably been happening for millions of years. Bats often live in dense colonies in caves and tree trunks with small exits, making rat predation possible.

Some native Arctic peoples have traditions of killer whales eating people, although officially they have only killed people while in captivity. A person in a skin kayak would be easy prey for one.


> Content Supply has vastly overshot Demand

Content has never overshot demand.

I would drown myself in content if it were good and abundant. It's not. It's lackluster and middling.

Content is scarce because it is expensive to produce. The wrong people get put in charge of projects (or tastes/reception is hard to gauge, and experiences hard to engineer). We wind up with a lot of expensive garbage.

There is a dearth of sci-fi and fantasy. A few dozen titles get created, and half of them are garbage. I have money to pay to watch something every night. It just doesn't exist and isn't good.

I'd pay to watch original content. Original ideas don't get funded because it's "too risky". Which is a consequence of the big budgets, massive personnel and time investments, etc.

I see a film every other year or so where I'm not questioning the character arcs or the pacing. Where I'm fully enveloped and transfixed. Where every note is perfect. That doesn't happen frequently enough. It's rare and fleeting, and that's sad.

We're in the Precambrian times. Great content is nigh non-existent. There's a whole lot of "acceptable" and "good enough". But rarely anything sublime that steals away your brain for the rest of the day, forcing you to ruminate.

I want to live in a world where content fits my preferences like a glove and is constantly surprising and delighting me. Unlimited intellectual stimulation and adventure. I know that pinnacle can be reached eventually, just not with our current limitations. This scarcity trough.


I agree with the glove-fit bit, while at the same time thinking that we're at the next level of siloed bubbles. All aspects of your world, tailored to how you already think, including TV series/movies/etc.

No new ideas.

(Not saying this is your intent, and yes I do indeed watch what I like. I am not immune to the very thing I worry about)


Don't get me wrong - I might have poorly communicated my intent.

I want to be catered to and subverted. I want to see things I'm comfortable with and things that make me question everything I know. Things that make me deeply uncomfortable. The full range of experiences.

I just want it to be great and hit the notes in ways that leave me in awe.

This does happen with current media, but it's exceedingly rare. It's a combination of great writing, fantastic direction, unusual stories, phenomenal acting. The mood, set dec and DP, the pacing and editing. Everything lining up in a stroke of brilliance.

And what's funny is that when it happens, people tend to disagree or have differing opinions about it. It's deeply personal.

You know when something speaks to you.


I agree. And I don't think you communicated it poorly, it's just that I think it will be more and more difficult to get that full range. Most folk don't want that. Most even prefer siloed.

Yet perhaps I am too jaded on this. There will be lots of niche content...


> Content has never overshot demand.

>

> I would drown myself in content if it were good and abundant. It's not. It's lackluster and middling

There is only so much time per day you can dedicate to sitting in front of a screen outside of work hours.


If the poster just wants sci-fi (and especially if that doesn't include sci-fi horror without spaceships), I could see them only getting one or two good movies per year.

Me, I’d need two or three more lifetimes to get through my probably-good list of movies and tv, just for single watches… but I’m up for just about any genre.


Apple TV has a ton of great original sci-fi.

What are some good sci-fis in your opinion?

HBO is destination television - it's the taste that Netflix lacks and so desperately needs.

WB and HBO together have the franchises that Netflix has been trying to build: DC, Harry Potter, Game of Thrones, Lord of the Rings (film + game rights, minus TV rights), Westworld, The Matrix, Mad Max, King Kong, all of Cartoon Network and Adult Swim.

What does Paramount or Hulu have? It's a lot of fluff on the same or even lower caliber than Netflix.

Amazon gives some good stuff away for "free". Apple has good shows, too.

Disney? Meh - they've got Andor and that's really it.

If whoever buys HBO also buys A24, it's over. That's all I need.


Westworld... the show you can't watch on HBO anymore. Taste? Like what they just did to one of the best shows ever, Mad Men? HBO today (or Max, or HBO Max, or whatever their branding of the day is) is not the HBO it was before David Zaslav got his hands on it.

Paramount/NBC/Peacock/Fox/ESPN have live sports, which are the only thing left worth paying for, everything else can be skipped or pirated.

ESPN comes with your Disney+, which also gives you Hulu.

Peacock says they have sports, but then doesn't actually show all of the matches, and instead tries to prop up USA and Telemundo numbers. Many times I have to watch a match in a language I'm not fluent in, even though I'm paying for Peacock specifically because they have the rights. I can't watch USA since I cut the cord years ago, so I'm left hoping I can find the right spot for my OTA antenna to be able to tune in.


If anything, the gambling ads interspersed with sports can be skipped or pirated.

> What does Paramount or Hulu have?

Even less now that Taylor Sheridan has left for greener pastures.


Sheridan is staying with Paramount until 2029, and the shows he made for them will remain theirs. So Sheridan will still be elevating Paramount subscriptions long into the future.

Westworld, the show that doesn't exist anymore because they would have had to pay royalties to actors and workers?

Screw them. Like, literally choosing to remove the show to make an example of it.


Paramount has Star Trek, so it's a must-have for any Trekkie. And Disney has Star Wars, so it's a must-have for any nerf herder. :p

Nu-Trek definitely isn't a must-have for any Trekkie.

Netflix could have built many franchises by now, but instead it burns them all in season 1 or season 2 and makes slop on purpose (i.e., dialogue that explains what's happening on screen for the people not watching directly, etc.). They also just had the most successful franchise launch of all time: KPop Demon Hunters. The brand is apparently worth about $10 billion right now, and they bought the film and the rights from Sony for under $20 million.

If they purchase HBO, I assume HBO will regress to the baseline that is Netflix content, not the other way around.


> Disney? Meh - they've got Andor and that's really it.

I like this post about how The Matrix, Lord of the Rings, Mad Max and Harry Potter are all valuable IP, written by somebody who appears to have never heard of Marvel comics, Star Wars, Indiana Jones, The Simpsons, any Pixar film, Avatar, The X-Files, or The Bachelor.


> Disney? Meh - they've got Andor and that's really it.

Disney owns so much content, IP and nostalgia that they don't care much.


Pretty sure their shareholders care. Their market cap is at pre 2019 levels. Their earnings are back to 2014 levels.

https://companiesmarketcap.com/walt-disney/earnings/

Meanwhile, Netflix is up $300B since 2019. And Netflix’s earnings are about to surpass Disney’s:

https://companiesmarketcap.com/netflix/earnings/

And Netflix has 13,000 employees, while Disney has 233,000.


> And Netflix has 13,000 employees, while Disney has 233,000.

And Disney is significantly more than just a single streaming service struggling to get content.

Their Direct-to-Consumer business (aka Netflix equivalent) posted a net profit increase 9.5x year on year (from 143 million to 1.3 billion) and has more than half the number of Netflix subscribers (196 million vs. 300+ million) in significantly shorter time than Netflix. https://thewaltdisneycompany.com/the-walt-disney-company-rep...


Operating profit, not net profit. Net income (or profit or earnings) can only be calculated for the whole business.

> has more than half the number of Netflix subscribers (196 million vs. 300+ million) in significantly shorter time than Netflix.

I don’t find this impressive. Streaming has been the future for over a decade, and Disney has long had more, and more popular content than Netflix. So why is it taking them so long to catch up to Netflix? They should have surpassed Netflix a long time ago.

Disney even sells sports.


> Streaming has been the future for over a decade, and Disney has long had more, and more popular content than Netflix. So why is it taking them so long to catch up to Netflix?

Netflix started streaming 18 years ago. Disney+ appeared 6 years ago, and Disney didn't acquire Hulu (as part of 20th Century Fox) until 2019. Also, Disney+ appeared in the era of multiple streaming services, and IIRC didn't pull their content from Netflix until sometime after they launched Disney+. Netflix also didn't lose content from other big content distributors like WB until later.

To compare: in near-absence of any competition it took Netflix until 2021 (10 years) to reach 200 million subscribers. There's Hulu that was launched in 2007, but they were nearly absent outside of the US.

So Disney has streaming competition on all fronts, has gone through price increases etc., and still grows their streaming service.

---

Netflix buying WB is not really a desperation move, but it is a question of survival. Netflix has very little content of its own, and has trouble licensing relevant content from studios that are now its direct rivals: Disney, WB, Paramount etc.

They all happily had their content on Netflix, and then pulled nearly all of it to launch their own streaming platforms.

Netflix has survived by dumping enormous amounts of money into producing their own content, and licensing foreign content. But that is clearly not enough to maintain momentum, or to keep subscribers interested in the service. With WB they get their hands on a lot of IP that they can inject back into the service.

