I am happy to declare that I am, myself, a Luddite in many ways. No problem with that. I like lots of technology, not gonna lie. I'm a programmer and have a PhD in math, but I think it's gone too far in many respects. And if I have to build a coalition of millions, I'll do it one step at a time.
Mill and factory owners took to shooting protesters and eventually the movement was suppressed by legal and military force, which included execution and penal transportation of accused and convicted Luddites.[5]
That was the outcome in those times. Likewise, it is far more likely that harmless AI use (like adding dumb AI effects to a family video) will face stronger suppression from current power systems than that society will suppress critics of generative AI. The latter, because criticism mostly boils down to cultural preferences and protectionism, not the kind of real harm that would build the collective mass needed to threaten progress. The former, because people these days are heavily motivated by outrage and abstract future threats, well before tangible evidence of widespread harm exists.
There's a term to describe this: creative destruction, literally.
We are at the cusp of a full-scale commoditization stage of generative AI that will impact all aspects of the creative/software fields.
If you want to know what this creative destruction will look like, look no further than previous centers of innovation: Detroit, the emptying naval shipyards of Busan, the zombie game studios around Osaka. They are signs of things to come.
TLDR: AI is going to destroy a lot of white-collar, high-creativity, high-intellect jobs that aren't protected by a union or occupational collective association, institutions which were all created to keep creative destruction from taking people's livelihoods away.
Unfortunately, 10 years ago when I tried to create a union organization for software engineers/designers and creative workers, it was sabotaged by fellow software engineers, who seem far more susceptible to psyops than any other group.
We might see a repeat of what happened in Japan after the mid-90s, when much of the country's stable and ample jobs disappeared thanks to the internet and globalized financing backed by an authoritarian labour market.
Instead, this time it's not a communist country working together with bankers; it's a small group of technology companies pushing out the bankers and creating a sort of dystopian, AI-dominated labour field where humans no longer dumpster-dive for wages but flee to whatever labour industries AI cannot infiltrate, i.e. people literally switching careers to stay employed because their old jobs were outsourced to AI.
I didn't even talk about the impact on wages (spoiler: it will enrich the 0.1% while consigning the 99.9% to temporary gigs and unstable employment, not unlike regions which experienced similar creative destruction back in the 90s and early 2000s).
It's hard to see a future without some sort of universal basic income and increased taxation on billionaires who will no longer be able to hide their assets offshore without facing serious headwinds not unlike how Chinese billionaires fear the CCP.
I'm not sure how relevant that is to Meta Movie Gen. I've tried all the tools: Luma, Runway, Kling.
Luma is by far the worst: compared to Runway and Kling, it produces the lowest-quality and least stable video. Runway has that distinctive "photo in the foreground with animated background" signature that turns many people off.
Kling and Runway share the same rampant "picture stability" issue, requiring several prompts before you get something usable (note I don't even include Luma here because its output just isn't competitive, imho).
Meta Movie Gen seems to make heavy use of the SAM2 model, which gets me super excited, as I've always thought that would bring about the spatiotemporal golden chalice we've always wanted. This is evident in the prompt-based editing and tracking of objects in the scene (an incredible achievement, btw).
Until I have the tool to try myself I will withhold judgement, but from my own personal experience with generative video, Meta Movie Gen is quite possibly SOTA.
I simply have not seen this level of stability and confidence in output. Resolution quality aside (where Kling and Runway are already at the top of the game), the sheer amount of training data Meta must have at its disposal is far more than Kling (which scrapes almost the entirety of Western content, copyrights be damned) or Runway can ever hope to acquire. Add the top-notch researchers and deep learning experts they house and feed, and I'm very optimistic that Meta and/or Google will achieve SOTA across the board.
Microsoft, on the other hand, has been puttering along by going all-in on OpenAI (above, below, and beside), which has been largely disappointing in terms of delivery and performance, while trying to stifle competition and protect its feeble economic moat via the recently failed regulatory-capture attempt.
TLDR: this is quite possibly SOTA, and Meta/Google have far more training data than anybody else in the existing space. Luma is trash.
They made $300 million in revenue last month, apparently up 17x from last year.[1] To get a P/E ratio of 20, assuming (falsely) that their spending holds constant, they'd need ~4x more revenue.
I hold costs constant at $8B and get x = 4.4. $8B is probably a slight overestimate of current costs; I just took the losses from the article and discounted the last year's revenue to $3B. Users consume inference, which costs money, so in reality costs will scale up with revenue, which is why I note this is a false assumption. But I also don't know how much of that went into training and whether they'll keep training at the current rate, so I can't get to a better guess.
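The arithmetic behind x = 4.4 can be sketched out explicitly. This assumes the widely reported ~$157B valuation from OpenAI's October 2024 round (not stated in this thread) and annualizes $300M/month to $3.6B:

```typescript
// Back-of-envelope check of the numbers above. Assumptions (mine, not
// from the thread): valuation ~$157B, revenue annualized from $300M/month,
// costs held flat at $8B (the "false assumption" noted above).
const valuation = 157e9;      // reported post-money valuation, USD
const annualRevenue = 3.6e9;  // $300M/month * 12
const annualCosts = 8e9;      // held constant
const targetPE = 20;

// A P/E of 20 requires earnings of valuation / 20.
const requiredEarnings = valuation / targetPE; // $7.85B

// Solve annualRevenue * x - annualCosts = requiredEarnings for x,
// the multiple by which revenue must grow.
const neededMultiple = (requiredEarnings + annualCosts) / annualRevenue;

console.log(neededMultiple.toFixed(1)); // ~4.4
```

So the "~4x more revenue" figure falls out directly once you fix the valuation and hold costs at $8B.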
If OpenAI starts making a lot of money on each subscription -- implied by your assumption that revenues will 4.4x while expenses stay constant -- the competition will aggressively undercut OpenAI in price. Everybody wants to take market share away from OpenAI, and that means OpenAI has to subsidize their users or sell at break even to prevent that from happening.
Furthermore, training also gets exponentially more expensive as models keep growing and this R&D is not optional. It's absolutely necessary to keep current OpenAI subscribers happy.
OpenAI will lose money, and lots of it, for years to come. They have no clear path to profitability. The money they just raised will last maybe 18 months, and then what? Are they going to raise another 20bn at a 500bn valuation in 2026? Is their strategy AGI or bust?
"That meant OpenAI could provide a return for investors, but those profits were limited. OpenAI has also long been in talks to restructure itself as a for-profit company. But that is not expected to happen until sometime next year..."
I'm not justifying anything here, but I think their revenues are expected to triple next year. Now, that doesn't mean they will, of course. But why do you say they've flatlined?
Is there anything that runs on WASM for scraping? The issue is that you need to enable flags and turn off other security features to scrape in your web browser, which is why it's not popular, but with WASM that might change.
WASM runs in a sandbox. It can only talk to the outside world via JavaScript, so you can forget the idea that it might be a way to crack through browser security.
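A minimal sketch of why this is true: a WASM module has no ambient authority; everything it can ever call must be declared in its import section and supplied by the JavaScript host. Using the smallest valid module (header only, no code):

```typescript
// WASM has no hidden channel to fetch(), the DOM, or raw sockets.
// A module can only call host functions explicitly listed in its
// import section, and the host chooses what to pass in.

// Smallest valid module: magic number "\0asm" plus binary version 1.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

const mod = new WebAssembly.Module(emptyModule);

// Everything the module could ever call must appear in this list;
// this module declares no imports, so it can reach nothing at all.
const imports = WebAssembly.Module.imports(mod);
console.log(imports.length); // 0

// Instantiation only grants capabilities the host passes in here.
const instance = new WebAssembly.Instance(mod, {});
console.log(instance instanceof WebAssembly.Instance); // true
```

Any "scraping from WASM" would just be the host JavaScript doing the network calls on the module's behalf, subject to the same same-origin and CORS rules as any other script.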
Maybe somebody will make a web browser with all of the security locks disabled. Sort of like the Russian commander in "The Hunt for Red October" who disabled his missiles' security features in order to more effectively target the American sub, but then got blown up by his own missile.
Just because the internet data-mining bots don't care about you personally doesn't mean they won't create hyper-accurate portraits of your personal life that can, and likely will, be sold, hacked, or leaked.
You have zero reason to think they'll respect your life, and less reason to help them make money off of your misery. At least with a throwaway account and some marginal protections like a VPN and browser fingerprint obfuscation, you can feel mostly secure.