OpenAI was built to influence the eventual value chain of AI in directions that would give the funding parties more confidence that their AI bets would pay off.
This value chain basically revolves around AI substituting for predictions and human judgement in a business process, much like cloud can be (over-simply) modeled as moving Capex to Opex in IT procurement.
They saw that, like any primarily B2B sector, the value chain was necessarily going to be vertically stratified. The output of the AI value chain is an input to another value chain; it's not a standalone consumer-facing proposition.
The point of OpenAI is to invest/incubate a Microsoft or Intel, not a Compaq or Sun.
They wanted to spend a comparatively small amount of money to get a feel for a likely vision of the long-term AI value chain, and weaponize selective openness to: 1) establish moats, 2) encourage commodification of complementary layers which add value to, or create an ecosystem around, 'their' layer(s), and 3) get insider insight into who their true substitutes are by subsidizing companies to use their APIs.
As AI is a technology that largely provides benefit by modifying business processes, rather than by improving existing technology behind the scenes, your blue ocean strategy will mostly involve replacing substitutes instead of displacing direct competitors. So points 2 and 3 are most important when deciding where to funnel the largest slice of the funding pie.
_Side Note: Becoming an Apple (end-to-end vertical integration) is much harder to predict ahead of time, relies on the 'taste' and curation of key individuals giving them much of the economic leverage, and is more likely to derail along the way._
They went from non-profit to for-profit after they confirmed the hypothesis that they can create generalizable base models that others can add business logic and constraints to, generating "magic" without having to share the underlying model.
In turn, a future AI SaaS provider can specialize in tuning the "base+1" model, then sell that value-add service to the companies who are actually incorporating AI into their business processes.
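To make the "base+1" idea concrete, here's a minimal sketch of what that tuning step could look like, using Hugging Face transformers and GPT-2 purely as stand-ins; the model name, corpus path, and training settings are illustrative assumptions, not anything a particular provider actually uses:

```python
# Minimal "base+1" sketch: take a general-purpose base model and specialize
# it on proprietary domain data. GPT-2 and the corpus path are stand-ins.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 ships without a pad token
base = AutoModelForCausalLM.from_pretrained("gpt2")  # the "base" layer

# The proprietary/domain text is where the "+1" business value lives.
data = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=base,
    args=TrainingArguments(output_dir="base_plus_one", num_train_epochs=1),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the tuned checkpoint is the sellable "base+1" artifact
```

The point of the sketch is the shape of the business, not the code: the base weights stay upstream, and the SaaS provider's value-add is the curated domain corpus plus the tuning recipe.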
It turned out that a key advantage at the base layer is just brute force and money, and subsequent results suggest there's no inherent ceiling to this: you can just spend more money to get a model that is strictly better than the last one.
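For a feel of the "no inherent ceiling" claim: the published scaling-law fits (e.g., Kaplan et al., 2020) describe loss falling as a smooth power law in compute. A toy illustration, with constants made up for readability rather than taken from the paper:

```python
# Toy scaling law: L(C) = a * C^(-alpha). Loss keeps improving smoothly as
# compute grows; the constants below are illustrative, not published fits.
def loss_given_compute(pf_days: float, a: float = 2.5, alpha: float = 0.05) -> float:
    return a * pf_days ** -alpha

for budget in [1, 10, 100, 1_000, 10_000]:
    print(f"{budget:>6} PF-days -> loss ~ {loss_given_compute(budget):.3f}")
```

Each 10x of spend buys a similar relative improvement, which is exactly the "just spend more money" dynamic.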
There is likely much more pricing power here than in cloud.
In cloud, your substitute (for the category) is buying and managing commodity hardware. This introduces a large-ish baseline cost, but can then give you more favorable unit costs if your compute load is somewhat predictable over the long term.
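That substitution is just a fixed-cost vs unit-cost break-even. A toy calculation (all dollar figures invented for illustration):

```python
# Cloud vs. own-hardware break-even: owning has a big fixed (Capex) baseline
# but cheaper marginal compute; cloud is pure Opex. Figures are made up.
OWN_FIXED = 500_000      # upfront hardware, colo, setup
OWN_PER_UNIT = 0.02      # $ per compute-unit once you own the gear
CLOUD_PER_UNIT = 0.10    # $ per compute-unit on demand

# Owning wins once OWN_FIXED + OWN_PER_UNIT*u < CLOUD_PER_UNIT*u
break_even = OWN_FIXED / (CLOUD_PER_UNIT - OWN_PER_UNIT)
print(f"Owning wins past ~{break_even:,.0f} compute-units")
```

Predictable long-term load clears the break-even point; spiky or uncertain load doesn't, which is why the substitute only disciplines cloud pricing for a subset of buyers.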
More importantly, projects like OpenStack and Kubernetes have been doing everything they can to commoditize the base layer of cloud, largely to minimize switching costs and/or move the competition over profits up to a higher layer. You also have category buyers like Facebook, Backblaze, and Netflix investing heavily in areas aimed at minimizing the economic power of cloud as a category, so they have leverage to protect their own margins.
It's possible the key "layer battle" will be between the hardware (Nvidia/TPUs) and base model (OpenAI) layers.
It's very likely hardware will win this for as long as they're the bottleneck. If value creation is a direct function of how much hardware is being utilized for how long, and the value creation is linear-ish as the amount of total hardware scales, the hardware layer just needs to let a bidding war happen, and they'll be capturing much of the economic profit for as long as that continues to be the case.
However, the hardware appears (though I'm no expert) to be easier to design and manufacture; it's mostly a capacity problem at this point. So over time this likely gets commoditized (still highly profitable, but with less pricing power) to the point where the economic leverage shifts to the base model layer. The base layer then becomes the oligopsony buyer, and the high fixed investment the hardware layer made becomes a problem.
The 'base+1' layer will see a large boom of startups and incumbent entrants, and much of the attention and excitement in the press will be equal parts gushing and schadenfreude-mining about that layer. But those companies will be wholly dependent on their access to base models, whose owners will slowly (and deliberately) come to look more and more boring, apart from the occasional handwringing over their monopoly power over our economy and society.
There will be exceptions: companies that can leverage proprietary data and are large enough to build their own base models in-house on that data. Those models are likely to be valuable as internal AI services, preventing an 'OpenAI' from having as much leverage over them and being much better matched to their process needs, but they will not be as generalized as the models coming out of the arms race among companies who see base models as their primary competitive advantage. Facebook and Twitter are two obvious examples in this category, and they will primarily consume their own models rather than expose them directly as model-as-a-service.
The biggest question to me is whether there's a feedback loop here that leads to one clear winning base layer company (probably the world's most well-funded startup to date, given the inherent upfront costs and potential long-term income), or whether multiple large, incumbent tech companies see this as existential enough that they more or less keep pace with each other, leaving us with a long-term stable oligopoly of mostly interchangeable base layers, like we have in cloud at the moment.
Things get more complex when you look at other large investment efforts, such as those in China, but this feels like a plausible scenario for the upcoming SV-focused AI wars.
Apparently you don't need to be a large company to train GPT-3. EleutherAI is using free GPU time from CoreWeave, the largest North American GPU miner, which agreed to the deal to get the final model open-sourced with its name on it. They are also looking at offering it as an API.
I think it's great they're doing this, but GPT-3 is the bellwether, not the end state.
Open models will function a lot like open source does today: there are hobby projects, charitable projects, and companies making bad strategic decisions (Sun open-sourcing Java), but the bulk of open AI (open research and models, not the company) will be funded and released strategically by large companies trying to maintain market power.
I'm thinking of models that will take $100 million to $1 billion to create, or even more.
We spend billions on chip fabs because we can project out long term profitability of a huge upfront investment that gives you ongoing high-margin capacity. The current (admittedly early and noisy) data we have about AI models looks very similar IMO.
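The fab comparison is an NPV argument: a huge upfront outlay justified by years of high-margin capacity. A toy version, with every figure invented for illustration:

```python
# Toy NPV of a fab-like bet: large upfront cost, then a stream of
# high-margin cash flows. All numbers are made up.
UPFRONT = 5_000_000_000        # $5B build-out
ANNUAL_MARGIN = 1_200_000_000  # yearly cash flow once capacity is online
YEARS, DISCOUNT = 10, 0.10

npv = -UPFRONT + sum(ANNUAL_MARGIN / (1 + DISCOUNT) ** t
                     for t in range(1, YEARS + 1))
print(f"NPV over {YEARS} years: ${npv / 1e9:.1f}B")  # positive -> the bet pays
```

If billion-dollar models produce a similar cash-flow profile, the same logic that funds fabs funds base models.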
The other parallel is that the initial computing revolution allowed a large-scale shift of business activities from teams of people doing manual work, coordinated by a supervisor, towards having those functions live inside a spreadsheet, word processor, or email.
This swaps a team of people with (now outdated) specializations for fewer people accomplishing the same admin/clerical work by letting the computer do what it's good at.
I think a similar shift will happen with AI (and other technologies) where work done by humans in cost centers is retooled to allow fewer people to do a better job at less cost. Think compliance, customer support, business intelligence, HR, etc.
If that ends up being the case, donating a few million dollars' worth of GPU time doesn't change the larger trends, and likely ends up being useful cover for why we shouldn't be worried about what the large companies are up to in AI, because we have access to crowdsourced and donated models.
I think calling this a "wild-ass guess" undersells it a bit (either that or we have very different definitions of a WAG). Very well thought-through and compelling case.
My biggest question is whether composable models are indeed the general case, which you say they confirmed as evidenced by the shift away from non-profit. It's certainly true for some domains, but I wonder if it's universal enough to enable the ecosystem you describe.
This is neat, but almost no startups of any kind, even mid-size corps, have such complicated and intricate plans.
More likely: OpenAI was a legit premise, they started to run out of money, MS wanted to license it and it wasn't going to work otherwise, so they just took the temperature with their initial sponsors and staff and went commercial.
It gives cover to those who dismissed the concern pre-COVID.
It's common for a pragmatic 'Cassandra' to point to the newest evidence as permission for people to finally wake up to the problem rather than berating them for not paying attention any earlier.
It's the 'movie' version of negotiating that plays to a populist crowd.
It intersects with other 'tough guy' identity tropes that are being (successfully) used in domestic political battles to gain power, and then are repositioned to be used in foreign relations with predictably terrible outcomes.
The problem is the audience of the negotiating stance isn't the counterparty, it's the domestic base who elected you because they were fed up with nuance and are suspicious of the counterparty.
This isn't just a Brexit problem, but is much more widespread.
I don't know how you'd be able to align incentives any worse.
> It's the 'movie' version of negotiating that plays to a populist crowd.
See also: "the art of the deal"
> The problem is the audience of the negotiating stance isn't the counterparty, it's the domestic base who elected you because they were fed up with nuance and are suspicious of the counterparty.
Not just the nuance, but the lack of dramatic change, a handful of examples of the party of nuance being equally corrupt or useless, and a constant stream of propaganda highlighting all of that.
I agree, we invented this stuff, and we should be able to fix it when it 'breaks.'
However, we see this constantly in man-made systems (system as defined by Systemics), where we can't control the emergent behavior of the system.
It's worse than that because we're talking about systems where change is necessarily political, so in-group/out-group dynamics dominate the perceived discussion ("Democrats think this, but Republicans say that").
This is made worse when one is pleased with the emergent outcome of a system and so ex post facto makes up justifications, but wouldn't feel comfortable arguing directly for the outcome publicly (see "States rights").
It's a problem I think about a lot, and there just aren't any easy answers to either combat bad faith participants or to be convincing that systems thinking needs to be given more weight in discourse.
Our available methods for engaging with the economic are, perhaps functionally, unable to answer for its problems. Wolfgang Streeck describes economic theory as racehorses hitched to a plow: the complexity of the models developed does nothing to advance the work to be done. Economic analysis, according to Streeck, has to reorient: "I am hungry for facts, not for concepts; concepts I access through facts and through the questions they raise, including the need to organize them into a coherent picture."
Funny enough, in the UK/islands I've found it to be much more common for the 'door close' button to work.
As the other replies have said, the button isn't for you, but it does have the psychological side effect of giving you a feeling of control over your experience, which can help quell anxiety. This is also the reason mirrors are common in elevators; the space feels much bigger and less claustrophobic, and you can waste a few seconds while you check yourself out in the mirror which makes the trip feel faster.
However, the button will usually work in a hospital: they want the door to stay open long enough for people who are less mobile to enter, but there are also urgent situations where seconds count, so staff need to be able to get the door closing faster in an emergency.
I parsed the comment as 1) "this is not an original idea, here is the genesis" and 2) "look at this cool video." The parent comment was addressing 1 and you are addressing 2.
No, $2/month is the introductory rate for 3 months, then it goes back up to $34 or $39 a month, so if you're subscribing long-term, it still makes sense to go for the yearly package after your introductory period.
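As a quick first-year sanity check (the thread doesn't state the yearly price, so that figure below is a placeholder):

```python
# First-year cost on the monthly path vs. a yearly package. The $2 intro
# (3 months) and the $34-$39 regular rate come from the comment above;
# YEARLY_PRICE is a hypothetical placeholder, not a quoted figure.
INTRO_MONTHS, INTRO_RATE = 3, 2
REGULAR_RATE = 34              # low end of the stated range
YEARLY_PRICE = 199             # hypothetical

monthly_year_one = INTRO_MONTHS * INTRO_RATE + (12 - INTRO_MONTHS) * REGULAR_RATE
print(f"Monthly path, year 1: ${monthly_year_one}")  # $312 even at the low end
print(f"Yearly package:       ${YEARLY_PRICE}")
```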
Funny enough, my non-fintech failed (shuttered) company is listed here, but my more-or-less fintech company that's actually profitable and raised a bunch of money isn't listed.
Probably because we concentrated more on PR/Press on the first one...