The ChatGPT site crossed 3B visits last month (for perspective: https://imgur.com/a/hqE7jia). It has been >2B since May this year and >1.5B since March 2023. The summer slump of last year? Completely gone.
Gemini and Character AI? A few hundred million. Claude? Doesn't even register. And the gap has only been increasing.
So, "just" brand recognition? That feels like saying Google "just" has brand recognition over Bing.
ChatGPT usage from the main site dwarfs API usage for both OpenAI and Anthropic, so we're not really saying different things here.
The vast majority of people using LLMs just use ChatGPT directly. Anthropic is doing fine for technical or business customers looking to offer LLM services in a wrapper but that doesn't mean they register in the public consciousness.
>Anthropic is doing fine for technical or business customers looking to offer LLM services in a wrapper
If there's an actual business to be found in all this, that's where it's going to be.
The consumer side of this bleeds cash currently and I'm deeply skeptical of enough of the public being convinced to pay subscription fees high enough to cover running costs.
Especially when Google is good enough for most people. Most people just want information, not someone giving them digested info at $x per month. And all the fancy letter-writing assistance? They get that for free via the corporate computer that likely has Microsoft Word.
If inference cost is so cheap and negligible, then we'll be able to run the models on an average computer. Which means they have no business model (assuming continued generosity from Meta in publishing Llama for free).
I think they mean running inference. Either more efficient/powerful hardware, or more efficient software.
No one thinks about the cost of a db query any more, but I'm sure people did back in the day (well, I suppose with cloud stuff, now people do need to think about it again haha)
I just used ChatGPT and 2 other similar services for some personal queries. I copy-pasted the same query in all 3 of them, using their free accounts, just in case one answer looks better than the others. I got into this habit because of the latency: in the time it takes for the first service to answer, I've had time to send the query to 2 others, which makes it easier to ignore the first response if it's not satisfying. Usually it's pretty much the same though. We can nitpick about benchmarks, but I'm not sure they're that relevant for most users anyway. It doesn't matter much to me whether something is wrong 10 or 20% of the time, in both cases I can only send queries for which I can easily check that the answer makes sense.
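The fan-out workflow described above (send the same prompt to several services at once, then ignore any slow or unsatisfying answer) can be sketched roughly as follows. The service functions here are entirely hypothetical stubs; in practice each would call a different chat service:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for "ask service X". In a real script each
# of these would hit a different LLM chat service.
def ask_service_a(prompt: str) -> str:
    return f"A says: {prompt.upper()}"

def ask_service_b(prompt: str) -> str:
    return f"B says: {prompt[::-1]}"

def ask_service_c(prompt: str) -> str:
    return f"C says: {prompt.lower()}"

def fan_out(prompt: str) -> list[str]:
    """Send the same prompt to all services concurrently and collect
    every answer, so a weak response from one can simply be ignored."""
    services = [ask_service_a, ask_service_b, ask_service_c]
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        futures = [pool.submit(fn, prompt) for fn in services]
        return [f.result() for f in futures]

answers = fan_out("Why is the sky blue")
```

Because the requests run concurrently, total latency is roughly that of the slowest service rather than the sum, which is what makes the "ask everyone, keep the best answer" habit cheap.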
I see other comments mentioning they stopped their ChatGPT Plus subscription because the free versions work well enough. I've never paid myself and it doesn't look like I ever will, because things keep getting better for free anyway. My default workflow is already to prompt several LLMs, so if one went down I wouldn't even notice. I'm sure I'm an outlier with this, but still, people might use Perplexity for their searches, or some WhatsApp LLM chatbot for their therapy session, purely based on convenience. There's no lock-in whatsoever into a particular LLM chat interface, and the 3B monthly visits don't seem to make ChatGPT better than its competitors.
And of course as soon as they add ads, product placement, latency or any other limitation their competitors don't have, I'll stop using them and keep on using the other N instead. At this point it feels like they need Microsoft more than Microsoft needs them.
They probably lose on each one, but it's the same with their competitors.
FWIW, regular folks now say "let me ask Chat" for what it used to be "let me Google that"; that is a huge cultural shift, and it happened in only a couple years.
> FWIW, regular folks now say "let me ask Chat" for what it used to be "let me Google that"
I have literally never heard that from anyone, and most everyone I know is “regular folk”.
I work in (large scale) construction, and no one has ever said anything even remotely similar. None of my non-technical or technical business contacts.
I’m not saying you haven’t, and that your in-group doesn’t, just that it’s not quite the cultural phenomenon you’re suggesting.
It just so happened to coincide with Google delivering terrible results. I used to be able to find what I wanted but now the top results only loosely correlate with the search. I’m sure it works for most people’s general searches but it doesn’t work for me.
Myspace and Digg dug their own graves though. Myspace had a very confusing UX and Digg gave more control to advertisers. As long as OpenAI doesn't make huge mistakes, they can hold on to their market share.
The moat was bigger for MySpace and Digg, though, since you had user accounts, karma, and userbases. The thing with chatbots is I can just as easily move to a different one: I have no history or username or anything, and there is no network effect. I don't need all my friends to move to Gemini or Claude; I don't have any friends on OpenAI, it's just a prompt I can get anywhere.
Digg just wasn't big enough. Once these networks get to a certain size they're unkillable. Look at all the turmoil reddit went through, a hated redesign, killed 3rd party apps, a whole protest movement, none of it mattered. People bring up digg and friendster but that was 20 years ago when these networks were way smaller. No top 10 social network has died since then.
Reddit had a much better system for commentary, as opposed to just reacting to URLs.
Sure, you could comment on Digg, but it was a pain and not good for conversations, and that meant there was less to keep people around when it seemed like the company was starting to put its finger on the scales for URL submissions.
It wasn't a pain on Digg, and it was equally good at conversations.
Reddit did not win due to its features; it won because Digg said it doesn't matter what the users think, we will redesign the site and change how it works regardless of the majority telling us they don't want it.
OpenAI's revenue isn't from advertising, so it should be slightly easier for them to resist the call of enshittification this early in the company's history.
OpenAI can become a bigger advertising company than Google.
When people ask questions like "which product should I buy?", ChatGPT can recommend products from companies willing to pay to have their products recommended by the AI.
This will only work if they can ensure the product that they promote is, in fact, good. Google makes it very clear that what you are seeing is popular (or is a paid ad), but they don't endorse it. ChatGPT is seen as an assistant for many, and if they start making bad recommendations, things can go bad fast.
As model performance converges, that convenience becomes the strongest moat. Why go to Claude for a marginally better model when you have the ChatGPT app downloaded and all your chat history there?
I actually pre-emptively deleted ChatGPT and my account recently as I suspect that they're going to start aggressively putting ads and user tracking into the site and apps to build revenue. I also bet that if they do go through with putting ads into the app that daily user numbers will drop sharply - one of ChatGPT's biggest draws is its clean, no-nonsense UX. There are plenty of competitors that are as good as o1 so I have lots of choices to jump ship to.
And some of them will be from poisoned data, not just an explicit prompt by the site-owner. A whole new form of spam--excuse me--"AI Engine Optimization."
Google search is free. I suspect OpenAI may have to start charging for ChatGPT at some point so they stop hemorrhaging money. Customers who are opening their wallet might shop around for other offerings.
While I recognize this, I have to assume that the other "big players" already have this same data, i.e. anyone with a search engine that's been crawling the web for decades. New entrants to the race? Not so much; for them, that data is a wall.
That gives the people who've already started an advantage over newcomers, but it's not a unique advantage to OpenAI.
The question really should be what if anything gives OpenAI an advantage over Anthropic, Google, Meta, or Amazon? There are at least four players intent on eating OpenAI's market share who already have models in the same ballpark as OpenAI. Is there any reason to suppose that OpenAI keeps the lead for long?
I think their current advantage is willingness to risk public usage of frontier technology. This has been and I predict will be their unique dynamic. It forced the entire market to react, but they are still reacting reluctantly. I just played with Gemini this morning for example and it won't make an image with a person in it at all. I think that is all you need to know about most of the competition.
I think Anthropic is a serious technical competitor and I personally use their product more than OpenAI, BUT again I think their corporate cautiousness will have them always +/- a small delta from OpenAI's models. I just don't see them taking the risk of releasing a step function model before OpenAI or another competitor. I would love to be proven wrong. I am a little curious if the market pressures are getting to them since they updated their "Responsible Scaling Policy".
From what I've seen, Claude Sonnet 3.5 is decidedly less "safe" than GPT-4o, by the relatively new politicized understanding of "safety".
Anthropic takes safety to mean "let's not teach people how to build thermite bombs, engineer grey goo nanobots, or genome-targeted viruses", which is the traditional futurist concern with AI safety.
OpenAI and Google safety teams are far more concerned with revising history, protecting egos, and coddling the precious feelings of their users. As long as no fee-fees are hurt, it's full speed ahead to paperclip maximization.
This has not been my experience. Twice in the last week I've had Claude refuse to answer questions about a specific racial separatist group (nothing about their ideology, just their name and facts about their membership) and questions about unconventional ways to assess job candidates. Both times I turned to ChatGPT and it gave me an answer immediately
Not to dispute your particular comment, which I think is right, but it's worth pointing out we're full steam ahead on paperclips regardless of any AI company. This has been true for some 300 years, longer depending how flexible we are with definitions and where we locate inflection points
Well, at this point most new data being created is conversations with ChatGPT, seeing as how Stack Overflow and Reddit are increasingly useless, so their conversation logs are their moat.
Google and Meta aren't exactly lacking in conversation data: Facebook, Messenger, Instagram, Google Talk, Google Groups, Google Plus, Blogspot comments, YouTube transcripts, etc. The breadth and depth of data those two companies are sitting on, going back for years, is mind-boggling.
Getting to market first is obviously worth something but even if you're bullish on their ability to get products out faster near term, Google's going to be breathing right down their neck.
They may have some regulatory advantages too, given that they're (sort of) not a part of a huge vertically integrated tech conglomerate (i.e. they may be able to get away with some stuff that Google could not).
I don't know if this is going to emerge as a monopoly, and it likely won't, but for whatever reason, OpenAI and Anthropic have been several months ahead of everyone else for quite some time.
I think the perception that they're several months ahead of everyone is also a branding achievement: They are ahead on Chat LLMs specifically. Meta, Google, and others crush OpenAI on a variety of other model types, but they also aren't hyping their products up to the same degree.
Segment Anything 2 is fantastic, but less mysterious because it's open source. NotebookLM is amazing, but nobody is rushing to create benchmarks for it. AlphaFold is never going to be used by consumers like ChatGPT.
OpenAI is certainly competitive, but they also work overtime to hype everything they produce as "one step closer to the singularity" in a way that the others don't.
>Meta, Google, and others crush OpenAI on a variety of other model types, but they also aren't hyping their products up to the same degree.
They aren't letting anyone external have access to their top end products either. Google invented transformers and kept the field stagnant for 5 years because they were afraid it would eat into their search monopoly.
OpenAI is 80% product revenue and 20% API revenue. Anthropic is 40/60 in the other direction, but Mike Krieger is now CPO and trying to change that. Amazon is launching a paid version of Alexa. Google is selling their Gemini assistant (which is honestly okay) and NotebookLM is a great product. Meta hasn't built a standalone AI product that you can pay for yet.
The combination of the latest models in products that people want to use is what will drive growth.
Not sure why this has been voted down - X.ai has a 100K H100 cluster in Memphis, and Meta either has (by now) or is in process of acquiring 350K H100s!
Unlike the hyperscalers (i.e. cloud providers), Meta has a use for these themselves for inference to run their business on.
My 8-year-old knows what ChatGPT is but has never heard of any other LLM (or of OpenAI, for that matter). To them, they're all "ChatGPT," in the same way they refer to searching the internet as "googling" (while being unaware of Bing, DDG or any other search engine).
I think it shows really well how OpenAI was caught off guard when ChatGPT got popular and proved unexpectedly useful for a lot of people. They just gave it a technical name for what it was: a Generative Pre-trained Transformer model fine-tuned for chat-style interaction. If they had any plans for making a product close to what it is today, they would have given it a catchier name. And now they're kind of stuck with it.
Well, they can't come up with version names that stand out in any way, so I don't expect them to give their core product a better name anytime soon. I wish they would spend a little time on this, but I guess they are too busy building?
Every time I ask myself this, OpenAI comes up with something new and groundbreaking and the other companies play catch-up. The last was the Realtime API. What are they doing right? I don't know.
OpenAI is playing catch-up of their own. The last big announcement they had was "we finally built Artifacts".
This is what happens when there's vibrant competition in a space. Each company is innovating and each company is trying to catch up to their competitors' innovations.
It's easy to limit your view to only the places where OpenAI leads, but that's not the whole picture.
Up front: I have always hated Facebook, from a “consumer” perspective. Good on everyone who made money, etc. I dislike the entire entity, to say the least.
I can’t shake the thought that Meta played an integral role in the open-source side of the LLM movement. Am I wrong? I can’t help but think I’m missing something.
I used to think it was significantly better than most other players, but it feels like everyone else has caught up. Depending on the use case, they have been surpassed as well. I use Perplexity for a lot of things I would have previously used ChatGPT for, mostly because it gives sources with its responses.
It's possible that it's only one strong personality and some money away but my guess is that OpenAI-rosoft have the best stack for doing inference "seriously" at big, big, scale e.g. moving away from hacky research python code and so on.
I'm not so sure about that. They have kind of the opposite incentives to OpenAI. OpenAI, starting without much money, had to hype the "AGI next year" stuff to get billions given to them. Google, on the other hand, is in such a dominant position, with most of the search market, much of the ad market, ownership of DeepMind, and huge amounts of data and money, that it probably doesn't want to be seen as a potential monopoly to be broken up.
As others have said I would say first-mover/brand advantage is the big one. Also their o1 model does seem to have some research behind it that hasn't been replicated by others. If you're curious about the latter claim, here's a blog I wrote about it: https://www.airtrain.ai/blog/how-openai-o1-changes-the-llm-t...
Nothing that other companies couldn't catch up with if OpenAI were to break down or slow down for a year (e.g. because they lost their privileged access to computing resources).
Engineers would quit and start improving the competition. They're still a bit fragile, in my view.
Not really sure since this space is so murky due to the rapid changes happening. It's quite hard to keep track of what's in each offering if you aren't deep into the AI news cycle.
Now personally, I've left the ChatGPT world (meaning I don't pay for a subscription anymore) and have been using Claude from Anthropic much more often for the same tasks, it's been better than my experience with ChatGPT. I prefer Claude's style, Artifacts, etc.
Also been toying with local LLMs for tasks that I know don't require a multi-hundred billion parameters to solve.
Claude is great except for the fact the iOS app seems to require a login every week. I’ve never had to log into ChatGPT but Claude requires a constant login and the passwordless login makes it more of a pain!
I also like 3.5 Sonnet as the best model (best UI too) and it’s the one I ask questions to.
We use Gemini Flash in prod. The latency and cost are just unbeatable; our product uses LLMs for lots of simple tasks, so we don’t need a frontier model.
One hypothetical advantage could be secret agreements / cooperation with certain agencies. That may help influence policy in line with OpenAI's preferred strategy on safety, model access etc.