It's a chicken-and-egg problem. As long as the majority of people who would maintain the social environment are avoiding it, the healthy consensus/operating regime can never emerge.
In my experience the majority consensus is to maintain a quiet, generally polite environment on trains and buses.
But that's precisely the problem: it only takes a tiny minority to change this. If one group, sometimes one person, in a carriage of 50 people decides to go against it, then that's that. It's not even particularly common, but it happens, it's random, and so it's something that must be contended with.
Correct. But the golden question is, do what? The authorities don't care. Rules and laws are rarely enforced, and when they are enforced they're done so unevenly. If you decide to take matters into your own hands, it's much more likely that you will be punished by the law than the person you were correcting. So, what do you expect people to do?
It's not. Pass a law that being noisy or disruptive on a bus or train after a warning carries 10 years of prison time with no parole, and enforce it consistently. The problem will solve itself, no chicken-and-egg problem required. Problematic people can simply be removed from society to make for a good social environment. Adding more good people is not the only option, and in fact it only hides the problem instead of solving it.
This would involve incarcerating a lot of homeless people, which is expensive, and pro-homeless activists would see it as a human rights abuse and fight it.
Well “I badgered claude code for a month and got something that seems to work but I don’t remember how or why” doesn’t make for very compelling reading.
We are seeing the rift between actual hacking and vibe-building opening in real time. People always wanted to do this and get the attention. Now they can do it but it isn’t worth the attention.
But reproducibility should be the point. Because of the test's structure, it approaches an asymptote from one side or the other. I took it once and it approached from green, and my greenness was 77%; a second time it approached from blue, and my blueness was 68%.
A test that allows an answer of neither would deliver more information (transition points and an error bar) without failing to identify a distribution in the population taking the test.
The argument is based on one of these companies hitting the singularity, making it impossible for any other company to catch up ever. I still think it's way more likely we'll see a typical S-curve where innovation starts to plateau. But even a small chance of it happening in the future is worth a lot of money today.
There's a massive gap in this singularity thinking. We ARE the singularity. It has been exponential all the way back to the Big Bang: first the stars, the solar system, life, consciousness, language, computers, the internet. Yes, it is speeding up, and that is exciting, because we are going to experience a lot in our lifetimes. We have a lot of exponential growth to go before progress becomes instant. There are physical limits, too; power generation, for example. I can't believe what dumb shit people bet the world economy on.
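The exponential-vs-S-curve contrast being argued here can be sketched with a toy comparison. All parameters below are arbitrary and purely illustrative: a logistic (S-shaped) curve tracks an exponential early on, then plateaus at a physical limit, which is exactly why the two are hard to tell apart mid-trajectory.

```python
import math

def exponential(t, x0=1.0, r=0.5):
    # Unbounded exponential growth: x(t) = x0 * e^(r*t)
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    # S-curve with the same initial growth rate, but saturating
    # at a carrying capacity K (a stand-in for physical limits).
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable...
print(round(exponential(2), 1), round(logistic(2), 1))
# ...but later the logistic one plateaus near K while the exponential runs away.
print(round(exponential(20), 1), round(logistic(20), 1))
```

The point of the sketch is only that observations from the steep part of the curve can't distinguish the two regimes; where (or whether) the plateau sits is the entire disagreement.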
That's certainly how it looks right now, but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI, but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically, we're a bunch of tribesmen speculating about the future outcomes of the space race (i.e. the impacts, limits, and timeline of ASI).
Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?
If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.
> What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?
It's awesome and world-dominating. You just don't sell access to that AI; instead you directly, by yourself, dominate any field where better AI provides a competitive advantage, as soon as you can afford the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.
Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.
To dominate the real world, you need correcting feedback loops from reality. These feedback loops and regulations (in medicine and other industries) take a long time to come back with good signals. So you are still time-bound by how fast your experiments run.
Yup. That doesn't really take a full-blown AGI on the path to ASI on the path to godhood; it'll just take a somewhat better and more reliable LLM with a decent harness.
That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.
(One possible trigger would be the open models. As long as the gap between SOTA and open models stays constant or decreases, there will come a point where SOTA operators might be forced to cannibalize the software industry themselves, because a third party with an open model and access to infra could pull the trigger first.)
Don't open models and competition between frontier providers both serve as barriers here? If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else. They'd need a sufficient advantage to pull out far ahead of everyone else before others had a chance to react in a meaningful way. Otherwise the competitors that absorbed all your subscriptions would stack that much more hardware and continue to challenge you.
I think meaningful change to the current equilibrium would require at absolute minimum the proprietary equivalent of the development of the transformer architecture.
> If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else.
Integration, and mindset. AI, by its general-purpose nature, subsumes software products. Most products today try to integrate AI inside: put it in a box and use it to supercharge the product. But it's becoming obvious, even to non-technical users, that AI is better on the outside, using the product for you. This gives the SOTA AI companies an advantage over everyone else: they're on the outside, and can assimilate products into their AI ecosystem, like the Borg collective adding others' distinctiveness to their own, reaping outsized and compounding benefits from deep interoperability between each new capability and everything else the AI could already do.
Once one SOTA AI company starts this process, the way I see it, it's the end-game for the industry. The only players that can compete with it are the other SOTA AI companies - but this will just be another race, with nearly-equivalent offerings trading spots in benchmarks/userbase every other month - and that race starts with rapidly cannibalizing the entire software industry, as each provider wants to add new capabilities first, for a momentary advantage.
Once this process starts, I see no way for it to be stopped. Software products will stop being a thing.
Open models can't compete, because they're always lagging proprietary ones. What they do, however, is ensure the above happens: if, for some reason, SOTA AI companies stick to only supplying "digital smarts as a service" for everyone, someone with access to sufficient compute infra is bound to eventually try the end-game strategy with an open model, hoping to get a big payday before the SOTA companies respond in kind.
Either way, the way I see it, software industry as we know it is already living on borrowed time.
I don't understand where the unbeatable edge is supposed to come from here. Don't we already have this in the form of agents using tools? Right now it's CLI but it's not difficult to imagine extending that to a GUI coupled with OCR and image recognition in a way that generalizes.
So suppose ACo attempts to subsume Spotify or Photoshop or whatever. So they ... build their own competing platform internally? That's a lot of work. And now they what, attempt to drive users to it by virtue of it being a first party offering? Okay sure that's just your basic anticompetitive abuse of monopoly I guess. MS got in trouble for that but whatever let's assume that happens.
So now lots of ACo users are using a Photoshop competitor behind the scenes. I guess they purchased a subscription addon for that? And I guess ACo has the home team advantage here (anticompetitive and illegal ofc) but other than that why can't Photoshop compete? It just seems like business as usual to me. What am I missing?
If ACo sells widgets and I also sell widgets, assuming I can get attention from consumers and offer a compelling set of features for a competitive price why can't I get customers exactly? ACo's AI will be able to make use of either widget solution just fine assuming ACo doesn't intentionally sabotage me.
I think the more likely issue is that at some point the cost of building software falls far enough that it ceases to be a viable product category. You just ask an agent for a one off solution and it hands it to you.
Projecting out even farther, eventually the agents get good enough that you don't need to ask for software tools in the first place. You request X, the agent realizes it needs a tool for that, builds the one-off tool, uses it, returns X to you, and the ephemeral purpose-built tool gets disposed of as part of the session history. All of this without the end user ever realizing that a tool to do X was authored at all.
So I guess I agree with your end outcome but disagree about the mechanics and consequences of it.
> Open models can't compete
They can though. There's a gap, sure, but this isn't black and white. Plenty of open models are quite useful for a particular task right now.
One of the most valuable software products in the world is Instagram. Tens of billions of revenue annually.
Any of Meta’s competitors could reproduce Instagram “the software” in every meaningful detail for (let’s say) $100M.
They still don’t have Instagram the product. Reducing that outlay to a few billion tokens doesn’t change that.
I guess I’ll believe this theory when Anthropic or OpenAI rolls out a search engine with an integrated ad platform that can meaningfully compete with Google. How hard can that be?
Instagram isn't a software product, though, except in the trivial sense. It's a portal to the actual service people care about, which is the social network. AI can't do anything about it now, because access to the Instagram social network is legally protected. As long as Meta has a right to control how Instagram is accessed, Instagram the app (or whatever else they decide) will remain at the edge. But beyond that, expect AI. It's happening already, plenty of companies and individuals use third-party software, business automation, and increasingly also AI to create, post, and monitor content. Expect all that to be eaten.
Same with any other social media, DRMed media streaming, and other non-software services, where actual client software does nothing useful and serves primarily as a toll gate.
It's not clear to me that one horse-sized AI allows you to outcompete 100 duck-sized AIs in use by everyone else once you factor in the non-intelligence contributions that the others with weaker AIs bring to the table.
There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.
Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack intangible elements to success, such as empathizing with your customers' needs.
If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.
> Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?
At this point, if you can no longer safely drip-feed industry access to "thinking as a service" and rake in rent, you start using it yourself, displacing existing players in segment after segment until you've killed the entire software industry.
That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.
If you assume the status quo - a powerful not quite human level AI - then you are most likely correct. However one of the primary winner takes all hypotheticals (and to be sure it remains nothing more than a wild hypothetical at this point) is achieving and managing to control proprietary ASI. Approximately, constructing something that vaguely resembles a god.
Being unfathomably smarter than the people making use of it you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you can quite literally take over the world at that point.
Not that I think it's likely such a system will so easily come to pass, nor that I think humanity could manage to maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here so "within the realm of plausibility" is sufficient.
You can blame a million years of evolution for your bad life, or you can change it right now, living in the present moment. It's fine if you don't do it right now, because later, at a future present moment, you can still make the choice to be happy. It might take some work, but it will never be because of something that happened in the past. It will be something that you do right now. There are no exceptions or escape hatches.
These cliches are just annoying to read at this point, everyone has heard this stuff a million times and yet...millions still suffer. If I'm being honest it just comes across as yet another form of bullying when socially well adjusted people say stuff like this to people worse off than them.
I can agree with you while still agreeing with parent poster that it's basically "git gud"-tier bullying.
Very, very few orators can successfully pull off "just fix your problems, bro" as anything beyond a generic kick in the pants for the people presently predisposed to be motivated by one.
I regularly bully my close friends into being better people. It just so happens that I fell down the staircase of life much earlier than a lot of people do. I had to do most of my “midlife crisis” thinking in my early 20s because most of my family died and I had to come out as gay without any support.
Now that I’m in my 30s I have the joy of helping my friends along on this journey called life. Sometimes people just need a gentle nudge up the staircase. Sometimes they need to be carried against their will
I agree it can feel frustrating and unactionable, but it's not bullying; it's a thoughtful, well-meaning response. Actually, if it makes you feel bad, that's a signal it may be worth contemplating more.
That approach doesn't work for everyone. Everything you say could be correct, but if the person thinks their feelings are not being listened to, there is a chance they still won't take your advice.
One of my therapists said it was normal in her circle for people not to get on someone's case if they're mentally unwell and have chores piling up, because it makes sense that they don't have as much effort to give to every aspect of life. At the time I didn't understand this statement, because until then my only contacts were people who, although they didn't go as far as "bullying" me into compliance, had told me in effect that how I felt about my life was irrelevant to whether or not I was fulfilling every single one of my adult responsibilities. What ultimately worked for me wasn't those contacts who said there were no excuses, but my therapist, who decided not to frame my decisions in terms of "excuses".
For me this kind of thing hurts because:
1. There's not any room for compassion or slack. I'm not talking about people who take advantage of others' goodwill. Even if you try to help with this "no excuses" mentality, the other person could start to worry if the next inadvertent slip-up or setback counts as an "excuse" they'll be looked down upon for. This kind of thought will linger and reduce the effectiveness of the intervention.
2. Your feelings aren't listened to, or if they are, it's only at a level superficial enough to obtain compliance. This is bad enough on its own. What might not be obvious is that if the person has had a life marked by repeated instances of their feelings being shut down or not listened to, especially in childhood, this approach backfires that much harder. These are emotional patterns established in critical periods, or over a long stretch of time, that are being relived at a much higher intensity than in the average population. And most importantly, you can't know for sure whether something like this applies until you get to know the person better, which is why a lot of one-off prescriptive advice towards strangers is ineffective.
3. The advice-giver is often successful/came out of hardship themselves, so by being looked down upon as irresponsible it gives the impression that you're being excluded from the in-group of mentally well/recovered people. Avoiding exclusion from a group is one of the biggest sources of strife today, as modern politics and social media indicate. And being mentally stable is often one of the most important groups to be included in for people who know they're depressed, so it hurts even more.
That’s all excuses. I’m not saying it’s right to bully someone who’s in the depths of depression. But the depression isn’t gonna fix itself and it certainly won’t fix itself because of something that happened in the past
i don't know what it takes to get out of depression, but "it isn't going to fix itself" doesn't contradict that the depressed person can't get out of it on their own. it's like telling someone stuck in a hole to stop whining because they are not going to get out of the hole as long as they do nothing. that's true, but they are also not in a position to see a way out, or may simply not be able to get out without help.
as i said, i don't know what it takes, but i do think that compassion, patience, and recognition of efforts and absence of any hint of blame by others are part of it.