
I thought Jensen’s comparison to Huawei’s cell phone hardware infra (towers and networking) was interesting: shutting Huawei out of a market was one of the causes of their current position in the market, and made them more dominant in the end.

The benchmarks don’t seem to say that language ability has gotten worse?

There are no real benchmarks of how "natural/idiomatic" output is in a multitude of languages.

"Multilingual benchmarks" are usually something like "How good is it at a multiple choice exam like the SAT in language X". This is a completely unrelated metric.


That's the thing with benchmarks: without evals and actual hands-on experience they can give you false confidence.

Claude now sounds almost clinical, and is unable to speak in different styles as easily. Claude 4+ uses a lot more constructions borrowed from English than Claude 3, especially in Slavic languages, where they sound unnatural.

And most modern models eventually glitch out in longer texts, spitting a few garbage tokens in a random language (Telugu, Georgian, Ukrainian, totally unrelated), then continuing in the main language like nothing happened. It's rare but it happens. Samplers do not help with this; you need a second run to spellcheck it. This wasn't a problem in older models, and it's a widespread issue that roughly correlates with the introduction of reasoning.

Another new failure mode is self-correction in complicated texts that need reading comprehension: if the model hallucinates an incorrect fact and spots it, it tries to justify or explain it immediately. Which is much more awkward than leaving it incorrect, and those hallucinations are also more common now (maybe because the model learns to make those mistakes together with the correction? I don't know.)
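For the wrong-script glitches specifically, the second pass doesn't even need to be a model. A toy sketch of the idea (purely illustrative; `flag_script_glitches` and its defaults are my own invention, not anyone's real pipeline): flag any word containing a letter from an unexpected Unicode script.

```python
import unicodedata

def flag_script_glitches(text, expected_scripts=("LATIN", "COMMON")):
    """Flag words containing a letter whose Unicode name points at a
    script outside the expected set (e.g. a stray Telugu or Georgian
    run inside an otherwise-English text)."""
    flagged = []
    for word in text.split():
        for ch in word:
            if not ch.isalpha():
                continue  # skip digits and punctuation
            name = unicodedata.name(ch, "")
            # The first token of a Unicode character name is its script,
            # e.g. "CYRILLIC SMALL LETTER EM" -> "CYRILLIC"
            script = name.split()[0] if name else "COMMON"
            if script not in expected_scripts:
                flagged.append(word)
                break
    return flagged
```

A real spellcheck pass would need per-language script tables and some tolerance for loanwords, but even something this crude catches the "few garbage tokens in a totally unrelated language" failure.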

Not disputing this might be true, but this seems like something that should be capturable in a multi-lingual benchmark.

Maybe it's just something that people aren't bothered with?


Basically everyone who experiments with creative writing is keenly aware of that (e.g. roleplayers); it's just that the devs with experience training models for it (Anthropic, DeepMind) don't bother anymore, since there's no money in it.

>this seems like something that should be capturable in a multi-lingual benchmark

Creative writing benchmarks just don't have good objectives to measure against. In particular, valid but inauthentic language constructions can't be captured well if your LLM judge lacks the fidelity to capture them in the first place, which I think is what typically happens.

An easy litmus test would be making a selected character in a story speak Ebonics, Haitian Creole, or TikTok slang. Claude 3 Opus was light years ahead in authenticity here, and it was immediately obvious in a side-by-side comparison with any other model, including Claude 3.5+. Nuances of Polish or Russian profanities/mat or British obscenities are always the hardest for any model (they tend to either swear like dockers or tone it down, lacking the eloquence), but Opus 3 was ahead in all of those too.


Btw samplers do in fact help with this. Random tokens deep in your output context are due to accumulated sampling errors from using shit samplers like top_p and top_k with temperature.

Use a fully distribution-aware sampler like p-less decoding, top-H, or top-n sigma, and this goes away.
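For the curious, the core idea of top-n sigma is simple enough to sketch: instead of a fixed top_k/top_p cutoff, keep only tokens whose logit lies within n standard deviations of the maximum logit, so the noisy tail never gets sampled no matter how long the generation runs. A rough illustration in plain Python (my own simplification, not the paper's reference implementation; names and defaults are made up):

```python
import math
import random

def top_n_sigma_sample(logits, n=1.0, temperature=1.0):
    """Sample a token index, keeping only tokens whose logit is
    within n standard deviations of the max logit."""
    mean = sum(logits) / len(logits)
    sigma = math.sqrt(sum((x - mean) ** 2 for x in logits) / len(logits))
    cutoff = max(logits) - n * sigma
    # Truncate by statistical distance from the peak, rather than
    # renormalizing a fixed-size head like top_k / top_p
    kept = [(i, x) for i, x in enumerate(logits) if x >= cutoff]
    m = max(x for _, x in kept)
    weights = [math.exp((x - m) / temperature) for _, x in kept]
    r = random.random() * sum(weights)
    for (i, _), w in zip(kept, weights):
        r -= w
        if r <= 0:
            return i
    return kept[-1][0]  # guard against float rounding
```

With a sharply peaked distribution the tail is cut off entirely, which is exactly the property that stops rare garbage tokens from accumulating deep into a long output.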

Yes the paper for this will be up for review at NeurIPS this year.


How is ubuntu support for touchscreens these days?

How does it compare to an ipad in terms of fidelity / responsiveness, and for native-feeling integration with ubuntu?

I am, naturally, a bit skeptical that touchscreen UI would be any good in linux.


> How is ubuntu support for touchscreens these days?

GNOME supports multitouch gestures, and the GTK4 toolkit is overall very touch-native. It strikes a nice balance between overpadded and touch-accessible, IMO: https://www.gnome.org/

(some of the newer Libadwaita widgets that GNOME is using: https://gnome.pages.gitlab.gnome.org/libadwaita/doc/main/wid... )

> How does it compare to an ipad in terms of fidelity / responsiveness

With Wayland, it's borderline identical.


> GNOME supports

I've heard that there's *support*, but is the experience of having a touchscreen on an Ubuntu device actually usable and good?

For example, with some random GUI app you're likely to use on Ubuntu, is the experience not broken?

I guess Chrome is the first thing that comes to mind.


My only issue with Chrome on touchscreen was the lack of 1:1 scroll/zoom gestures. As a Firefox user it was something that I got used to, but I just updated Chromium and apparently that's been fixed now too.

Besides that, it all works about as well as you'd expect it to. You can drag the window around by the tab bar and tap-and-hold to pull up a context menu.


>With Wayland, it's borderline identical.

Come on lol. I have a couple of Steam Decks and both are really clunky.

Most applications are built with neither GTK4 nor Qt6, for that matter.

On my Steam Deck the keyboard never pops up by itself, so I have to use a key combination, and it feels like I am moving a ghost mouse around the place (rather than proper touchscreen support).

I ran GNOME on the Deck for a while, but the on-screen keyboard GNOME provides sucked so badly that I gave up (sucked as in, it groups all the keys tightly together around the center of the screen, very small).

I also have an M1 iPad Pro. No comparison because those issues simply don’t exist on iOS.


I don't know what to tell you. I'm running it on the desktop with a drawing tablet, Magic Trackpad and oodles of apps, and it's not noticeably different from the stability of iPadOS.

My touchscreen laptop is closing in on being a decade old (i7 6600u) and the worst thing I can say about the experience is that it VSyncs down to 30fps during more taxing animations (just like my iPad does).


There is a whole section on touchscreen annoyances from the Linux Surface project: https://github.com/linux-surface/linux-surface/wiki/Installa...

> Any gesture functionality is dependent on the software you are running. This includes both, the application (e.g. Firefox) and the desktop environment (e.g. GNOME). The driver can only provide a set of input coordinates to the applications. By default, the system will behave as if you've clicked at the point of a single touch, or mouse-button dragged when you single-finger drag.

I love Linux but no need to embellish the current state imo

I am glad that it is working really well for you though


> There is a whole section on touchscreen annoyances from the Linux Surface project

Taking a quick look, all of the things they list are basically reiterating what I've already said vis-a-vis Wayland:

  You should make sure that you are running a Wayland desktop session [...]

  It is important, that your applications run on Wayland as well.

> The driver can only provide a set of input coordinates to the applications. By default, the system will behave as if you've clicked at the point of a single touch, or mouse-button dragged when you single-finger drag.

Yeah no. All of this depends on everything up to the application.

A GTK2 application will have no support for anything. A GTK3 application running on XWayland will have poorer support as well. And anyway, most applications just treat the touchscreen as an invisible pointer, as it says there.

Just to give an example of some basic thing that doesn’t work reliably: you can’t reliably use a long press gesture. In most apps that will be equivalent to holding the left click (aka does nothing but a long click). On iOS you will get a contextual menu to select/format text or whatever. (You can find a real report of this issue here: https://www.reddit.com/r/kde/s/crLHZhHkuM - “how do I right click using the touchscreen?” from barely 6 months ago)

Your claim that this is an equivalent experience to an iPad is just false.

I’ve been around long enough to remember setting up TouchEgg, the situation is better now but still not equivalent at all.

Anyway originally I wanted to reply to provide balance to your take so casual readers wouldn’t install Linux on their tablets and expect iPadOS. I think that has been sufficiently achieved by this comment chain, readers can choose which side to take :-)

Cheers!


This is helpful :)

To go back to the app-by-app comment, I do know that there are Ubuntu tablet and touch setups.

Are there any browsers already setup to be more touch native, or specific browser builds that are more touch native already?



I remember TouchEgg too, I did use x11 for a few years. The experience back then was not comparable to an iPad, but the modern Wayland session is.

If you're going to fight over edge-case consistency, then at least be consistent. People build iPad apps with horrible custom widgets that block context menus too. They run "real" software in QEMU and iTerm that truly has no support for any of their default HIDs. Linux has more software to support, by nature it's going to have the larger number of inconsistent experiences. I don't think that's a fair basis of comparison, though.

Strictly speaking, I think KDE and GNOME's Wayland stacks are the closest equivalent to the Quartz Compositor on the market. I don't really know any other stack that comes close.


It supports them via libinput.

Everything around actually using a Linux device with a touchscreen sucks.

Like, the on-screen keyboard will be inconsistent depending on the app's framework.

Comparing to iOS, which was built from the ground up around that input method, is simply not fair lol.


I love the pure nihilistic audacity of calling the company “Authentic Brands”.

What does Expo / React Native do?

Actively push you to use their build (and configuration!) service, and actively create/maintain friction for building and publishing production apps without it.

Wow, also this:

> The OpenSSL project does not sufficiently prioritize testing. [...] the project was [...] reliant on the community to report regressions experienced during the extended alpha and beta period [...], because their own tests were insufficient to catch unintended real-world breakages. Despite the known gaps in OpenSSL’s test coverage, it’s still common for bug fixes to land without an accompanying regression test.

I don't know anything about these libraries, but this makes their process sound pretty bad.


This quote about testing is way worse:

> OpenSSL’s CI is exceptionally flaky, and the OpenSSL project has grown to tolerate this flakiness, which masks serious bugs. OpenSSL 3.0.4 contained a critical buffer overflow in the RSA implementation on AVX-512-capable CPUs. This bug was actually caught by CI — but because the crash only occurred when the CI runner happened to have an AVX-512 CPU (not all did), the failures were apparently dismissed as flakiness.


OpenSSL is (famously) an extremely terrible codebase.

It's likely that over the long-term the tech industry will replace it with something else, but for now there's too much infrastructure relying on it.


TLDR:

- fork of django

- it's opinionated

- typed

- comes with skills / rules / docs baked in

I'm not against this idea in principle, but I'm also not sure why that is better than what's already out there, except maybe you save some tokens by not vibe coding this yourself?

I do think in the future we'll see some novel libraries that are agent-optimized first. I'm not sure if this is it, though.

(edit: formatting)


The models are training on examples, and there are a lot of Django examples to learn from. Where is the advantage here? A surface for more potential bugs?

Its better because:

* this dev can merge what he wants instead of being stopped by those evil Django developers
* it looks very cool on your CV
* hence the function name changes and the tiny note at the bottom that the project is "inspired" by another. Absolutely crucial!


I think it's clear to me that AI will be both things:

1) as in the article, a contraction of work: industrialization getting rid of hand-made work, or the contraction of all things horse-related when the internal combustion engine came around

but- it will also be

2) new technologies and ideas enabled by a completely new set of capabilities

The real question is if the economic boost from the latter outpaces the losses of the former. History says these transitions aren't easy on society.

But also, the AI pessimism is hard to understand in this context: do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?


Well this is HN, so a lot of us are pretty terrified of your 1). We went from 'you have a good job for the next couple of decades' to 'your job is at extreme risk for disruption from AI' in the space of like 5 years. Personally I have a family, I'm a bit old to retrain, and I never worked at a high-comp FAANG or anything, so I can't just focus on painting unless my government helps me (note: not US/China). That's extremely anxiety-inducing, and a vague promise of novel new things does not come close to compensating for it.

I'm 33 and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared for the fact that within the next 5 years or so (and potentially much less) I'll probably need to retrain into a trade or something to stay relevant in any sort of field.

Many people claim it's going to become a tool we use alongside our daily work, but it's clear to me that's not how anybody managing a company sees it, and even the AI labs that previously emphasized how much it would augment existing workforces are now pushing being able to do more with less.

Most companies are holding onto their workforce only begrudgingly while the tools advance and they still need humans for "something", not because they're doing us some sort of favor.

The way I see it unless you have specialized knowledge, you are at risk of replacement within the next few years.


> I'm 33 and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared for the fact that within the next 5 years or so (and potentially much less) I'll probably need to retrain into a trade or something to stay relevant in any sort of field.

The problem is that there are not many fields that are going to be immune to AI based cost cutting and there surely will not be enough work for all of us even if we all retrain.

If we all do, then it will create an absolutely massive downward pressure on wages, due to massive oversupply in other lines of work too.

So there's really just no good way out


I also have contemplated just retraining now to try and get ahead of the curve, but I'm not confident that trades can absorb the shock of this - both in terms of supply (more unemployment) and demand (anything non-commercial will be hit by capital flight on the customer-side). I figure I will just try and make as much money on a higher wage as I can and hope for the best...

> AI pessimism is hard to understand

Well, it really isn’t. First, this entire post makes two assumptions: 1) that AI adds more value to the process than it removes and 2) that it’s sustainable.

It’s not pessimism to want to validate these first.

Are AI “gains” really transformative or simply random opportunities for automation which we can achieve by other means anyway?

Can the world continue to afford “AI as a service” long enough for the gains to result in improvements that make it sustainable? Are we dooming our kids to a hellishly warm planet with no clear plan how to fix it?

It’s not pessimism, just simple project management if you ask me.


> Are AI “gains” really transformative

They're transformative in the sense that they will shrink the optimal team size, but I don't expect the jobs to actually go away unless these things both get substantially better at engineering (they're good at generating code, but that is like 20% of engineering at best) and we have a means of giving them full business/human levels of context.

Really basic stuff gets a lot easier but the needle doesn't move much on the harder stuff. Without some sort of "memory" or continuous feedback system, these models don't learn from mistakes or successes which means humans have to be the cost function.

Maybe it's just because I'm burnt out or have a minor RSI at the moment, but it definitely saves me a bit of time as long as I don't generate a huge pile and actually read (almost) everything the models generate. The newer models are good at following instructions and pattern matching on needs if you can stub things out and/or write down specs to define what needs to happen. I'd say my hit rate is maybe 70%.


> we have a means of giving them full business/human levels of context

Trust me, this is a work in progress. Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.

Imo, what most people who are not directly working in this space get wrong is assuming SWEs are going to be hit the hardest: there are some efficiency gains to be won here, but a full replacement is not viable outside of AGI scenarios. I would actually bet on a demand increase (even if the job might change fundamentally). Custom domain-specific software is cheaper than it has ever been, and there is a gigantic untapped market here.

Low- to medium-complexity white-collar jobs are done for in the next decade, though. This is what is happening right now in finance: even if models stopped improving now, the technology is already good enough to lower operational costs to the point where some part of the workforce is redundant.


> Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.

I think you misunderstand what I'm saying. I'm not really referring to data systems at all, I'm referring to context on what problems are actually being solved by a business. LLMs very clearly do not model outcomes that don't have well-defined textual representations.

I'm not sure that I agree with white collar jobs being done for, not every process has as little consequence to getting it wrong as (most) software does.


> I think you misunderstand what I'm saying. I'm not really referring to data systems at all, I'm referring to context on what problems are actually being solved by a business. LLMs very clearly do not model outcomes that don't have well-defined textual representations.

Yeah i misunderstood your point, i completely agree with what you are saying.

I honestly do not believe that strategy, decision making, and other real-life context-dependent work are going to be replaceable soon (and if they are, it will be by something other than LLMs).

> I'm not sure that I agree with white collar jobs being done for, not every process has as little consequence to getting it wrong as (most) software does.

Maybe I'm too biased due to working in a particularly inefficient domain, but you would be surprised how much work can be automated in your average back office.

Much of the operational work is following set process and anything out of that is going to up the governance chain for approval from some decision maker.

LLM-based solutions actually make fewer errors than humans and adhere to the process better in many scenarios, requiring just an ok/deny from some human supervisor.

By delegating just the decision process to the operator, you need far fewer actual humans doing the job. Since operations workload is usually a function of other areas, efficiency gains result in layoffs.


> Maybe I'm too biased due to working in a particularly inefficient domain, but you would be surprised how much work can be automated in your average back office.

> Much of the operational work is following set process and anything out of that is going to up the governance chain for approval from some decision maker.

Oh that's very interesting! Thank you for the insights!


> Trust me, this is a work in progress. Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.

This is exactly what people were saying a decade ago when everyone wanted data scientists, and I bet it's been said many times before in many different contexts.

Most corporations still haven't organised and structured their data well enough, despite oceans of money being poured into it.


> will shrink the optimal team size, but I don't expect the jobs to actually go away

If they've shrunk the team size, that means some jobs (in terms of people working on a problem) will have gone away. The question is, will it then make it cheap enough to work on more problems that are ignored today, or are we already at peak problem set for that kind of work?

Spreadsheets and accounting software made it possible to have fewer people do the same amount of work but it ended up increasing the demand of accountants overall. Will the same kind of thing happen with LLM-assisted workloads, assuming they pan out as much as people think?


Also I think over the past few years/decade the tech sector has lost any benefit of the doubt that everything that comes out of it is a "good thing".

Hard to understand? Essential human nature is so predictable. Sure, we will do novel things with it. But society in the main will use it to exploit labor. Same as it ever was.

“Exploit labour” is just outdated Marxism. No self-respecting economist believes this kind of rhetoric anymore; it only exists among west-coded leftists.

It’s a sort of cynical fatalism to think everything is exploitation — directly coming from Marx.

It’s not exploitation to mutually agree on a deal. Most of the population knows this, except Marxists!


Ah, peak HN pseudo-libertarianism

a) just hand-wave away the massive power and wealth differential involved in this "mutual agreement"

b) dismiss all discussions which recognize that fact as "outdated Marxism"

Plenty of mainstream economists are capable of seeing the real world which you are pretending doesn't exist.

Even Marx meant the word "exploit" in relatively value neutral terms, just recognizing that in any economy built on private property we exploit humans the same way we do any "resource". It's up to the reader whether they see that as having any moral connotation.


A massive power and wealth differential is simply a reason to be jealous. It is precisely this concept of mutual agreement (capitalism) that brought most of humanity out of poverty.

>Plenty of mainstream economists are capable of seeing the real world which you are pretending doesn't exist.

Not really. There are a total of zero economic policies made by analysing the economy through the Labour Theory of Value or whatever other crap Marxists believe.

The above poster used "exploit" in non-value-neutral terms. Marx tried very hard to be value-neutral about it (though it's clear what the intentions were), but his readers don't play that game.


are you under the impression life was better before capitalism?

That's a false dichotomy. Capitalism was good for artisanal workers before the industrial revolution, and then it became pretty goddamn bad for them. We're worried we're staring down the barrel of that right now; just saying 'well, it was even worse before capitalism' does nothing for us.

Yes it does: it says that trying to prevent technology in order to protect the interests of some special class of people at the expense of everyone else is dumb and shortsighted.

If people had actually listened to the people wailing "but what about the horse carriage business!!!" in the 20th century, it would have been a disaster.


Sure, but AI pessimism is allowed to be personal. Am I supposed to be optimistic that I feel I'm about to get shafted? Should I be less concerned that I need to provide for my family, because in the long term this is going to be a great step forward for humanity?

You're allowed to be personally pessimistic, but if you actually do anything to prevent it from happening, I think that is incredibly selfish.

It would be like an oncologist trying to stop an anti-cancer pill because they'd be out of a job.


You are addressing something totally different from the original claim, which tried to say that capitalism is inherently exploitative of labour. That is just outdated Marxism.

To be frank, I thought trying to twist this into an argument about whether capitalism is inherently exploitative was a complete waste of time and I replied as such. If you'll recall what we were originally talking about here - "AI, should HN users be optimistic?"

That's a good idea and FWIW I agree that as a person who might lose their job to AI, you do deserve to feel apprehensive, even if it might lead to some good later.

Both 1) and 2) represent value almost entirely captured by businesses / business owners, and not captured by workers. For 1) the economic boost is captured by business while the losses are captured by workers. For 2) in theory, some new ideas will be created by individual people who get lucky and grow them into their own businesses, but if history is an accurate guide, most of the benefits of new inventions and technology will be captured by existing players.

> do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?

Cost cutting is the only revenue-producing model for the AI companies so far. It's being pitched as a way for corporations to fire a lot of employees and save money.

Revenue for the consumer facing products is not very impressive. Consumers are mostly satisfied with the free versions and very resistant to adding yet another channel to shove advertising at them.


> But also, the AI pessimism is hard to understand in this context: do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?

I frankly do not care how much novel stuff is "unlocked" with AI tech if it means I become unemployable due to it replacing all of my skills


Change is a constant in history. Stuff happens, and then we adjust. Big changes may result in short term confusion, anger, etc. All the classic signs of the five stages of grief basically.

If you step back a little, a lot of people simply don't see the forest for the trees and they start imagining bad outcomes and then panic over those. Understandable but not that productive.

If you look at past changes where that was the case you can see some patterns. People project both utopian and dystopian views and there's a certain amount of hysteria and hype around both views. But neither of those usually play out as people hope/predict. The inability to look beyond the status quo and redefine the future in terms of it is very common. It's the whole cars vs. faster horses thing. I call this an imagination deficit. It usually sorts itself out over time as people find out different ways to adjust and the rest of society just adjusts itself around that. Usually this also involves stuff few people predicted. But until that happens, there's uncertainty, chaos, and also opportunity.

With AI, there's going to be a need for some adjustment. Whether people like it or not, a lot of what we do will likely end up being quite easy to automate. And that raises the question what we'll do instead.

Of course, the flip side of automating stuff is that it lowers the value of that stuff. That actually moderates the rollout of this stuff and has diminishing returns. We'll automate all the easy and expensive stuff first. And that will keep us busy for a while. Ultimately we'll pay less for this stuff and do more of it. But that just means we start looking for more valuable stuff to do and buy. We'll effectively move the goal posts and raise the ambition. That's where the economical growth will come from.

This adjustment process is obviously going to be painful for some people. But the good news is that it won't happen overnight. We'll have time to learn new things and figure out what we can do that is actually valuable to others. Most things don't happen at the speed the most optimistic person wants things to happen. Just looking at inference cost and energy, there are some real constraints on what we can do at scale short term. And energy cost just went up by quite a lot. Lots of new challenges where AI isn't the easy answer just yet.


We are the horses, though.

At some point those became almost fully obsolete in a productive economic sense (they're just fancy toys now, basically). No 'raising the ambition' is ever going to change that. They are what they are and they can do what they can do.

I don't know about you, but if the something in "we'll find something to do" is becoming a toy for AI or very rich people, I'm not exactly hopeful about the future.


I try to not be fatalistic. As I was trying to argue, it's historically inaccurate and it doesn't actually change the outcome. Clinging to the past has never really worked that well.

As for rich people, they get richer and richer until people correct them. Sometimes violently. The current concentration of wealth in particularly the US seems more related to political changes since about the Reagan era than to any recent innovations related to technology.


> I try to not be fatalistic. As I was trying to argue, it's historically inaccurate and it doesn't actually change the outcome.

This is false. Being fatalistic and 'panicking' can definitely influence and thus change the outcome. Your logic is similar to what is (incorrectly) used to dismiss the Y2K problem, for instance: looking back it seems like there was no need to panic, but that is only because a lot of people recognized the urgency, worked their ass off, and succeeded in preventing shit from going horribly wrong.

See: https://en.wikipedia.org/wiki/Preparedness_paradox

Your handwaving is doing harm by lulling people into a false sense of security. Your initial comment amounts to "Ah, it'll be fine, don't worry about it. We'll adapt, we always have.", even though you provide absolutely no arguments specific to this enormous force of insanely rapid change in an already incredibly unstable fragile world. We might adapt, but it will require serious thought rather than handwaving and leaning back; even then it might come with massive societal upheaval and a lot of suffering.


I'm wrong to not be fatalistic?! You lost me here.

A lot of people seem to be wasting a lot of energy insisting it is all going to end in tears because <fill in reasons>. All I'm doing here is pointing out that people like this come out of the woodwork with pretty much every big change in society, and then people adapt and society fails to collapse.

I'm not arguing there won't be changes and that they won't be disruptive to some people. Because they will and people will need to adjust. But I am arguing that a lot of the dystopian outcomes are as unlikely to happen with this particular change as they have been with previous rounds of changes. I just don't see a basis for it. I do see a lot of people who want this to be true mainly because they are afraid of having to adapt.

> already incredibly unstable fragile world

There are a lot of people arguing that things are better than ever by most metrics you might want to apply. The reason you might feel stressed about the news is that dystopian headlines sell better and you are being influenced by those. That's also why Y2K got a lot more attention than it deserved in the media, and a lot of people indeed freaked out over it. Of course, a lot of that got caught up with people who believed for other reasons that we are all doomed and that the apocalypse was coming. And it made for amusing headlines, so it got more attention than it deserved. And then the clock ticked over and society failed to collapse.


You largely ignored what I said and displayed exactly the fallacious behavior I was pointing out. Again, Y2K was not a problem because people 'freaked out' (took the problem seriously). Similarly, AI will only not be a problem due to people that spend time and effort to mitigate its issues, not due to people like you pretending that because nothing went seriously wrong in the past, nothing automatically will this time (because you "just don't see the basis for it").

This is the key point that HN commenters frequently miss: We are not the transportation owners trading in horses for cars. We are the horses.

> do people really believe no novel things will be unlocked with this tech?

Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist.

Give your best example of something that is novel, ie isn't just replacing existing processes at scale.

It's been 3 and a half years now since the initial hype wave. Maybe I genuinely missed the novel trillion dollar use case that isn't just labor disruption.


I think that most people are pretty short-sighted about the utility cases right now (which is understandable given the negative feelings about a lot of what's currently going on).

There are a lot of really useful things that were impossible before. But none of these use cases are "easy," and they all take years of engineering to implement. So, all we see right now are trashy, vibe-code style "startups" rather than the actual useful stuff that will come over the years from experienced architects and engineers who can properly utilize this technology to build real products.

I'm someone who feels very frustrated with most of the chatter around AI - especially the CEOs desperate to devalue human labor and replace it - but I am personally building something utilizing AI that would have been impossible without it. But yeah, it's no walk in the park, and I've been working on it for three years and will likely be working on it for another year before it's remotely ready for the public.

When I started, the inference was too slow, the costs were too high, and the thinking-power was too poor to actually pull it off. I just hypothesized that it would all be ready by the time I launch the product. Which it finally is, as of a few months ago.


With this said, a lot of people are likely worried about being eaten by whales when it comes to doing things with AI.

It's kind of like dealing with Amazon, or any other company that has both compute and the ability to sell the kind of product you make.

Said AI providers can sell you the compute to make the product, or they can make the product themselves with discounted compute and eat all the profits you'd make.


This is always a worry, but typically, being first to market is the most important part. As long as you can scale quickly and maintain your edge, this doesn't seem like such a big deal.

However, my product is so far removed from anything these companies would make, and on top of that I'm using open-source models (e.g., gpt-oss-120b is really, really good). I don't use any of the main providers like AWS, etc., and the underlying AI systems are only about 5% of the product. I need them for the idea to work, but they are a tiny part of the full offering. I can't really imagine it would make any sense for Amazon, etc., to compete on something like this.

But yes, in the end, huge conglomerates with infinite money can destroy smaller entrepreneurs - but that's not really any different than it's been for decades pre-AI.


The most obvious thing is bio-tech, protein folding, drug discovery, etc. As in, things that have an actual positive effect on humanity (not just dollars).

I don't really get people who are dismissive about this aspect of AI- my original question wasn't about cost-efficiency of developing these things, but just that the technology itself is creating things that wouldn't have been possible before. It seems hard to refute.

Whether or not it's worth the cost is a different debate entirely- about how tech trees are developed and what the second order effects of technology are. There are so many examples- the computer itself, nuclear power, etc. I think AI is probably on the same order as these.


Correct me if I'm off base but these things (protein folding and drug discovery) both existed before AI, no?

The implication of your comment seemed to be that this was going to be so much more than replacing people. But I fail to see how any of the items you listed are anything other than that.

These things have always been possible. Just slow and limited by labor. Which is the primary and novel "unlock" of AI.

You can argue it's a good thing, and in many areas I'd probably agree. I'm directly responding to your skepticism and implied absurdity that replacement is the main unlock here. It absolutely is.


> Correct me if I'm off base but these things (protein folding and drug discovery) both existed before AI, no?

Yes, you are off-base.

Solutions to the protein folding problem existed before, but not in the way you are implying.


Fair enough. I appreciate the correction.

I do still believe the main value proposition is large scale replacement and am unconvinced that most people driving AI adoption have these other more noble pursuits in mind with respect to AI.

But I will absolutely stand corrected here and if our dystopian future includes some genuinely useful medicinal advancements then maybe that will make the medicine (heh) go down easier.


It’s pretty decent for natural language -> query language tasks

But also you don’t need SOTA frontier models for that!


"Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist."

Wouldn't that apply to most technological advances? Cars, computers, cell phones.


Yes, but I'm not the one who introduced the "novel" constraint to the argument.

e: Also I don't know that I'd strictly bucket these specific examples you gave as shittier versions, though I guess that's a matter of perspective.


The problem is, at the moment LLMs are not capable of proper brainstorming, and humans are quite bad at coming up with unique ideas. The great bottleneck is still money and the dissemination of a given product (marketing). So nothing changes; just the usual capitalist thing where humans are squeezed even further and revenue is funneled from more entities to fewer ones.

So now the ancillary question from your example is: "Is hand-spun cotton better than industrialized polyester?"

If you're implying that hand-spun cotton is better, that's an easy question to answer- people used to spend a huge amount of their income on clothing, also spending a huge amount of time washing it. Industrialization made clothing so much cheaper that it's now completely disposable. There's plenty of reasons why that's not a bad thing.

One reason people misremember "good quality" shoes is that you could only afford to buy one pair ever, not that things were necessarily made better (or it could be both, but replacing a pair of shoes was a financial hardship, because hand-made things, even back then, were expensive).

Even if you're against fast fashion, I don't think anyone wants a pair of shoes to cost $10,000.


It seems to me you're advocating for waste, as I'm not seeing the "plenty of reasons why completely disposable cheap clothing isn't a bad thing" argument.

Replacing shoes wasn't necessary because there were cobblers; for clothing, tailors. I'd much prefer to get a set of clothing and then work with it over the course of its lifetime, rather than sending it to the landfill after one tear.


Many companies follow a planned obsolescence framework to keep their industry alive. That is the major reason for waste and the drop in quality.

Define better. Fast fashion sucks, but hand-spun cotton won't give you Kevlar or modern wind-resistant clothing or fireproof materials for your furniture or... <insert half thousand different things adjacent to modern textile production>.

It's always win some, lose some with the economy, but technology itself opens previously impossible capabilities.


Better is 'longer lasting and less disposable.'

Your comment got me thinking about if technology is actually better, but that's a whole new discussion. We wouldn't need the fireproof furniture if we all used the local sweat lodge for bathing or the mess hall yurt for cooking. We wouldn't need wind-resistant clothing if we didn't make personal rockets that go 200mph to travel long distances to arrive at the same amenities (just in a different city).


> Better is 'longer lasting and less disposable.'

I'd generally agree, but there are always caveats. See e.g. glass vs. plastic bottles - glass looks like strictly superior solution environmentally, until you consider how much fuel is saved across entire logistics chain by plastic bottles being significantly lighter.

> We wouldn't need wind-resistant clothing if we didn't make personal rockets that go 200mph to travel long distances to arrive at the same amenities (just in a different city)

FWIW, I was thinking more about people who like to walk around in windy places, including mountains, etc. But even if we exclude tourists, we're still left with people who work at altitudes (including infrastructure anywhere - get on a high enough pole or roof, it's going to be windy). More generally, there are people doing useful work, including construction, services, and research, in all kinds of extreme environments, and this is directly enabled by post-industrial era fabrics.


It used to open them to most of the population - at least that was the ideal for a couple of decades - but now it seems to be opening them to oligarchs more than workers.

It's essentially a political energy source. It heats everything up.

Eventually it either explodes, goes through a phase change to a new (meta)stable state, or collapses back to a previous state.


What is the AI equivalent of wind-resistant clothing or fireproof materials?

So far the only product AI is producing is layoffs.


> wind-resistant clothing

Better autocomplete/autocorrect, "circle on screen" -> OCR anywhere, high-quality automated background removal[0].

> fireproof materials

No big examples to point at now, except maybe whatever security fixes that'll come out of Glasswing Project[1].

> So far the only product AI is producing is layoffs.

AI-related[2] layoffs are a direct consequence of useful things AI is delivering.

--

[0] - Super useful for e.g. making ID photos, which I notice I need to do increasingly often, which is likely a consequence of the proliferation of remote/digital ID verification, which nicely ties us back to the question 'butlike raised, i.e. how much technological progress is actually improving things.

[1] - https://www.anthropic.com/glasswing

[2] - As opposed to "we wanted to lay off people anyway, AI just provided a socially-acceptable excuse".


What's the AI equivalent of industrialized polyester in your analogy?

From a consumer perspective, AI isn't really producing any new products with real market demand. Chatbots are fun, but there's no indication consumers are willing to pay for them.


It's been a few years and I have yet to see a single novel thing come out of it. Even chatbots weren't novel when ChatGPT came out.

It's disingenuous to say ChatGPT is not novel relative to older chatbots. The capabilities of ChatGPT compared to what came before were astonishing and continued to improve at a rapid rate.

> the AI pessimism is hard to understand in this context

This is a burden of proof inversion: historically new technology has not resulted in optimistic outcomes. Quality of life improvements were side effects of capital accruing. AI optimism is the naïve option that requires justification.


>That it's all about cost-cutting?

Cost cutting has less uncertainty than making something new, so they do that first. If something else comes along, then great.

This is also why people should make the transition as difficult as possible for companies doing layoffs, when those companies pay proportionally very little in taxes compared to the people they are laying off.


I feel like it's easy in hindsight to offer tough-sounding advice of the form "be a hard-ass", but there are plenty of real-life cases that contradict it - taking a chance on a referral to work on something you find interesting, for example. Of course the big-money clients will be fine with a hard-line stance, and they have money to pay at the end, but that work tends to be less interesting.

OTOH, one other clear subtext of the story is the "savior" attitude of a lot of tech people who, seeing a client that wasn't using version control before, think, "oh, I'll just tell them about this great thing, and because it's much better they will definitely listen to me and implement it - it's only logical". But the harsh reality is that "better" things won't affect an org that went along that far and dug themselves that deep.

Never underestimate an org's ability to shoot itself in the foot, even if you think you know better. That includes getting your money from them.


There must be some kind of calculation generally based on latitude?

A sub-question that I would be curious about is how much climate in that region then affects the total possible solar energy. How much is the variance from a naive calculation just based on latitude?

One other second-order effect: developed economies are heavily weighted towards places that are cold / farther north than less developed places (as a very general rule). And a lot of people don't realize how much less energy-efficient it is per capita to make a space comfortable for humans year-round in a "cold" climate versus a warm one.

That suggests a new way of comparing economies, in which the price and stability of energy favor a warm, more equator-proximate location.


You can look at maps of solar insolation[0] - these give you typical levels of solar input. There is of course weather variations, but the long-term trends should be consistent.

One thing that still catches me out is how much farther north Europe is than basically all of the USA. The general solar insolation is worse, yet they are still doing a healthy business in solar. The panels are so cheap that even in a crummy environment, you can just add more.

[0] https://en.wikipedia.org/wiki/Solar_irradiance
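
For the "naive calculation just based on latitude" upthread: as a rough sketch, that number is the daily top-of-atmosphere insolation from standard solar geometry (declination, sunset hour angle, eccentricity correction); actual ground-level insolation is then lower by a site-specific clearness factor (very roughly 0.3-0.7), which is exactly where regional climate enters. The function name and constants here are my own, but the formulas are the textbook ones:

```python
import math

SOLAR_CONSTANT = 1367.0  # W/m^2

def extraterrestrial_kwh_per_m2_day(latitude_deg: float, day_of_year: int) -> float:
    """Daily top-of-atmosphere insolation on a horizontal surface (kWh/m^2/day).

    Standard solar-geometry approximations; ground-level values are lower
    by a site-specific clearness index, which is the climate-driven variance
    on top of the latitude-only estimate.
    """
    phi = math.radians(latitude_deg)
    # Solar declination (Cooper's approximation)
    delta = math.radians(23.45 * math.sin(2 * math.pi * (284 + day_of_year) / 365))
    # Sunset hour angle; clamping handles polar day and polar night
    x = max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta)))
    omega_s = math.acos(x)
    # Eccentricity correction for varying Earth-Sun distance
    e0 = 1 + 0.033 * math.cos(2 * math.pi * day_of_year / 365)
    h0 = (86400 / math.pi) * SOLAR_CONSTANT * e0 * (
        math.cos(phi) * math.cos(delta) * math.sin(omega_s)
        + omega_s * math.sin(phi) * math.sin(delta)
    )
    return max(0.0, h0) / 3.6e6  # J/m^2/day -> kWh/m^2/day

# Equator at the equinox vs. 60N in midwinter
print(extraterrestrial_kwh_per_m2_day(0, 80))    # roughly 10.5
print(extraterrestrial_kwh_per_m2_day(60, 355))  # under 1
```

The latitude-only gap is stark (roughly 15x between those two cases before weather is even considered), which is why cheap panels plus overprovisioning is the workaround at high latitudes.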


Yes, Europe is very far north. Also, comparing with China: Beijing is at the same latitude as Madrid.


> There must be some kind of calculation generally based on latitude?

I am suspecting the same. Thanks for the reply, not sure why my comment seems to have ruffled some feathers...

