
Hard disagree. I'm in the process of deploying several AI solutions in healthcare. We have a process a nurse usually spends about an hour on, and it costs $40-$70 depending on whether they are offshore and a few other factors. Our AI can match it for a few dollars, often less. A nurse still reviews the output, but it takes way less time. The economics of those tokens are great. We have another solution that just finds money: $10-$30 in tokens can find hundreds of thousands of dollars. The tech isn't perfect (that's why we still have a human in the loop), but it's more than good enough to do useful work, and the use cases are valuable.


It's true, but do you really trust the AI-generated + nurse-reviewed output more than output a nurse produced from scratch?

In my experience, management types use the fact that AI generation + nurse review is faster to push a higher quota of forms generated per hour.

Eventually, from fatigue or boredom, the human in the loop just ends up being a rubber stamper. Would you trust this with your own or your children's life?

The human in the loop becomes a lot less useful when they're pressured to hit a quota while reviewing an AI that's basically a stochastic "most probable next token" machine, aka a professional bullshitter, literally trained to generate plausible outputs with no accountability for accurate outputs.
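
To make that concrete, here is the "most probable next token" idea in miniature, as a toy bigram model (a sketch for illustration only, with an invented corpus, not anyone's actual system). The loop only ever asks "what usually comes next?", never "is this true?":

    # Toy "most probable next token" generator: a bigram model.
    from collections import Counter, defaultdict

    corpus = ("the patient is stable the patient is improving "
              "the patient is stable").split()

    # Count which word tends to follow which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def most_probable_next(word: str) -> str:
        return follows[word].most_common(1)[0][0]

    text = ["the"]
    for _ in range(3):
        text.append(most_probable_next(text[-1]))
    print(" ".join(text))  # "the patient is stable" -- plausible, not verified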


It works because we are in a health care crisis and the nurse doesn't have anything close to enough time to do a good job.

It is really one of the few great examples of something LLMs are good for in an economic sense.

In a different industry, such inefficiency would have been put out of business.

It is a unique economic condition that makes LLMs valuable. It makes complete sense.

To the wider economy though, it is hard to ignore the unreasonable uselessness of LLMs. The unreasonable uselessness points to some kind of fundamental problem with the models that is unlikely to be solved by scaling.

We need HAL to solve our problems but instead we have probabilistic language models that somehow have to grow into HAL.


These same questions could be asked about self-driving cars, but they've been shown to be consistently safer drivers than humans. If this guy is getting consistently better results from AI+human than from humans alone, what does it matter if the former makes errors, given the latter makes more errors and costs more?


If the cars weren't considerably safer drivers than humans they wouldn't be allowed on the road. There isn't as much regulation blocking the deployment of this healthcare solution... until those errors actually start costing hospitals money in malpractice lawsuits (or don't), we won't know whether it will be allowed to remain in use.


You can't compare an LLM's output with a self-driving car. That's the flaw of using the term AI for everything: it puts two completely different technologies on an artificially level playing field.


TFA's whole point is that there is no easy way to tell whether LLM output is correct. Driving mistakes provide instant feedback on whether the output of whatever AI is driving is correct. Bad comparison.


Many of the things that LLMs output can be validated in a feedback loop, e.g., programming. It's easy to validate the generated code with a compiler, unit tests, etc. LLMs will excel in processes that can provide a validating feedback loop.
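
A minimal sketch of such a loop, assuming pytest is installed and with `generate_candidate` as a hypothetical stand-in for whatever LLM you call: generate a candidate, let the test suite judge it, and feed the errors back.

    # Minimal sketch of a generate-validate loop.
    import os, subprocess, tempfile

    def generate_candidate(prompt: str, feedback: str = "") -> str:
        # Hypothetical: call your LLM here, including prior error output.
        raise NotImplementedError

    def validate(source: str) -> tuple[bool, str]:
        """Write the candidate to disk and let pytest be the judge."""
        with tempfile.NamedTemporaryFile("w", suffix="_test.py",
                                         delete=False) as f:
            f.write(source)
            path = f.name
        try:
            result = subprocess.run(["python", "-m", "pytest", path, "-q"],
                                    capture_output=True, text=True, timeout=60)
            return result.returncode == 0, result.stdout + result.stderr
        finally:
            os.unlink(path)

    def generate_until_valid(prompt: str, max_attempts: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            candidate = generate_candidate(prompt, feedback)
            ok, feedback = validate(candidate)
            if ok:
                return candidate  # passed the objective check
        return None  # nothing survived the feedback loop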


I love how everyone thinks software is easy to validate now. Like, seriously, do you have any awareness at all of how much the likes of Microsoft, the game studios, and every other serious producer of software invest in testing? It's a lot, and they still release buggy code.


I trust it a lot. In our tests, the times a human nurse picked up on something the AI missed were pretty rare. The times the AI found something the nurse missed were common, nearly the majority of cases.


I believe the question was: would you trust it with your kids' lives? Or your own?


That might not be relevant to OP's use case. A lot of nurses get tied up doing things like reviewing claims denials. There are good use cases on the administrative side of healthcare that currently require nurse involvement.


I think they were referring to the costs of training and hosting the models. You're counting the cost of what you're buying, but the people selling it to you are in the red.


Correct


Wrong. OpenAI is literally the only AI company with horrific financials. You think Google is actually bleeding money on AI? They are funding it all with cash flow and still have monster margins.


OpenAI may be the worst, but I am pretty sure Anthropic is still bleeding money on AI, and I would expect a bunch of smaller dedicated AI firms are too. Google is the main firm with competitive commercial models at the high end across multiple domains that is funding AI efforts largely from its own operations (and even there, AI isn't self-sufficient; it's just an internal rather than an external subsidy).


Dario has said many times over that each model is profitable if viewed as a product that had development costs and operational costs just like any other product from any other business ever.


What that means, and whether it means much of anything at all, depends on the assumed "useful life" of the model, which sets the amortization period for the development costs.
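
A toy illustration, with all numbers invented, of how the assumed useful life flips the answer:

    # Hypothetical numbers: a $1B training run earning $60M/month
    # in gross margin from inference.
    training_cost = 1_000_000_000      # one-time development cost, $
    monthly_gross_margin = 60_000_000  # revenue minus serving cost, $/month

    for useful_life_months in (6, 12, 24, 36):
        amortized = training_cost / useful_life_months  # $/month
        net = monthly_gross_margin - amortized
        print(f"{useful_life_months:>2} mo life: net {net / 1e6:+.0f} M$/mo")

    # Same model: a 6-month useful life makes it look deeply unprofitable
    # (about -$107M/mo); a 24-month life makes it look profitable (+$18M/mo).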


> You think Google is actually bleeding money on AI? They are funding it all with cash flow and still have monster margins.

They can still be "bleeding money on AI" if they're making enough in other areas to make up for the loss.

The question is: "Are LLMs profitable to train and host?" OpenAI, being a pure LLM company, will go bankrupt if the answer is no. The equivalent for Google is to cut its losses and discontinue the product. Maybe Gemini will have the same fate as Google+.


Are the companies providing these AI services actually profitable? My impression is that AI prices are grossly suppressed and might explode soon.


It appears very much not. There has been some suggestion that inference may be "profitable" on a unit basis, but that ignores most of the costs. When factoring everything in, most of these businesses look very much upside down.

While there is demand at the moment, it's also unclear what the demand would be if the prices were "real", i.e., what it would take to run a sustainable business.


Those sound like typical bootstrap-sized workflow optimization opportunities, which are always available but have a modest ceiling on both sales volume and margin.

That's great that you happened to find a way to use "AI solutions" for this, but it fits precisely inside the parent's "tech wise, I'm bullish" statement. It's genuinely new tech, which can unearth new opportunities like this by addressing niche problems that were either out of reach before or couldn't be done efficiently enough. People like yourself should absolutely be looking for smart new small businesses to build with it, and maybe you'll even be able to grow that business into something incredible for yourself over the next 20 years. Congratulations and good luck.

The AI investment bubble that people are concerned about is about a whole different scale of bet being made; a bet which would only have possibly paid off if this technology completely reconfigured the economy within the next couple years. That really just doesn't seem to be in the cards.


Well said.

Folks were super bullish tech-wise on the internet when it was new, and that turned out to be correct. It was also correct that the .com bubble wiped out a generation of companies, and those that survived took a decade or more to recover.

The same thing is playing out here… the tech is great and not going away, but the business side is increasingly looking like another implosion waiting to happen.


I hope someone develops an AI that can do your job for a few dollars, often less. That would be great, wouldn't it? The economics of those tokens are great. It would be a solution that just finds money.

Of course, you can still be in the loop to double-check its work, but no worries, you can do it part time.


1) Welp, I hope this is not my healthcare provider. 2) Do you realize the cost fallacy just extends one level deeper? Once those pennies' worth of tokens become hundreds of dollars, your nurses become cheaper again. 3) "Our AI", come on. What exactly are you using? This is a technical forum.



