Hacker News

Foundation models are basically compressed databases of a big chunk (75%?) of human knowledge, the query language is English (...and many others), and you're saying it's a bubble to be ignored.

Meanwhile, model providers are serving millions if not billions of tokens daily.

I don't want to say this is a Dropbox-comment-class blog post, but it certainly... ignores something.



I think what they're saying is that the whole "this is going to solve everything" hype around LLMs is a bubble. I think it's undeniable that LLM technology will persist in some form, as it definitely has uses. But I don't think it will be the trillion-dollar industry that is being touted, and I don't think many of these companies will survive on their own without being swallowed up by some bigger FAANG entity (heavily dependent, as they are, on VC funding).


There is always money to be made with hype. But the amount of investment being made in AI is beyond reasonable in my mind. They will not see the ROI. We have good enough AI right now to increase productivity. I don't see us getting to AGI with the current architecture barring some new type of breakthrough.


While I agree the model is a huge compressed database of written human text, I think it's a stretch to call much of what was scraped off the internet knowledge, and I don't personally see English as a query language here.

I expect a query language to be deterministic, and I expect the other end of the query to only return data that actually exists. LLMs are neither of those, so to me they are impressive natural language engines, but they aren't really a tool for querying human knowledge.
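The contrast can be sketched with a toy example (all names and data here are made up for illustration, not any real LLM API): a real database lookup is deterministic and fails loudly on missing facts, while a sampling-based generator always produces *something* plausible.

```python
import random

# A "database": keys either exist or they don't.
knowledge = {"capital of France": "Paris"}

def db_query(q):
    # Deterministic: same input -> same output; a miss raises KeyError
    # instead of inventing an answer.
    return knowledge[q]

def llm_style_answer(q, temperature=1.0, seed=None):
    # Stochastic stand-in for generation: samples from plausible
    # continuations and never refuses to answer.
    rng = random.Random(seed)
    candidates = ["Paris", "Paris", "Paris", "Lyon"]  # mostly right, sometimes not
    if temperature == 0:
        return candidates[0]  # greedy decoding is repeatable...
    return rng.choice(candidates)  # ...but sampling can confabulate

print(db_query("capital of France"))          # always "Paris"
print(llm_style_answer("capital of France"))  # usually "Paris", sometimes "Lyon"
```

Asking `db_query` about a city it has never seen raises an exception; the sampler would happily answer anyway, which is the "returns data that doesn't exist" failure mode described above.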


Yeah, they're useful, but not that useful. It helps to think of them as "language extrusion confabulation machines" to understand the actual limitations.


IMO, LLMs are a neat technical phenomenon that were released to the public too soon without any regard to their shortcomings or the impact they would have on society.


It's funny that when OpenAI developed GPT-2, they warned it was going to be disruptive. But the warnings were largely dismissed, because GPT-2 was way too dumb to be taken as a threat.


It's a way to get free training data


GenAI tech demos: wildly impressive.

Using them productively & ethically and without getting sucked into a modern-day religious cult: virtually impossible.

Yes, I suppose humanity can pat itself on the back that it has managed to invent something which seems so whiz-bang cool and also is so utterly, utterly ghastly…all at the same time.


And the article starts with exactly this. The big chunk is ~50% stolen data!


And those same providers are burning how much money each day with this shit?


And carbon. Not just money. There are more externalities to it than just burning money.


Because they want to control as much of the market as possible, everyone and their dog is using LLMs for work and email and groceries.

That doesn't change their usefulness: if tomorrow they all increase prices x10, they will remain useful for many use cases. Not to mention that in a year or two the costs might go down an order of magnitude for the same accuracy.


> Foundation models are basically compressed databases of a big chunk (75%?) of human knowledge, the query language is English (...and many others), and you're saying it's a bubble to be ignored.

Wake me up when I can access this database without having to rely on the whims of an evil megacorporation.



