Hacker News

One could argue that the discussion is once again about tech debt.

Both OpenClaw and MSDOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might otherwise have been ready next year. MSDOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers. OpenClaw is supposed to appeal to YOLO / FOMO sentiments.

And of course, neither will be able to evolve gracefully into its eventual real-world context. But for some time (much longer than intended), that's where it will live.

It worked to launch the creator into a gig at OpenAI.

Similar YOLO attitude to OpenAI's launch of modern LLMs while Google was still worrying about all the legal and safety implications. The free market does not often reward conservative responsible thinking. That's where government regulation comes in.


Taking fewer visible risks can increase your total risk. We are already under constant threat from deterioration: aging, depreciation and decay. Entropy is the default. Action is what pushes back against it.

You do not fight entropy, only move it around, and in so doing increase it somewhere else. It is still worth taking action. We may eventually find an action that actually reduces entropy; it just does not exist yet.

I would be perfectly happy moving it off-Earth. We can consider the long term after we have a mid-term.

Sometimes I wonder if this is somehow both an answer to the fermi paradox and the increasing expansion rate of the universe. Every alien civ doing exactly this somehow.

I wonder if public perception of LLMs would be better had Google been the one to introduce them after said safety considerations

>after said safety considerations

Tons of people called for common sense regulation/guardrails years ago and were shouted down as "luddites obstructing progress." It's funny to see this discussion coming back around.


"safety considerations" don't matter. The main sticking point with LLMs is that it's a blatant theft of everyone's copyright all while letting the bosses threaten your job. Blatantly stealing to wealth transfer to the ultrawealthy.

I realized that one of my bigger issues with LLMs is actually that I worry they increase "information entropy" on average. Most tools help me reduce entropy - LLMs seem to increase it, on a global scale.

This is related to my observation that for thousands of years, written text has indicated a human author - this is no longer true, and I think this is going to be very difficult for us to wrap our human brains around fully.


Interesting take. Hadn't thought of it in terms of entropy, but it's true. Almost by definition, as the training process doesn't introduce anything novel beyond the scraped inputs and a randomly initialized network. From there, the stochastic generation only adds randomness (and the prompt, of course).

Generally I think this is a legitimate issue, although:

> the training process doesn't introduce anything novel

This is not always the case. A compiler, linter, proof checker, tests, etc. can all lower entropy.
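The point that deterministic checkers lower entropy can be made concrete with Shannon's formula. A minimal sketch, assuming a toy setup: the token lists and the `shannon_entropy` helper below are illustrative, not taken from any real LLM toolchain.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy of a token sequence, in bits per token."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A deterministic filter (compiler, linter, proof checker, test suite)
# rejects some outputs outright, concentrating probability mass on the
# survivors and lowering the entropy of what remains:
raw = ["ok", "ok", "err", "ok", "err", "ok"]
filtered = [t for t in raw if t == "ok"]

print(shannon_entropy(raw) > shannon_entropy(filtered))  # True
```

The mixed sequence carries about 0.92 bits per token; after the filter removes the invalid outputs, only one symbol remains and the entropy drops to zero, which is the sense in which a compiler or proof checker "removes" randomness the sampler added.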


There are some scientists and theorists who argue entropy production is the ultimate sign of life (Jeremy England) and consciousness (Robin Carhart-Harris, Tom Froese).

I guess my hot tub is sentient then

That might be the case from your position. But if you were a woman whose stalker was able to locate your photos with ease and generate deepfakes or emulate your voice to feed his obsession, you might think differently. If you were worrying about your kids surviving tomorrow because an AI system might target their school for the next round of bombings, then copyright infringement might not be your top concern.

> It worked to launch the creator into a gig at OpenAI

The author sold his previous software business and I'm pretty sure he never needs to work again. I doubt "a gig at OpenAI" was high on his wish list when he started on Clawdbot.


Then why did he take it?

Where's the contradiction? I'm not auditioning to play guitar for Metallica but I'd take the job if they offered it to me.

username checks out

> It worked to launch the creator into a gig at OpenAI.

True, but it doesn't scale. No amount of YOLO will let anyone else repeat that feat.


Then why does the creator keep complaining that the maintainers he onboards keep getting poached by AI companies? It seems more like it is scaling too well.

Let’s not forget that (from what I’ve read) llama.cpp was a weekend YOLO side project and kicked off this whole new industry

I believe Google held back on doing a loss leader for LLMs because of shareholders. Look at how much Meta squandered on the metaverse. If Google had given away Gemini before OpenAI's launch, their stock would have taken a hit.

If Google giving away services for free is a problem to their shareholders, I have some bad news for them about some of Google's other products.

Most of Google's other products are there to spy on you.

If you're not paying for the product, then you're the product.


Conservative thinking isn't responsible.

That's how you end up like Germany, still using cash and fax machines after 60+ years.


Everyone is starting to get a real good lesson in why cash is important. Fax, eh?

I agree that fax machines belong in the past, but cash? I'd like to be able to pay even if the internet/power goes down, thank you very much.

Cash will stay relevant as long as internet and cloud failures are an ongoing thing, both for lovers of privacy and as a viable fallback when required.

Of interest, today in Australian media:

Why cash has made an unexpected comeback in Australia: new study - https://theconversation.com/why-cash-has-made-an-unexpected-...

which includes figures showing that while only 8% of Australian transactions are cash (by some metric, see the article), 33% (a third) of the population fully supports keeping cash.


And that is fine and I do the same.

In Germany in many places you can only pay with cash.


Cash allows for freedom and not being tracked by organisations.

Many European countries have learnt hard lessons about state protection police agencies.

A lesson that younger generations seem keen to forget and relive for themselves, because our stories aren't real enough.


OpenClaw was an inevitability: an obvious idea that predates LLMs. It just took this long for models and pricing to catch up. As much as I dislike this term, if there's one clear example of "Product Model Fit", it's OpenClaw - well, except that arguably what made it truly possible was the subscription pricing introduced with Claude Code; before that, people were extremely conservative with tokens.

But the point is, OpenClaw is just the first that got lucky and went viral. If not for it, something equivalent would have. Much like LangChain in the early LLM days.


> if there's one clear example of "Product Model Fit", it's OpenClaw

You think so? OpenClaw certainly owned the hype cycle for a while. There was a thread on HN last week where someone asked who was actually using it, and the comments were overwhelmingly "tried it, it was janky and I didn't have a good use case for it, so I turned it off." With a handful of people who seemed to have committed to it and had compelling use cases. Obviously anecdotal, but that has been the trend I've seen on conversations around it lately.

Also, the fact that it became the most-starred repo on GitHub in a matter of a few months raises a few questions for me about what is actually driving that hype cycle. It seems hard to believe that is strictly organic.


pi.dev I'm much more interested in. Closer to the bone, or maybe better said: pi.dev is more like Lego bricks, while OpenClaw seems like a big Ninjago set.

I was shocked when I saw the guy behind libgdx was also behind pi.dev. Random tech worlds colliding.



Would you mind explaining what that idea actually is? I don't understand what people are trying to do with this thing, or why they would think that would be a good thing to do, and some of the stories about it sound basically insane, so I must not be grasping the core idea.

To me it seems like an LLM-based implementation of automation software like Zapier. The problem with Zapier is you need services to provide APIs and Zapier needs to support those APIs to implement it in the automation workflow.

But because OpenClaw can just use a web browser like a normal user, you don't need all these APIs and there's no theoretical limitations on the services that can be integrated and automated.

Right now there are a lot of issues/bugs. People have more trust in a deterministic solution like Zapier. But maybe the LLMs and OpenClaw will get there eventually, and if they do, I can see how that's a better solution than a deterministic system.
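The "use a browser like a normal user" idea boils down to an observe/decide/act loop. A toy sketch of that loop, not OpenClaw's actual implementation: the `decide` stub stands in for an LLM call, and a plain dict stands in for a live browser page (both are assumptions for illustration).

```python
# Toy observe -> decide -> act loop. A real agent would render a live
# browser page and ask an LLM to choose the next action; here `decide`
# is a hard-coded stub and the "page" is a plain dict.
def decide(observation):
    # Hypothetical policy: if we see a login page, fill it in; else stop.
    if observation["page"] == "login":
        return ("type_and_submit", "user@example.com")
    return ("done", None)

def run_agent(page_state):
    trace = []
    while True:
        action, arg = decide(page_state)
        trace.append(action)
        if action == "done":
            return trace
        if action == "type_and_submit":
            # Pretend the form submission navigated us to the dashboard.
            page_state = {"page": "dashboard"}

print(run_agent({"page": "login"}))  # ['type_and_submit', 'done']
```

The contrast with Zapier falls out of the loop's shape: nothing here depends on the target service exposing an API, only on the agent being able to observe the page and emit the next action, which is also why a bad `decide` step makes it less predictable than a fixed workflow.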


Plain English automation, including control of external systems. Even better that it exhibits some forms of decision-making autonomy for edge cases.

It's a handful of useful features that together feel qualitatively different, like you're talking to a real person.

Such things always feel qualitatively different to the people captured by the craze; it doesn't mean there actually is a difference.

It seems like the most fully reified attempt at allowing a person to delegate _all_ of their responsibilities to the Slop Machine.

Which has of course always been the true allure of AI. Do nothing and pretend you did something, when pretending is something you can be bothered to do.


MSDOS and similar single-user OS were not originally designed for networked computers with persistent storage. Different set of constraints.

Worse is Better rears its head again.

https://en.wikipedia.org/wiki/Worse_is_better


OpenClaw, the ultimate example of Facebook's motto "Move Fast and Break Things"

Aka a marketing play

This is why we can’t have nice things



