Hacker News | Veen's comments

There's a lot of, to put it lightly, bullshit in this blog article, starting with when openclaw was released (late November 2025, not January 25, 2026). The first bit of config — listen: "0.0.0.0:8080" — is not the default. The default is loopback, and it already was when I first encountered this project at the end of December.
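To make the contrast concrete (the listen key is taken from the blog's quoted snippet; I haven't checked this against openclaw's actual config layout, so treat it as a sketch):

    # loopback default: only processes on the same machine can reach it
    listen: "127.0.0.1:8080"

    # the blog's change: bound on every interface and reachable from the network
    # listen: "0.0.0.0:8080"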

Essentially, the author has deliberately misconfigured an openclaw installation so it is as insecure as possible, changing the defaults and ignoring the docs to do so. They lied about what they'd done and what the defaults are, then "hacked" it using the vulnerability they created.

That said, there are definite risks to using something like openclaw and people who don't understand those risks are going to get compromised, but that doesn't justify blatant lying.


More that moltbot is ugly and was chosen in a bit of a panic after Anthropic complained. No one liked it, including the people who chose it.

They've recently added "lobster", an extension for deterministic workflows outside of the LLM, which at least partially solves that problem. They've also fixed a context caching bug that resulted in it using far more Anthropic tokens than it should have.

It's not. The guy behind Moltbot dislikes crypto bros as much as you seem to. He's repeatedly publicly refused to take fees for the coin some unconnected scumbags made to ride the hype wave, and now they're attacking him for that and because he had to change the name. The Discord and Peter's X are swamped by crypto scumbags insulting him and begging him to give his blessing to the coin. Perhaps you should do a bit of research before mouthing off.

I'm not saying the author of the software is to blame. This has nothing to do with him! I'm explaining why it became so popular.

i'd say the crypto angle is only one factor. as is usual in the real world, effects are multifactorial.

clawdbot also rode the wave of claude-code being popular (perhaps due to underlying models getting better, making agents more useful). a lot of "personal agents" were made in 2024 and early 2025, which now seems to have been before the underlying models/ecosystems were mature enough.

no doubt we're still very early in this wave. i'm sure google and apple will release their offerings. they are the 800lb gorillas in all this.


How do you envision we might disarm an adversary with thousands of nuclear missiles, other than by preemptively nuking them and hoping they don't respond in time? Not really a plausible plan.

Russia can be collapsed just like the USSR was.

This time around, we must demand that it fully give up its WMDs before any help or humanitarian aid reaches it.


You probably wouldn't use it for anything serious, but I've Ralphed a couple of personal tools, Mac menu bar apps mostly. It works reasonably well so long as you do the prep upfront and write a decent spec and plan. No idea of the code quality, because I wouldn't know good Swift code from a hole in the head, but the apps work and scratch the itch that motivated them.


Yes, I write everything in Obsidian and use "Paste from Markdown" in Google Docs. It's a habit I picked up years ago, when Docs was much less reliable and would lose work.

Plus, I want to deliver the completed document, not my edit history. Even on the occasions that I have written directly in Google Docs, I've copied the doc to obliterate the version history.


Doesn't the Claude API's recently introduced ability to combine extended thinking with structured outputs overcome this issue? You get the unconstrained(ish) generation in the extended thinking blocks, and then structured formatting, informed by that thinking, in the final output.
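Roughly, the pattern looks like this with the Anthropic Python SDK, using a tool schema to constrain the final output (the model name, tool name, and schema here are placeholders, and the newer dedicated structured-output parameter may look different from this):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # placeholder; use whatever model you're on
        max_tokens=4096,                    # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": 2048},
        tools=[{
            "name": "record_summary",       # hypothetical tool used only to constrain the output
            "description": "Return the final answer as structured data.",
            "input_schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "score": {"type": "number"},
                },
                "required": ["title", "score"],
            },
        }],
        messages=[{"role": "user", "content": "Summarise and score this article: ..."}],
    )

    # The response interleaves thinking blocks (the unconstrained part) with a
    # tool_use block whose input matches the schema (the structured part).
    for block in response.content:
        if block.type == "tool_use":
            print(block.input)

With thinking enabled the model decides whether to call the tool (forced tool_choice isn't compatible with extended thinking as far as I know), so you may want a fallback for the cases where it answers in plain text instead.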


I'd always assumed that the patients in Sacks' books were lightly fictionalized composites that combined interesting features from multiple cases, the purpose being to illustrate conditions and aspects of human psychology for a general readership. Since they weren't presented as rigorous case studies, I didn't take them as such. I find what Sacks did much less irksome than more recent psychology and social science books that pretend to present rigorous scientific fact when they are, in fact, tendentious bullshit.


I apply the same criteria to any scientific claim. What is the actual practical/clinical relevance? And is it properly studied, without p-hacking, correlation/causation confusion, or signs of bias? Following these criteria, 95% of studies are useless, and strangely these overlap massively with the ones that fail to replicate. Yet I constantly get shit on for having too high standards for scientific rigor.


Small use case, but I'm using skills for analysing and scoring content and then producing charts. The LLM does the scoring, then calls a Python script bundled in the skill that makes a variety of PNG charts based on metrics passed in via command-line arguments. Claude presents the generated files for download. The skill.md file explains how to run the analysis, how to call the script, and with what options. That way you get very consistent charts, because they're generated programmatically, while still using the LLM for what it's good at.
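The bundled script itself can be as simple as argparse plus matplotlib. This is a hypothetical sketch, not the actual skill — the metric names, script name, and chart type are made up for illustration:

    # make_chart.py - hypothetical example of a chart script a skill could bundle.
    # The LLM passes the scores it produced as command-line arguments, so the
    # chart itself is generated deterministically.
    import argparse

    import matplotlib
    matplotlib.use("Agg")  # headless backend: write PNGs without a display
    import matplotlib.pyplot as plt


    def main() -> None:
        parser = argparse.ArgumentParser(description="Render a bar chart of content scores.")
        parser.add_argument("--clarity", type=float, required=True)
        parser.add_argument("--accuracy", type=float, required=True)
        parser.add_argument("--tone", type=float, required=True)
        parser.add_argument("--output", default="scores.png")
        args = parser.parse_args()

        labels = ["Clarity", "Accuracy", "Tone"]
        values = [args.clarity, args.accuracy, args.tone]

        fig, ax = plt.subplots(figsize=(6, 4))
        ax.bar(labels, values)
        ax.set_ylim(0, 10)
        ax.set_ylabel("Score (0-10)")
        ax.set_title("Content scores")
        fig.tight_layout()
        fig.savefig(args.output, dpi=150)


    if __name__ == "__main__":
        main()

The skill.md then just documents the invocation, e.g. python make_chart.py --clarity 7 --accuracy 9 --tone 6 --output scores.png, so every run produces the same style of chart.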

