arrowsmith's comments | Hacker News

It's in TFA: "WhatsApp Relay"

Claude sounds like "clawed". Hence "Clawdbot".

Lobsters have claws.


It wasn't just one random asshole; tons of people were saying that "Moltbot" is a terrible name. (I agree, although I didn't tweet at him about it.)

OpenClaw is a million times better.


Just curious, is there something specific about Moltbot that makes it a terrible name? Like any connotations or associations or something? Non-native speaker here, and I don't see anything particularly wrong with it that would warrant the hate it's gotten. (But I agree that OpenClaw _sounds_ better)

No connotations or associations that I can think of. It just sounds weird and is kinda hard to pronounce - doesn't roll off the tongue easily.

It's not the worst thing ever; it's just not a very aesthetically pleasing combination of sounds.


Go on Twitter and search 'maltbot', 'moldbot', 'multbot', etc. - the name was just awful and easy to get wrong because it's meaningless. I think the crux of it is that 'Molt' isn't a very commonly used word for most people, so it just feels weird and wrong.

OpenClaw just sounds better: it's got that open-source connotation and just generally feels like a real product, not a weirdly named thing you'll forget about in 5 minutes because you can't remember the name.


In many non-English languages it's a terrible name to pronounce - the T-B letter combination in particular. Not all languages have silent letters like English does; you actually have to pronounce every letter.

Every single letter in Moltbot would be pronounced in English.

What about 5.1 do you prefer over 5.2?

As far as I can tell, 5.2 is the stronger model on paper, but it's been optimized to think less and do fewer web searches. I daily-drive the Thinking variants, not Auto or Instant, and usually want the _right_ answer even if it takes a minute. 5.1 does a very good job of defensively web searching, which avoids almost all of its hallucinations and keeps docs/APIs/UIs/etc. up to date. 5.2 will instead often not think at all, even in Thinking mode. I've gotten several completely wrong, hallucinated answers since 5.2 came out, whereas maybe a handful from 5.1. (Even with me using 5.2 far less!)

The same seems to persist in Codex CLI, where again 5.2 doesn't spend as much time thinking, so its solutions never come out as nicely as 5.1's.

That said, 5.1 is obviously slower for these reasons. I'm fine with that trade-off. Others might have lighter workloads and thus benefit more from 5.2's speed.


This is a terrible thing to say out loud*, but in all such cases I'd rather just give them more money to get the better answers.

It boggles the mind that "wrong answers only" is no longer just a meme; it's considered a valid cost-management strategy in AI.

* Because if they realize we're out here, they'll price discriminate, charging extra for right answers.


I promise you that 99% of normal people have no idea what the Wikimedia Foundation is and think that they're just donating to "fund Wikipedia".


are normal people donating to wikipedia tho


Yes, the ads are essentially a guilt tax on normies who remember Wikipedia helped them in high school


How is this an "optimization" if the compiled result is incorrect? Why would you design a compiler that can produce errors?


It’s not incorrect.

The code says that if x is true then a=13 and if it is false then b=37.

This is the case. It's just that a=13 even if x is false - something the code says nothing about, and so the compiler is free to do it.
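
To make that concrete - the exact code isn't quoted in this subthread, so this is a rough reconstruction with made-up names:

    struct pair { int a; int b; };

    struct pair make(int x) {
        struct pair p;   /* neither field is initialized here */
        if (x)
            p.a = 13;    /* p.b is never written on this path */
        else
            p.b = 37;    /* p.a is never written on this path */
        return p;        /* whichever field wasn't written has an
                            indeterminate value */
    }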


Ok, so you’re saying it’s “technically correct?”

Practically speaking, I’d argue that a compiler assuming uninitialized stack or heap memory is always equal to some arbitrary convenient constant is obviously incorrect, actively harmful, and benefits no one.


In this example, the human author clearly intended mutual exclusivity in the condition branches, and this optimization would in fact destroy that assumption. That said, (a) human intentions are not evidence of foolproof programming logic, and often miscalculate state, and (b) the author could possibly catch most or all errors here when compiling without optimizations during the debugging phase.


Regardless of intention, the code says this memory is uninitialized.

I take issue with the compiler assuming anything about the contents of that memory; it should be a black box.


The compiler is the arbiter of what’s what (as long as it does not run afoul of the CPU itself).

The memory being uninitialised means reading it is illegal for the writer of the program. The compiler can write to it if that suits it; the program can’t see the difference without UB.

In fact the compiler can also read from it, because it knows that it has in fact initialised that memory. And the compiler is not writing a C program and is thus not bound by the strictures of the C abstract machine anyway.


Yes yes, the spec says compilers are free to do whatever they want. That doesn’t mean they should.

> The user didn’t initialize this integer. Let’s assume it’s always 4 since that helps us optimize this division over here into a shift…

This is convenient for who exactly? Why not just treat it as a black box memory load and not do further “optimizations”?


> That doesn’t mean they should.

Nobody’s stopping you from using non-optimising compilers, regardless of the strawmen you assert.


As if treating uninitialized reads as opaque somehow precludes all optimizations?

There’s a million more sensible things that the compiler could do here besides the hilariously bad codegen you see in the grandparent and sibling comments.

All I’ve heard amounts to “but it’s allowed by the spec.” I’m not arguing against that. I’m saying a spec that incentivizes this nonsense is poorly designed.


Why is the code gen bad? What result are you wanting? You specifically want whatever value happened to be on the stack as opposed to a value the compiler picked?


> As if treating uninitialized reads as opaque somehow precludes all optimizations?

That's not what these words mean.

> There’s a million more sensible things

Again, if you don't like compilers leveraging UB, use a non-optimizing compiler.

> All I’ve heard amounts to “but it’s allowed by the spec.” I’m not arguing against that.

You literally are though. Your statements so far have all been variations of, or nonsensical assertions around, "why can't I read from uninitialised memory when the spec says I can't do that".

> I’m saying a spec that incentivizes this nonsense is poorly designed.

Then... don't use languages that are specified that way? It's really not that hard.


From the LLVM docs [0]:

> Undef values aren't exactly constants ... they can appear to have different bit patterns at each use.

My claim is simple and narrow: compilers should internally model such values as unspecified, not actively choose convenient constants.

The comment I replied to cited an example where an undef is constant folded into the value required for a conditional to be true. Can you point to any case where that produces a real optimization benefit, as opposed to being a degenerate interaction between UB and value propagation passes?

And to be explicit: “if you don’t like it, don’t use it” is just refusing to engage, not a constructive response to this critique. These semantics aren't set in stone.

[0] https://llvm.org/doxygen/classllvm_1_1UndefValue.html#detail...


> My claim is simple and narrow: compilers should internally model such values as unspecified, not actively choose convenient constants.

An assertion you have provided no utility or justification for.

> The comment I replied to cited an example where an undef is constant folded into the value required for a conditional to be true.

The comment you replied to in fact did not do that, and it’s incredible that you misread it that way.

> Can you point to any case where that produces a real optimization benefit, as opposed to being a degenerate interaction between UB and value propagation passes?

The original snippet literally folds a branch and two stores into a single store, saving CPU resources and generating tighter code.
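
In C terms, the folded version ends up roughly equivalent to this - a sketch of the effect, reusing the made-up names from earlier in the thread, not the article's actual output:

    struct pair { int a; int b; };

    struct pair make(int x) {
        (void)x;                       /* the branch on x is gone entirely */
        struct pair p = { 13, 37 };    /* both fields get the constants the
                                          optimizer picked for the undefs */
        return p;
    }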

> this critique

Critique is not what you have engaged in at any point.


Sorry, my earlier comments were somewhat vague and assuming we were on the same page about a few things. Let me be concrete.

The snippet is, after lowering:

  if (x)
    return { a = 13, b = undef }
  else
    return { a = undef, b = 37 }
LLVM represents this as a phi node of two aggregates:

  a = phi [13, then], [undef, else]
  b = phi [undef, then], [37, else]
Since undef isn’t “unknown” but rather “pick any value you like, per use”, InstCombine is allowed to instantiate each undef to whatever makes the expression simplest. This is the problem.

  a = 13
  b = 37
The branch is eliminated, but only because LLVM assumes that those undefs will take specific arbitrary values chosen for convenience (fewer instructions).

Yes, the spec permits this. But at that point the program has already violated the language contract by executing undefined behavior. The read is accidental by definition: the program makes no claim about the value. Treating that absence of meaning as permission to invent specific values is a semantic choice, and precisely what I am criticizing. This “optimization” is not a win unless you willfully ignore the program and everything but instruction count.

As for utility and justification: it’s all about user experience. A good language and compiler should preserve a clear mental model between what the programmer wrote and what runs. Silent non-local behavior changes (such as the one in the article) destroy that. Bugs should fail loudly and early, not be “optimized” away.

Imagine if the spec treated type mismatches the same way. Oops, assigned a float to an int, now it’s undef. Let’s just assume it’s always 42 since that lets us eliminate a branch. That’s obviously absurd, and this is the same category of mistake.


It's the same as this:

    int random() {
        return 4; // chosen by dice roll
    }
Technically correct. But not really.


Also, even without UB and even for a naive translation, a could just happen to be 13 by chance, so the behaviour isn't even an example of nasal demons.


Because a could be 13 even if x is false: the initialisation of the struct doesn’t define what the initial values of a and b need to be.

Same for b. If x is true, b could be 37 no matter how unlikely that is.


It is not incorrect. The values are undefined, so the compiler is free to do whatever it wants with them, even assign values to them.


It's not incorrect. Where is the flaw?


Not remotely. Maybe I'm just not working on big enough projects, but I've never experienced any frustration at all with Elixir compile times.


Give your new hires my free course: https://liveviewcrashcourse.com


Wow. Mind-blowing.

Bravo to you, sir.


Digital IDs will be used to restrict your internet access.

They'll roll them out gradually. You won't need one at first. You'll still show your passport, driving license etc, until one day you give up because the digital version is convenient and you "might as well". What's your problem? Why do you care? Have you got something to hide?

Then they'll attack the easiest target: porn. We already have age-verification laws, implemented through dodgy third-party providers. But now everyone has digital government ID: we "might as well" unify things so all the porn sites check your age using the centralised government system. What's your problem? Why do you care? Won't you THINK OF THE CHILDREN??? You want to let CHILDREN watch PORN???

Then comes online retail. After all, the Southport killer bought his knife from Amazon — that was the front page headline on every paper, remember how organic and uncoordinated that was? It could all have been avoided with better age verification. And hey, we already have a way to verify age with our digital IDs. We "might as well". What's your problem? Why do you care? You want to let CHILDREN buy KNIVES?

And what about social media? Kids shouldn't use Facebook, it's bad for them. Australia already bans under 16s from social media. We already have age verification for other things. We "might as well". WHY DO YOU CARE????? THINK OF THE CHILDREN!!!

Oh, that's handy, everyone's social media accounts are now tied to their real identities. That'll come in handy when people say nasty things that the government doesn't like. After all, those riots only happened because of "misinformation". Why do you need to stay anonymous anyway? What's the problem? Why do you care? Got something to hide? You're in favour of HATE SPEECH??

The slippery slope has never been more lubricated.


That all sounds plausible, but why doesn't it happen in other EU countries where digital ID is required?


Because one has to timelapse things in order to perceive the advances of incrementalism

