
> Finally: Cloudflare builds OAuth with Claude and publishes all the prompts: https://hw.leftium.com/#/item/44159166

Lord help us


Anyone who thought it was near clearly hasn’t opened a book in a long time.

Sorry, but why should they? What makes OpenAI better at making Slack than....Slack? Sure, Slack can be improved, but why the fuck should that be done by OpenAI? Shouldn't OpenAI concentrate on...I don't know....AI?! And first try to break even and actually generate revenue on that shitty promise?

Until there is a bug and, say due to DNS issues, your LLM isn't reachable because everything is down

Good thing I've got Qwen downloaded to my MacBook in case of that eventuality!


Really? Feels like most of HN lately is just “get on the AI hype train or get downvoted”

HN has never really walked the walk when it came to embodiment of the hacker spirit.

Before the incessant AI hype it was crypto, and before that it was JavaScript frameworks and before that it was ...


I've always understood hackers to be a subset of users at HN. Maybe there were more in the early days, but with the growth of the startup business model, a lot of different users were attracted to the site. The core value seems to be interest in technology and the cultures around it. Emphasis on the plurality of cultures because I think there are multiple, competing ones. Though, as per guidelines, any story interesting to users is acceptable for submission.

HN is only a name, in reality it's VC news.

Can your LLM do that to a running system? Or will it have to restart the whole program to run the next iteration? Imagine you build something with long load-times.

Also, your Lisp will always behave exactly as you intended and won't hallucinate its way to weird destinations.


An LLM can modify the code, rebuild and restart the next iteration, bring it up to a known state and run tests against that state before you've even finished typing in the code. It can do this over and over while you sleep. With the proper agentic loop it can even indeed inject code into a running application, test it, and unload it before injecting the next iteration. But there will be much less of a need for that kind of workflow. LLMs will probably just run in loops, standing up entire containers or Kubernetes pods with the latest changes, testing them, and tearing them down again to make room for the next iteration.
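
To make that concrete, here is a minimal sketch of such a loop in Python. propose_patch() is a hypothetical stand-in for the actual LLM call, and the image name and test command are made up for illustration, not taken from any real setup:

    import subprocess

    def sh(*args: str) -> subprocess.CompletedProcess:
        # Run a command, capturing output so failures can be fed back to the model.
        return subprocess.run(args, capture_output=True, text=True)

    def propose_patch(feedback: str) -> None:
        # Hypothetical stub: ask the model to edit the working tree based on feedback.
        raise NotImplementedError("wire up your model or agent framework here")

    for attempt in range(10):
        build = sh("docker", "build", "-t", "app-under-test", ".")
        if build.returncode != 0:
            propose_patch(build.stderr)   # feed build errors back and retry
            continue
        tests = sh("docker", "run", "--rm", "app-under-test", "pytest", "-q")
        if tests.returncode == 0:
            print(f"tests green on attempt {attempt + 1}")
            break
        propose_patch(tests.stdout)       # feed test failures back and retry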

As for hallucinations, I believe those are like version 0 of the thing we call lateral thinking and creativity when humans manifest it. Hallucinations can be controlled and corrected for. And again—you really need to spend some time with the paid version of a frontier model because it is fundamentally different from what you've been conditioned to expect from generative AI. It is now analyzing and reasoning about code and coming back with good solutions to the problems you pose it.


Ah, so I need to pay 100s of $ and use the "frontier" model, which is always a moving BS excuse. Last month Opus 4.5 was the frontier, gotta use it, now it's 4.6, and none of them so far have produced anything consistently good.

It is NOT reasoning about code. It's a glorified autocomplete that wastes energy. Attributing "reasoning" to it is anthropomorphization.

And calling hallucinations "lateral thinking" is a fucking stretch.

"Let's use tool `foo` with flag `-b`" even if the man page doesn't even mention said flag.

Sure, they might be able to create numerous iterations of containers, testing them, burning resources....but that is literally a thousand monkeys smashing their heads on typewriters to crank out 4chan posts.


I can’t speak to getting an LLM to talk to a CL listener, simply because I don’t know the mechanics of hooking it up. But seeing as they can talk to most anything else, I see no reason why they can’t.

What they can certainly do is iterate with a listener with you acting as a crude cut and paste proxy. It will happily give you forms to shove into a REPL and process the results of them. I’ve done it, in CL. I’ve seen it work. It made some very interesting requests.

I’ve seen the LLM iterate, for example, with source code by running it, adding logging, running it again, processing the new log messages, and cycling through that, unassisted, until it found its own “aha” and fixed a problem.

What difference does it make whether it’s talking to a shell or a CL listener? It’s not like it cares. Again, the mechanics of hooking up an LLM to a listener directly, I don’t know. I haven’t dabbled enough in that space to matter. But that’s a me problem, not an LLM problem.
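
For what it's worth, the mechanics needn't be exotic. A minimal sketch in Python, driving an SBCL listener over pipes: the sbcl flags are real, but everything else is a naive assumption, and a real hookup would need to handle the "* " prompt and multi-line results properly.

    import subprocess

    # Start a listener; --noinform suppresses the banner, --disable-debugger
    # keeps the process from sitting at an interactive debugger prompt
    # (errors abort instead).
    repl = subprocess.Popen(
        ["sbcl", "--noinform", "--disable-debugger"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )

    def eval_form(form: str) -> str:
        # Send one form, read one line back. Real use needs smarter parsing.
        repl.stdin.write(form + "\n")
        repl.stdin.flush()
        return repl.stdout.readline()

    print(eval_form('(format t "~a~%" (+ 1 2))'))  # prints 3 (plus REPL noise)
    repl.stdin.close()
    repl.wait()

Whatever produces the forms, a human or an LLM, just writes text and reads text back.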


Ah yes, lovely. That's what I want in my CI/CD...hallucinations that then churn through I don't know how many tokens trying to "fix it".

Just create a prompt so specific and so detailed that it starts reading like step-by-step instructions, and you've come up with the most expensive programming language.

It's not great that it's the most expensive (by far), but it's also by far the most expressive programming language.

How is it more expressive? What is more expressive than Turing completeness?

This is a non-sequitur. Almost all programming languages are Turing complete, but I think we'd all agree they vary in expressivity (e.g. x64 assembly vs. TypeScript).

By expressivity I mean that you can say what you mean, and the more expressive the language is, the easier that is to do.
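
A toy illustration of the point (nothing here is from the thread): both snippets below are equally Turing complete and compute the same value, but one states the intent far more directly.

    # Low-level: spell out every step.
    total = 0
    for n in range(100):
        if n % 2 == 0:
            total += n * n

    # High-level: reads almost like the intent, "sum the squares of the evens".
    assert total == sum(n * n for n in range(100) if n % 2 == 0)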

It turns out saying what you mean is quite easy in plain English! The hard part is that English allows a lot of ambiguity. So the tradeoffs of how you express things are very different.

I also want to note how remarkable it is that humans have built a machine that can effectively understand natural language.


Absolutely this. I am tired of that trope.

Or the argument that "well, at some point we can come up with a prompt language that does exactly what you want and you just give it a detailed spec." A detailed spec is called code. It's the most roundabout way to make a programming language, and even then it is, at best, non-deterministic.


And at the point that your detailed specification language is deterministic, why do you need AI in the middle?

Exactly the point. AI is absolutely BS that just gets peddled by shills. It does not work. It might work for some JS bullcrap. But take existing code and ask it to add capsicum next to an ifdef of pledge. Watch the mayhem unfold.

See, what’s missing is agentic posting. Let it write to itself a few times and simulate some reactions, then do the actual post.

