Hacker News | tin7in's comments

Astro is great and I hope they keep improving after the acquisition.

Given what agents can do, I feel a lot of the sites built on Webflow, Framer and so on will move to code, and Astro is a great framework for this.


I partially agree with you that things get abandoned by users when they are too complex, but I think skills are a big improvement compared to what we had before.

Skills plus the recently announced Tool Search Tool (dynamic MCP loading) are way better than just using plain MCP tools. I see more adoption among the people around me compared to a few months ago.
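The dynamic-loading idea can be sketched roughly like this: instead of putting every MCP tool definition into context up front, the model gets a single search tool and pulls in matching definitions on demand. This is a hypothetical illustration, not the real MCP or Anthropic API; the registry and function names are invented.

```python
# Hypothetical sketch of "tool search": the model sees only search_tools,
# and fetches relevant tool definitions as needed instead of having all
# of them loaded into its context window.

TOOL_REGISTRY = {
    "create_issue": "Create a ticket in the issue tracker",
    "query_database": "Run a read-only SQL query",
    "send_email": "Send an email to a recipient",
}

def search_tools(query: str) -> dict[str, str]:
    """Return only the tool definitions whose description matches the query."""
    q = query.lower()
    return {name: desc for name, desc in TOOL_REGISTRY.items() if q in desc.lower()}
```

The payoff is that context cost scales with the tools a task actually needs, not with the size of the full registry.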


I use it all the time with coding agents, especially if I'm running multiple terminals. It's way faster to talk than type. The only problem is that it looks awkward if there are others around.

Interesting. I can think and type faster, but not talk. I am not much of a talker.

Same. Whenever I try to dictate something I always umm and ahh and go back a bunch of times, so it's faster to just type. I guess it's a matter of practice; I'm fine when I'm talking to other people, it's only dictation I have trouble with.

It has something called "Custom Words", which might be what you are describing. I haven't properly tested this feature yet.

So is this already in Handy, or are you referring to a feature of the underlying models that you are not actively using yet?

This is already in Handy in Advanced > Custom Words.

There is also Post Processing, where you can rerun the output through an LLM to refine it, which is the closest to what Wispr Flow is doing.

This can be found in the debug menu in the GUI (Cmd + Shift + D).


As an alternative to Wispr Flow, Superwhisper and so on. It works really well compared to the commercial competitors, but with a local model.

I'm really surprised how much pushback and denial there is still from a lot of engineers.

This is truly impressive and not only hype.

Things have been impressive at least since April 2025.


Is this satire? This comment could not be a better example of what the linked article is talking about.

Not satire. The author is in denial of what's happening.

What is happening?

Not much. They can still parrot their training data. AGI is still 5-20 years away.

I bought the Refactoring UI book years ago and it taught me so much about simplicity and good design!

Peter's (the author's) latest project reuses a lot of these small libraries as tools in a much larger project. Long-term memory is part of that too.

It's an assistant building itself live on Discord. It's really fun to watch.

https://github.com/clawdbot/clawdbot/


Peter (author) talks more about LLMs as slot machines here: https://steipete.me/posts/just-one-more-prompt

Yeah, it sounds unhealthy, but at least it's self-aware?

Speaking from personal experience and from talking to other users: the vendors' agents/harnesses are just better, and they are customized for their own models.


What kinds of tasks do you find this to be true for? For a while I was using Claude Code inside the Cursor terminal, but I found it to be basically the same as just using the same Claude model there.

Presumably the harness can't be doing THAT much differently, right? Or rather, which tasks are the harness's responsibility, such that one harness could differentiate itself from another?


This becomes clearer for me with harder problems or long-running tasks and sessions, especially with larger context.

Examples that come to mind are how the context is filled up and how compaction works. Both Codex and Claude Code ship improvements here that are specific to their own models, and I'm not sure how that is reflected in tools like Cursor.
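For readers unfamiliar with compaction: when the transcript outgrows the model's token budget, the harness collapses older messages into a summary so recent turns stay intact. This is a minimal hypothetical sketch of that idea, not how Codex or Claude Code actually implement it; the token counting and summarization are crude placeholders.

```python
# Minimal sketch of context compaction in an agent loop (hypothetical).
# Older messages are collapsed into one summary message once the
# transcript exceeds a token budget; the most recent turns are kept.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per whitespace word.
    return len(text.split())

def compact(messages: list[dict], budget: int, keep_recent: int = 4) -> list[dict]:
    total = sum(count_tokens(m["content"]) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real harness would ask the model to summarize `old`;
    # here we just truncate each old message as a placeholder.
    summary = " / ".join(m["content"][:40] for m in old)
    return [{"role": "system", "content": f"[compacted history] {summary}"}] + recent
```

How and when a harness triggers this, and what it chooses to keep verbatim, is exactly the kind of model-specific tuning the comment above is pointing at.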

