
Apparently The Register changed the title of this article. Someone posted it 10 hours before this post: https://news.ycombinator.com/item?id=47749485

This writing makes more sense when you mentally add back in the LLM's em-dashes that were removed to make it look less like a nonhuman-generated marketing article.

Thanks for the reply. The text was AI-structured around the core problem Ikuna is trying to solve: context switching and supporting deep work.

Is context switching something important for you as well, or was it just curiosity?


On the left of the photo you'll surely recognize the Dos Equis Most Interesting Man in the World.

Having a personal zero-tolerance policy for certain things is one of the best hacks ever.

ChatINRI, how do I move the goalposts when we retake Jerusalem but I'm not raptured?

Unsatisfied with automating programming, Meta has successfully automated comedy.

Great offering, thanks!

Ideas for your feature pipeline: geographic filtering (e.g., learn to identify plant/bird species in southern Florida); temporal filtering (explore extinct species --> currently endangered species); audio to learn calls/songs.


Thanks for the feedback! Really appreciate it. I am working on specific locations. And actually the "endangered" filter is a great idea. Thanks so much!

*Edit: Wanted to mention that thousands more species are planned as content.


One approach might be to set up two adversarial summarizers and a judge (like common-law litigation). Instead of using one model to identify and resolve all claims, a first model (plaintiff) seeks out the most supportive arguments for, and evidence of, claims across all nodes; then a second model (defendant) antagonizes the first by seeking out only disconfirming evidence and the best counter-arguments; the parties may get one or more replies or sur-replies; and then a third model (judge) evaluates the previous two against one another. The idea could be extended to incorporate appellate models that ensure compliance with the rules and propose changes to rules or addition/subtraction of rules. Appellate decisions could be maintained in a separate directory and accessed by the adversarial models.

More food for thought: developing and employing rules of evidence and procedure. For example, evidence may be taken only from immutable files; the first-level nodes present issues, not summaries or syntheses; and each issue has separate plaintiff, defendant, judge, and appellate nodes, from which a summary or explanation is ultimately created.
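A minimal sketch of the adversarial structure described above, with stub functions standing in for the plaintiff, defendant, and judge models (all names here are hypothetical; real implementations would call an LLM at each role rather than filter flags):

```python
# Adversarial-summarizer sketch: a plaintiff gathers only supporting
# evidence, a defendant gathers only disconfirming evidence, and a judge
# weighs the two sides. The evidence records and the counting-based
# "judge" are toy stand-ins for model calls.

def plaintiff(claim, evidence):
    # Seek out only evidence supporting the claim.
    return [e for e in evidence if e["supports"]]

def defendant(claim, evidence):
    # Antagonize the plaintiff: only disconfirming evidence.
    return [e for e in evidence if not e["supports"]]

def judge(claim, pro, con):
    # Toy adjudication: side with whichever party cites more evidence.
    # A real judge model would weigh argument quality, not counts.
    verdict = "upheld" if len(pro) > len(con) else "rejected"
    return {"claim": claim, "verdict": verdict,
            "pro": len(pro), "con": len(con)}

def adjudicate(claim, evidence):
    pro = plaintiff(claim, evidence)
    con = defendant(claim, evidence)
    return judge(claim, pro, con)

if __name__ == "__main__":
    evidence = [
        {"text": "Doc A confirms X", "supports": True},
        {"text": "Doc B confirms X", "supports": True},
        {"text": "Doc C contradicts X", "supports": False},
    ]
    print(adjudicate("X holds", evidence))
```

An appellate layer could wrap `adjudicate` and re-run it under revised rules, persisting its decisions to a separate directory for the adversarial roles to consult.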


Apparently the convicted felon also accused His Holiness of being "weak on crime."

But notably Trump offered no criticism of the Church's history of the sexual abuse of children.


Indeed, despite Trump being a known rapist and child rapist, he would normally still attack others for doing what he himself is guilty of.

I never use LLMs for research or drafting. But I do use them to roll my own local semantic search; to whip up reusable regexes; to create small deterministic programs that, say, convert my set of paraphrased facts' citations to discrete documents into citations to the compiled appellate record; and to quickly code a Google Docs plugin that will automate away the repeated corrections to my co-counsel's bad typing and citation style (she'll never change).

For these uses, LLMs are wonderful — and Kagi Assistant is plenty good enough.
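The "roll my own local semantic search" use above can be approximated even without embeddings. A stdlib-only sketch using bag-of-words cosine similarity (the documents and query here are illustrative, not from the original comment):

```python
# Tiny local search sketch: rank documents by cosine similarity of
# word-count vectors against a query. A real semantic search would use
# embeddings; this shows only the ranking skeleton.

import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts, case-folded.
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    # Return docs sorted by similarity to the query, best first.
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)

docs = [
    "appellate record citation rules",
    "recipe for lemon cake",
    "citation style for the appellate brief",
]
print(search("appellate citation", docs)[0])
# → "appellate record citation rules"
```

Swapping `vectorize` for an embedding model turns this into the semantic version while keeping the same ranking loop.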

