Hacker News | Terretta's comments

Other than the price, compare to Typora?

https://typora.io

  - focus, visual, source modes
  - full mermaid diagrams including -beta
  - math
  - inline styles including e.g. highlights, super/subscripts, etc.
  - import/export doc types
  - file organizer in addition to outline mode
  - themes for screen and print
  - GitHub GFM tables work, "all features you care about" supported
  - beautiful
  - multi-platform
  - no subscription, one time $14.99 (pro or con: not in app store)

There's a cultural divide between SV and the 85% of SMB using M365, for example. When everyone you know uses a thing, I mean, who doesn't?*

There's a reason live service games have splash banners at every login. No matter what you pick as an official comms channel, most of your users aren't there!

* To be fair, of all these firms, ANTHROP\C tries the hardest to remember that some people aren't the same, and to deliver for them. Starting with normals doing normals' jobs.


You know that memory goes back into a prompt as context that wasn't cached, so... that just adds work.

Granted, the "memory" can be available across sessions, as can docs...


Right, and reloading that context costs the same as refilling the cache, so really, they're charging the same and adding friction.

This violates the principle of least surprise: nothing indicates Claude got lobotomized while it napped, when so many use prior sessions as "primed context" (even if people don't know that's what they were doing, or why it works).

The purpose of spending 10 to 50 prompts getting Claude to fill the context for you is it effectively "fine tunes" that session into a place your work product or questions are handled well.

(If this notion of sufficient context as a fine-tune seems surprising, the research is out there.)

Any approach tried needs to deal with both of these:

1) Silent context degradation breaks the Pro-tool contract. I pay compute so I don't pay in my time; if you want to surface the cost, surface it (UI + price tag or choice), don't silently erode quality of outcomes.

2) The workaround (external context files re-primed on return) eats the exact same cache miss, so the "savings" are illusory: you just pushed the cost onto the user's time. If my own time is cheap enough that that's the right trade-off, I shouldn't be using your machine.
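Point 2 is just arithmetic. A minimal sketch, with illustrative per-token prices only (the rates below are assumptions, not any vendor's actual pricing; the point is that re-priming an external context file bills the same uncached tokens as a cache miss):

```python
# Illustrative cost model: cached vs. uncached input tokens.
# PRICE_* values are ASSUMED for the sketch, not real pricing.
PRICE_INPUT = 3.00 / 1_000_000   # $/token, uncached input (assumed)
PRICE_CACHED = 0.30 / 1_000_000  # $/token, cache-hit input (assumed)

def session_cost(context_tokens: int, cache_hit: bool) -> float:
    """Cost of sending the primed context back in on a new turn."""
    rate = PRICE_CACHED if cache_hit else PRICE_INPUT
    return context_tokens * rate

primed = 80_000  # tokens of "primed context" built over 10-50 prompts

warm = session_cost(primed, cache_hit=True)     # cache still warm
cold = session_cost(primed, cache_hit=False)    # cache expired: full re-read
refile = session_cost(primed, cache_hit=False)  # workaround: re-primed file

# The workaround and the cache miss are the same bill.
assert refile == cold
print(f"warm ${warm:.4f} vs cold ${cold:.4f}")
```

With these assumed rates the warm turn is an order of magnitude cheaper; the workaround only moves who pays for the cold one.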


How I think about this is…

From early GPT days to now, the best way to get a decently scoped and reasonably grounded response has always been to ask at least twice (in the early days, often 7 or 8 times).

Because not only can it not reflect, it cannot "think ahead about what it needs to say and change its mind". It "thinks" out loud (as some people seem to as well).

It is a "continuation" of context. When you ask what it did, it still doesn't think, it just* continues from a place of having more context to continue from.

The game has always been: stuff context better => continue better.

Humans were bad at doing this: asking, for example, for synthesis with explanation, instead of asking for explanation first, then synthesis.

You can get today's behaviors by treating "adaptive thinking" like a token-budgeted loop for context stuffing, so that eventually there's enough context in view to produce a better-contextualized continuation.

It seems no accident we've hit on the word "harness": so much of what seems impressive by end of 2025 was available by end of 2023 if you were "holding it right". If (and only if!) you are an expert in the area you need it to process: (1) turn thinking off, (2) do your own prompting to "prefill context", and (3) you will get a superior final response. Not vibing, just staff-work.
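The prefill-then-ask loop above can be sketched in a few lines. Everything here is a toy illustration: `model` is a stand-in for whatever continuation engine you're driving, and the function names are made up for the sketch, not any real API:

```python
# Toy stand-in for an LLM call: it just echoes how much context it was handed,
# to make the "more context => better continuation" shape visible.
def model(context: str, prompt: str) -> str:
    return f"[answer grounded in {len(context)} chars of context] {prompt}"

def stuff_then_ask(question: str, sources: list[str], budget_chars: int) -> str:
    """Prefill context up to a budget, THEN ask: explanation before synthesis."""
    context = ""
    for chunk in sources:  # staff-work: you pick the order and the material
        if len(context) + len(chunk) > budget_chars:
            break          # budget exhausted; stop stuffing
        context += chunk + "\n"
    # Only now ask for the final, better-contextualized continuation.
    return model(context, question)

answer = stuff_then_ask(
    "Synthesize the findings.",
    sources=["background on X...", "prior analysis of Y...", "constraints..."],
    budget_chars=10_000,
)
```

The design point is the ordering: all the stuffing happens before the question, so the final continuation starts from a fully primed context rather than building it mid-answer.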

---

* “just” – I don't mean "just" dismissively. Qwen 3.5 and Gemma 4 on an M5 approach where SOTA was a year ago, but faster and on your lap. These things are stunning, and the continuations are extraordinary. But still: garbage in, garbage out; gems in, gems out.


How do you decompile a SaaS? They're a SaaS.

OTOH, their position seems to be that "many LLMs make all bugs shallow" is unhelpful, the same way "many eyes make all bugs shallow" is considered unhelpful.

What seems genuinely needed by the open source economy, to both surface these latent vulns and tamp down finding-slop, is a new https://bughook.github.com/your/repo/ that these big LLMs (Mythos, etc.) support. Mythos understands when it's been used to find a vuln, and a backend auto-reports verified findings the git service can feed to a Dependabot-type tool.

Even better, price up Mythos to cover running a background verifier that fetches the project and revalidates the issue before firing that bughook.

Meanwhile, train it on these findings, so its future self doesn't create them.


Refreshing interconnection of topics.

My first home “Apple //e” was in Africa: a Korean improvement on the Apple ][+ that added lowercase, memory, and more. It was lugged in by the Korean ambassador's son, and it remained a better performer than the real Apple //e once that came out.

Though I've since looked for the history, I've never found what that Korean machine was.

Next came Apple IIc which ran circles around it. Then Fat Mac, SE, SE/30… but that's a different story.


It's remarkable how often it refuses to introspect, but show it a SCREENSHOT of itself and suddenly "yeah, this works fine".

This happens in all their UIs, including, say, Claude in Excel, as well.


If it's "Continue with Google" and no "Continue with Microsoft", you're ignoring 85% of the US small biz market.
