hiq's comment was not about Whisperfish but about the presage library.
My comment can be read as "Whisperfish wrote their own implementation of the Signal protocol" - which is wrong. (Sorry, I can't edit it anymore.)
With presage, Whisperfish has a high-level Rust library that wraps the official libsignal libraries (which are also written in Rust) and makes it much easier to write clients. The official Signal repo only contains Java/TypeScript/Swift wrappers. As presage is rather obscure, I thought that some HN readers might appreciate the link.
That's my main way to find interesting links, especially as I usually find comments more interesting than the featured links. I default to the "top 20".
For paying users of Claude Code and other similar services, do you tend to switch to the free tiers of other providers while yours is down? Do you just not use any LLM-based tool in that time? What's your fallback?
Do you then think it'll improve to reach the same stability as other kinds of infra, eventually, or are there more fundamental limits we might hit?
My intuition is that as the models do more with less and the hardware improves, we'll end up with more stability just because we'll be able to afford more redundancy.
I'm assuming it's too heavy and has too much contact surface (so more friction), making it too hard to glide smoothly.
There's probably something with the position of the hand when you move the mouse as well. At least I seem to be moving mostly the wrist when I use my mouse, meaning that my hand and forearm are not always aligned; without this alignment, I feel there's more strain on the wrist when typing.
My imagined device has the hand a bit more vertical, which would give more leverage for moving the device around.
Could you do a thing with magnets where you have a special mousepad as well with the pad being all one pole pointing up and the device the same pole pointing down?
Also my imagined device would not need the full keyboard, just the full right side of a qwerty keyboard.
> They begin the year at $0 and they end the next year at $0.
Or they're dead.
If you save an extra $2000/year, what are you supposed to do with the money if you're always hungry, if you're always cold? I'm guessing you could buy food and clothes; you'd end up at $0, just slightly better off. If there's no safety net to rely on, you'd save to be able to face the next problem, and maybe pay it less with your health (which is a kind of invisible debt).
And that's even assuming there's some certain income you can rely on. In my case, I know that for the next few months, I'd at least get unemployment benefits if I lost my job. Not everyone gets that, and if you don't, the income floor is $0 and it's way harder to budget.
Another aspect to consider is that maybe the case of a single person who would be in poverty throughout a long life is not representative of poverty. Some people get out of poverty, some fall into it, some die early from it. If we're considering a single person always starting at $0 and always ending up at $0, several years in a row, we already dismiss these nuances. I'm sure you can find such examples, someone who lived to be 80 with a constant wealth of $0, but how common are they really?
Very little data about expenses, but it looks like they may be growing a little slower (3-4x a year) than revenue. Which makes sense because inference and training get more efficient over time.
We don't have evidence one way or the other. But from the public statements, the idea that they lose roughly as much as their revenue seems constant over time. It's possible that's simply a psychological barrier for investors. Meaning their losses grow at roughly the same rate as their revenue.
Growing businesses tend to consume capital. How much capital is appropriate to burn is subjective, but there are good baselines from other industries and internal business justifications. As tech companies burn capital through people's time, it's hard to figure out directly what is true CapEx vs. unsustainable burn.
You wouldn't demand that a restaurant jack prices up or shut down in its first month of business after spending ~1 MM on a remodel to earn ~20k in the first month. You would expect that the restaurant isn't going to remodel again for 5 years, and the amortized cost should be ~16k/mo (or less).
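The amortization in that example, with all the numbers hypothetical, works out roughly like this:

```shell
# Back-of-the-envelope amortization of the hypothetical remodel above
remodel_cost=1000000              # ~$1 MM one-time remodel
months=$((5 * 12))                # assume no new remodel for ~5 years
echo $((remodel_cost / months))   # prints 16666, i.e. the ~16k/mo ballpark
```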
I would recommend that restaurant jack up their prices if they're remodeling the restaurant every other day and have no plans to stop or slow down that constant remodeling.
> it's hard to directly figure out what is true CapEx vs. unsustainable burn.
Exactly, and yet you're so certain they'll achieve profitability. The cost of pickles could get cheaper, but if they're constantly spending more and more on the rest of the burger, and remodeling the building all the time to add yet another wing of seating that may or may not actually be needed, it doesn't really matter for their overall profitability, right?
I've started using snippets for code reviews, where I find myself making the same comments (for different colleagues) regularly. I have a keyboard shortcut opening a fuzzy search to find the entry in a single text file. That saves a lot of time.
As an aside, I find most of these commands very long. I tend to use very short aliases, ideally 2 characters. I'm assuming the author uses tab completion most of the time; if the prefixes don't overlap beyond 3 characters it's not that bad, and maybe the history is more readable.
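As a sketch of what I mean by 2-character aliases (the names here are my own examples, not the author's):

```shell
# Hypothetical 2-character aliases, shown in bash
shopt -s expand_aliases       # non-interactive bash disables alias expansion
alias gs='git status --short'
alias gd='git diff --stat'
alias co='git checkout'
alias gs                      # prints: alias gs='git status --short'
```

Keeping a consistent prefix (here `g` for git) means the aliases stay guessable without clashing with real commands.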
I’m not who you were responding to, but I use it on the plane and at home, mostly for coding but also for entertainment. I probably average about 6 to 8 hours a day in the headset. I’ve used a variety of headsets in the past, starting way back with the Oculus DK2, and the AVP is the first I felt was truly capable of replacing my monitors.
Thanks for sharing. Why at home? Is it simply your main setup, so you got rid of external displays thanks to it? I'd have thought that in a situation where you can easily use regular displays, these were still preferable.
I'm surprised that you find it comfortable enough for 6+ hours, especially since you probably need to keep it plugged in. I thought the consensus was that for most users it was hard to keep it on even for a whole movie.
I'm also using it at home and at the office or coworking spaces. Having the giant ultra-wide screen all the time is great, and then I usually break out a handful of apps to the side (calendar, Slack, etc.) and keep my desktop almost entirely to coding and sometimes browsing (sometimes using the visionOS Safari instead). I like the setup, and it's something I struggle to get with traditional monitors. Adding widgets that persist their location has been awesome too.
It's definitely comfortable enough, though I got a different strap. I'm plugged in most of the time, and at home I'll wear it when I get up to pee/grab coffee.
IMHO, don't, don't keep up. Just like "best practices in prompt engineering", these are just temporary workarounds for current limitations, and they're bound to disappear quickly. Unless you really need the extra performance right now, just wait until models give you this performance out of the box instead of investing in learning something that'll be obsolete in months.
I agree with your conclusion not to sweat all these features too much, but only because they're not hard at all to understand on demand once you realize that they all boil down to a small handful of ways to manipulate model context.
But context engineering is very much not going anywhere as a discipline. Bigger and better models will by no means make it obsolete. In fact, raw model capability is pretty clearly leveling off into the top of an S-curve, and most real-world performance gains over the last year have come precisely from innovations in how to better leverage context.
My point is that there'll be some layer doing that for you. We already have LLMs writing plans for another LLM to execute, and many other such orchestrations, to reduce the constraints on the actual human input. Those implementing this layer need to develop this context engineering; those simply using LLM-based products do not, as it'll be done for them somewhat transparently, eventually. Similar to how not every software engineer needs to be a compiler expert to run a program.
I agree with this take. Models and the tooling around them are both in flux. I'd rather not spend time learning something in detail only for these companies to pull the plug while chasing the next big thing.