I switched from neovim because plugins and updates kept breaking it, and I never really felt like I was in control of it anyway. Helix does what it does, no fuss. Never breaks.
You do start to think “can I get helix keybindings in my shell”, though.
I've always used GUIs and had to basically break my brain to learn vim keybindings (in Sublime Text) some time ago, and the helix bindings are just different enough to throw me off. Sucks, because I would prefer an out-of-the-box solution that "just works" and that I'm comfortable with across all my machines for terminal text editing.
At least once a week I daydream about altering nushell so it can run helix as a subprocess for handling the editing parts. Maybe one day I'll go for it. Until then I'll just make frequent use of the open-in-editor command.
But things that don’t run have much less overhead than code: you don’t need to test them, update them, or maintain them; they can’t really “not work”; people will adapt if they don’t make sense.
I /love/ this idea, but I don’t think it’s practical. Documents and business practices are about arranging people into semi-predictable organizations. The computing units of those organizations are people, and people run on text, not code.
The study seems to be “solve this the obvious way, don’t think too hard about it”. Then the systems languages (C, Zig, C++) are pretty close, the GC languages are around an order of magnitude slower (C# and Java doing pretty well, at ca. 3x), and the scripting languages around two orders of magnitude slower.
But note the HO-variants: with better algorithms, you can shave off two orders of magnitude.
So if you’re open to thinking a bit harder about the problem, maybe your badly-benchmarking language is just fine after all.
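A toy illustration of that point (not the study’s actual benchmark, and the task here is made up): the same problem solved “the obvious way” and with a better algorithm, in a scripting language. The algorithmic win dwarfs any constant-factor difference between languages.

```python
# Count pairs (i < j) with xs[i] + xs[j] == target, two ways.

def count_pairs_naive(xs, target):
    """The obvious way: check every pair. O(n^2)."""
    n = len(xs)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if xs[i] + xs[j] == target)

def count_pairs_fast(xs, target):
    """Better algorithm: count complements seen so far. O(n)."""
    seen = {}
    count = 0
    for x in xs:
        count += seen.get(target - x, 0)  # every earlier complement forms a pair
        seen[x] = seen.get(x, 0) + 1
    return count

xs = list(range(2000))
assert count_pairs_naive(xs, 1999) == count_pairs_fast(xs, 1999) == 1000
```

For n = 2000 the naive version does ~2 million comparisons and the fast one ~2000 dictionary operations, so the “two orders of magnitude” can come from the algorithm rather than from switching to a systems language.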
I agree that Google (and in the above comment MS) failed to fulfill their lofty promises (“don’t be evil” etc.)
But the blame is on us: we should have known better than to entrust our data to free services run by a company whose entire revenue comes from ads.
Proton is funded by our subscription payments. I think there’s reasonable hope that their incentives will remain aligned with those of their paying users.
> I don’t recall what happened next. I think I slipped into a malaise of models. 4-way split-paned worktrees, experiments with cloud agents, competing model runs and combative prompting.
You’re trying to have the LLM solve some problem that you don’t really know how to solve yourself, and then you devolve into semi-random prompting in the hope that it’ll succeed. This approach has two problems:
1. It’s not systematic. There’s no way to tell if you’re getting any closer to success. You’re just trying to get the magic to work.
2. When you eventually give up after however many hours, you haven’t succeeded, you haven’t got anything to build on, and you haven’t learned anything. Those hours were completely wasted.
Contrast this with you beginning to do the work yourself. You might give up, but you’d understand the source code base better, perhaps the relationship between Perl and Typescript, and perhaps you’d have some basics ported over that you could build on later.
When I teach programming, some students, when stuck, will start flailing around - deleting random lines of code, changing call order, adding more functions, etc - and just hoping one of those things will “fix it” eventually.
This feels like the LLM-enabled version of this behavior (except that students will quickly realize that what they’re doing is pointless and ask a peer or teacher for help, whereas the LLM may be a little too good at hijacking that signal and making its user feel like things are still on track).
The most important thing to teach is how to build an internal model of what is happening, identify which assumptions in your model are most likely to be faulty/improperly captured by the model, what experiments to carry out to test those assumptions…
In essence, what we call an “engineering mindset” and what good education should strive to teach.
> When I teach programming, some students, when stuck, will start flailing around - deleting random lines of code, changing call order, adding more functions, etc - and just hoping one of those things will “fix it” eventually.
That sounds like a lot of people I’ve known, except they weren’t students. More like “senior engineers”.
I definitely fall into this trap sometimes. Oftentimes that simple order of ops swap will fix my issue, but when it doesn't, it's easy to get stuck in the "just one more change" mindset instead of taking a step back to re-assess.
We’ve used F# professionally for the computation-intensive parts of our product for a couple of years. Here’s what comes to mind:
1. Interop with C# is great, but interop for C# clients using an F# library is terrible. C# wants more explicit types, which can be quite hard for the F# authors to write, and downright impossible for C# programmers to figure out. You end up maintaining a C# shell around your F# program, and sooner or later you find yourself doing “just a tiny feature” in the C# shell to avoid the hassle. Now you have a weird hybrid code base.
2. The dotnet ecosystem is comprehensive: you’ve got state-of-the-art web app frameworks, ORMs, what have you. But it’s all OOP, state abounds, referential equality is the norm. If you want to write Ocaml/F#, you don’t want to think like that. (And once you’ve used discriminated unions, C# error-handling seems like it belongs in the 1980s.)
3. The Microsoft toolchain is cool and smooth when it works, very hard to wrangle when it doesn’t. Seemingly simple things, like copying static files to output folders, require semi-archaic invocations in XML files. It’s about mindset: if development is clicking things in a GUI for you, Visual Studio is great (until it stubbornly refuses to do something); if you want a more Unix/CLI approach, it can be done, and vscode will sort of help you, but it’s awkward.
4. Compile-times used to be great, but are deteriorating for us. (This is both F# and C#.)
5. Perf was never a problem.
6. Light syntax (indentation defines block structure) is very nice until it isn’t; then you spend 45 minutes figuring out how to indent record updates. (Incidentally, “nice-until-it-isn’t” is a good headline for the whole dotnet ecosystem.)
7. Testing is quite doable with dotnet frameworks, but awkward. Moreover, you’ll want something like quickcheck and maybe fuzzing; they exist, but again, awkward.
We’ve been looking at Ocaml recently, and I don’t buy the framework/ecosystem argument. On the contrary, all the important stuff is there, and it sometimes seems easier to use. Having written some experimental code in Ocaml, I think the language ergonomics are better. It sort of makes sense: the Ocaml guys have had 35 years or so to make the language /nice/. I think they succeeded; at least writing feels, somehow, much more natural and much less inhibited than writing F#.
Note how much the principles here resemble general programming principles: keep complexity down, avoid frameworks if you can, avoid unnecessary layers, make debugging easy, document, and test.
It’s as if AI took over the writing-the-program part of software engineering, but sort of left all the rest.