Hacker News | chriswarbo's comments

Whenever I try something else, I always seem to keep going back to E16. Back in the day, it worked well in Gnome 2.x; these days I tend to use it in XFCE, but it feels a bit less integrated.

Exactly the same thing happened when git showed up (and likewise for bzr, darcs, hg, etc.).


A rare case of feature bloat for git. Still completely optional, thank god, Linus.

I suppose it depends what you mean by "horribly break things".

The only thing I've noticed is that `jj` will leave the git repo with either a detached HEAD, or with a funny `@` ref checked out.

I don't think that would trouble someone who's experienced with git and knows its "DAG of commits" model.

For someone who's less experienced, or who only uses git for a set of branches with mostly linear history (as a sort of "fancy undo"), I could imagine the shock of running `git commit` and then not finding that commit on any of the branches!


> I don't think that would trouble someone who's experienced with git and knows its "DAG of commits" model.

I think most people who have git experience don't know what a DAG is and have never used the reflog.


> It wants me to start with the new and describe command

jj doesn't "want" anything.

I always end a piece of work with `new`: it puts an empty, description-less commit as the checked-out HEAD, and is my way of saying "I'm finished with those changes (for now); any subsequent changes to this directory should go in this (currently empty) commit".

The last thing I do to a commit, once all of its contents have settled into something reasonable, is describe it.

In fact, I mostly use `commit` (pressing `C` in majutsu), which combines those two things: it gives the current commit a description, and creates a new empty commit on top.
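Spelled out as a sketch of the equivalent CLI session (the keybindings above are majutsu's; these are plain jj commands):

```
jj new                    # start an empty, description-less commit on top
# ...hack on the changes; they land in that commit automatically...
jj describe -m "Message"  # name the commit once its contents have settled
jj commit -m "Message"    # or: describe the current commit AND start a new empty one
```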


> I often find that in the process of making one change, I have also made several other changes, and only recognize that they are distinct after following the ideas to their natural conclusion.

I do that all the time. With git, everything starts "unstaged", so I'd use magit to selectively stage some parts and turn those into a sequence of commits, one on top of another.
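In CLI terms, that git workflow looks roughly like this (a minimal sketch using whole files; real hunk-level splitting would use `git add -p` or magit's staging, and the filenames and messages here are made up):

```shell
# Everything starts "unstaged"; stage one logical change at a time
# and commit, building a sequence of commits one on top of another.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com && git config user.name You
printf 'fix\n' > fix.txt
printf 'feature\n' > feature.txt
git add fix.txt     && git commit -qm "Fix the bug"
git add feature.txt && git commit -qm "Add the feature"
git log --oneline   # two commits, one per logical change
```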

With jj I'd do it "backwards": everything starts off committed (with no commit message), so I'd open the diff (`D` in majutsu), selecting some parts and "split" (`S` in majutsu) to put those into a new commit underneath the remaining changes. Once the different changes are split into separate commits, I'd give each a relevant commit message.


Tooling can support both (e.g. by not assuming every hash has the length of a SHA-1); but they can't be used together in one repo.
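A quick illustration of the length difference, using the coreutils hash tools as stand-ins for git's object hashing:

```shell
# SHA-1 digests are 40 hex characters, SHA-256 digests are 64;
# tooling that hard-codes 40 breaks on SHA-256 repositories.
printf 'hello' | sha1sum   | awk '{print length($1)}'   # 40
printf 'hello' | sha256sum | awk '{print length($1)}'   # 64
```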


Haskell has a package that makes testing this sort of thing easier: https://hackage.haskell.org/package/inspection-testing


I'm confused; how is writing a shell command (using shortcuts like those in the article!) "wasting time", but describing what you want to an LLM, having it make a plan, reading the plan, editing it, and running it is somehow not a waste of time?

You also mention there being "little value", when your proposed approach costs literal money in the form of API/token usage (when using hosted models).

> Now I've moved to coding in Haskell

You might like https://hackage.haskell.org/package/turtle or http://nellardo.com/lang/haskell/hash/


Theirs "turns off" one element of a pipeline; yours turns off everything after a certain point.

This will output the stdout of mycmd1:

    mycmd1 #| mycmd2 | mycmd3

This will output the stdout of mycmd3:

    mycmd1 | \# mycmd2 | mycmd3
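For what it's worth, the `#|` version needs nothing special: in a script (and in interactive bash, where `interactive_comments` is on by default), everything after `#` is simply ignored. A trivial demonstration (the `tr`/`rev` stages are made up):

```shell
# '#' begins a comment, so the later pipeline stages are ignored
# and only the first command runs, printing "hello":
echo hello #| tr a-z A-Z | rev
```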


Can you explain to me why either of these is useful?

I've somehow gotten by without ever really needing to pipe commands in the terminal, probably because I mostly do frontend dev and use the terminal just for starting the server and running prodaccess.


Pipelines are usually built up step by step: we run some vague, general thing (e.g. a `find` command); the output looks sort of right, but needs to be narrowed down or processed further, so we press Up to get the previous command back, and add a pipe to the end. We run that, then add something else; and so on.
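A made-up session showing that Up-arrow-and-extend workflow (the filenames and patterns are illustrative):

```shell
# Set up some sample data to search through.
dir=$(mktemp -d) && cd "$dir"
printf 'ERROR disk full\nINFO ok\nERROR disk full\n' > app.log
printf 'INFO started\n' > other.log

# Build the pipeline one stage at a time:
find . -name '*.log'                                  # vague, general thing
find . -name '*.log' | xargs grep -h ERROR            # narrow it down
find . -name '*.log' | xargs grep -h ERROR | sort -u  # process further
```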

Now let's say the output looks wrong; e.g. we get nothing out. Weird: the previous command looked right, and it doesn't seem to be a problem with the filter we just put on the end. Maybe the filter we added part-way through was discarding too much, so that the things we actually wanted weren't reaching the later stages; we didn't notice, because everything was being drowned out by irrelevant stuff that our latest filter has just gotten rid of.

Tricks like this `\#` let us turn off that earlier filter, without affecting anything else, so we can see if it was causing the problem as we suspect.

As for the more general "why use a CLI?", that's been debated for decades already; look it up if you care to :-)


No no, not asking why use the CLI. If I were less lazy, I would use it more often


I can imagine a pipeline where intermediate stages have been inserted to have some side effect, like debug logging all data passing through.
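For example (my own sketch), `tee` is an easy way to get such a stage:

```shell
# 'tee' copies everything flowing through it into a file as a side
# effect, while passing the data along to the next stage unchanged:
printf '3\n1\n2\n' | tee /tmp/pipeline-debug.log | sort
```

Turning that stage off with the `\#` trick would skip the logging but leave the sorted output unchanged.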


Ah duh, cheers

