Shorthand isn't necessarily a good act to follow: most shorthand requires a solid understanding of context, and it's not uncommon that the person who wrote the shorthand is the only one who can completely understand it in its original form.
Personally, I'm familiar enough with Haskell to be able to slowly work my way through most code, but I'm not at the level where I would be confident in writing anything but simpler programs.
I find idiomatic Haskell suffers from what I call "have to limit my line length-itis" - e.g. "Ord" instead of "Ordered", or using "x" and "xs" as identifiers instead of "first" and "rest". This makes for compact code, but not necessarily very readable code. And for anyone who wants to tell me that it's closer to mathematical notation: frankly, maths could take a few hints from modern software engineering about readability.
Then there's the dissonance between actually declaring an algorithm a la Haskell/FP, and actually describing how the algorithm works.
It is sometimes nice to declare algorithms compositionally: this thing I want is the max of this, joined on to five of these, filtered by this criterion. Lovely. But opaque as to the implementation.
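Spelled out as a toy sketch (every name here is made up, just to illustrate the shape of such a declaration):

```haskell
-- "The max of this list, joined on to five of those
-- that pass some criterion." Reads as a definition;
-- says nothing about how the work is actually done.
answer :: Ord a => (a -> Bool) -> [a] -> [a] -> a
answer criterion these those =
  maximum (these ++ take 5 (filter criterion those))
```

You can read the "what" off it directly; the cost and order of evaluation, not so much.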
In my experience, with most software I write, the parts where the exact implementation isn't that important don't take long to implement. Immutability, idempotency, and removing side effects work just as well in non-FP languages there. The other parts, which almost always involve some sort of IO, require very precise control over implementation, state, or timing, and are often very difficult to declare with "is" - sometimes the only sensible way I can think of them is as a series of processes.
When I try to implement this sort of thing functionally it feels like a retrofit, and never as elegant as the simple imperative "do this, then this, then that".
I'm not trying to be anti-Haskell or anti-FP - I love and use FP principles every day. But I'm definitely in the camp that thinks "pure FP" is the best solution to only a small set of problems.
To me, where Haskell fails is that it has very little to offer for these imperative problems. Which, for many applications, makes it almost worse than even a crappy old imperative lang.
>When I try to implement this sort of thing functionally it feels like a retrofit, and never as elegant as the simple imperative "do this, then this, then that".
Not sure what you mean by elegance. I am not interested in elegance. I am interested in the code being readable many weeks from now. I am interested in the guarantee that there are no hidden dependencies in the code I am looking into. I am interested in the guarantee that the computation won't end up as a mud ball of a dozen mutable variables and their transient state, which can go arbitrarily wrong in a million ways across half a dozen loops and cannot be examined in isolation. Those are the things I use Haskell for.
Also, I think a reason for the kind of difficulty you describe might be a lack of fluency in the vocabulary of FP: maps, folds, zips, filters, etc. Knowing these functions is one thing. Being fluent in their use, by combining them, is another.
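What I mean by fluency in combining them is being able to read a pipeline like this at a glance (a contrived example, names invented for illustration):

```haskell
-- filter, map, and fold composed into one pipeline:
-- keep the even numbers, square them, sum the results
sumSquaresOfEvens :: [Int] -> Int
sumSquaresOfEvens = foldr (+) 0 . map (^2) . filter even
```

Someone who knows each function individually can still stumble over the right-to-left composition; someone fluent reads the whole thing as a single sentence.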
The frequently encountered "Haskell is not readable" mindset stems, I think, from the fact that many people still somewhat new to Haskell know these functions but are not yet fluent in their use and common patterns (a fact they are often unaware of), and then try to read code written by people who are fluent in them...
I agree around the desired outcomes regarding readability and side effects, but it's not like other languages can't be used in this manner. If you're an OO-ist and you design good interfaces with good contracts, the 'million ways involving half a dozen loops' can be very quickly limited in scope to a couple of methods in a small class.
That's not to say that it always happens, and certainly some less, uh, experienced developers will write terrible code. But they'll write terrible code in Haskell as well (or no working code at all, which has been my experience at least once).
I'd consider myself very comfortable with maps, folds, filters and zips. I wouldn't consider myself super fluent with their use in Haskell, which definitely contributes to my own struggles with the language.
But for something like the canonical quicksort example in Haskell, it takes me quite a while to figure out whether it's actually implementing quicksort by the book, or whether it's implementing something that sounds like quicksort but isn't. This is because I have to map the declaration to the underlying implementation when it matters. Probably more often than not it doesn't matter at all as long as the code works and isn't causing problems (which is where the functional primitives are fantastic), but I do find that digging deeper into a more complex functional algorithm can be a difficult task.
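(For reference, the version I have in mind is the often-quoted two-liner:)

```haskell
-- The "canonical" Haskell quicksort: pick the head as pivot,
-- recursively sort the smaller and larger elements around it.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort [a | a <- xs, a < x] ++ [x] ++ qsort [a | a <- xs, a >= x]
```

It sorts correctly, but it builds new lists rather than partitioning in place and walks the tail twice per level - so whether it's quicksort "by the book" is exactly the kind of question that forces you to map the declaration back to an implementation.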
This is because you have to think about how every part might work behind the scenes. Am I doing something stupid like mapping my whole data set with a computationally intensive function? Is one of the innocent-looking predicates in my list comprehension actually some super intensive function that's had an operator overload? Am I going to be applying this predicate to the entirety of a massive list where in an imperative context I would have a really obvious switch case or set of ifs, a clear non-symbol invocation of reallyExpensiveFunction, and exited my loop early?
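To make that concrete with a contrived sketch (the "expensive" predicate here is cheap, just a stand-in): whether the predicate runs over the whole list depends entirely on what consumes the result, which is invisible at the point where the predicate is written.

```haskell
import Data.List (find)

-- stand-in for some hypothetical costly predicate
expensive :: Int -> Bool
expensive n = n `mod` 7 == 0

-- 'find' stops at the first match, so the predicate is only
-- applied up to the first hit - the lazy analogue of breaking
-- out of a loop early.
firstMatch :: [Int] -> Maybe Int
firstMatch = find expensive

-- Looks almost identical, but this consumer forces the
-- predicate on every element of the list.
countMatches :: [Int] -> Int
countMatches = length . filter expensive
```

Reading either definition in isolation tells you nothing about how many times `expensive` actually runs.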
It's a little difficult to describe I guess, but for me reading [what I believe to be] idiomatic Haskell code at a high level is reasonably straightforward, if somewhat slow due to the compactness, but actually understanding what that code is doing can be incredibly difficult.
In some ways it's the same type of issue I have with liberal use of recursion. It might be reasonably easy to describe a recursive algorithm, but really getting in and understanding it requires a much deeper understanding, often considerably more than its imperative cousin demands. There are real reasons why comp sci students struggle with it.