
I love Haskell the language, but Haskell the ecosystem still has a way to go:

* The compiler is slower than most mainstream language compilers

* Its ability to effectively report errors is poorer

* It tends to have 'first error, breaks rest of compile' problems

* I don't mind the more verbose 'stack trace' of errors, but I know juniors/noobs can find that quite overwhelming.

* The tooling, although significantly better than it was, is still poor compared to some other functional languages, and really poor compared to mainstream languages like C#

* This ^ significantly steepens the learning curve for juniors and those new to Haskell and generally gets in the way for those more experienced.

* The library ecosystem for key capabilities in 'enterprise dev' is poor. Many unmaintained, substandard, or incomplete implementations. Often trying their best to be academically interesting, but not necessarily usable.

The library ecosystem is probably the biggest issue. Because it's not something you can easily overcome without a lot of effort.

I used to be very bullish on Haskell and brought it into my company for a greenfield project. The company had already been using pure-FP techniques (functors, monads, etc.), so it wasn't a stretch. We ran a weekly book club studying Haskell to help out the juniors and newbies. So, we really gave it its best chance.

After a year of running a team with it, I came to the conclusions above. Everything was much slower -- I kept telling myself that the code would be less brittle, so slower was OK -- but in reality it sapped momentum from the team.

I think Haskell's biggest contribution to the wider community is its ideas, which have influenced many other languages. I'm not sure it will ever have its moment in the sun unfortunately.



I kind of agree that Haskell missed its window, and a big part of the problem is the academic-heavy ecosystem (everyone is doing great work, but there is a difference between academic and industrial code).

I’m personally quite interested in the Koka language. It has some novel ideas (functional-but-in-place algorithms, effect-handler-aware compilation, it uses reference counting rather than garbage collection) and is a Microsoft Research project. It’s starting to look more and more like an actual production-ready language. I can daydream about Microsoft throwing support behind it, along with some tooling to add some sort of Koka-Rust interoperability.


Koka is indeed incredibly cool, but:

It sees sporadic bursts of activity, probably when an intern is working on it, and otherwise remains mostly dormant. There is no package manager that could facilitate ecosystem growth. There is no effort to market and popularize it.

I believe it is fated to remain a research language indefinitely.


You’re probably right. I just think it’s the only real candidate for a functional language that could enter the zeitgeist like Rust or Swift did: it’s a research language that has been percolating at Microsoft for some time. A new language requires a major company’s support, and that company should build an industry-grade ecosystem for at least one problem domain.


I'm just now discovering Koka. I'm kinda blown away.

I'm also a little sad at this defeatist attitude. What you said might be true, but those things are solvable problems. Just requires a coordinated force of will from a few dedicated individuals.


There is a team of dedicated people working on Koka. They say the language isn’t production ready, and they don’t seem to be rushing. But I don’t think they’d bother with VSCode/IDE support if they didn’t feel like they were getting close.


Be the change you want to see?


Hear, hear!


A big if, granted, but if Roc delivers on its promises it could also be a pretty compelling language — maybe a bit too niche for super enterprisey stuff, but it could definitely have a zeitgeisty moment.


F# exists


I keep (mistakenly, it seems) expecting them to push F# along with their dotnet ML tooling more. While it is strictly typed, F# lets you mostly avoid making your types explicit, so exploring ideas in code feels closer to Python than to C#, while still giving you the benefits of a type system and lots of functional goodies.


Yups. And that's about it for F#. One can await the announcement that MSFT stops maintaining it.


People have been predicting the imminent demise of F# since its first version 20 years ago.


A lot of major C# features were first implemented in F#. I think of it as a place for Microsoft engineers/researchers to be more experimental with novel features that still need to target the CLR (the dotnet VM). Sometimes even requiring changes to the CLR itself. In that lens, it has had a very large indirect financial impact on the dotnet ecosystem.


> * It tends to have 'first error, breaks rest of compile' problems

`-fdefer-type-errors` will report those errors as warnings and fail at runtime, which is good when writing/refactoring code. Even better the Haskell LSP does this out of the box.
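A minimal sketch of how the flag behaves (module and binding names made up for illustration):

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}
module Main where

-- Ill-typed: with -fdefer-type-errors, GHC downgrades this to a
-- warning, so the rest of the module still compiles.
oops :: Int
oops = "not an Int"

main :: IO ()
main = do
  putStrLn "everything except 'oops' still runs"
  -- Only evaluating the ill-typed binding fails, and it fails at runtime:
  -- print oops  -- throws a "deferred type error" exception
```

This keeps the rest of a refactor compiling while the broken spots are still marked.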

> * The tooling, although significantly better than it was, is still poor compared to some other functional languages, and really poor compared to mainstream languages like C#

Which other functional programming languages do you think have better tooling? Experimenting lately with OCaml, I feel like Haskell's tooling is more mature, though OCaml's LSP starts up faster, almost instantly.


> Which other functional programming languages do you think have better tooling?

F#, Scala

> OCaml's LSP starts up faster

It was two years ago that I used Haskell last and the LSP was often crashing. But in general there were always lots of 'niggles' with all parts of the tool-chain that just killed developer flow.

As I state in a sibling comment, the tooling is on the right trajectory, it just isn't there yet. So, this isn't the main reason to not do Haskell.


Coincidentally, I started using the Haskell LSP around two years ago, and crashing is not one of the issues I've had with it.

Since you mention F#, and C# in your previous comment, are you on the Windows platform? Maybe our experiences differ because of platform as well. Using GHCup to keep compatible versions of GHC, Cabal and the LSP in sync probably contributed a lot to the consistent feel of the tooling.

I use the Haskell LSP for its autocompletion, reporting compile errors in my editor, and highlighting of types under cursor. There are still shortcomings with it that are annoyances:

* When I open up vim, it takes a good 5-10 seconds (if not a bit more) until the LSP is finally running.

* When a new dependency is added to the cabal file, the LSP needs to be restarted (usually I quit vim and reopen the project).

* Still no support for goto definition for external libraries. The workaround I have to use in this case is to `cabal get dependency-version` in a gitignored directory and use hasktags to keep a tags file to jump to those definitions and read the source code/comments.

The latter two have open GitHub issues, so at least I know they will get solved at some point.


I’ve been using F# in production for 4+ years and haven’t used Windows in like 15 years.

Speaking of LSP, the LSP standard is developed by Microsoft, so naturally any dotnet language will have good LSP support.


> Since you mention F#, and C# in your previous comment, are you on the Windows platform?

Linux Mint.


> Since you mention F#, and C# in your previous comment, are you on the Windows platform?

Since dotnet core (now dotnet 5+), the Microsoft version of dotnet has not been tied to Windows, outside a few exceptions like the old Windows UI libraries (WPF/WinForms) and stuff like WCF once they revived it.


The Haskell LSP crashes less often now than 2 years ago. It isn't perfect yet, but pretty usable for us.


Has Scala gotten better? Because I remember it being quite painful in the past (though probably mostly due to language issues more than anything).


The IntelliJ IDEA plugin for Scala is built by JetBrains, so it has official support. It has its quirks, but so does the Kotlin plugin.

Sbt is better than Gradle IMO, as it has a saner mental model, although for apps you can use Gradle or Maven. Sbt has had some awesome plugins that can help in bigger teams, such as Scalafmt (automatic formatting), Scalafix (automatic refactoring), Wartremover and others. Scalafmt specifically is best in class. With Sbt you can also specify versioning schemes for your dependencies, so you can make the build fail on problematic dependency evictions.

Scala CLI is also best in class, making it comfortable to use Scala for scripting – it replaced Python and Ruby for me: https://scala-cli.virtuslab.org/

Note that Java and Kotlin have Jbang, but Scala CLI is significantly better, also functioning as a REPL. Worth mentioning that other JVM languages hardly have a usable REPL, if at all.

The Scala compiler can be slow, but that's mostly when you use libraries doing a lot of compile-time derivation or other uses of macros. You get the same effect in similar languages (with the exception of OCaml). OTOH the Scala compiler can do incremental compilation, and alongside Sbt's support for multiple sub-projects and continuous testing, it works fairly well.

Scala also has a really good LSP implementation, Metals, built in cooperation with the compiler team, so you get good support in VS Code or Vim. To get a sense of where this matters, consider that Scala 3.5 introduces "best effort compilation": https://github.com/scala/scala3/pull/17582

I also like Kotlin; sadly, it's missing a good LSP implementation, and I don't think JetBrains is interested in developing one.

Also you get all the tooling that's JVM specific, including all the profilers and debuggers. With GraalVM's native image, for example, Scala fares better than Java actually, because Scala code relies less on runtime reflection.

I'd also mention Scala Native or ScalaJS which are nice to have. Wasm support is provided via LLVM, but there's also initial support for Wasm GC.

So to answer your question, yes, Scala has really good tooling compared to other languages, although there's room for improvement. And if you're comparing it to any other language that people use for FP, then Scala definitely has better tooling.


SBT is awful. I've never used Gradle, but if SBT is saner then I'm worried. This blogpost is a bit old, but still on-target: https://www.lihaoyi.com/post/SowhatswrongwithSBT.html


All build tools are terrible, and among the available build tools, Sbt is OK.

Let me give you an example: in Gradle, the order in which you specify plugins matters, due to side effects. Say you specify something complex, like Kotlin's multiplatform plugin, in the wrong order relative to something else; it can break your build definition. I bumped into this right out of the gate, with my first Kotlin project.

In Sbt this used to matter as well, but because Sbt's design has the build definition as an immutable, fairly declarative data structure, people were able to solve the problem (via auto-loading), and since then I've never bumped into ordering issues again.

There are other examples as well, such as consistency. In Sbt there's only one way to specify common settings, and the keys used are consistent. Specifying Java's targeted version, for example, uses the same key, regardless of whether the project is a plain JVM one or a multiplatform one.

Sharing settings and code across subprojects is another area where Gradle is a clusterfuck, whereas in Sbt it's pretty straightforward.

Don't get me wrong, Gradle doesn't bother me, and it has some niceties too. Other ecosystems would be lucky to have something like Gradle. But I find it curious to see so many people criticizing it when almost everything else is pretty terrible, with few exceptions.

---

Note that Li Haoyi has great taste, and Mill is looking good, actually. But he also likes reinventing the wheel, and the problem with that for build tools is that standardization has value.

Standardization has so much value for me that I would have liked for Scala to use Gradle as the standard build tool, and for Scala folks to work with Gradle's authors to introduce Scala as an alternative scripting language for it, despite me liking Gradle a lot less. Because it would've made switching and cross-language JVM development easier.


SBT has a learning curve, but it also has a nice ecosystem; for example, sbt-native-packager is better than its competitors in Maven or Gradle.


Sbt is too complex and powerful for its own good. I had a love-hate relationship with it, and now I try to avoid it if I can.

I like scala-cli a lot. It's very promising, but I think it's too new to be proclaimed best-in-class yet. Time will tell, and I'm rooting for it.


Well, there's Intellij IDEA with the scala plugin, and it's pretty good. I regularly debug my code in the IDE with conditional breakpoints, etc.

SBT still makes me want to throw the laptop through the window.


In the pre-LSP era, I worked as a novice Scala developer, and I did most of my Scala work in Emacs with ENSIME. It was pretty good. I imagine the language server is pretty usable by now.


Scala has made some horrible language compromises in order to live on the JVM, in my opinion.


I'd argue that Scala's "compromises" in general make it a better language than many others, independent of the JVM.

But we can talk specifics if you want. Name some compromises.


I haven't yet felt the need for third party tooling in OCaml. OCaml has real abstractions, easily readable modules and one can keep the whole language in one's head.

Usually people do not use objects, and if they do, they don't create a tightly coupled object mess that can only be unraveled by an IDE.


> Experimenting lately with OCaml, I feel like Haskell's tooling is more mature.

I feel like OCaml has been on a fast upward trajectory the past couple of years. Both in terms of language features and tooling. I expect the developer experience to surpass Haskell if it hasn't already.

I really like Merlin/ocaml-lsp. Sure, it doesn't have every LSP feature like a tool with a lot of eyes on it, such as clangd, but it handles nearly everything.

And yeah, dune is a little odd, but I haven't had any issues with it in a while. I even have some curve-ball projects that involve a fair amount of C/FFI work.

My only complaint with opam is how switches feel a little clunky. But still, I'll take it over pip, npm, all day.

I've been using OCaml for years now and compared to most other languages, the experience has been pretty pleasant.


My little experiments with OCaml have been pleasant thus far (in terms of language ergonomics), but on the tooling side Haskell (or rather I should say GHC) is pretty sweet.

For what I've had to do thus far, at one point I needed to step-debug through my code. In GHC land I reload my project in the interpreter (GHCi or cabal repl), set a breakpoint on a function name and step through the execution. With OCaml I have to go through the separate bytecode compiler to build it with debug symbols, and then I can navigate through program execution. The nice thing is that I can easily go back in the execution flow ("time-travel debugging"), but it's less ergonomic. I'm also less experienced with this flow, so don't consider my issues authoritative.
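The GHCi flow described above looks roughly like this (the function name is hypothetical):

```haskell
-- In cabal repl / GHCi:
--   ghci> :break processOrder   -- set a breakpoint on a function by name
--   ghci> main                  -- run until the breakpoint is hit
--   Stopped in Main.processOrder, ...
--   ghci> :step                 -- single-step through evaluation
--   ghci> :list                 -- show the surrounding source
--   ghci> :continue             -- resume execution
```

No separate debug build is needed; interpreted code in GHCi supports breakpoints directly.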

I don't have that much experience with dune (aside from setting up a project and running dune build), but one thing that confused me at first is that the libraries I have to add to the configuration do not necessarily match the opam package names.

The LSP is fast, as mentioned before, and it supports goto definition, but once I jump to a definition in one of my dependencies I get a bunch of squiggly lines in those files (it probably can't see transitive dependency symbols, if I were to guess). I can navigate dependencies one level deeper than I can with the Haskell language server, though.

I actually want to understand better how to build my projects without Dune, and will probably attempt to do so in the future, the same way I know how to manage a Haskell project without Cabal. It feels like it gives me a better understanding of the ecosystem.


Elixir’s tooling is awesome, in my opinion.


I’m curious what you think is awesome about its tooling? For me, mix is capable enough, but I consider the IDE story to be pretty lacking actually.


No lies detected. I love Haskell, but productivity is a function of the entire ecosystem, and it’s just not there compared to most mainstream languages.


Most of your comments boil down to two items:

- The Haskell ecosystem doesn't have the budget of languages like Java or C# to build its tooling.

- The Haskell ecosystem was innovative 20 years ago, but some newer languages like Rust or Elm have much better ergonomics due to learning from their forebears.

Yes, it's true. And it's true for almost any smaller language out there.


If you boil down my comments, sure, you could say that. But that's exactly why I didn't boil them down and used more words: because ultimately, that's not what they say.

The thread is "Why Haskell?", I'm offering a counterpoint based on experience. YMMV and that's fine.


Counterpoint: Elixir. While it sits on top of industrial-grade Erlang VM, the language itself produced a huge ecosystem of pragmatic and useful tools and libraries.


The Haskell community is also very opinionated when it comes to style, and some of the choices are not to everyone's taste. I'm mostly thinking of point-free style being seen as an ideal, and the liberal usage of operators.


Point-free and liberal use of operators are and have long been minority viewpoints in Haskell.

I say this as someone who prefers symbols, point free, and highly appreciates "notation as a tool of thought".


For a minority viewpoint, it seems fairly pervasive in the library ecosystem, at least to me. Then again, I hate most usage of point-free style, and I think the correct number of custom operators is exactly zero, so I might be particularly sensitive to it.


> The library ecosystem is probably the biggest issue.

I'd love to know which things specifically you're thinking about. For what we've been building, the "integration" libraries for postgres, AWS, etc. have been fine for us, likewise HTTP libraries (e.g. Servant) have been great.

I haven't _yet_ encountered a library problem, so am just very curious.


A few years ago I tried to use Servant to make a CAS[0] implementation for an academic project.

One issue I ran into was that Servant didn't have a proper way of overriding content negotiation: the CAS protocol specified a "?format=json" / "?format=xml" parameter, but Servant had no proper way of overriding its automatic content negotiation - which is baked deeply into its type system. I believe at the time I came across an ancient bug report which concluded that it was an "open research question" which would require "probably a complete rework".

Another issue was that Servant doesn't have proper integrated error handling. The library is designed around returning a 200 response, and provides a lot of tooling to make that easy and safe. However, I noticed that at the time its design essentially completely ignored failures! Your best option was basically a `Maybe SomeResponseType`, which in the `Nothing` case gave a 200 response with a "{'status': 'error'}" body. There was a similar years-old bug report for this issue, which is quite worrying considering it's not exactly rocket science, and pretty much every single web developer is going to run into it.
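The workaround described reads roughly like this (types and names are hypothetical, not the actual CAS project's code):

```haskell
{-# LANGUAGE DeriveGeneric #-}
import Data.Aeson (ToJSON)
import GHC.Generics (Generic)
import Servant (Handler)

-- Failure is smuggled into the body as a status field, so the
-- HTTP layer always reports 200, even when validation fails.
data TicketResponse = TicketResponse
  { status :: String
  , ticket :: Maybe String
  } deriving (Generic)

instance ToJSON TicketResponse

validateTicket :: Maybe String -> Handler TicketResponse
validateTicket Nothing  = pure (TicketResponse "error" Nothing)
validateTicket (Just t) = pure (TicketResponse "ok" (Just t))
```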

All of this gave the feeling of a very rough and unfinished library, whose author was more concerned with writing a research paper than with making useful software. Luckily those issues had no real-world implications for me, as I was only a student losing a few days on a minor project. But if I came across this during professional software development I'd be seriously pissed, and would probably write off the entire ecosystem: if this is what I can expect from "great" libraries, what do the average ones look like? Am I going to have to write every single trivial thing from scratch?

I really love the core language of Haskell, but after running into issues like these a few dozen times I unfortunately have trouble justifying using it to myself. Maybe Haskell will be great five or ten years from now, but in its current state I fear it is probably best to use something else.

[0]: https://en.wikipedia.org/wiki/Central_Authentication_Service


> Your best option was basically a `Maybe SomeResponseType`, which in the `Nothing` case gave a 200 response with a "{'status': 'error'}" body.

This seems to be an area where my tastes diverge from the mainstream, but I'm not a fan of folding errors together. I'd rather an HTTP status code correspond only to the actual HTTP transport part; if an API hosted there has an error to tell me, that should be layered on top.


Well, that's why errors have categories:

HTTP status ranges in a nutshell:

1xx: hold on

2xx: here you go

3xx: go away

4xx: you fucked up

5xx: I fucked up

(https://x.com/stevelosh/status/372740571749572610)


At work we had to do both of these things, and it is possible, if I'm remembering correctly.


I tried building a couple small projects to get familiar with the language.

One project did a bunch of calculation based on geolocation and geometry. I needed to output graphs and after looking around, reached for gnuplot. Turns out, it’s a wrapper around a system call to launch gnuplot in a child process. There is no handle returned so you can never know when the plot is done. If you exit as soon as the call returns, you get to race gnuplot to the temp file that gets automatically cleaned up by your process. The only way to eliminate the race is by sleeping… so if you add more plots, make sure you increase your sleep time too. :-/
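For contrast, a wrapper that returned the process handle would let callers wait instead of sleep. A sketch using System.Process (this is not the gnuplot package's actual API):

```haskell
import System.Process (createProcess, proc, waitForProcess)
import System.Exit (ExitCode)

-- Launch gnuplot on a script file and block until it exits, so any
-- temp files it reads are guaranteed to still exist while it runs.
runGnuplot :: FilePath -> IO ExitCode
runGnuplot script = do
  (_, _, _, ph) <- createProcess (proc "gnuplot" [script])
  waitForProcess ph
```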

Another utility was a network oriented daemon. I needed to capture packets and then run commands based on them… so I reached for pcap. It uses old bindings (which is fine) and doesn’t expose the socket or any way to set options for the socket. Long story short, it never worked. I looked at the various other interfaces around pcap but there was always a significant deficiency of some kind for my use case.

Now, I’m not a seasoned Haskell programmer by any means and it’s possible I am just missing out on something fundamental. However, it really looks to me like someone did a quick hack that worked for a very specific use-case for both of these libraries.

The language is cool but I’ve definitely struggled with libraries.


The project was a cloud-agnostic platform-as-a-service for building healthcare applications. It needed graph DBs, Postgres, all clouds, localisation, event streams, UIs, etc. I won't list where the problems were, because I don't think it's helpful -- each project has its own needs, and you may well be lucky where we were not. Certainly the project wasn't a standard enterprise app; it was much more complex, so we had some foundational needs that perhaps your average dev doesn't have. However, other ecosystems would usually have decent off-the-shelf versions, because they're more mature/evolved.

You have to realise that none of the problems were insurmountable; I had a talented team who could overcome any of the issues. It just became like walking through treacle trying to get moving.

And yes, Servant was great, we used that also. Although we didn't get super far in testing its range.


Probably referring to something like spring (for java), which is a one stop shop for everything, including things like integration with monitoring/analytics, rate-limiting, etc


Spring is probably the worst framework created, so I wouldn't list that as an example :/


I completely agree. I'm interested in making the Haskell tooling system better. I would welcome anyone with Haskell experience to let me know what you think would be the highest priority items here.

I'm also curious about the slowness of compilation and whether that's intrinsic to the design of GHC.


The Haskell Language Server (LSP) always needs help: https://github.com/haskell/haskell-language-server/issues?q=...

As for GHC compile times... hard to say. The compiler does a lot of things: type checking and inference for a complex type system, lots of optimizations, etc. I don't think it's just some bug or inefficient implementation, because resources have been poured into optimization and still are. But there are certainly ways to improve speed. For single issues, check the bug-tracker: https://gitlab.haskell.org/ghc/ghc/-/issues/?label_name%5B%5...

For the big picture, maybe ask on the discourse[1] or the mailing list. If you want to contribute to the compiler, I can recommend that you ask for a gitlab account via the mailing list and introduce yourself and your interests. Start by picking easy tickets -- GHC is a huge codebase, and it takes a while to get familiar.

Other than that, I'd say some of the tooling could use some IDE integration (e.g., VS Code plugins).

[1]: https://discourse.haskell.org/


thanks!


The highest priority is probably making real debugging tools. Right now, the only decent debugging tool is ghc-debug, which connects to a live process, and doing anything over that connection is slow, slow, slow. ghc-debug was the only thing able to resolve a long-standing thunk leak in one of my systems, and I know an unexplained thunk leak caused a previous startup I was at to throw away their Haskell code and rewrite it in Rust. In my case, it found the single place where I had said `Just $` instead of `Just $!`, which I had missed the three times I had inspected the ~15k line program. ghc-debug still feels like a pre-alpha though; go compare it to VisualVM for what other languages have.
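The difference between the two is easy to miss in review: `($)` stores the unevaluated expression as a thunk inside the `Just`, while `($!)` forces it to weak head normal form first. A sketch (the accumulator names are made up):

```haskell
-- Each call with ($) wraps an unevaluated (acc + n) thunk; over many
-- iterations these chain up into exactly the kind of leak described.
stepLazy :: Int -> Maybe Int -> Maybe Int
stepLazy n (Just acc) = Just $ acc + n
stepLazy _ Nothing    = Nothing

-- ($!) evaluates (acc + n) before boxing it, keeping the field flat.
stepStrict :: Int -> Maybe Int -> Maybe Int
stepStrict n (Just acc) = Just $! acc + n
stepStrict _ Nothing    = Nothing
```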

Also, I have found very little use for the various runtime flags like `+RTS -p`. These flags aren't serious debugging tools; I couldn't find any way to even trigger them internally at runtime around a small section, which becomes a problem when it takes 10 minutes to load data from disk when the profiler is on.

The debugging situation with Haskell is really, really bad and it's enough that I try to steer people away from starting new projects with the language.


> I would welcome anyone with Haskell experience to let me know what you think would be the highest priority items here.

Simplifying cabal, probably, though that's a system-level problem, not just a cabal codebase problem.


thanks!


The Brittany code formatter needs a maintainer. The previous one had to step away. It has a unique approach to code formatting that the ormolu/fourmolu formatters don't. There's a lot about its philosophy and such in the docs.

I like it better than the ormolu family because it respects your placement of comments and just formats the code itself. But it hasn't been maintained for a few years now.

https://hackage.haskell.org/package/brittany


Indeed, we used Brittany too, and I was pretty gutted when the support ended!


thanks!


Great vids by the way! I watch your channel.


> It tends to have 'first error, breaks rest of compile' problems

Sort of. It has a "failure at one stage prevents progress to the next stage" design, so a parse error means you won't type check (or indeed, continue parsing). See these proposals for some progress on the matter:

* https://github.com/haskellfoundation/tech-proposals/pull/63

* https://github.com/ghc-proposals/ghc-proposals/pull/333


I understand why it happens, but it's an absolute killer for refactoring.

I didn't mention refactoring in my list because it may just be personal experience: my style of coding is to write fearlessly knowing that I will also refactor fearlessly. So less upfront thinking, more brute force writing (on instinct) & aggressive refactoring. I find I get my solution much faster and it ends up being more elegant.

Having a parse error or a type inference error in another module causing all other inference to fail kills the refactoring part of that process where there are syntax/semantic errors everywhere for a period of time whilst I fix them up.

It's good to see the issue acknowledged and hopefully resolved in the future.

Additionally, it would be good to see some proper refactoring tooling. Renaming, moving types/functions from one module to another, etc.


> Additionally, it would be good to see some proper refactoring tooling. Renaming, moving types/functions from one module to another, etc.

Doesn't HLS have renaming, at least?


If you are willing / able to report these pain points in detail to the Haskell Foundation, this is going to be valuable feedback that will help orient the investments in tooling in the near future.


All bug reports are good. But is this not obvious? Do the Haskell developers not use other language ecosystems? This goes beyond "this edge case is difficult" and into "the whole tooling stack is infamously hard to work with." I just assumed Haskell, like Emacs, attracted a certain kind of developer that embraced the warts.


No, we use plenty of other stuff.

My $DAYJOB language:

* Can't build a binary

* Uses an inexplicable amount of memory.

* Has an IDE which constantly puts itself into a bad state. E.g. it highlights and underlines code with red even when I know it's a pristine copy that passes its tests. I periodically have to close the project, navigate to it in the terminal, run 'git status --ignored' and delete all that crap and re-open the project.

* Is slow to start up.

* Has a build system with no obvious way to use a 'master list' of version numbers. In our microservice/microrepo system, it is a PITA to try to track down and remove a vulnerable dependency.

* Has been receiving loads of praise over the last 18 months for starting to include stuff that Haskell has included for ages. How's the latest "we're solving the null problem" going?

What the GHC compiler does for me is just so much better at producing working software than $DAYJOB language + professional $DAYJOB IDE, that I don't think about the tooling.

If you want to put yourself in my shoes: imagine you're getting shit done with TypeScript every day, and some C programmers come along and complain that it's missing the bare minimum of tools: strace, valgrind and gdb. How do you even reply to that?


> If you want to put yourself in my shoes: imagine you're getting shit done with TypeScript every day, and some C programmers come along and complain that it's missing the bare minimum of tools: strace, valgrind and gdb. How do you even reply to that?

You tell them to strace/valgrind node whatever.js and instead of gdb use built-in v8 debugger as node inspect whatever.js


We do use other ecosystems, yes. I haven't really found the tooling for Haskell to be particularly obstructive compared to other languages. I've run into plenty of mysteries in the typescript, python, ObjC/Swift, etc. ecosystems that have been just as irritating (sometimes much more irritating), and generally find that while HLS can be a bit janky, GHC is very good and I spend less time scratching my head looking at a piece of code that should work but does something wild than in other languages.


I think tooling is something that is clearly on a good trajectory. When I consider what the Haskell tooling was like when I first started using it, well, it was non-existent! (and Cabal didn't even understand what dependencies were, haha!)

So, it's much, much better than it was. It's still not comparable to mainstream languages, but it's going the right way. So, I wouldn't necessarily take that as the killer.

The biggest issue was the library ecosystem. We spent a not-small amount of time evaluating libraries, realising they were not up to scratch, trying to build our own, or interacting with the authors to understand their plans. When you're trying to get moving at the start of a project, this can be quite painful. It takes longer to get to an MVP. That's tough when there are eyes on its success or not.

Even though I'd been using Haskell for at least a decade before we embarked upon that path, I hadn't really ever built anything substantial. The greenfield project was a complex beast on a number of levels (which was one of the reasons I felt Haskell would excel, it would force us to be more robust with our architecture). But, we just couldn't find the libraries that were good enough.

My sense was that a lot of academics are writing libraries. I'm not implying that academics write poor code; just that their motivations aren't always aligned with what an industry dev might want. Usually the gap is around simplicity and ease-of-use. And, because quite a lot of libraries were either poorly documented or their intent was impenetrable, it would take longer to evaluate them.

I think if the Haskell Foundation are going to do anything, they should probably write down the top 50 packages needed in industry, and then put some funding/effort towards helping the authors of existing libraries bring them up to scratch (or potentially develop their own). Perhaps even create a 'mainstream adoption style guide' that standardises library surfaces -- there's far too much variability. It needs a keen eye on what your average industry dev needs, though.

I realise there are plenty of companies using Haskell successfully, so this should only be one data point. But, it is a data point of someone who is a massive Haskell (language) fan.

Haskell has had a massive influence on me and how I write code. It's directly influenced a major open-source project I have developed [1]. But, unfortunately, I don't think I'll use it again for a pro project.

[1] https://github.com/louthy/language-ext


> The compiler is slower than most mainstream language compilers

Depends on which mainstream languages one compares with; there's always C++.

My project here has 50k lines of Haskell, 10k of C++, and 50k of TypeScript (code only, not comments). Counting user CPU time (1 core, Xeon E5-1650 v3 3.50GHz):

    TypeScript 123 lines/s
    Haskell     33 lines/s
    C++          7 lines/s


Can you clarify what "7 lines/s" means? Surely you are not saying that your 10k lines of C++ take more than 23 minutes to compile on a single core? Is it 10k lines of template spaghetti?

For comparison, I just compiled a 25k line .cpp (probably upwards of 100k once you add all the headers it includes) from a large project, in 15s. Admittedly, on a slightly faster processor - let's call it 30s.


It means exactly that, 23 mins on a single core for 10k lines!

(Insert "This is C++" Sparta image here.)

It is quite clean application code but it _uses_ some of the most template heavy open-source libraries around (e.g. Eigen, CGAL, boost) -- all hallmarks of the strength of C++.

If you look at other popular open-source C++ projects, such as Ceph or the Point Cloud Library (PCL), 8-hour single-core compile times are, unfortunately, normal.

I fully agree that C++ code bases that are more C-like compile much faster. But many typical C++ projects that use standard C++ features compile at 7 lines per second.

The same holds for Haskell: If you write very simple code and do not use common functionality (TemplateHaskell, Generics deriving), you'll also get a 20x compile time speedup.
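For a sense of what that "common functionality" looks like, here's a minimal, made-up example (record and field names are hypothetical, not from any project mentioned here). Deriving Generic builds a type-level representation of the constructor and fields, and generics-based instances (aeson, binary, etc.) traverse that representation at compile time, which is a large part of the compile-cost being discussed:

```haskell
{-# LANGUAGE DeriveGeneric #-}
module Main where

import GHC.Generics (Generic)

-- An ordinary-looking record. The Generic instance GHC derives here
-- encodes the whole shape of the type at the type level, and any
-- downstream generics-based instance walks that encoding during
-- compilation -- cheap per type, but it adds up across a code base.
data User = User { userName :: String, userAge :: Int }
  deriving (Show, Generic)

main :: IO ()
main = print (User "demo" 1)
```

Hand-writing the instances instead of deriving them generically is the usual way to claw that compile time back, at the cost of boilerplate.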


It is a shame that the article almost completely ignores the issue of the tooling. I particularly find the attitude in the following paragraph offensive, even if academically true:

  All mainstream, general purpose programming languages are (basically) Turing-complete, and therefore any programme you can write in one you can, in fact, write in another. There is a computational equivalence between them. The main differences are instead in the expressiveness of the languages, the guardrails they give you, and their performance characteristics (although this is possibly more of a runtime/compiler implementation question).
I decided to have a go at learning the basics of Haskell, and the first error I got immediately fazed me because it reminded me of the unhelpful compilers of the 80s. I have bashed my head against different languages and poor tooling enough times to know I can learn, but I've also done it enough times that I am unwilling to masochistically force myself through that gauntlet unless I have a very good reason to do so. The "joy" of learning is absent with unfriendly tools.

The syntax summary in the article is really good. Short and clear.


> All mainstream, general purpose programming languages are (basically) Turing-complete, and therefore any programme you can write in one you can, in fact, write in another.

That stuck out to me as well, I said out loud "that is a very Haskell thing to say". It would be more accurate to say that Turing Completeness means that any programme you write in one language, may be run in another language by writing an emulator for the first programme's runtime, and executing the first programme in the second.

Because it is not "in fact" the case that a given developer can write a programme in language B just because that developer can write the program in language A. It isn't even "in principle" the case, computability and programming just aren't that closely related, it's like saying anything you can do with a chainsaw you can do with a pocketknife because they're both Sharp Complete.

I shook it off and enjoyed the rest of the article, though. Haskell will never be my jam but I like reading people sing the virtues of what they love.


I think this is being taken as me saying “therefore you can write any programme in Haskell” which, while true, was not the point I was trying to make. Instead I was trying to reduce the possible interpretation that I was suggesting that Haskell can write more programmes than other languages, which I don’t think is true.

> computability and programming just aren’t that related

I … don’t think I understand


> > computability and programming just aren’t that related

> I … don’t think I understand

That's such a Haskell thing to say!

Ok, I'm teasing a bit now. But there's a kernel of truth to it: a good model of the FP school that forked off from Lisp into ML, Miranda, and Haskell is as an exploration of the question "what if programming was more like computability theory?", pursued fairly successfully, by its own "avoid success at all costs" criteria.

Computability: https://en.wikipedia.org/wiki/Computability_theory

> Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees.

Programming: https://en.wikipedia.org/wiki/Computer_programming

> Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks.

Related, yes, of course, much as physics and engineering are related. But engineering has many constraints which are not found in the physics of the domain, and many engineering decisions are not grounded in physics as a discipline.

So it is with computability and programming.

> “therefore you can write any programme in Haskell” which, while true

It is not. That's my point. One can write an emulator for any programme in Haskell, in principle, but that's not at all the same thing as saying you can write any programme in fact.

For instance, you cannot write this in Haskell:

http://krue.net/avrforth/

You could write something in Haskell in which you could write this, but those are different complexity classes, different programs, and very, very different practices. They aren't the same, they don't reduce to each other. You can write an AVR emulator and run avrforth in it. But that's not going to get the blinkenlichten to flippen floppen on the dev board.

Haskell, in fact, goes to great lengths to restrict the possible programs one can write! That's one of the fundamental premises of the language, because (the hope is that) most of those programs are wrong. Roughly the first half of your post is about things like accidental null dereferencing, which Haskell won't let you do.

In programming, the tools one chooses, and one's abilities with those tools, and the nature of the problem domain, all intersect to, in fact, restrict and shape the nature, quality, completeness, and even getting-startedness, of the program. Turing Completeness doesn't change that, and even has limited bearing on it.
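On the null point specifically, a minimal sketch (a made-up lookup example, nothing from the thread) of how Haskell makes the "forgot to check for null" program unwritable rather than merely discouraged:

```haskell
module Main where

import Data.Map (Map)
import qualified Data.Map as Map

-- Map.lookup returns Maybe Int, not Int. There is no way to use the
-- result as an Int without the compiler forcing you to say what
-- happens in the Nothing case -- the null-dereference bug is not a
-- program you can express.
lookupAge :: Map String Int -> String -> String
lookupAge ages name =
  case Map.lookup name ages of
    Nothing  -> name ++ ": unknown"
    Just age -> name ++ ": " ++ show age

main :: IO ()
main = do
  let ages = Map.fromList [("ada", 36)]
  putStrLn (lookupAge ages "ada")
  putStrLn (lookupAge ages "grace")
```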


> For instance, you cannot write this in Haskell:

> http://krue.net/avrforth/

From what I understand about the link, you likely meant that one cannot write an interpreter for avrforth in Haskell which reads avrforth source code and executes it on bare metal, because such an interpreter will need to directly access the hardware to be able to manipulate individual bits in memory, access registers, ports, etc. and all of this is not possible in Haskell today. If this is not the case, please feel free to correct me.

However, if my understanding is correct, I don't see how this is a problem of Haskell being a functional or "leaning more towards computability theory" language, rather than a mismatch between the language's model of computation and the hardware. Haskell can perform IO just fine by using the IO monad, which uses system calls under the hood to interact with the hardware. If a similar mechanism were made available to Haskell for accessing the hardware directly (e.g. a vector representing the memory, accessible within the IO monad), it should be possible to write an interpreter for avrforth in Haskell. This means the current constraint is a tooling/ecosystem limitation rather than a limitation of the language itself.
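A minimal sketch of that idea, assuming a toy IORef-backed "memory" rather than real memory-mapped IO (the address, the PORTB comment, and the peek/poke names are all made up for illustration):

```haskell
module Main where

import Data.IORef
import qualified Data.Map.Strict as Map
import Data.Word (Word8, Word16)

-- Toy "memory": a map from 16-bit addresses to bytes, living in IO.
-- A real port would need actual peek/poke on hardware registers,
-- which is the part GHC's runtime doesn't give you on an AVR.
type Memory = IORef (Map.Map Word16 Word8)

poke :: Memory -> Word16 -> Word8 -> IO ()
poke mem addr byte = modifyIORef' mem (Map.insert addr byte)

peek :: Memory -> Word16 -> IO Word8
peek mem addr = Map.findWithDefault 0 addr <$> readIORef mem

main :: IO ()
main = do
  mem <- newIORef Map.empty
  poke mem 0x25 0xFF   -- pretend 0x25 is PORTB: "set all pins high"
  b <- peek mem 0x25
  print b
```

The IO-monad plumbing is unremarkable; the missing piece is a runtime and FFI surface that maps those reads and writes onto the microcontroller's actual registers.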


Oh ok I get what you mean now, I thought you were being a bit more obtuse than that.

So my original intent with that paragraph was very different, but you're right that I was not very precise with some of those statements.

Thanks for taking the time to explain, you've definitely helped expand the way I've thought about this.


Nicely said, this in particular;

> In programming, the tools one chooses, and one's abilities with those tools, and the nature of the problem domain, all intersect to, in fact, restrict and shape the nature, quality, completeness, and even getting-startedness, of the program.

Language shapes thought, and hence, once the simpler imperative programming models (procedural, OOP) are learnt, it becomes quite hard for a programmer to switch mental models to FP. The FP community has really not done a good job of educating such programmers, who are the mainstay of the industry.


> It is a shame that the article almost completely ignores the issue of the tooling.

Mostly because while I found the tooling occasionally difficult, I didn't find Haskell particularly bad compared to other language ecosystems I've played with, with the exception of Rust, whose compiler errors are really good.

> The syntax summary in the article is really good

Thanks, I wasn’t so sure how to balance that bit.


> compared to mainstream languages like C#

Out of curiosity does this also hold true for F#?


F#’s tooling is worse than C# for sure, but it’s a big step-up from Haskell and has access to the .NET framework.

I listed C# because that’s the mainstream language I know the best, and arguably has best-in-class tooling.

Of course you have to be prepared to lose some of the creature comforts when using a more left-field language. But, you still need to be productive. The whole ecosystem has to be a net gain in productivity, or stability, or security, or maintainability — pick your poison depending on what matters to your situation.

I had hoped Haskell would pay dividends due to its purity, expressive type-system, battle tested-ness, etc. I expected us to be slower, just not as slow as it turned out.

Ultimately the trade off didn’t work for us.


Thank you for the answer. It’s exactly because of C#’s excellent tooling I was wondering if they had done similar for F#.

> The whole ecosystem has to be a net gain in productivity, or stability, or security, or maintainability — pick your poison depending on what matters to your situation.

I very much agree with you on this. I’ve worked in places where we used Typescript on the back-end because it was easier for a small team to work together (and go on vacations) while working in the same language even though there was a trade off performance wise. Ultimately I think it’s always about finding the best way to be productive.


F#'s biggest issue is C#. It benefits from Visual Studio and JetBrains Rider as best-in-class tools, but having to rely on the .NET framework means relying on an OO-first library ecosystem in your functional code, which can be clumsy and looks a little messy with the mix of camelCase and PascalCase functions.

Also, it has support for features that probably shouldn't be in the language, but are because of C# (interfaces and type-inheritance for example).

The compiler is slower than C#'s but arguably fast enough. And they made the odd choice of having the ordering of source files dictate the order of compilation, so you have to manually sort source files in the IDE. That's both a pain and makes it sometimes hard to visually find your source file, because they're not in alphabetical order.

I like F# but it doesn't have enough unique features over C# to make it worthwhile imho.

Disclaimer: The last time I wrote any F# was about 5 years ago. Things may be different now!



