F# always struck me as one of the most terribly underrated languages. I'm a lover of MLs in general, but F# lands on one of the sweet spots in PL space with ample expressive power without being prone to floating off into abstraction orbit ("pragmatic functional" is the term I believe). It is basically feature complete to boot.
My theory as an outsider: F# is strongly tied to the Windows world, the corporate world, where a conservative approach is always preferable, both in your tech stack and when you need to hire peons to code all day. The corporate world isn't leaving OOP anytime soon, because it's what 95% of engineers focus on: the silent majority who don't frequent HN or play with functional languages on their weekends. The corporate world runs on Java and C#.
If F# had been released into the open-source, flashy and bustling world of Linux and macOS developers, it would have had much greater success.
I know you can run F# on Linux, but just like running Swift, it feels like an outsider I wouldn't want to bet my business on if I were a Linux-only shop (which I am), however nice it feels. Also, a decade ago, when it had a chance to take root, Microsoft was still the Embrace, Extend, Extinguish company. It's not good enough to risk it, just like I'm not gonna use SQL Server for anything.
I am admittedly biased, because although I started programming recreationally in the LAMP-stack world of mid-aughts fame, a huge portion of my professional career has been in C# and the .NET stack.
I think you are grossly overestimating the degree to which the programming language you choose to use to solve a business problem constitutes "betting your business on" it. How would your business fundamentally change if your first 10k lines of code were in F# as opposed to Go, or Java, or Python, or TypeScript? These are also all languages I've been paid to use, and have used in anger, and with the exception of Java were all learned on the job. This comment in general has big "M$ bad" vibes and if you take those pieces out I'm not sure what the actual criticism is (maybe there is none)?
Aside from the EEE quip, I didn't catch any "M$ bad" vibe in GP's post.
I think the situation is clear-cut: until recently, you couldn't really run .NET on anything other than Windows, so the only people using it were those already invested in the ecosystem.
Among the people invested in the Windows ecosystem, many (most?) are large "non-tech" companies who hire people who mostly see their jobs as a meal ticket. These people don't have the inclination (for lack of curiosity, or time, or whatever reason, it doesn't matter) to look into "interesting" things. They mostly handle whatever tickets they have and call it a day. Fiddling with some language that has a different paradigm wouldn't be seen as a good use of their time on the clock by corporate, or of their time off work by themselves, since they'd rather spend that time some other way.
Thanks for coming to my defense. You are right. I'm not a big fan of Microsoft, but I also don't hate them.
It's pretty simple, really. I am a Linux engineer, and it is not a great investment of time and money for me to get into .NET. I knew F# was cool, but is it cool enough to want to feel like a second-class citizen, running it on the OS and platform it is not intended to run on? It makes no business sense at all.
> is it cool enough to want to feel like a second-class citizen, running it on the OS and platform it is not intended to run on?
I'm not a software engineer myself, nor a Windows person, so I don't know the specifics, but FWIW, my client runs production .NET code of the C# variety on Linux, connected to pgsql. It's some kind of web service for checking people's tickets (think airport gates where you scan your ticket to enter), so not exactly a toy, even though it's nowhere near unicorn-scale. It seems to work fine, in the sense that I've never heard anyone complain about anything related to this setup. No "works for me but is broken in prod" or "thingy is wonky on Linux but OK on Windows so have to build expensive workaround".
The devs run Visual Studio (not Code) on their Windows laptops. Code is then built and packaged by Azure Pipelines in a container and sent to AWS to be run on ECS.
But it never was a tier 1 platform during its growth. So most non-Windows devs put their focus on other platforms. There is nothing wrong with that.
I could learn .NET now, but I don't really have an interest in doing so at this point. Also, the devs you talk about are on Windows, using their tier 1 IDE (Visual Studio) that only runs on Windows, which is my point exactly.
That's a fair point. Tooling is an important aspect of a language, at least for me. I don't know what the VS Code on Linux experience is like for .net.
I tried to dip my toes into F# out of curiosity, and it worked by following some tutorial and VS Code. But it did seem somewhat bare bones. Although I'll admit I'm spoiled by Rust and IntelliJ.
Working for an org that bet on a mix of Scala, Python, and TypeScript, I can tell you which languages are being bet on for the rewritten services, and which language is getting in the way of getting things done.
Using it in a context where you need to make money, it's a bad bet. Fine for academic ideas and such things, but really hard to build a business around. And the tooling, community, libs, and docs show how it just can't punch at the same weight as other languages when, at the end of the day, you need to get shit done.
We have both Akka and http4s in use, and are migrating the Akka services to http4s. We need to do more things more quickly with fewer hands. TS and Python are just easier and better tooled for the majority of our (CRUD) work.
dotnet compiles in general are slow AF on Macs, and F# really stood out as the slowest last time I gave it a kick.
F# looks wonderful, but unless you’re already in the MS ecosystem, dotnet just feels bad and out of place. And I guess if you are already in the MS ecosystem you’re using C#.
> This comment in general has big "M$ bad" vibes and if you take those pieces out I'm not sure what the actual criticism is (maybe there is none)?
As with almost all "vibes"-related comments, this doesn't hold up. There isn't any criticism; just the suggestion that the sort of corporate, process-heavy companies that major on Microsoft programming languages will be the last ones to want to try functional programming languages.
Would agree with this. I don't think the language choice is as massive a bet on the business as people think. I've seen much more niche and ancient langs without an ecosystem (no libraries, no SDKs for popular products, etc.) build very profitable products. I would see those languages as a much greater risk.
As long as it has a base capability (libraries, maturity), and people who join can be productive with it in a month or so, the risk is pretty low. As for F#, most .NET developers, and IMO even Node developers, will get used to it relatively quickly. From my anecdotal experience with a number of languages, it's probably one of the easiest of the FP langs to onboard, balancing the FP methodology with being practical/pragmatic. It has a large ecosystem via the .NET platform and supplements it with FP-specific F# libraries where it's pragmatic to do so.
When it's time to scale out your team and now you're trying to hire dozens of F# developers it starts to matter a lot more. You can throw a rock and hit a Java developer. I hate the language, but finding other people who can be productive in it is trivial compared to F#.
One of the common threads among companies I've worked at which I would consider "successful" is that they don't really classify developers based on what languages they've used before. If you're a good programmer you can become a net positive in almost any language in a few weeks to a few months, and productive within the first year. Some of the worst companies I've worked for were the type who would toss a resume in the trash because they had 1 year of experience in $LANG from a few years ago and not the "have used it for 3 of the last 3 years" they wanted.
I think it depends on what you mean by "successful". Surely multi-billion dollar financial organizations are by at least some definition successful. They are a complete shit show from a tech standpoint. They are so large they cannot effectively manage specialist developer staff outside of very narrow niches. Standardization when you've got thousands of developers across hundreds of products matters. Maybe some "successful" startup can make things work when they are small. But you'll find they start to standardize when they hit real scale.
Totally agree; F# really feels like a language designed by someone who really does understand the theory and why it's important, but also wanted to make the language realistic to use in industry.
When I was at Jet and Walmart, I never really felt "limited" by F#. The language was extremely pleasant to work with, and, I think most importantly, it was opinionated. Yeah, you can write Java/C#-style OOP in F# if you really want, but it's not really encouraged by the language; the language encourages a much more Haskell/OCaml-style approach to writing software.
Even calling C# libraries wasn't too bad. MS honestly did a good job with the built-in .NET libraries, and most of them work without many (or any) issues with the native F# types. Even third-party libraries would generally "just work" without headache. .NET has some great tools for thread-safe work, and I'm particularly partial to the Concurrent collections (e.g. ConcurrentDictionary and ConcurrentBag).
I also think that F# has some of the best syntax for dealing with streams (particularly with the open source AsyncSeq package); by abusing the monadic workflow ("do notation" style) syntax, you can write code that really punches above its weight in terms of things it can handle.
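For anyone who hasn't seen the style being referred to, here's a minimal Haskell sketch of the monadic "do notation" idea, using plain lists as a stand-in for a stream type (my toy example, not AsyncSeq itself); F#'s `seq { ... }` and `asyncSeq { ... }` blocks read much the same way:

```haskell
-- The list monad as a stand-in for a stream type. Each `<-` draws
-- elements from a "stream", and the whole block describes a derived
-- stream, much like an F# seq/asyncSeq computation expression.
pairsUnder :: Int -> [(Int, Int)]
pairsUnder n = do
  x <- [1 .. n]   -- pull an element from the first stream
  y <- [x .. n]   -- then from a second, dependent stream
  return (x, y)   -- yield a value into the resulting stream

main :: IO ()
main = print (pairsUnder 3)
-- [(1,1),(1,2),(1,3),(2,2),(2,3),(3,3)]
```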
Now, on the JVM side you have something like Scala. Scala is fine, and there are plenty of things to love about it, but one thing I do not love about it is that it's not opinionated. This leads to a lot of "basically just Java" code in Scala, and people don't really utilize the cool features it has to offer (of which there are many!). When I've had to work with Scala, I'm always that weirdo using all the cool functional programming stuff, and everyone else on my team just writes Java without semicolons.
But the basic point of the article is reasonable: part of the reason that Scala has gotten more traction is that Java is just such a frustrating language to work with. Scala isn't perfect, but being "better than Java" is a pretty low bar to clear.
C# is honestly not too bad of a language; probably my favorite of the "OOP-first" languages out there. The generics make sense, the .NET library (as stated before) is very good, lambdas work as expected instead of some bizarre spoof'd interface, there are some decent threading utils built into the language, and it's reasonably fast. Do I like F# more? Yeah, I think that the OCaml/Haskell style of programming is honestly just a better model, but I can totally sympathize with a .NET shop not wanting to bite the bullet on it.
Martin Odersky is just a very nice guy, and I get the impression that he isn't keen on saying "no", which is how you end up with a language that allows you to use XML tags inline (no longer supported in Scala 3).
The "opinionated" Scala are the Typelevel and Zio stacks, which are very cool.
The problem with the "better Java" approach is that although it has helped Scala's growth a lot, it has also made it susceptible to Kotlin. The Scala code that doesn't use the advanced type magic can be straightforwardly rewritten in Kotlin instead. Kotlin also stops your bored developers from building neat type abstractions that no one else understands.
People who use Scala only as a "better Java" can now use Kotlin as a "better 'better Java'".
Yeah, and I think that's why a language like Clojure, which is substantially more opinionated than Scala, has been relatively unfazed by Kotlin. Clojure is much more niche than Scala, and its adoption has been much more of the "slow and steady" kind.
People who are writing Clojure likely aren't looking at Kotlin as an "alternative"; while they superficially occupy a similar space, I don't think Clojure has any ambitions of being a "better Java", but rather a "pretty decent lisp that runs on the JVM with some cool native data structures and good concurrency tools". I do like it better than Java, but that's because I like FP and Lisp a lot; if I needed a "better Java" right now, I would unsurprisingly probably reach for Kotlin.
Yep, Scala got a lot of attention because you could kinda write it like Java, and Java hadn't changed much in a very long time - people were looking for a "better Java" - and Clojure obviously isn't that.
Kotlin's whole point is being a "better Java", so it's going to grab the people who went to Scala for a "better Java". Also, Java itself now actually has a sane roadmap and methodology for getting better - with the preview/incubating JEPs, people can see what is coming down the pipeline.
Yep, I don't dispute anything you said there, I think that's pretty consistent with what I said.
Clojure makes no claims of being "Java++". It's a lisp first and foremost that focuses on embracing the host platform and being broadly compatible with existing libraries and strong concurrency protections.
You can use eventlog traces, from Debug.Trace [1]. You can put (traceEvent $ "look: " ++ show bazinga) everywhere you need and then stare at the log to your heart's content.
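A minimal, self-contained sketch of what that looks like (my example, not from any particular codebase):

```haskell
import Debug.Trace (traceEvent, traceEventIO)

-- traceEvent :: String -> a -> a   -- tag a pure value with an event
-- traceEventIO :: String -> IO ()  -- emit an event from IO code

step :: Int -> Int
step x = traceEvent ("look: " ++ show x) (x * 2)

main :: IO ()
main = do
  traceEventIO "starting"
  print (map step [1 .. 5])
  traceEventIO "done"
```

Run the program with `+RTS -l` (older GHCs also need it linked with `-eventlog`) and the events land in `<program>.eventlog` next to the RTS's own scheduler and GC events; a tool like ghc-events can then dump the log for staring at.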
Not everything is tracing and debugging, sometimes you really need to output intermediate results for "normal", "production" purposes. One could still abuse Debug::Trace, but that would really be ugly.
I also object to that "everywhere". It is far easier to just dump an extra 'print' line somewhere inside a for-loop than into a `foldl (*) 1 $ map (+ 3) [17, 11, 19, 23]`. And that is an easy one...
With eventlog you have a lightweight profiling and logging tool for "normal", "production" purposes. You can correlate different metrics of your program with your messages. This is not an abuse of Debug.Trace (notice the dot); it is the normal state of affairs, regularly used, and the RTS is optimized for that use case.
I develop with Haskell professionally. That foldl example of yours is pretty rare and is usually dealt with via QuickCheck [1], mother of all other quickchecks. Usually, the trace will be outside of the foldl application, but you can have it there in the foldl argument, of course.
Eventlog traces are RTS calls wrapped in unsafePerformIO, you are right. The trace part of the eventlog is optimized for, well, tracing, and is very, very lightweight. It is also safe from races, whereas a simple unsafePerformIO (putStrLn $ "did you mean that? " ++ show (a,b,c)) is not.
In my opinion, eventlog traces make much better logging than almost anything I've seen.
Right now, developing with C++, I miss the power of Haskell's RTS.
> I develop with Haskell professionally. That foldl example of yours is pretty rare and is usually dealt with via QuickCheck [1], mother of all other quickchecks. Usually, the trace will be outside of the foldl application, but you can have it there in the foldl argument, of course.
So actually not everywhere. And QuickCheck does something else entirely.
You missed the word "usually". You really, really do not need a print within the body of a loop of any tightness. But you can have it.
The foldl example of yours should be split into property checking and controlling for the expected properties of the input. The first part is done via QuickCheck and the second part is usually done with assertions and/or traces.
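Concretely, for the foldl one-liner upthread, the property-checking half might look something like this (my sketch):

```haskell
import Test.QuickCheck

-- The pipeline from the example, given a name so it can be tested.
pipeline :: [Integer] -> Integer
pipeline = foldl (*) 1 . map (+ 3)

-- Pin down what the fold is supposed to compute: it should agree
-- with the obvious specification on arbitrary inputs.
prop_pipeline :: [Integer] -> Bool
prop_pipeline xs = pipeline xs == product (map (+ 3) xs)

main :: IO ()
main = quickCheck prop_pipeline   -- +++ OK, passed 100 tests.
```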
But nothing precludes you from having your trace there, inside the foldl argument. It is clearly the wrong place to have it, but you can still have it there.
So I repeat, you can have your traceEvents everywhere.
You're thinking of Haskell. F# was modelled after OCaml, which doesn't attract monad transformer stacks, and doesn't have a zoo of compiler extensions.
Well, they aren't actually compiler extensions but preprocessor extensions (PPX).
And I would really like it if OCaml had the possibility of naming the needed PPXs in the source file (like Haskell's compiler extensions), so as not to have to read the Dune (or whatever build system is used) file to find out where `foo%bar` or `[@@foo]` is coming from and what it is doing. But at least the usage of `ppxlib` nowadays should make PPXs "compose", aka not step on each other's toes.
I haven’t used it for some time, but OCaml certainly used to have a zoo of incompatible compiler extensions. Circa 2008 or so I once hit on the brilliant idea of using protobufs to get two mutually incompatible halves of an OCaml program to talk to one another, only to find that this required yet another compiler extension to work.
I'm pretty sure F# was modeled on both. There are some definite "Haskell-isms" in F#; if nothing else, monads are typically done in something more or less equivalent to the `do` notation (an `async` or `seq` block), for example.
The syntax superficially looks a lot like OCaml, but it doesn't do the cool stuff with OCaml functors and modules; you write it a lot more like Haskell most of the time.
Don Syme began with a port of Haskell to .NET, but SPJ convinced him that this was a bad idea, so he chose OCaml instead ("The Decision to Create F#", page 9).
Is superoptimization related to supercompilation, or are these unrelated compiler optimization techniques? From what I have read, it seems like Turchin [1] et al. are trying to prove equality using some sort of algebraic rewriting system (i.e., the "intensional" approach), whereas superoptimization uses an extensional technique to first try proving disequality by testing on a small set of inputs, then applies formal verification to the remaining candidates.
Massalin (1987) [2] calls the first phase "probabilistic execution" and claims that nearly all of the functions which pass the PE test also pass the more rigorous Boolean verification test. Can you give any insight into the benefits of TT over more "automated" optimizations? I am curious if MLTT/HoTT is more suitable for certain compiler optimizations or offers additional expressive power for proving equivalence, or is the benefit mostly ergonomics?
Supercompilation is "top down": beginning with the given program code and trying to optimise it until it can't find any more improvements. It basically runs an interpreter to evaluate the code, but unknown values (e.g. runtime data) must be treated symbolically. It's similar to aggressively inlining (essentially "calling" functions), and aggressively optimising static calculations.
Supercompilation is good at removing unused scaffolding and indirection, e.g. for code that's written defensively/flexibly, supporting a bunch of fallbacks, override hooks, etc. A common problem with supercompilation is increasing code size, since it replaces many calls to a single general-purpose function/method with many inlined versions (specialised to various extents).
Superoptimisation is "bottom up": generating small snippets of code from scratch, stopping when it finds something that behaves the same as the original code.
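A toy before/after, to make the "top down" flavour concrete (my illustration, not output from a real supercompiler):

```haskell
-- Input: general-purpose pieces glued together, building an
-- intermediate list at runtime.
before :: [Int] -> Int
before xs = sum (map (+ 1) xs)

-- Roughly what a supercompiler aims to produce by symbolically
-- evaluating the composition: one fused loop, no intermediate list.
after :: [Int] -> Int
after []       = 0
after (x : xs) = (x + 1) + after xs

main :: IO ()
main = print (before [1, 2, 3], after [1, 2, 3])  -- (9,9)
```

A superoptimiser wouldn't start from `before` at all; it would enumerate candidate instruction sequences for a small fragment, discard those a quick test run distinguishes from the original, and formally verify whatever survives.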
For sure, versioning dependencies and managing dependencies are among the hardest problems in software engineering. At least those are the two I find the most aggravating. It seems like almost nobody can get them right, even though component-based software engineering, SoA, etc. are, I think, generally extremely good ideas. The execution is pretty crummy pretty much everywhere.
With all that said, my sense is that hardware engineering has its own heap of Sisyphean problems and complexities. I definitely would not go back to working on hardware engineering problems like I did super early in my career (a mix of embedded firmware, device drivers, PCB design, and web development). I shudder at the thought of ever working with anything Verilog/VHDL, Xilinx, or SPICE ever again, or debugging PCB designs on the bench top in the lab with an oscilloscope and a logic probe. At least in school I ran more than a few bodge wires to patch a mistake in a PCB design iteration. Maybe in some sense, it's a blessing that those linear systems theory abstractions fall apart utterly in RF engineering problems, and one has to contend with the fact that all circuits radiate. At least circuits that still contain the magic smoke.
Maybe I'm different, but I used Kleinberg/Tardos and CLRS in my undergraduate algorithms class, and I preferred CLRS to KT and the other alternatives (Algorithm Design Manual, etc.), though KT was great too. I've heard from others that KT was better for them for learning how to actually design algorithms as well.
I had a similar experience. It may be that I have a math background and CLRS feels more like a math book than do other algo texts, but it was a breath of fresh air for me after looking at other popular texts that felt so much more handwavy. It felt so much more concise and the math arguments felt more compelling to me anyway. I mean, I am glad though that different texts exist because different people seem to benefit from different texts.
The construction of MIR titles reminds me a lot of older Springer GTM and Grundlehren der Mathematischen Wissenschaften series titles. I have a number of Springer titles I've picked up over the years and the older printings were beautifully typeset and bound with very good paper. I try to find the older printings of Springer books when I can when chasing down a copy for my personal library.
Springer books were beautifully typeset until they started having authors do their own typesetting. That's one of the mixed blessings of LaTeX.
Even into the early 2000s, Springer would often make the first printing of a textbook sewn-bound, then subsequent printings were cheaply perfect-bound. I stopped buying new Springer books online around 2005, because I wanted to check the binding before buying.
Interesting, I must have lucked out quite often and gotten the first printing back in the mid-to-late 2000s, since virtually all of the print Springer books I bought back then were sewn-bound.
I find coffee products to be a frustrating product category. At least for me, I have adverse reactions to certain blends of coffees--particularly nausea, coldness, and headaches--while other ones don't cause trouble at all, so I have to try different coffee blends with a bit of caution. I notice that any amount of coffee past the second cup in the morning doesn't really add anything to my alertness, productivity, or ability to pretend to be a morning person--with the bulk of the benefit being from the first cup--but anything past the second cup makes it harder to fall asleep later at night. I can only imagine how one gets so far as to put down a pot of coffee a day (and then some!) for years on end.
The Blind Spot: Lectures on Logic by Jean-Yves Girard
For context, Girard is a mathematical logician, philosopher, and co-discoverer of the type system System F (Haskell, ML, etc.). The book is a monograph on proof theory, and I was interested in learning more about affine and linear logic to deepen my understanding of Rust and other language ecosystems focused around the ability to explicitly model resources. However, along the way, I learned some other great things: (1) continental philosophy is deep and cool; (2) mathematical writing can be simultaneously rigorous, clear, and hilarious; and it reinforced (alongside Alain Connes's Noncommutative Geometry, and various French philosophers) (3) French academic writing is both frustratingly and delightfully idiosyncratic. Girard writes polemically about other aspects of knowledge, mathematics, etc., and there's heaps of dry humor and anecdotes throughout the book. It's a hard book to read even by pure mathematics standards--a topic not exactly known for being a brisk read--but it was worth it just for the side discoveries alone.
I did a double take when I saw "continental philosophy." Usually I don't expect that to be mixed with math, since its roots are more in the humanities.
How exactly does continental philosophy factor into these other topics?
Jean-Yves Girard's thinking evolved from analytic philosophy to continental philosophy over the course of his life, and in this book in particular, some of his asides and polemics critique, from a continental perspective, how we conceptualize truth, knowledge, and logic, and how other fields conceptualize that stuff. The fact that this kind of stuff turned up in a mathematical logic book of all places really struck me. It put me on a path to taking a more serious interest in the continental school, and to reading more of it (currently chewing on Gilles Deleuze and Bruno Latour). It's a very unusual and difficult book that led me to very different (compared to what I am used to, anyway) modes of thinking.
That is quite fascinating - I was big into continental philosophy in college, especially Deleuze. I might have to check that out!
If you're interested in this stuff, I think you might enjoy Manuel DeLanda - he gives Deleuze a sort of analytic treatment. His book "A New Philosophy of Society: Assemblage Theory and Social Complexity" takes a lot of ideas from A Thousand Plateaus, but makes them clearer and more accessible.
Another suggestion if interested in Deleuze, and more specifically A Thousand Plateaus: someone named Brent Adkins has a great companion for the work. Unless you have extensive experience with the thinkers & ideas referenced in ATP (e.g. Freud, Marx, Nietzsche), I found secondary sources very helpful for setting up context around certain terms and concepts.
I discerned a similar ontology to the OP's over the years in different hobbies too. My sense is that the quadrant one leans into can change with time and circumstance too. In my case, I've had a whole heap of nerd hobbies in my life, among them game collecting and retrogaming.
I was a game collector even when I was a child, back in the heyday of console generations 4 and 5, all the way into my mid-twenties, and I've been a game player from then up to the present as well. When I was much younger I used to love collecting original hardware and media, complete in box and everything, and I kept everything I got when it was new in absolutely pristine condition (I was a very atypical child), but as I got older I grew tired of the collecting aspect of video games and sold everything off. At some point it dawned on me that I was just buying stuff and putting it in a box (falling into the kit quadrant) and not really playing the games at all. After realizing all that, and once I had acquired everything I wanted (I tended to collect a mix of only games I liked + genre instead of by system, so my collecting requirements were much smaller and easier to achieve), the fun was over and I cashed out some years later. With retrogaming I now vastly prefer a Raspberry Pi and a gamepad in the living room. Fortunately I was able to get in and out of the game collecting hobby when it was a fairly cheap hobby to participate in (i.e. when CIB copies of classic games were tens of dollars at most).
I was also an avid tabletop gamer who was into the playing and collecting aspects of Magic: The Gathering and other TCGs, particularly Legend of the Five Rings, Vampire: The Eternal Struggle, and VS System. I started playing MTG way back in the ancient days and had a particular taste for vintage. I also played standard, booster draft, legacy, and extended, but vintage was always my favorite. Back then vintage was accessible on a high school budget with planning and dedication, and wasn't the rich man's game it is now, at least if one is not using proxies. For a long time I was into both the competitive side of the game and the collecting side of the game. Over time I grew tired of the playing aspect of Magic: The Gathering and vastly preferred the collecting aspect of it, before life priorities changed entirely and I lost interest in TCGs completely.
I don't really do the collecting/kit-hounding side of any hobby anymore; I prefer the actual doing of the thing instead. But I am grateful I was able to get in, enjoy it fully, and get out of the collecting aspect of both video games and MTG long before both became impossibly expensive to participate in, though it is fun to dip in and see what is going on with those things from time to time.
Not stated in the article, but I suspect that another big reason a lot of hobbyists obsess over the gear and kit aspect of a hobby is that buying kit is a substitute for actually doing the hobby when one lacks the time or energy for it. In some sense, buying kit materializes fantasies about doing the thing, without one having the time or the place to do it.
Type inference occasionally bites one's hand when pushing data across an FFI boundary. In one instance I was writing some graphics code for a project, and for some reason the colors in the rendering model were coming out all wrong on the screen (TL;DR: it ended up looking like two of the color channels were missing from some texture maps). It turned out that Rust had inferred the type of the buffer as a vector of f64s instead of a vector of f32s. Writing the type out explicitly fixed the problem promptly.
Lesson learned: sometimes type inference fails, so always annotate your types at FFI boundaries!
Yeah, it's important to annotate anything that interacts with the outside world, because that code/API isn't available for the compiler to reason about before run-time. Same goes for HTTP endpoints (on both the client and the server).
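For what it's worth, Haskell has the same default-to-double trap, so the habit transfers; a minimal sketch (hypothetical buffer names):

```haskell
import Foreign.C.Types (CFloat)

-- Left unannotated, floating-point literals default to Double, just
-- as the Rust buffer above was silently inferred as a vector of f64s:
defaulted = [0.25, 0.5, 1.0]   -- defaulting makes this [Double]

-- At the FFI boundary, pin the width down explicitly so the buffer
-- matches what the other side expects (CFloat is a 32-bit float):
colorChannels :: [CFloat]
colorChannels = [0.25, 0.5, 1.0]

main :: IO ()
main = print (defaulted, colorChannels)
```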
I know two people who worked through SICP. One of them had so much fun with it that they worked through it twice, cover to cover, in their free time. In both cases they reported to me that SICP was quite a revelation, since Scheme is about as simple a practical programming language as one can get, other than maybe stack languages like Forth. That is, while [un-|simply-]typed lambda calculus and Turing machines are simpler models of computation, I wouldn't exactly call them practical models of computation. In one case they said that it gave them real purchase on what computation really means: what is it really about once one strips off all the type theory, build tools, and dependency hell. My sense, though, is that for most people SICP is better left until after a couple of years of programming experience, so they can appreciate it.