
In most popular languages, the order of evaluation of both statements and expressions is specified. For your example, the insert query call is guaranteed to happen before the select query call in Java, C#, JS, Go, Python, Ruby, Rust, Common Lisp, SML. It is indeed unspecified in C, C++, Haskell, Scheme, OCaml.

While C and C++ are extremely commonly used, I would still say that the majority of popular languages fully define evaluation order. Even more so since most of these languages' designers considered unspecified order a flaw in C worth fixing. Rust is particularly interesting here, as it initially did not specify the order, but then reversed that decision after more real-world experience.
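To make the guarantee concrete, here is a minimal Python sketch (the function names are illustrative stand-ins, not a real database API): Python's language reference specifies left-to-right evaluation of sub-expressions, so the first call's side effect is always observed before the second's.

```python
calls = []

def insert_query():
    # stand-in for a DB insert; records that it ran
    calls.append("insert")
    return 1

def select_query():
    # stand-in for a DB select
    calls.append("select")
    return 2

# Python guarantees left-to-right evaluation of operands,
# so insert_query() always runs before select_query() here.
total = insert_query() + select_query()
print(calls)  # ['insert', 'select']
```

In C or C++ the analogous `insert_query() + select_query()` gives no such guarantee: either call may run first.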

> "Procedural code" (i.e. serial/sequential execution) is a strategy for dealing with that. It's particularly easy (give each step a single dependency; its "predecessor"), but also maximally inefficient.

> Forcing it by default leads to all sorts of complication (e.g. multithreading, "thread-safety", etc.). Making it opt-in gives us the option of concurrency, even if we write almost everything in some "Serial monad" (more likely, a continuation-passing transformer)

You yourself admit that serial code is a strategy for dealing with the complexity of the world - it doesn't complicate anything, it greatly simplifies things.

Threads and other similar constructs are normally opt-in and used either to model concurrency that is relevant to your business domain, or to try to achieve parallelism as an optimization. They are almost universally seen as a kind of necessary evil - and yet you seem to advocate for introducing the sort of problems threads bring into every sequential program.

Thinking of your program as a graph of data dependencies is an extremely difficult way to program, especially in the presence of any kind of side effects. I don't think I've ever seen anyone argue that it's actually something to strive for.
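A small Python sketch (purely illustrative) of why side effects make the dependency-graph view hard to reason about: two tasks with no data dependency between them may interleave their effects in either order.

```python
from concurrent.futures import ThreadPoolExecutor

log = []

def step(name):
    log.append(name)  # a side effect
    return name

# Serial: the order of effects is exactly the order of the code.
step("a")
step("b")
serial_log = list(log)

# As a "dependency graph": c and d don't depend on each other,
# so nothing constrains which side effect happens first.
log.clear()
with ThreadPoolExecutor(max_workers=2) as ex:
    fc = ex.submit(step, "c")
    fd = ex.submit(step, "d")
    fc.result()
    fd.result()
concurrent_log = list(log)

print(serial_log)              # ['a', 'b'] -- guaranteed
print(sorted(concurrent_log))  # ['c', 'd'] -- but the actual order is not guaranteed
```

The serial version needs no thought about interleaving; the graph version forces you to decide whether every pair of effects commutes.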

Even the most complete and influential formal model of concurrent programming, Sir Tony Hoare's CSP, is aimed at making the order of operations as easy to follow as possible, with explicit ordering dependencies kept to a minimum (only when sending/receiving messages).

> That's often acceptable when running on a single machine, and sometimes acceptable for globally-distributed systems too (e.g. that's what a blockchain is).

It's not just blockchain: TCP and QUIC, ACID compliance and transactions, at-least-once message delivery in pub-subs: all of these are designed to provide an in-order abstraction on top of the underlying concurrency of the real world. And they are extremely popular because of just how much easier it is to be able to rely on this abstraction.
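As a toy illustration of that in-order abstraction (not real TCP, just the reassembly idea it shares with sequence-numbered delivery): deliver payloads in sequence order, buffering anything that arrives early.

```python
def reorder(packets):
    """Deliver payloads in sequence order, buffering out-of-order
    arrivals -- a toy model of what a TCP receive window does."""
    buffer = {}
    next_seq = 0
    delivered = []
    for seq, payload in packets:
        buffer[seq] = payload
        # flush everything that is now contiguous
        while next_seq in buffer:
            delivered.append(buffer.pop(next_seq))
            next_seq += 1
    return delivered

# packets arrive out of order over the concurrent network...
print(reorder([(1, "b"), (0, "a"), (2, "c")]))  # ['a', 'b', 'c']
```

The application above this layer gets to pretend the world is sequential, which is exactly the appeal.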



> You yourself admit that serial code is a strategy for dealing with the complexity of the world - it doesn't complicate anything, it greatly simplifies things.

Serial code greatly simplifies serial problems. If you want serial semantics, go for it. Parent comments mentioned Haskell, which lets us opt-in to serial semantics via things like `ST`. Also, whilst we could write a whole program this way, we're not forced to; we can make different decisions on a per-expression level. Having rich types also makes this more pleasant, since it prevents us using sequential code in a non-sequential way.

However, there is a problem with serial code: it complicates concurrency. Again: if you don't want concurrency then serial semantics are fine, and you can ignore the rest of what I'm saying.

> Threads and other similar constructs are normally opt-in and used either to model concurrency that is relevant to your business domain, or to try to achieve parallelism as an optimization. They are almost universally seen as a kind of necessary evil

Such models are certainly evil, but not at all necessary. They're an artefact of trying to build concurrent semantics on top of serial semantics, rather than the other way around.

> and yet you seem to advocate for introducing the sort of problems threads bring into every sequential program.

Not at all. If you want sequential semantics, then write sequential programs. I'm advocating that concurrent programs not be written in terms of sequential semantics.

Going back to the Haskell example, if we have a serial program (e.g. using `ST`) and we want to split it into a few concurrent pieces, we can remove some of the data dependencies (either explicit, or implicit via `do`, etc.) to get independent tasks that can be run concurrently (we can also type-check that we've done it safely, if we like). That's easier than trying to run the serial code concurrently (say, by supplying an algebraic effect handler which doesn't obey some type class laws) and then crossing our fingers. The latter is essentially what multithreading and other unsafe uses of shared mutable state are doing.
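The comment's example is Haskell-specific, but the shape of the refactor can be sketched in Python (all names here are hypothetical): drop an ordering the problem never required, run the now-independent steps as tasks, and keep only the real data dependency at the join point.

```python
from concurrent.futures import ThreadPoolExecutor

def load_users():
    # independent task: pretend to fetch one data set
    return ["alice", "bob"]

def load_orders():
    # independent task: no data dependency on load_users
    return {"alice": 2, "bob": 1}

# Serial version: an implicit ordering the problem never required.
users = load_users()
orders = load_orders()

# Concurrent version: the two loads run as independent tasks;
# the only real dependency is the final combination step.
with ThreadPoolExecutor() as ex:
    fu = ex.submit(load_users)
    fo = ex.submit(load_orders)
    report = {u: fo.result().get(u, 0) for u in fu.result()}

print(report)  # {'alice': 2, 'bob': 1}
```

Python can't type-check that the split was safe the way Haskell can, but the structural move is the same: delete a dependency edge, not add locks.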



