
> Using modern techniques like higher order methods, scatter-gather arrays (similar to map-reduce), passing by value via copy-on-write, etc, code can be written that works like piping data between unix executables. Everything becomes a spreadsheet basically.
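A minimal sketch of that style, assuming Python and hypothetical names (`scatter`, `gather`, `square_sum` are illustrative, not from any library): data is split, a pure higher-order map runs over the pieces in parallel (arguments crossing the process boundary are effectively passed by value), and a reduce step gathers the partial results, much like piping between unix executables.

```python
# Sketch of a scatter-gather pipeline built from higher-order functions.
# Names (scatter, gather, square_sum) are illustrative, not from any library.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def scatter(data, n):
    """Split data into n roughly equal chunks (the 'scatter' phase)."""
    k, m = divmod(len(data), n)
    return [data[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n)]

def square_sum(chunk):
    """Pure per-chunk worker: no shared state, so it is safe to run in parallel."""
    return sum(x * x for x in chunk)

def gather(partials):
    """Merge the partial results (the 'gather' / reduce phase)."""
    return reduce(lambda a, b: a + b, partials, 0)

if __name__ == "__main__":
    data = list(range(1, 101))
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(square_sum, scatter(data, 4)))
    print(gather(partials))  # sum of squares 1..100 = 338350
```

Because each stage only sees its own inputs and produces a value, the stages compose like cells in a spreadsheet: change the worker function and the rest of the pipeline is untouched.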

I have built a fair number of multithreaded and distributed systems. I agree in principle with everything you wrote in that paragraph. In practice, I find such techniques add a lot of unpredictable overhead in the processing phase as memory is copied, and again when the results are merged, conflicts resolved, and so on. They also lock you into a particular kind of architecture where global state is periodically synchronized. So IMO the performance of these things is highly workload-dependent: for some workloads, it is barely an improvement over serial, imperative code and adds a lot of cognitive overhead; for others, it is the obvious and correct choice and pays huge dividends.
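The merge cost mentioned above can be made concrete with a toy word-count sketch (hypothetical `count_words`/`merge` names): each worker builds a private counter over its own copy of the data, and then a serial merge phase reconciles the overlapping keys — that reconciliation is overhead the parallel phase cannot hide.

```python
# Sketch of the merge/conflict-resolution overhead: workers operate on
# private copies, then a serial merge reconciles keys that appear in
# more than one partial result.
from collections import Counter

def count_words(chunk):
    """Per-worker phase: builds a private Counter, no shared state."""
    return Counter(chunk)

def merge(partial_counts):
    """Synchronization point: conflicting keys are resolved here, serially."""
    total = Counter()
    for c in partial_counts:
        total.update(c)  # O(keys) work per partial, done on one thread
    return total

chunks = [["a", "b", "a"], ["b", "c"]]
print(dict(merge(count_words(c) for c in chunks)))  # {'a': 2, 'b': 2, 'c': 1}
```

If the partial results are large or conflict-heavy, this gather step dominates and the approach buys little over a straight serial loop, which is the workload-dependence described above.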

Mostly I find the benefit to be organizational: these techniques provide clear interfaces between system components, which is handy for software developed by teams. But as you said, they require developers to understand the theory of operation, and that is not junior stuff.

Completely agree that we could use better language & compiler support for such things.


