> In the beginning (that is, the 90’s), developers created the three-tier application. [...] Of course, application architecture has evolved greatly since the 90's. [...] This complexity has created a new problem for application developers: how to coordinate operations in a distributed backend? For example: How to atomically perform a set of operations in multiple services, so that all happen or none do?

This doesn't seem like a correct description of events. Distributed systems existed in the 90s and there was e.g. Microsoft Transaction Server [0] which was intended to do exactly this. It's not a new problem.
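To make the problem concrete: the "all happen or none do" property is usually approximated today with a saga, i.e. running compensating actions in reverse on failure rather than a true distributed atomic commit (which is what MTS offered). A minimal sketch, with hypothetical `debit`/`credit` steps standing in for calls to two services:

```python
# Saga-style sketch: run steps across services; on failure, run the
# compensations of completed steps in reverse. This gives "all happen
# or none do" in the eventual-consistency sense, not a real 2PC commit.

def run_saga(steps):
    """steps: list of (action, compensation) pairs, each a zero-arg callable."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):
            compensation()  # best-effort rollback of completed steps
        raise

# Hypothetical example: debit service A, credit service B (which is down).
ledger = {"a": 100, "b": 0}

def debit():
    ledger["a"] -= 10

def credit():
    raise RuntimeError("service b is down")

def undo_debit():
    ledger["a"] += 10

try:
    run_saga([(debit, undo_debit), (credit, lambda: None)])
except RuntimeError:
    pass

# The debit has been compensated: ledger is back to {"a": 100, "b": 0}.
```

The trade-off versus an MTS-style transaction coordinator is that compensations are application-level and only best-effort, so intermediate states are briefly visible to other readers.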

And the article concludes:

> This manages the complexity of a distributed world, bringing the complexity of a microservice RPC call or third-party API call closer to that of a regular function call.

Ah, just like DCOM [1] then, just like in the 90s.

[0] https://en.wikipedia.org/wiki/Microsoft_Transaction_Server

[1] https://en.wikipedia.org/wiki/Distributed_Component_Object_M...



I only ever played with DCOM and Transaction Server, and never in production, but I do wonder what it was about that tech stack that made it so unworkable, and such a technological dead end. Did anyone ever manage to make it work?


What I remember is that there were social, market, and technical reasons MTS didn't pan out. First, Microsoft was out of fashion in startup culture. Second, the exploding internet boom had little demand for distributed transactions. Third, COM was a proprietary technology that relied on C++ at a time when developers were flocking to easier memory-managed languages like Java, which was, or at least was perceived to be, more "open." I'm sure there were other reasons, but that's what looms in my mind.



