This was, I think, the greatest strength of MapReduce. If you could write a basic program, you could understand the map, combine, shuffle, and reduce operations. MapReduce and Hadoop would recover from operational failures like disk or network outages behind the scenes (largely by keeping tasks idempotent and simply re-running the failed ones), so programmers could focus on how data was transformed, joined, serialized, and so on.
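For anyone who never wrote one, here's a minimal word-count sketch of the map/shuffle/reduce flow in plain Python. It's not the actual Hadoop API, just an illustration of the shape of the model; the function names are mine:

```python
from collections import defaultdict

# Map: emit (word, 1) for each word in an input record.
def map_fn(line):
    for word in line.split():
        yield (word, 1)

# Reduce: collapse all the values seen for one key.
def reduce_fn(word, counts):
    return (word, sum(counts))

def mapreduce(records):
    # Shuffle: group mapped values by key. In a real cluster this is
    # the framework's job, and it's where the fault tolerance lives.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    # Each key's group reduces independently, which is what parallelizes.
    return [reduce_fn(k, vs) for k, vs in sorted(groups.items())]

print(mapreduce(["the quick brown fox", "the lazy dog"]))
# [('brown', 1), ('dog', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 2)]
```

The user only writes map_fn and reduce_fn; everything between them belongs to the framework, which is exactly why a failed machine could be handled invisibly.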
To your point, we also didn't need a new language to adopt the paradigm: a library and a running system were enough (though, semantically, it did have language-like characteristics of its own).
Sure, it's a bit antiquated now that more sophisticated successors exist for the subdomains it was most commonly used in, but it hit a sweet spot between the parallelism it delivered and the complexity of reasoning it demanded of its users.