Sure ... but if a loop is effectively doing a map, a filter, and a bunch of other operations all at once? It's a lot quicker to figure out what's going on if it's been written with combinators (once you're familiar with them) than if it's written as a vanilla loop.
If we assume the operations we're talking about take time linear in the length of the list, you've just gone from a * n time to b * n time, where b > a. Both of these are still O(n): Big O notation drops constant factors, because unless a constant is very large, it tends to have far less effect on practical running time than the algorithm's growth rate does.
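To make that concrete, a toy sketch in Haskell (the names here are made up):

    -- Chained combinators: an intermediate list is built between the two
    -- stages (unless the compiler fuses them away); in a strict language
    -- this is two full passes. Still O(n), just a bigger constant factor.
    doubledEvens :: [Int] -> [Int]
    doubledEvens xs = map (* 2) (filter even xs)

    -- A hand-rolled single-pass loop (explicit recursion): a smaller
    -- constant, but you have to read the whole body to see it's map+filter.
    doubledEvens' :: [Int] -> [Int]
    doubledEvens' []       = []
    doubledEvens' (x : xs)
      | even x    = x * 2 : doubledEvens' xs
      | otherwise = doubledEvens' xs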
Choosing to write more verbose, harder-to-decipher, harder-to-maintain code on the claim that it will perform better is not a good trade: "premature optimisation is the root of all evil".
In practice, if this becomes an issue (which benchmarking will tell you, once you suspect there's a perf problem), most modern languages offer an easy way to swap your data structure for a lazily evaluated one, which then performs the operations in one pass. Languages like Haskell or Clojure are lazy to begin with, so they do this by default.
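Haskell's ordinary lists show the effect directly. This terminates even though the input is infinite, and never materialises a full intermediate list:

    -- Laziness makes the chain demand-driven: each element flows through
    -- filter and then map only when take asks for it, so the list is
    -- walked once and only as far as needed.
    firstTen :: [Int]
    firstTen = take 10 (map (* 2) (filter even [1 ..]))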
My comment was carefully worded to make clear that it is not true in all cases.
Your Big O analysis is correct. However, in a real-world case the list could be an iterator over a log file on disk. Then you really don't want chained combinators each making their own pass, repeatedly returning to the disk and iterating over gigabytes of data on a spinning platter.
And yeah, you could benchmark to figure that out, or you could use the right tool for the job in the first place.
Or you could use a library explicitly designed with these considerations in mind, one that offers the same powerful, familiar combinators with resource safety and speed, e.g.,
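in Haskell's case, the conduit library is one such option (streaming and pipes occupy similar ground). A minimal sketch of the log-file scenario; countErrors and the "ERROR" predicate are purely illustrative:

    import Conduit
    import qualified Data.Text as T

    -- One pass over the file in constant memory, and the handle is
    -- released promptly even if an exception interrupts the stream.
    countErrors :: FilePath -> IO Int
    countErrors path = runConduitRes $
         sourceFile path
      .| decodeUtf8C
      .| linesUnboundedC
      .| filterC (T.isInfixOf (T.pack "ERROR"))
      .| lengthC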
I've no doubt an equivalent exists for Clojure, too, although I'm not familiar enough with the language to point you in the right direction.
One of the most amazing things about writing the IO parts of your program with libraries like these is how easy they become to test. You don't have to do any dependency injection nonsense: your combinators work the same whether they're actually connected to a network socket, a file, or just a data structure you've constructed in your program. So writing unit tests is basically the same as testing any pure function: you feed a fixture in and check that what comes out is what you expect.
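Sticking with the conduit sketch above: make the pipeline take its source as a parameter, and the unit test just yields an in-memory fixture (errorCount and the fixture are, again, made up):

    import Conduit
    import Data.Functor.Identity (runIdentity)
    import qualified Data.Text as T

    -- The same logic, parameterised over its source: in production the
    -- source is a file or a socket; in a test it's a list.
    errorCount :: Monad m => ConduitT () T.Text m () -> m Int
    errorCount source =
      runConduit (source .| filterC (T.isInfixOf (T.pack "ERROR")) .| lengthC)

    -- A pure test, no IO anywhere: should evaluate to 1.
    testErrorCount :: Int
    testErrorCount =
      runIdentity (errorCount (yieldMany (map T.pack ["ok", "ERROR boom", "ok"])))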
I found this really useful when writing a football goal push notification service for a newspaper I work for. I realised that the service was essentially consuming an event stream from one service, converting it into a different kind of stream, and then sinking it into the service that actually sent the notifications to APNS & Google. The program and its tests ended up much more succinct than what I would normally write, and the whole thing was fun and took hardly any time.
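In stream terms, the shape was roughly this (all of the types and names below are hypothetical, not the real service):

    import Conduit

    data MatchEvent = Goal String | Kickoff | FullTime  -- hypothetical event type
    newtype Push = Push String                          -- hypothetical notification type

    -- Consume one stream and convert it into another kind of stream:
    -- goals become push notifications, everything else is dropped.
    toPushes :: Monad m => ConduitT MatchEvent Push m ()
    toPushes = concatMapC convert
      where
        convert (Goal team) = Just (Push ("GOAL: " ++ team))
        convert _           = Nothing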
This started as a conversation about for loops vs. chained combinators and wound up with specialized libraries for IO. You're not wrong, but that's a lot more work than a for loop to parse a log file efficiently.