You can see from this image — http://perfectionkills.com/images/minifier_benchmarks.png — that pages under 100KB are all processed in under a second. Beyond that, the time grows quickly. I suspect regex-based parsing (backtracking that isn't quite catastrophic, but close) is the culprit. If you have time to look into it and optimize, that would be super helpful, of course!
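To illustrate the kind of near-catastrophic behavior I mean (this is a hypothetical sketch, not html-minifier's actual regexes): a lazy `[\s\S]*?` scan that fails to find its closing delimiter has to rescan to the end of the input from every candidate start position, which is quadratic rather than the exponential "catastrophic" case, but still painful on a 400KB page.

```javascript
// Hypothetical example (NOT the minifier's real pattern): strip HTML
// comments with a lazy scan.
const comment = /<!--[\s\S]*?-->/g;

function stripComments(html) {
  return html.replace(comment, '');
}

// Well-formed input: each comment is matched and removed quickly.
console.log(stripComments('a<!-- x -->b')); // "ab"

// Pathological input: many "<!--" openers and no "-->" anywhere.
// Every opener forces a lazy scan all the way to the end of the
// string before failing, so total work grows quadratically.
const bad = '<!-- '.repeat(10000) + 'no close';
const t0 = Date.now();
stripComments(bad); // no match, but lots of wasted scanning
console.log('elapsed ms:', Date.now() - t0);
```

A streaming tokenizer that tracks "inside comment" state would do the same job in a single linear pass.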
The 7.7s to process a 400KB page (Wikipedia) is particularly surprising. Assuming a processor that executes ~2 billion instructions a second, that's roughly 37,000 instructions executed for each byte of input, or a throughput of ~52KB/s. I wonder where all the time is being spent, since as I understand it, minifiers just parse the input document and then write it out in some smaller canonical form.
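For anyone who wants to check the arithmetic, here it is spelled out (the 400KB, 7.7s, and ~2e9 instructions/second figures are the assumptions stated above):

```javascript
// Back-of-envelope throughput math from the figures above.
const bytes = 400 * 1024;   // 409,600 bytes of input
const seconds = 7.7;        // measured wall time
const instrPerSec = 2e9;    // assumed ~2 billion instructions/second

// Bytes processed per second.
const throughput = bytes / seconds;               // ~53,195 B/s
// Total instructions executed, spread over every input byte.
const instrPerByte = (instrPerSec * seconds) / bytes;

console.log((throughput / 1024).toFixed(0) + ' KB/s');        // 52 KB/s
console.log(Math.round(instrPerByte) + ' instructions/byte'); // 37598
```

So "37,000 instructions per byte" really is the right order of magnitude for those inputs, which is an enormous budget for what should be a single parse-and-emit pass.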
Also, please, whenever you publish benchmarks, always include the specifications of the system they were performed on! 52KB/s may be horribly slow on a 3GHz i7 but pretty good for a 100MHz Pentium.
Note that "max" settings were used, meaning that, for example, both JS and CSS had to be minified (that work is delegated to the UglifyJS2 and clean-css packages, respectively).
I'd have to agree. While not exactly the same kind of tool, Google's Closure Compiler only starts to take a noticeable amount of time once you get into the thousands of lines.
Wow. Yes. I would even say taking more than a single second is pretty unacceptable. There's a bad algorithm in there somewhere...