I'm not so sure about this. Generally speaking, there will be more work done on the CPU to decompress at higher levels (e.g. 6 through 9). It is possible (although unlikely) that you will see higher decompression speed, but only if the bottleneck wasn't the CPU to begin with (e.g. network or disk).
My gut feeling is that if you are pulling down data faster than 40 megabits per second and have a CPU made within the past 7 years (possibly including mobile), you generally won't be bottlenecked by I/O.
Most compression algorithms don't take more work to decompress at higher levels; they actually decompress faster because there is less data to work through. Gzip consistently benchmarks faster at decompression for higher compression levels.
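If you want to sanity-check that on your own data, here's a quick sketch using Python's zlib (the payload, levels, and iteration count are arbitrary placeholders, not a rigorous benchmark; real-world data will behave differently):

    import time
    import zlib

    # Compress the same payload at a few levels, then time decompression.
    payload = b"the quick brown fox jumps over the lazy dog " * 50_000

    for level in (1, 6, 9):
        compressed = zlib.compress(payload, level)
        start = time.perf_counter()
        for _ in range(50):
            zlib.decompress(compressed)
        elapsed = time.perf_counter() - start
        print(f"level {level}: {len(compressed):>9} bytes, "
              f"decompress {elapsed / 50 * 1e3:.2f} ms per pass")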
It's not just about bottlenecks, but about the aggregate energy expenditure from millions of decompressions. On the whole, that can make a real, measurable difference. My point was only that it's not so cut and dried that it's a good trade-off to take a 5% file-size loss for a 20% improvement in compression speed. You'd have to benchmark and actually estimate the total number of decompressions to see where the tipping point is.
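As a back-of-the-envelope sketch of that tipping point (every number below is a made-up placeholder; substitute your own benchmark figures):

    # Compress once at either level, serve n downloads. The higher level
    # costs more up front but saves bytes (and possibly CPU) on every pull.
    def tipping_point(compress_high_s, compress_low_s,
                      per_download_high_s, per_download_low_s):
        one_off_extra = compress_high_s - compress_low_s
        per_download_saving = per_download_low_s - per_download_high_s
        if per_download_saving <= 0:
            return float("inf")  # the smaller file never pays for itself
        return one_off_extra / per_download_saving

    # Hypothetical figures: the higher level takes 2.0 s vs 1.6 s to
    # compress (the 20% gap), but saves ~10 ms per download in transfer
    # plus decompression time.
    print(tipping_point(2.0, 1.6, 0.090, 0.100))  # -> 40.0 downloads to break even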