Take any non-trivial debugging task (i.e. something that slipped through the net for several releases in a live system, not a simple bug in fresh code) and set me on it 100 times, and you'll almost certainly see at least 10x variation in how long it takes me. That's just the nature of debugging: sometimes you get lucky and sometimes you don't.
Of course, there's some real variation in the averages that people will settle down to over the course of hundreds of debugging tasks, but an individual's productivity on debugging tasks varies so much that I'd be hard pressed to say much at all about the individuals until I'd seen them do at least a few trials.
It may be the case that the differences in the average times people take do end up being on the order of 10x or more, but it would take a lot of observation to say that for sure (exactly how much depends on what sort of distributions we see when we measure this stuff).
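To make that concrete, here's a rough sketch (the log-normal distribution, the medians, and the trial counts are all my assumptions, not measured data): two hypothetical debuggers whose true median task times differ by 2x, each with the kind of heavy within-person spread described above. With only a few trials each, the genuinely slower one often looks faster.

```python
import math
import random
import statistics

random.seed(0)

def task_time(median_hours, sigma=0.7):
    # Assumed log-normal task times: median = exp(mu), so mu = ln(median).
    # sigma=0.7 gives roughly a 10x spread between lucky and unlucky tasks.
    return random.lognormvariate(math.log(median_hours), sigma)

def observed_mean(median_hours, trials):
    # The average we'd actually record after a small number of trials.
    return statistics.fmean(task_time(median_hours) for _ in range(trials))

# How often does a debugger with a 2x-worse median (8h vs 4h) nonetheless
# post the better 3-trial average, purely by luck?
flips = sum(
    observed_mean(8, 3) < observed_mean(4, 3)
    for _ in range(10_000)
)
print(f"slower debugger wins {flips / 100:.1f}% of 3-trial comparisons")
```

The exact rate depends entirely on the assumed distribution, which is the point: with spreads like this, a handful of trials ranks people noisily, and you need many more observations before the averages separate cleanly.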
Kudos for pointing out that it's the shape of the distribution that matters, not how far apart its endpoints are.
Interestingly, there are articles out there suggesting that one such distribution (of competence rather than productivity, though intuitively you'd expect the two to be related) is actually bimodal rather than normal, "the camel has two humps":