It's important to note that (as far as I can tell) the author doesn't actually debunk the claim that good programmers are 10x more productive, only the claim that studies show that programmers are 10x as productive.
Personally, I think the author is unintentionally also pointing out the problem of relying on studies of programmer productivity: it's next to impossible to measure. I seriously doubt that there is no valid research on this because no one has gotten around to it. Too many peoples' businesses rely on understanding programmer productivity inside and out for that to be the case.
Personally, I agree with whoever it was that argued that we need to view this as a soft science like psychology or sociology. We need to focus on working in spite of our imperfect means of measuring productivity rather than dismissing all of our research based on it. Such is the nature of studying the human mind.
I read quite a bit about measurable programmer productivity in the '90s because it was an interest of mine. Basically, the studies were worthless. Professional programmers were too expensive, so CS undergraduates were used. As you can imagine, there are 10x differences in programming experience among undergraduates. When my son was interviewing colleges a decade or so ago, we sat down and estimated that he had written about 40,000 lines of code in high school, mostly professionally and mostly hard stuff like video codecs; none of it was easy stuff like HTML or PHP. Some of you have done that. Some of his classmates had done some BASIC a summer or two before. So there is at least a 10x gap that he could easily maintain. Ten years after college there are still 10x programmers, but things are much more even.
The bottom line? Studies using a group of undergraduates on a day-long project are worthless.
The other thing I have seen is efforts to measure professional programmers via time sheets; I even had to do it myself, usually in the guise of getting us to produce more reliable estimates. Not more accurate estimates, but more reliable in the sense that Jack is always off by 4x while Jill is consistently off by 1.5x. These efforts usually died because programmers resisted the hassle.
Other measurements have been equally worthless. Measure by lines of code? You get lots of blank lines and comments. I have even worked on projects that required two pages of comments for five lines of code.
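To make the gaming concrete, here's a minimal sketch (the padded snippet and the counting rules are my own invention, not from any real metrics tool) showing how the same code scores very differently depending on whether blanks and comments count:

```python
def count_lines(source, ignore_blank=True, ignore_comments=True):
    """Count lines, optionally skipping blanks and '#' comments."""
    total = 0
    for line in source.splitlines():
        stripped = line.strip()
        if ignore_blank and not stripped:
            continue  # skip blank padding
        if ignore_comments and stripped.startswith("#"):
            continue  # skip comment padding
        total += 1
    return total

# Hypothetical "padded" submission: one line of real code, dressed up.
padded = """
# TODO: explain everything

# more commentary

x = 1
"""
print(count_lines(padded, ignore_blank=False, ignore_comments=False))  # raw count rewards the padding
print(count_lines(padded))  # only one substantive line survives
```

A raw count credits six lines; filtering blanks and comments leaves one, which is exactly the incentive problem the comment describes.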
The real issue is that studying programmers is expensive. In class assignments or contests you can find a huge range of times to completion. But paying a statistically significant number of programmers to solve the same task is prohibitively expensive, let alone a wide range of identical tasks.
PS: I have completed the same task as other programmers in less than 10% of the time. However, assessing overall productivity requires that someone is consistently faster on a wide range of tasks not just that they happen to solve something in 10% or even 1% of the time.
I dunno, introductory level programming classes create an opportunity to observe anywhere from 5 to 200 people solving the same problem.
I've never taught a CS course, but back in my college days I helped a friend with his CS homework by taking advantage of weak file permissions to copy other people's homework. The shocking thing was that, looking at a small sample, most of the programs didn't work correctly. To complete the assignment our way, we had to fix bugs in programs that other people wrote, which is fantastic preparation for a professional programmer ;-)
But using introductory programming courses as an example of writing real-world code would be a huge mistake. Most of those assignments in my day were almost trivially easy for someone who actually understands the material (and almost impossible for those who don't). It would not be a fair assessment of software engineering productivity.
Worse, the homework assignments are generally completed outside of class under widely ranging conditions. (One student finds a quiet spot in a library, another is in a cramped dorm room with a roommate practicing trumpet, and another goes out and parties until dawn and tries to crank out a project three hours before it's due.)
In your attempt to weed out confounding variables, you introduce a similar one: having all programmers work in the same conditions definitely reduces external stimuli that could impact performance, but it introduces a different variable, in that some (or many!) of the programmers are no longer working under their preferred conditions, dropping their performance.
How much of your speed in completing a task is based on prior experience with similar or identical problems, and on how similar your experience with the programming environment is to your peers'?
I think we should all agree on a definition of developer first. Not just in the sense of "HTML isn't a language", but picking a specific domain in which to compare developers.
I don't think the concept of productivity can span multiple fields. Does a C programmer have the same productivity as a Ruby programmer? On the other hand, maybe some part of a Ruby project needs optimizing, so a routine is written in C. Maybe there can be a definition of productivity that crosses languages... but we don't have one yet.
A C programmer may not have the same productivity as a Ruby programmer - but when you need to write a device driver for Unix/Linux or a Windows System service - Ruby is not a decent choice. Productivity needs to be relative to the task at hand and choosing the correct language for the job is part of that task.
In my last startup we used three measures for productivity: number of functions/procedures/methods, number of lines of non-comment code, and the number of defects found in the code. I wrote far more code than anybody else in the company. I took on all the really hard stuff that other developers had already failed at. My number of defects per line of code and per function/procedure/method was always lower than all the other developers'.
So I would choose to politely disagree. You can measure productivity.
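The three measures above combine naturally into defect-density numbers. A hedged sketch, with the developer names and all figures made up purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class DevStats:
    """Per-developer totals for the three measures named in the comment."""
    name: str
    functions: int   # functions/procedures/methods written
    ncloc: int       # non-comment lines of code
    defects: int     # defects found in that developer's code

    @property
    def defects_per_kloc(self):
        return 1000 * self.defects / self.ncloc

    @property
    def defects_per_function(self):
        return self.defects / self.functions

# Invented numbers: "jack" writes less code with more defects than "jill".
team = [
    DevStats("jack", functions=120, ncloc=9000, defects=45),
    DevStats("jill", functions=300, ncloc=24000, defects=36),
]
for dev in sorted(team, key=lambda d: d.defects_per_kloc):
    print(f"{dev.name}: {dev.defects_per_kloc:.1f} defects/KLOC, "
          f"{dev.defects_per_function:.2f} defects/function")
```

Normalizing by KLOC and by function keeps raw output volume from dominating the comparison, which is presumably why the startup tracked all three measures together.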
Well what would make you more productive vs. another programmer in solving an identical problem with both of you having all the same programming tools available?
1. Familiarity with the library set / language
2. Familiarity with the problem space
3. Familiarity and speed with tools/programming environment
4. Research speed (and research tools/materials)
5. Mental program-structure planning speed
6. Previous experience with the specific problem
7. Knowledge of time-saving programming constructs
8. Skill with mathematics and algorithms
9. Computer Hardware & Internet Connections
10. Ability to choose the best tools for the job
11. Learning speed
12. Motivation, energy, ability to concentrate for long periods of time
And that's listing mostly external factors that are somewhat obvious, not some less obvious speed of mental thinking or something similar.
You can define developer productivity as speed to complete an identical project compared to another and the quality/maintainability/readability of the software they produce to match the project requirements.
Developer productivity is a lot like obscenity. And jazz. It may be hard to define. But it doesn't matter. Because I know it when I see it. YMMV. But that's not my problem. I don't have to define precisely what it means to be hit in the face by a rotting fish. I'm pretty sure I'll know it when it happens.
Now back to Terminal and vi so I can be productive again...
Well you could use peer ranking, and see how well the rankings correlate with one another. That's the method they use in expertise studies for subjective fields.
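The peer-ranking approach boils down to checking rank correlation between reviewers. A small sketch using the standard Spearman formula for tie-free rankings (the reviewers and rankings below are invented):

```python
def spearman(rank_a, rank_b):
    """Spearman correlation for two tie-free rankings of the same items:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    assert len(rank_a) == len(rank_b)
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Two hypothetical reviewers each rank the same five developers
# (1 = most productive):
reviewer_1 = [1, 2, 3, 4, 5]
reviewer_2 = [2, 1, 3, 5, 4]
print(spearman(reviewer_1, reviewer_2))  # near 1 means the reviewers largely agree
```

High pairwise correlations across many reviewers would suggest the subjective judgments track something real; correlations near zero would suggest they don't.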
I don't know. A subjective judgement of the results doesn't mean that a study isn't scientific. You just need to be systematic and to be explicit about how you measure.
Part of my point was, though, that nobody should need to do an objective study. :) Because it's pretty frickin obvious to anybody with a brain who works in the field, in my judgment. And it's not limited to our field. It's a human-wide phenomenon.
From the article: "Critical examination of ill-supported assertions takes a lot more time than making the assertions in the first place"
Anyways, it's really an unfalsifiable claim, isn't it? For any study that doesn't show a 10x difference, you could claim that it didn't have any truly "good" programmers.
The burden of proof should be on those who make the claim.
Apologies for going off topic, but I just read the articles on debunking the MMR <-> autism link. I misread your opening paragraph as:
"It's important to note that (as far as I can tell) the author doesn't actually debunk the existence of precognition, only the claim that studies show precognition to exist."
Yes, but it's quite vacuous. The author can't prove the non-existence of evidence supporting the 10x claim. He was critiquing our folklore-ish belief in this 10x claim, without evidence, and our gross indifference to that fact.
Let me try another tack, proposing: excluding some bottom <10% who are incompetent, and those with less than 5 years of experience, all programmers have exactly the same 1x productivity capability; all apparent differences are really either ramp-up on new technologies or flaws in development processes and tools. <citation of irrelevant study on something else> <citation of myself, citing the aforementioned study>.
... do you now believe in this 1x claim? No? Hmm, perhaps I need to get it quoted more and then you'll believe it?
The author's point is that "quoting it more" and "it seems reasonable" are the only basis for our belief in the 10x claim. But neither are scientifically, empirically relevant. We should follow up with further studies to confirm the effect, and then investigate the cause. But we don't. And because we have not, we don't have any scientific basis for believing this 10x claim.
This isn't a matter of "dismissing all of our research based on it" -- there is nothing to dismiss. There has been no sufficient research.
Oddly, my initial misread is inaccurate. There are far more studies supporting the existence of precognition than supporting this 10x claim. ;-)