
The article talks a lot about science, and then provides no actual studies that refute the original claim. That's not how science works. Devise a hypothesis. Develop a test to confirm or deny it. Run the test.

To the contrary, a very important part of science is a lot of lazy onlookers scouring what others have done and shouting "You didn't prove what you think you proved!"

Granted, it's always better when they are able to do the experiment right, but failing that, knowing that the original one was done wrong is still extremely valuable information, especially when it's a result so widely quoted.

"Well they were debugging, not programming." As if no programmers in the real world spend a significant amount of their time debugging. I'd consider debugging a significant part of a programmer's day.

To me, the problem with taking a study on a single round of debugging as meaningful is that debugging, overall, is a higher variance activity than writing fresh code.

Say there's a large group of exactly identical programmers, each of whom would take an average of 50 minutes to find a bug, but the time for any individual to find a particular bug ranges uniformly from 10 to 90 minutes. If you give each of them a single bug to find, you're almost certainly going to measure "productivity ratios" that come close to the limit of 9x (with a big enough group). But all of these programmers are identical, so we damn well better not be publishing that as a result that "programmer productivity varies at least as much as 9 to 1". We need to run more trials, or at least analyze the data in some other way, to figure out what we can really conclude.
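You can sanity-check this thought experiment with a quick simulation (the uniform 10-90 minute distribution and group sizes are just the hypothetical numbers above, not anything from the study):

```python
import random

random.seed(0)

def simulate_ratio(n_programmers, trials=10_000):
    """Identical programmers each get one bug; time to find it is
    uniform on [10, 90] minutes. Return the average observed
    best-to-worst 'productivity ratio' across many simulated groups."""
    ratios = []
    for _ in range(trials):
        times = [random.uniform(10, 90) for _ in range(n_programmers)]
        ratios.append(max(times) / min(times))
    return sum(ratios) / len(ratios)

# The bigger the group, the closer the observed ratio gets to the
# 9x cap -- even though every programmer is identical by construction.
for n in (5, 20, 100):
    print(f"{n:>3} programmers: observed ratio ~{simulate_ratio(n):.1f}x")
```

With 100 programmers the single-trial measurement sits near the 9x ceiling, which is exactly the kind of number that gets quoted as an individual-differences result.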

By comparison, writing fresh code will usually have lower variance, because it's less of a "search through unfamiliar shit" task, so we could probably take inferences from single trials there a tiny bit more seriously, though that still doesn't justify sweeping the issue under the rug.

I haven't read the study that we're referring to, so maybe I'm wrong, but since I know that this claim wasn't the primary focus of the paper (they apparently just noticed high internal variation within each of the two groups that they were comparing), I have a sneaking suspicion that we are talking about a single debugging task, and directly comparing the best and worst performers on that task.

FWIW, Norvig's results (http://norvig.com/java-lisp.html) are more meaningful as far as drawing distinctions between programming language productivity (though they don't distinguish whether the languages used are a cause or an effect of productivity differences), but they also cannot be directly used to prove that, for instance, some programmers are 30x more productive than others, because the variance within an individual's productivity is not accounted for.


