
Coverage is very useful for determining whether your tests actually do what you think they're doing. It's not very useful as a metric to be optimized. Writing good end-to-end tests that really cover the functionality as specified is very difficult; doing so blindly, without tool support, is doomed to fail.

An even better tool than measuring coverage is setting up a mutation testing framework and adding tests until all mutations are caught.



I agree, and I think there is an arrogance among those who hold the view that 'coverage testing is useless' which blinds them to the real purpose of code coverage: to tell you what you haven't tested yet.

Coverage isn't supposed to tell you anything about the quality of the tests; it only identifies which code has never been exercised at all.

And yet, people seem to think "code coverage testing" means something else entirely. The only reason to pay attention to coverage metrics, as a manager, is to understand how much more work remains to be done on the tests - or to identify code paths that never get executed and are therefore dead weight on the project.


> the real purpose of code coverage testing: to tell you what you haven't tested yet.

I can get 100% code coverage with tests passing on:

    // Returns the result of A*B
    int mult(int A, int B) {
      return 4;
    }
I still won't know that the functionality is untested. Code coverage will just give an illusion of security.


Code coverage is not all you should be doing.




