
The same way you assess results in a programming language you have used before. In a more complicated project that might mean test suites. For a simple project (e.g. a Bash script) you might just run it and see if it does what you expect.
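To make that concrete, here's a minimal sketch of the "test suite" end of the spectrum in Python. The slugify function and its cases are invented for illustration; imagine it's the code you were handed and didn't write yourself:

    # test_smoke.py -- run with: python -m pytest test_smoke.py
    import re

    def slugify(title: str) -> str:
        # Stand-in for the code under assessment.
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    def test_lowercases_and_joins_words():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Hi, there!") == "hi-there"

The quick-and-dirty Bash-script end is the same idea minus the harness: run it on a known input and eyeball the output.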


The way I assess results in a familiar programming language is by reviewing and reasoning through the code. Testing is necessary, but not sufficient by any means.


Out of curiosity, how do you assess software that you didn't write and just use, and that is closed source? Don't you just... use it? And see if it works?

Why is this inherently different?


You are correct that this is indeed a mostly unsolved problem. In chapter 15 of "The Mythical Man-Month", Fred Brooks called for program documentation to cover not only how to use a program, but also how to modify it [1] and, relevant to this discussion, how to believe it. This was before automated tests and CI/CD were a thing, so he advocated for shipping testcases with the program that the user could review and execute at any time. It's now 50 years later, and this is one of the many lessons in that book that we still haven't collectively picked up on.
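A modern equivalent of Brooks's shipped testcases might look like a program that carries its own acceptance checks behind a flag. A minimal sketch in Python; the tool and the --self-test flag are hypothetical, not anything Brooks specified:

    # wordcount.py -- a toy tool that ships its own testcases
    import sys

    def count_words(text: str) -> int:
        return len(text.split())

    def self_test() -> None:
        # The testcases ship with the program; any user can read and run them.
        cases = [("", 0), ("one", 1), ("two words", 2)]
        for text, expected in cases:
            actual = count_words(text)
            assert actual == expected, f"{text!r}: expected {expected}, got {actual}"
        print("all testcases passed")

    if __name__ == "__main__":
        if "--self-test" in sys.argv:
            self_test()
        else:
            print(count_words(sys.stdin.read()))

That gives the user a "how to believe" path: read the cases, run them, then decide how much to trust the rest.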

[1] Side-note: This was written at a time when selling software as a standalone product was not really a thing, so everything was open-source and the "how to modify" part was more about how to read and understand the code, e.g. architecture diagrams.


As you said, that was a very different context. He was building an OS, which was sold to highly technical users running their own programs on it.

I'm talking about "shrinkwrap" software like Word or something. There's nothing even close to testing for that which isn't just "system testing" it.



