Well, all it does is basically force you to code with testing (abstraction) in mind, and to focus only on what you need.
Say you need to build feature A. Great.
Why not start by writing a test that instantiates it? It will fail, because you haven’t built it yet. Now go build it.
I assume you already build the interface first anyway, before filling in every little detail of the methods?
Also, you really don’t have to stick to the TDD principles that strictly. They’re basically there to ensure:
1. You have a test that can fail (so it actually tests something)
2. You actually have a test for your unit
3. You actually only code what you’re supposed to
This is great for juniors, but as you gain experience, the individual steps lose value, though the principles remain, I think.
Also, I would never write tests for every method (as TDD might have you believe), because that’s not my “unit” in unit testing.
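The red-green loop described above can be sketched in a few lines. This is a minimal illustration, not a prescribed workflow; the names `FeatureA` and `process` are hypothetical stand-ins, and the "unit" here is the feature's observable behavior rather than each method:

```python
import unittest


class FeatureA:
    """Hypothetical feature, written only after the tests below existed and failed (red)."""

    def process(self, value):
        # Just enough implementation to make the tests pass (green).
        return value * 2


class TestFeatureA(unittest.TestCase):
    def test_can_instantiate(self):
        # Step 1: written before FeatureA existed, so it failed first.
        self.assertIsNotNone(FeatureA())

    def test_process_doubles_input(self):
        # Step 2: tests the unit's behavior, not every internal method.
        self.assertEqual(FeatureA().process(3), 6)


if __name__ == "__main__":
    unittest.main()
```

Note the second test targets what the feature does for its caller; private helpers added later during refactoring wouldn't get their own tests.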