I’ve been told that writing tests ensures your software works properly. Unfortunately, no amount of testing can prove your software is bug free. It simply can’t. Testing is all about adding confidence. Confidence that your efforts are going towards the right goal.
We tackle logical complexity with unit tests. Tests for every code branch, ensuring our outputs match our expected outputs for every set of inputs. We test for catchable failures. These tests are lightning fast: they simply build and manipulate data structures to meet some assertion. At this level, we are making sure that this tiny unit does its job exactly as it’s been programmed to. The job you told it to do. Now, that job may be wrong in the first place, but even so, the unit will faithfully carry it out, wrong and all.
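To make that concrete, here’s a minimal sketch in Python. The function and its pricing rule are hypothetical, just stand-ins for your own units: one test per branch, an expected output for each input.

```python
import unittest

def shipping_cost(subtotal: float) -> float:
    """A hypothetical unit: free shipping at $50 and up, flat $5 below."""
    if subtotal >= 50:
        return 0.0
    return 5.0

class ShippingCostTest(unittest.TestCase):
    # One test per branch: every path through the unit gets an assertion.
    def test_orders_at_threshold_ship_free(self):
        self.assertEqual(shipping_cost(50.0), 0.0)

    def test_orders_under_threshold_pay_flat_rate(self):
        self.assertEqual(shipping_cost(49.99), 5.0)

if __name__ == "__main__":
    unittest.main()
```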
Now, that unit may not even be compatible with the rest of the system. So we write integration tests. These ensure that units are compatible with each other, and can successfully be wired together to perform tasks. These tests are simple: connect A and B, and hopefully they yield C. They are a little slower, because they require some form of a running system. Luckily, fewer of them are needed.
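In sketch form, again with hypothetical units standing in for real ones (a real integration test would exercise a live database, queue, or service rather than two in-memory objects):

```python
import unittest

class PriceCalculator:
    """Unit A (hypothetical): totals up an order."""
    def total(self, items):
        return sum(price for _, price in items)

class InvoiceFormatter:
    """Unit B (hypothetical): renders a total as an invoice line."""
    def render(self, total):
        return f"Amount due: ${total:.2f}"

class InvoiceIntegrationTest(unittest.TestCase):
    def test_calculator_output_feeds_the_formatter(self):
        # Connect A and B, and hopefully they yield C.
        total = PriceCalculator().total([("book", 12.50), ("pen", 2.50)])
        self.assertEqual(InvoiceFormatter().render(total), "Amount due: $15.00")

if __name__ == "__main__":
    unittest.main()
```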
Even with perfect units doing what they’re told in perfect harmony, you may not be solving business problems yet. Working code is nice (or at least, slightly better than non-working code), but if it’s not serving its purpose, is it anything more than dead weight?
Confidence gives us power. The power not to worry, the power to move on to the next task. In other words: feedback. Feedback lets us know when we are done, and when we still have plenty of work to do. Testing creates a feedback loop as we work, telling us whether what we built actually works. TDD gives us a very tight feedback loop: write a failing test, write the code, watch the test pass, repeat.
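That whole loop can fit in one file. A minimal sketch, with a hypothetical slugify function: the test comes first and fails, then just enough code is written to turn it green.

```python
import unittest

# Red: this test is written first, and fails until slugify exists.
class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green: just enough code to make the test pass. Then repeat.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()
```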
Why do so many developers leave the most critical tests until the end? The acceptance tests. Does this code actually solve the problems it was created to solve? If it doesn’t, how much time was wasted writing code before anyone realized? That is the feedback loop you need to tighten.
It’s easy to work in a silo and write beautiful code that is “bug free”, fully covered, and runs lightning fast. It’s easy to merge that into an application and wait for the bug reports to inevitably come in. You may think you are performing well because you’re delivering high-quality code and getting your job done. And you may be. Or maybe not. You won’t know unless you actually test against your requirements.
One way to ensure you are on the right track is to create two nested feedback loops. The outer loop tests against business requirements; the inner loop is the familiar unit-and-integration cycle. You can formalize the outer loop with tools like Behat or Cucumber. As an example, these may drive use cases and check the results, as sketched below. It will depend on your domain. And that domain will depend on which layer you are working in.
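Here’s a minimal sketch using behave, a Python take on the same Gherkin-driven idea as Behat and Cucumber. The feature, steps, and checkout logic are all hypothetical:

```python
# features/steps/checkout_steps.py -- step definitions backing a
# Gherkin scenario such as:
#
#   Scenario: A customer checks out a cart
#     Given a cart containing 2 items
#     When the customer checks out
#     Then an order is placed for 2 items
#
from behave import given, when, then

@given("a cart containing {count:d} items")
def a_cart(context, count):
    context.cart = ["item"] * count

@when("the customer checks out")
def check_out(context):
    # In a real suite this step would drive your actual use case.
    context.order = {"items": list(context.cart)}

@then("an order is placed for {count:d} items")
def order_placed(context, count):
    assert len(context.order["items"]) == count
```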
The problem domain of your web layer is HTTP. Your acceptance test suite should, as expressively as possible, assert that various HTTP request messages are responded to with the corresponding HTTP response messages. That’s it.
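A sketch of what that can look like in Python, assuming a locally running instance of your app and hypothetical endpoints:

```python
import unittest
import requests

BASE_URL = "http://localhost:8000"  # assumed: your app running locally

class HttpAcceptanceTest(unittest.TestCase):
    # Request messages in, response messages out. That's it.
    def test_existing_product_returns_200(self):
        response = requests.get(f"{BASE_URL}/products/1")
        self.assertEqual(response.status_code, 200)

    def test_missing_product_returns_404(self):
        response = requests.get(f"{BASE_URL}/products/does-not-exist")
        self.assertEqual(response.status_code, 404)

if __name__ == "__main__":
    unittest.main()
```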
Your business domain layer should, as expressively as possible, assert that your business objects can interact under the correct conditions, with the correct outcomes. With an expressive and accurate ubiquitous language, you can ensure those concepts drill all the way down to the code, rather than surfacing only in periodic status updates.
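For example, if “a lapsed policy cannot be renewed” is a rule in your ubiquitous language, the test can say exactly that. The Policy object here is hypothetical:

```python
import unittest

class PolicyLapsedError(Exception):
    pass

class Policy:
    """Hypothetical business object, named in the ubiquitous language."""
    def __init__(self):
        self.lapsed = False

    def lapse(self):
        self.lapsed = True

    def renew(self):
        if self.lapsed:
            raise PolicyLapsedError("a lapsed policy cannot be renewed")

class PolicyRenewalTest(unittest.TestCase):
    def test_a_lapsed_policy_cannot_be_renewed(self):
        # The test reads like the business rule it protects.
        policy = Policy()
        policy.lapse()
        with self.assertRaises(PolicyLapsedError):
            policy.renew()

if __name__ == "__main__":
    unittest.main()
```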
While you are implementing these acceptance tests, you’ll write units of code that integrate with the rest of your system. And you’ll write tests to ensure those units do their jobs. When your business tests pass, you’ll know you’re done.
With these tests driving your progress, you can be confident that your solution is actually solving the business problems it was meant to. And when your code leads you astray, the tests will quickly show that it isn’t helping you achieve your goal, and you’ll find your way again.