This is part 4 of a 7-part series comparing different kinds of software bugs.
One of the best ways to reduce the number of runtime errors in your code is to use automated testing. This spans a variety of techniques, from unit tests to integration and end-to-end tests.
What is an Automated Test Error?
For the purposes of this series, an automated test error is any automated test that is not passing. In other words, an automated test error is a failing test.
Causes of Automated Test Errors
There are two possible causes of a failing test:
- A problem with the code
- A problem with the test
Yes, this means you need to check two different places to find the problem. This may seem like a downside at first. In fact, the need to maintain two separate "code bases" (the actual code plus the tests) is a frequent criticism of automated testing generally.
A natural double-check
One major advantage of automated testing, though, is that it provides a natural double-check.
It's possible to have a bug in your code. It's possible to have a bug in your test. It's possible to have a bug in both your code and your test. But it's unlikely that you'll have the same bug in both your code and your test.
Forced resolution of ambiguous requirements
Oftentimes, in the course of implementing your code you will realize that you forgot to account for some requirement in your test. Or, perhaps you will interpret the requirements differently when writing the test than you do when writing the code. Automated testing will force you to resolve the differences between your two approaches. This is a great way to sharpen your thinking and ensure your code is meeting the system requirements.
Consequences of Automated Test Errors
Errors that you catch during testing do not make it into your production code. This is a Very Good Thing™. So why are automated test errors fourth on the list?
Let's consider the first four types of bugs. Remember, the bug types are listed in order of most preferable to least preferable:
- Syntax errors
- Compile errors
- Misunderstood requirements (before you start writing code)
- Automated test errors (i.e., failing tests)
Syntax errors and compile errors sit at the top of the list because they are dead simple to identify and fix.
So long as you realize that you've misunderstood the project requirements (3) before writing code, the time you lose to this type of "bug" will be minimal. That said, it will likely cost you more time than a simple syntax or compile error. If you've fundamentally misunderstood the requirements, you may need to rethink your entire program design.
That brings us to failing tests (4). Depending on the nature of the failing test, resolving the problem can be time-consuming. And if you created a test that you don't need because you misunderstood the project requirements, you've wasted double the time: the time to write the test plus the time to write the code.
This is why it's important to understand the project requirements before you start writing automated tests.
A false sense of security
I want to leave you with one last note about automated testing. No matter how many tests you write, you can only test for known knowns and known unknowns. By definition, you can never account for the unknown unknowns: those things that you don't know you don't know. While automated testing is better than no automated testing, it is not a panacea.