Testers vs TDD
Test-driven development was supposed to eliminate the need for independent testing. Alas, it doesn't go far enough.
Test-driven development (TDD) earned a reputation for making software more robust. Does that mean you can fire all the testers? Spoiler: No.
Test-driven development is a method for writing software in small chunks. You start with a test, then write functional code to make the test pass, and finally refactor the functional code to clean it up. The idea of TDD was proposed by Kent Beck in the early 1990s, as part of Extreme Programming, an Agile software development methodology.
TDD is sometimes summarized as "Red, Green, Refactor." The interfaces on many testing harnesses, such as JUnit (for Java) and NUnit (for .NET), show red lights when tests fail and green lights when tests pass. There's another step to consider, however. You need to think about the desired behavior and carve out a small chunk of code – typically five lines – to implement next.
As proposed by Beck, in TDD you never write functional code until you have a failing test. It takes practice to learn to write the tests first rather than after the fact, and developers often have trouble shifting their habitual way of working: writing the tests before the functional code just feels wrong at first. You do get used to it, though it may take months before it starts to feel right.
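Here is roughly what one pass through the cycle looks like with JUnit 5. The PriceCalculator class and its discountedPrice method are invented purely for illustration, not taken from any real codebase.

```java
// One red-green pass with JUnit 5. PriceCalculator and discountedPrice are
// hypothetical names used only for illustration.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    @Test
    void tenPercentDiscountIsApplied() {
        // Red: written first, this test fails until the method below exists.
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discountedPrice(100.0, 0.10), 0.001);
    }
}

// Green: the smallest implementation that makes the test pass.
class PriceCalculator {
    double discountedPrice(double price, double discount) {
        return price * (1.0 - discount);
    }
}
```

The refactor step would then tidy up the implementation while the now-green test guards against regressions.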
The biggest benefit of TDD is that it removes the fear of breaking your code. As you add unit tests and functional code, you also build a library of regression tests that you run frequently. When you add a new feature, fix a bug, or refactor to clean up your code, running the tests again reassures you that you didn't break anything. Or at least it confirms that you didn't break any of the tests that you wrote.
What do testers bring to the table?
Software managers are sometimes tempted to eliminate or shrink software QA departments when the coders adopt TDD, on the grounds that the programmers are also writing tests. That decision is usually a mistake, because testers provide value beyond the developers' unit tests.
Unit tests are only one of the kinds of tests needed to adequately cover modern code. TDD developers rarely write end-to-end integration tests. They may avoid writing unit tests that require significant setup or that rely on other software components, such as a populated database.
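To see why, here is a sketch of the kind of setup-heavy test that tends to get skipped. It assumes the H2 in-memory database is on the test classpath; the customers table and the CustomerRepository class are hypothetical.

```java
// A setup-heavy test of the kind TDD developers tend to skip. Assumes the H2
// in-memory database is on the test classpath; the customers table and the
// CustomerRepository class are hypothetical.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CustomerRepositoryIT {
    private Connection connection;

    @BeforeEach
    void populateDatabase() throws Exception {
        // Setup a pure unit test would not need: create and seed a schema.
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
        try (Statement stmt = connection.createStatement()) {
            stmt.execute("CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(100))");
            stmt.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace')");
        }
    }

    @Test
    void countsAllSeededCustomers() throws Exception {
        assertEquals(2, new CustomerRepository(connection).countCustomers());
    }

    @AfterEach
    void closeDatabase() throws Exception {
        connection.close();
    }
}

class CustomerRepository {
    private final Connection connection;

    CustomerRepository(Connection connection) {
        this.connection = connection;
    }

    int countCustomers() throws Exception {
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM customers")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```

None of that schema plumbing is needed for a pure unit test, which is exactly why developers practicing TDD often leave this kind of test to someone else.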
Dedicated testers are more likely than coders to take the time to perform exploratory (ad-hoc) testing, which can find bugs that weren't imagined during the development of the code. Testers also come to the product with fresh eyes compared to the coders who have been immersed in the software for long hours.
Additionally, software developers often are not interested in setting up CI/CD tooling or in organizing the team's tests into a master regression suite; testers consider all of that part of the job. Developers also may not be involved in shift-left testing beyond TDD, whereas testers can gather information, help with requirements management, and help define the acceptance criteria before a single test or line of functional code is written.
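As a sketch of what such a master regression suite can look like in code, JUnit 5's suite support (the junit-platform-suite artifact, available since JUnit 5.8) lets you aggregate tests by package; the package names below are placeholders for wherever the team's tests actually live.

```java
// A "master" regression suite using JUnit 5 suite support (the
// junit-platform-suite artifact). The package names are placeholders.
import org.junit.platform.suite.api.SelectPackages;
import org.junit.platform.suite.api.Suite;

@Suite
@SelectPackages({"com.example.unit", "com.example.integration"})
class MasterRegressionSuite {
    // Intentionally empty: the annotations tell the JUnit Platform which
    // packages to scan, so every test they contain runs as a single suite.
}
```

Wired into a CI pipeline, a suite like this is what runs on every commit or merge.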
What bugs do testers find that TDD doesn't?
Security is one large, important testing area that isn't normally addressed by writing unit tests. Testers look for security flaws with automated vulnerability testing tools, manual security assessments, penetration tests, security audits, and security reviews.
It's difficult for developers to write unit tests that exercise GUIs. Instead, testers use automation tools specific to the supported application environments, such as browsers, desktop applications, and mobile apps.
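For browsers, that usually means driving the real UI rather than calling functions. The sketch below uses Selenium WebDriver's Java bindings; it assumes a matching ChromeDriver is installed, and the URL and element IDs are hypothetical.

```java
// A browser-level GUI test with Selenium WebDriver's Java bindings. Assumes a
// matching ChromeDriver is installed; the URL and element IDs are hypothetical.
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class LoginPageTest {
    private WebDriver driver;

    @BeforeEach
    void openBrowser() {
        driver = new ChromeDriver(); // launches a real browser instance
    }

    @Test
    void loginFormAcceptsValidCredentials() {
        driver.get("https://example.com/login"); // hypothetical URL
        driver.findElement(By.id("username")).sendKeys("tester");
        driver.findElement(By.id("password")).sendKeys("s3cret");
        driver.findElement(By.id("submit")).click();
        assertTrue(driver.getTitle().contains("Dashboard"));
    }

    @AfterEach
    void closeBrowser() {
        driver.quit();
    }
}
```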
Developers can have a hard time with hardware-dependent bugs, as the dominant way of working is for a coder to work on a single machine. QA departments often collect rooms full of varied computers and devices, as well as images of many operating system versions. An alternative way to test on many different device models is to use a crowd-sourced testing service.
While TDD can theoretically catch bugs in edge cases, they are called “edge cases” for a good reason. When a coder designs the tests for a function point, obscure edge cases may escape notice in the heat of the moment. Testers are more likely to find these than are the coders who wrote the software.
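As an illustration, consider a made-up Stats.average method. The happy-path test a coder writes first passes easily; the edge cases below, empty input and values near integer overflow, are the ones that tend to be forgotten.

```java
// Edge cases that happy-path tests often miss. The Stats class is invented
// for illustration.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class StatsEdgeCaseTest {
    @Test
    void emptyInputIsRejectedInsteadOfDividingByZero() {
        assertThrows(IllegalArgumentException.class, () -> Stats.average(new int[] {}));
    }

    @Test
    void largeValuesDoNotOverflowTheRunningSum() {
        // Two values near Integer.MAX_VALUE would overflow a 32-bit sum.
        assertEquals(Integer.MAX_VALUE,
                Stats.average(new int[] {Integer.MAX_VALUE, Integer.MAX_VALUE}));
    }
}

class Stats {
    static int average(int[] values) {
        if (values.length == 0) {
            throw new IllegalArgumentException("values must not be empty");
        }
        long sum = 0; // a long running sum cannot overflow for int inputs
        for (int v : values) {
            sum += v;
        }
        return (int) (sum / values.length);
    }
}
```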
Similarly, cross-module flaws can sometimes escape scrutiny in unit tests. They can easily arise when one programmer misunderstands the interface or boundary conditions of another programmer's module. Cross-module bugs are often found during end-to-end testing and ad-hoc testing.
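Here is a contrived example of how such a mismatch slips through. Each class below could pass its own unit tests, yet calling them together fails at runtime.

```java
// A contrived cross-module mismatch: the repository returns null for an
// unknown user, while the caller assumes the result is never null. Each
// class can pass its own unit tests, but greet(2) throws a
// NullPointerException when the two run together.
class UserRepository {
    /** Returns the user's display name, or null if the id is unknown. */
    String findDisplayName(int userId) {
        return userId == 1 ? "Ada" : null; // simplified stand-in for a lookup
    }
}

class GreetingService {
    private final UserRepository repository = new UserRepository();

    String greet(int userId) {
        // Bug: assumes findDisplayName never returns null.
        return "Hello, " + repository.findDisplayName(userId).toUpperCase() + "!";
    }
}
```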
In summary, while there's a lot to be said for TDD as a development practice, it doesn't usually provide complete test coverage of your code. For that, you still need testers.
Whoever writes the test cases, it makes sense to follow established guidelines for designing and maintaining them.