Watching Out for False Positives and False Negatives in Software Testing
What is the difference between false positive and false negative results in software testing? Why do they occur, and how can you resolve them? Find out here.
Everyone who’s anyone is testing – and you should too. Yet, finding defects in a complex system can be difficult, and designing test cases to find those defects can be even more challenging. What’s really troubling, though, is when you run those test cases and the results lie to you, reporting either a false positive or a false negative. Things can get sticky fast when you can’t trust your results.
If you’ve worked in the software-testing field for some time, you’re most likely familiar with this situation. If you haven’t encountered it yet, expect that you will. For those who are new to the field, we’ll cover what false positive and false negative test results are, why they occur, and how to reduce your chances of encountering them.
What are false positives and false negatives?
The best way to explain false positives and false negatives is to think about them in terms of medicine. In the medical field, the purpose of a test is to determine whether you have a disease or illness. If something is found, the test result comes back positive; if nothing is found, the result is negative. The same can be said for software testing, but with bugs.
A false positive occurs when a test case fails, but in actuality there is no bug and the functionality is working correctly. A false negative, on the other hand, occurs when a test case passes, but there is in fact a bug in the system and the functionality is not working as it should.
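To make that concrete, here is a minimal sketch in Python. The apply_discount function and its values are hypothetical, invented purely for illustration; run the tests with pytest to see each misleading result.

```python
def apply_discount(price, rate):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - rate), 2)

# False positive: the test fails, but the code is correct. The expected
# value is simply wrong (stale test data), so the failure points at the
# test itself, not at a bug in the system.
def test_discount_stale_expectation():
    assert apply_discount(100.0, 0.15) == 80.0  # fails; 85.0 is the correct result

# False negative: the test passes, but it would also pass if the
# discount logic were broken, because the assertion is too weak to
# catch a wrong result.
def test_discount_weak_assertion():
    assert apply_discount(100.0, 0.15) <= 100.0  # passes even if the rate is ignored
```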
Though both are annoying, it’s safe to say that a false negative is more damaging than a false positive, as it creates a false sense of security. Whereas a false positive may consume a lot of a tester’s energy and time, a false negative allows a bug to remain in the software for an indeterminate amount of time. While both increase costs, a false negative ends up costing substantially more and jeopardizes customer retention, as it leaves your software open to vulnerabilities.
Why do they occur?
Whenever a test case results in a false positive or false negative, the best way to figure out how it happened is to ask yourself these questions:
- Is the test data wrong?
- Did an element change?
- Did the functionality of the code change?
- Were the requirements ambiguous?
- Did the requirements change?
These are just some of the reasons either false result can appear, so it’s important to break the test case down and see where things went awry. The sketch below shows how one of these causes, a changed element, can produce a false positive.
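As a hypothetical illustration in Python with Selenium (the URL and element ids are invented), a test locates a checkout button by an auto-generated id. When a front-end rebuild regenerates that id, the test fails even though checkout still works: a false positive.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")

# Brittle: this locator is tied to an auto-generated id. If a rebuild
# changes the id, find_element raises NoSuchElementException and the
# test fails, even though the button itself works fine.
button = driver.find_element(By.ID, "btn-4f9a2c")
button.click()

driver.quit()
```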
Best practices for reducing false positives and false negatives
In the case of false positive test results, automation tools can sometimes reduce how often you receive a false result. For instance, Functionize’s machine-learning platform, which automates software testing, pulls information from your site and falls back on other selectors and elements around an element to determine whether it has changed or remained the same, dramatically decreasing the brittleness of test cases.
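The general idea behind that kind of fallback is easy to sketch. The snippet below illustrates the concept only – it is not Functionize’s actual implementation. It reuses the hypothetical checkout page from the earlier example and tries several locators in order, so the test survives a change to any one of them.

```python
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_fallbacks(driver, locators):
    """Return the first element matched by any of the given locators."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this locator broke; try the next one
    raise NoSuchElementException(f"No locator matched: {locators}")

# The brittle auto-generated id is tried first, then more stable
# attributes of the surrounding markup.
button = find_with_fallbacks(driver, [
    (By.ID, "btn-4f9a2c"),
    (By.CSS_SELECTOR, "form#checkout button[type='submit']"),
    (By.XPATH, "//button[normalize-space()='Place order']"),
])
button.click()
```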
To reduce your chances of receiving a false negative, make sure you have a solid test plan, well-designed test cases, and a stable testing environment. For both kinds of false results, try varying your test data, metrics, and analysis, and perform a thorough review of your test cases and test execution reports, as in the sketch below.
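Varying test data is straightforward with pytest’s parametrization. This is a hypothetical sketch reusing the illustrative apply_discount function from earlier: a single hand-picked input can hide a bug (a false negative), while a spread of cases, including edge values, makes that less likely.

```python
import pytest

def apply_discount(price, rate):
    """Return the price after applying a percentage discount (illustrative)."""
    return round(price * (1 - rate), 2)

# One typical case could pass even if edge cases are broken; spreading
# the test data across several cases shrinks the room for a false negative.
@pytest.mark.parametrize("price, rate, expected", [
    (100.0, 0.15, 85.0),   # the typical case most tests stop at
    (100.0, 0.0, 100.0),   # no discount at all
    (100.0, 1.0, 0.0),     # full discount
    (0.0, 0.5, 0.0),       # zero price edge case
])
def test_discount_varied_data(price, rate, expected):
    assert apply_discount(price, rate) == expected
```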
Finally, be aware that both forms of testing – manual and automated – are needed to help ensure a false test result doesn’t slip through the cracks. And, above all else, remember to be thorough and diligent throughout the entire software testing process. With hard work and this knowledge in hand, you can’t go wrong.