AI in Software Testing - 3 Ways AI Is Improving Software Quality
A closer look at how AI is being used to improve software testing, from regression testing methodologies to machine vision and intelligent test case generation.
Marc Andreessen famously said that software is eating the world. The notion that every company must become, first and foremost, a software company is hardly radical these days.
However, even as businesses across industries have invested deeply in their software capabilities, they now struggle with the complexities of modern software development and deployment. Software is more distributed, is released continuously, and increasingly incorporates machine learning into the code itself, making the testing and QA function all the more challenging.
Today, most enterprise labs require engineers to write testing scripts, and those engineers' technical skills must match those of the developers who coded the original app. This quality-assurance overhead grows with the complexity of the software itself; current methods can only be replaced by systems of increasing intelligence. Logically, AI systems will increasingly be required to test and iterate on systems that themselves contain intelligence, in part because the array of input and output possibilities is bewildering.
AI in software testing is already being applied in a variety of ways. Here are three areas in which AI is making the most immediate impact:
3 Ways AI is Improving Software Quality
Regression Testing
One aspect of testing particularly well suited to AI is regression testing, a critical part of the software lifecycle that verifies previously tested modules continue to function predictably after code modification. It serves as a safeguard that no new bugs were introduced during the most recent cycle of enhancements to the app being tested. Regression testing is an ideal target for AI and autonomous testing algorithms because it makes use of user assertion data gathered during previous test cycles. By its very nature, regression testing generates its own data set for future deep learning applications.
Current AI methods such as classification and clustering algorithms rely on exactly this type of repetitive data to train models and forecast future outcomes accurately. Here's how it works. First, a set of known inputs and verified outputs is used to define features and train the model. Then, a portion of the dataset with known inputs and outputs is reserved for testing the model. That set of known inputs is fed to the algorithm, and the output is checked against the verified outputs to calculate the accuracy of the model. If the accuracy reaches a useful threshold, the model may be used in production.
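The train/hold-out/threshold workflow described above can be sketched in a few lines. This is a toy illustration only: the 1-nearest-neighbor "model", the response-time data, and the 0.9 threshold are all hypothetical stand-ins, not anything a real testing platform ships.

```python
import random

def nearest_neighbor_predict(train, x):
    """Predict the label of x from the closest known input (toy 1-NN model)."""
    best = min(train, key=lambda pair: abs(pair[0] - x))
    return best[1]

# Hypothetical data from earlier test cycles: known inputs (response time
# in ms) paired with verified outputs (the assertion result).
dataset = [(t, "pass" if t < 500 else "fail") for t in range(100, 1000, 25)]

random.seed(42)
random.shuffle(dataset)
split = int(len(dataset) * 0.8)
train, test = dataset[:split], dataset[split:]  # reserve a portion for testing

# Feed the held-out inputs to the model and check its output
# against the verified outputs to calculate accuracy.
correct = sum(1 for x, y in test if nearest_neighbor_predict(train, x) == y)
accuracy = correct / len(test)

THRESHOLD = 0.9  # only promote the model once accuracy reaches a useful level
print(f"accuracy={accuracy:.2f}, usable={accuracy >= THRESHOLD}")
```

In practice the features would be richer (code paths exercised, assertion histories, environment details) and the model more capable, but the split-train-verify loop is the same.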
Machine Vision
Getting computers to visualize their environment is probably the most well-known aspect of how AI is being applied in the real world. While this is most commonly understood in the context of autonomous vehicles, machine vision also has practical applications in the domain of software testing, most notably as it relates to UX and how web pages are rendered. Determining whether web pages have been correctly rendered is essential to website testing: if a layout breaks or controls render improperly, content can become unreadable and controls unusable. Given the enormous range of possible designs, design components, browser variations, and dynamic layout changes, even highly trained human testers can be challenged to efficiently and reliably evaluate rendering correctness or recognize when rendering issues impact functionality.
AI-based machine vision is well suited to these types of tasks and can be used to capture a reviewable ‘filmstrip’ of page rendering (so no manual or automated acquisition of screen captures is required). The render is analyzed through a decision tree that segments the page into regions, then invokes a range of visual processing tools to discover, interrogate, and classify page elements.
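The segment-then-interrogate idea can be illustrated with a deliberately simple sketch. Here two "filmstrip" frames are plain 2D grayscale arrays, and the check merely flags regions whose pixels changed; a real pipeline would classify the elements inside each region, which this toy does not attempt.

```python
def split_regions(img, size):
    """Segment a 2D grayscale 'screenshot' into square regions of the given size."""
    regions = {}
    for r in range(0, len(img), size):
        for c in range(0, len(img[0]), size):
            regions[(r, c)] = [row[c:c + size] for row in img[r:r + size]]
    return regions

def changed_regions(baseline, candidate, size=2, tol=0):
    """Flag regions where the new render differs from the approved baseline."""
    base, cand = split_regions(baseline, size), split_regions(candidate, size)
    flagged = []
    for key in base:
        diff = sum(abs(a - b)
                   for row_a, row_b in zip(base[key], cand[key])
                   for a, b in zip(row_a, row_b))
        if diff > tol:
            flagged.append(key)
    return flagged

# Hypothetical 4x4 grayscale frames from a rendering 'filmstrip'.
frame_a = [[0, 0, 0, 0]] * 4
frame_b = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 0], [0, 0, 0, 0]]
print(changed_regions(frame_a, frame_b))  # → [(2, 2)]
```

The payoff of segmenting first is that a rendering defect is localized to a page region, which a classifier (or a human reviewer) can then inspect, rather than a whole-page pass/fail verdict.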
Intelligent Test Case Generation
Defining software test cases is a foundational aspect of every software development project. However, we don't know what we don't know, so test cases are typically limited to scenarios that have been seen before. One approach is to provide an autonomous testing solution with a test case written in natural language and have it autonomously generate functional test automation.
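A minimal sketch of the natural-language-to-automation step might look like the following. The step phrasings, the pattern table, and the action names ("click", "type", "assert_text") are all invented for illustration; production systems use learned models rather than a hand-written pattern list.

```python
import re

# Toy mapping from plain-English test steps to automation action tuples.
PATTERNS = [
    (re.compile(r'click (?:the )?"?([\w ]+?)"? button', re.I),
     lambda m: ("click", m.group(1))),
    (re.compile(r'type "([^"]*)" into (?:the )?([\w ]+?) field', re.I),
     lambda m: ("type", m.group(2), m.group(1))),
    (re.compile(r'verify (?:the )?page shows "([^"]*)"', re.I),
     lambda m: ("assert_text", m.group(1))),
]

def compile_step(step):
    """Translate one natural-language test step into an executable action tuple."""
    for pattern, build in PATTERNS:
        m = pattern.search(step)
        if m:
            return build(m)
    raise ValueError(f"no automation mapping for step: {step!r}")

script = [compile_step(s) for s in [
    'Type "jon@example.com" into the email field',
    'Click the "Sign in" button',
    'Verify the page shows "Welcome back"',
]]
print(script)
```

The interesting part of real systems is what happens when no pattern matches: instead of raising an error, a learned model generalizes to phrasings it has never seen.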
Among the diverse techniques under exploration today, artificial neural networks show the greatest potential for adapting big datasets to regression test plan design. Multi-layered neural networks are now trained on the software application under test, at first using test data that conforms to the specification; as cycles of testing continue, the accrued data expands the test potential. After a number of regression test cycles, the neural network becomes a living, simulated model of the application under test.
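The accrue-and-retrain loop can be shown with the smallest possible "network": a single neuron. Everything here is a hypothetical miniature: the two-feature inputs, the pass(1)/fail(0) labels, and the perceptron itself stand in for the multi-layered networks the text describes.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Fit a toy single-neuron model mapping inputs to pass(1)/fail(0)."""
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Cycle 1: train only on data that conforms to the specification.
history = [([1.0, 0.0], 1), ([0.0, 1.0], 0)]
model = train_perceptron(history)

# Each later regression cycle appends its observed (input, outcome) pairs
# and retrains, so the model tracks the application as it evolves.
history += [([1.0, 1.0], 1), ([0.2, 0.9], 0)]
model = train_perceptron(history)
print(predict(model, [0.9, 0.1]))  # → 1
```

The point is the data flow, not the model: every regression cycle both consumes the model's predictions and enlarges the history it is retrained on, which is what lets the network converge toward a simulated model of the application.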
As AI becomes more deeply embedded in the next generation of software, developers and testers will need to incorporate AI technologies to ensure quality. While it may be a frightening prospect to imagine a program training itself to test your apps, it is as inevitable as speech recognition and natural language processing were a few years ago.
About the author:
Jon Seaton is the Director of Data Science for Functionize, provider of an autonomous software testing platform that incorporates AI and machine learning technologies to automate software testing.