Why should you trust AI for testing?
AI often gets a bad rap. There are countless stories of AI systems demonstrating racial bias, sexism, or flawed decision-making. Moreover, many people fear AI will take their jobs. As a result, there has recently been a strong push to create ethical AI. Here at Functionize, we take this seriously. We want you to trust AI. Read on to learn how we make sure you remain in control, even as our AI-powered platform transforms your testing.
Trust, ethics, and AI
Last year, the EU published an interesting report looking at the requirements for ethical AI. The focus of the report was on systems that make decisions affecting our lives: AIs that decide whether you qualify for a loan, or that set the limit on your credit card. Accordingly, the report’s authors looked at what steps are needed to create an AI that is both ethical and trustworthy.
Despite that focus, much of the report is directly relevant to AI-powered testing. The authors divide the problem into three parts: lawfulness, ethics, and robustness. For a system like Functionize, lawfulness isn’t really at issue; ethics and robustness, however, very much are.
Within ethics, there are several key requirements. These include:
- Human agency and oversight. Put simply, there should always be a human involved somewhere in the system. This could be direct involvement, where a human has to approve every action, or it might simply be oversight of what the system does.
- Transparency. By its very nature, a trained AI model is close to being a black box, and this is one of the things that makes it hard for people to trust AI. So, you should make the actions of the AI as transparent as possible.
- Accountability. This aspect is often overlooked in AI-powered systems. At the end of the day, someone has to take responsibility for the actions of the system.
Later, we will see how these are reflected in the design decisions we have made in Functionize.
The other key aspect is robustness. AI systems quickly become indispensable. So, it is vital that you ensure they are robust and reliable. Moreover, if they do fail, they should do so safely.
Why we don’t trust AI
It’s hardly surprising that some people inherently distrust AI. There have been some truly terrible stories over the years of AI getting it wrong. In 2015, there were reports of Black people being tagged as “gorillas” by Google’s image recognition, largely because the system had been trained primarily on pictures of White people. In 2018, reports emerged of a man who was wrongly dismissed by an AI system; he only found out when his building and system access was suddenly revoked. And in 2019, Apple hit the headlines for the wrong reasons after their automated system awarded David Heinemeier Hansson’s wife a credit limit 20x lower than his, despite the fact that the couple share their bank account and all their wealth.
Then there is the whole world of dystopian science fiction. Think of films like The Terminator, 2001: A Space Odyssey, or A.I. Artificial Intelligence. In each, AI is painted as either actively malign or dangerously misguided, and always at the expense of humankind. In many ways, AI fuels our primordial fear of things we cannot understand or explain. As Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.”
Added to that is the very natural and real fear that AI will end up taking our jobs. Why trust AI if it is about to make you unemployed? Research has suggested that certain occupations may vanish altogether; truck drivers, for instance, face an estimated 80-100% risk of obsolescence, and long-distance truck driving may well be handled by autonomous trucks within the next two decades. This is far from a new phenomenon. We have already seen the same pattern in traditional industry and manufacturing; indeed, it has been the story since the earliest days of the Industrial Revolution, when saboteurs smashed factory machinery. Automation reduces the need for human workers while (often) increasing the quality of the output and the efficiency of the process.
How Functionize builds trust in AI
Here at Functionize, we have always been very aware of the trust issue. We know that smart systems such as ours seem to put testers’ jobs at risk. We understand that users may not trust the results of our tests. And we get that there is something almost magical about how our system seems to understand your application. So, we have taken several key steps to help you to trust what we do.
Human oversight
Functionize tests are easy to debug and understand. At each test step, we record screenshots before, during, and after the action, so you can be sure the correct thing happened. But you can help to ensure the tests are robust by adding good verification steps. These are checks you add to the test to confirm nothing is going awry; they use the application’s own logic as a check on the test logic. For instance, if the test is completing an address form with input validation, it should check that no errors were raised. Or if an action triggers a page load, the test should check that the correct page has loaded. This helps you trust that the test is running correctly, as the sketch below shows.
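To make the idea concrete, here is a minimal sketch of verification steps written in Python with Selenium. In Functionize you would add these as test steps in the UI rather than write code, and the URL and element IDs here are purely hypothetical:

```python
# A minimal sketch of verification steps (hypothetical page and IDs).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/checkout")  # hypothetical checkout page

# Test logic: complete the address form and submit it.
driver.find_element(By.ID, "address").send_keys("1 Main Street")
driver.find_element(By.ID, "submit").click()

# Verification step 1: the form's own validation raised no errors.
errors = driver.find_elements(By.CSS_SELECTOR, ".field-error")
assert not errors, f"form validation reported {len(errors)} error(s)"

# Verification step 2: the action actually loaded the page we expected.
assert driver.current_url.endswith("/confirmation"), \
    f"unexpected page after submit: {driver.current_url}"

driver.quit()
```

Notice that both checks lean on the application itself (its validation messages, its navigation), so a passing test tells you the application behaved, not merely that the script ran.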
Transparency
One of the smartest things about Functionize tests is their ability to self-heal. If your site’s UI changes or you tweak the site logic, traditional automated tests fail. Our tests are more intelligent than that: the AI works out what has changed and updates the test accordingly. For instance, you may have restyled the “add to cart” button, renamed it “buy”, and moved it to a different place on the screen. Our ML models will simply find the new button and select it, just as a human user would. However, we know some users are worried by this sort of “automagical” fix. So, the UI flags any such update, and you can see exactly which tests self-healed when you look at the test results. The toy sketch below gives a flavour of the idea.
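As a rough analogy only (Functionize’s ML models are far more sophisticated than this, and every selector and label here is hypothetical), here is a toy self-healing locator in Python with Selenium: if the recorded locator no longer matches, it falls back to the button’s visible label and flags the step as healed:

```python
# A toy self-healing locator (all selectors and labels hypothetical).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, primary, fallback_labels):
    """Try the recorded locator first; fall back to visible label text."""
    by, value = primary
    try:
        return driver.find_element(by, value), False  # found, no healing
    except NoSuchElementException:
        pass
    for label in fallback_labels:
        matches = driver.find_elements(
            By.XPATH, f"//button[normalize-space()='{label}']")
        if matches:
            return matches[0], True  # healed: surface this in the results
    raise NoSuchElementException(f"could not heal locator {primary}")

driver = webdriver.Chrome()
driver.get("https://shop.example.com")  # hypothetical store page

# The recorded step used id="add-to-cart", but the button is now "Buy".
button, healed = find_with_healing(
    driver, (By.ID, "add-to-cart"), ["Add to cart", "Buy"])
button.click()
if healed:
    print("Step self-healed; flag it for review in the test results.")
```

The key design point is the last line: healing is never silent. Anything the fallback logic fixed is reported, which is exactly the transparency principle at work.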
Accountability
The final thing we do is ensure full accountability. Every test you create is stored in our Test Cloud, along with details of every test run. Our system uses all of this data to build an incredibly detailed model of your application, but it also means you have access to the complete history of your testing. You can go back and see how tests have changed or evolved over time, which lets you check whether a new bug is genuinely new or simply a regression or oversight. It can also help you identify troublesome bugs that are hard to recreate or that only occur occasionally.
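For illustration, here is a minimal sketch (with invented run records; Functionize keeps this history for you in its Test Cloud) of how a complete run history lets you tell a regression apart from a step that never passed in the first place:

```python
# A toy illustration of using test-run history (invented records).
from dataclasses import dataclass
from datetime import date

@dataclass
class TestRun:
    run_date: date
    step: str
    passed: bool

# Hypothetical history of one test step across past runs.
history = [
    TestRun(date(2024, 5, 1), "submit address form", True),
    TestRun(date(2024, 5, 8), "submit address form", True),
    TestRun(date(2024, 5, 15), "submit address form", False),
]

def classify_failure(history, step):
    """A failing step that previously passed is a regression."""
    if any(run.passed for run in history if run.step == step):
        return "regression"
    return "never passed: likely a new or environmental issue"

print(classify_failure(history, "submit address form"))  # -> regression
```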
Don’t just take our word for it
This blog is all about how to trust AI. But we aren’t asking you to just trust what we say. We would rather you tried it out for yourself. Sign up for a free trial today and see smart testing in action. Or if you’re not ready just yet, book a live demo with one of our team.