5 Questions the VP of Quality Needs to Ask
Machine learning and AI have made quality assurance more sophisticated. QA leaders need to know what to prioritize and how to empower their teams.
Quality assurance (QA) is truly having a moment. In the early days, QA was seen as a simple checklist item: a natural step to confirm that coding and testing had been done correctly and reflected the client’s business requirements. But with today’s advances in machine learning, artificial intelligence and cloud computing, software requirements are broader and deeper than ever before. Automation has made testing easier. DevOps is breaking down age-old silos. Quality assurance has a stronger influence on user experience than ever before.
Test automation has solved several problems for software developers and boosted efficiency, scalability, and flexibility. But working out the right test automation approach is still a work in progress for many. As the VP of Quality, you may experience whack-a-mole days from time to time. Expand test coverage, then deal with test debt. Reduce manual effort in testing, only to reallocate that effort to test maintenance. It’s not easy balancing cost, time, and quality to make sure the final product is right. But when all the pieces fit, when everything clicks, when you meet your go-live date and your end users applaud your client’s offering in the market, that’s when you know you are doing your job well.
Leading the QA team comes with the responsibility of continually improving to meet market demands. You need to keep pushing your team to improve and go beyond “business as usual”. Here are 5 questions you need to ask to understand your status quo and devise a comprehensive QA strategy:
1. What goals have you defined this year?
If you were the one who made the business case for investing in test automation, ROI will always be top of mind. More importantly, your KPIs are directly or indirectly linked to customer experience. Your QA goals will likely revolve around effectiveness and efficiency.
As your organization progresses along the testing automation journey, look closer at the metrics that give you most value. Identify metrics from all application layers and stages to get a clearer, deeper view of how much test automation is helping your team.
Figure 1: Explore both regression and progression views for metrics that will add the most value.
What is the weakest link in your testing infrastructure today? Did test automation give your team a spike in speed only to get stuck with test maintenance further downstream? Or is automated testing taking attention away from the crucial manual testing needed to avoid bugs leaking into production? What’s the larger business imperative – cost-efficiency or time to market? Are leading indicators driving your activities, or are lagging metrics shaping your approach given the resources at hand?
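To make the leading-versus-lagging distinction concrete, here is a minimal sketch of two such metrics a QA team might track per release. The counter names (`bugs_found_in_qa`, `automated_cases`, and so on) are hypothetical illustrations, not fields from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class ReleaseStats:
    """Per-release QA counters (hypothetical names, for illustration only)."""
    bugs_found_in_qa: int
    bugs_found_in_prod: int
    automated_cases: int
    total_cases: int

def defect_escape_rate(s: ReleaseStats) -> float:
    """Lagging indicator: share of defects that leaked to production."""
    total = s.bugs_found_in_qa + s.bugs_found_in_prod
    return s.bugs_found_in_prod / total if total else 0.0

def automation_coverage(s: ReleaseStats) -> float:
    """Leading indicator: share of test cases that run unattended."""
    return s.automated_cases / s.total_cases if s.total_cases else 0.0

release = ReleaseStats(bugs_found_in_qa=42, bugs_found_in_prod=3,
                       automated_cases=180, total_cases=240)
print(f"escape rate: {defect_escape_rate(release):.1%}")        # 6.7%
print(f"automation coverage: {automation_coverage(release):.1%}")  # 75.0%
```

A rising escape rate tells you about last quarter; falling automation coverage warns you about next quarter. Tracking both views side by side is what the figure above calls regression and progression.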
Define, reinforce, and implement your QA goals. Then pursue them with the full force of AI-powered testing.
2. What resources are allocated to managing infrastructure?
As we adopt newer and more scalable technology strategies, it would be wise to review the roles and responsibilities of your development, testing and QA teams. Have a clear view of how new solutions are impacting your current resource allocation matrix.
Depending on where your organization is on its cloud journey, your DevOps resources could be spending a significant part of their day on tasks related to cloud implementation or infrastructure maintenance. They may be raising tickets for environment provisioning, or losing time waiting on follow-ups. They may be hand-holding indirect stakeholders through scope creep on cloud migration projects.
The CI/CD approach unlocks tremendous value at every layer of the technology stack. But that also means there are sense checks, quality checks and extra caution at every step (as there should be). As VP of Quality, it may serve you well to check how much time your resources are spending on creating or maintaining cloud infrastructure. If your teams are equipped with infrastructure-as-code (IaC), they are likely self-servicing their environments on demand, carefully making configuration changes and checking them into version control. Or you could turn to a modern cloud-based AI-powered solution that allows you to run as many tests as you need, when and where you need them.
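To illustrate the self-service pattern, here is a toy sketch of a version-controlled environment spec with a pre-provisioning sanity check. Everything here is hypothetical; in practice this role is played by Terraform, Pulumi or CloudFormation files living next to the application code:

```python
# Hypothetical declarative spec for an on-demand test environment.
# Checked into version control, so every change is reviewed and auditable.
TEST_ENV_SPEC = {
    "name": "regression-suite",
    "region": "eu-west-1",
    "instances": 2,
    "teardown_after_hours": 4,  # auto-destroy to keep cloud costs bounded
}

def validate_spec(spec: dict) -> list[str]:
    """Reject obviously bad specs before they reach the provisioning step."""
    errors = []
    if spec.get("instances", 0) < 1:
        errors.append("need at least one instance")
    if spec.get("teardown_after_hours", 0) <= 0:
        errors.append("environments must auto-expire")
    return errors

assert validate_spec(TEST_ENV_SPEC) == []
```

The point of the teardown field is the cost question raised above: an environment a tester can spin up in minutes should also disappear on its own, rather than consuming someone’s time (and budget) to clean up.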
3. How confident is your team about the accuracy of your test results?
Ensure you have strong diagnostic processes with continuous learning and improvement built in, and that those diagnostic habits are embedded in your team’s culture. What does failure look like when you are looking at your automated test results? Are there any recurring patterns? What workflows is your team following to get to the root of the problem?
A high incidence of failures in automated tests could indicate a number of underlying issues:
- The acceptance criteria are too narrow – are they aligned with the user story? Revisit business and design – does design reflect the business requirement accurately? Or has the inaccuracy popped up while defining the test case?
- The development team has checked in their build too soon. How are your developers incentivized? Do they prioritize speed over accuracy? Often the pressure to deliver quick releases can be traced back to business – but what business team would want their customers to experience technical issues? Or does your company reward your developers and testers for quantity rather than quality?
- The human element in testing has been underestimated. Yes, test automation works wonders for tests that have predictable, repeatable results. But as applications scale and CI/CD adds more features, it’s only natural that a step gets skipped occasionally. As we’ve learned time and time again, manual testing isn’t going away. Revisit design, code or test scripts to identify the test cases or criteria that need to account for human intervention.
Check in with your team regularly to gauge their confidence in the accuracy of test results. Often, many of the failures they see will actually be false positives. Using an AI-powered testing tool can eliminate most of these, allowing for greater confidence in test results.
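A simple heuristic for spotting likely false positives is to measure how often each test flips between pass and fail across recent runs. The sketch below uses invented test names and a threshold chosen for illustration:

```python
def flakiness_score(history: list[bool]) -> float:
    """Fraction of consecutive runs whose outcome flipped (pass <-> fail).
    A reliable test scores near 0; a flaky one drifts toward 1."""
    if len(history) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(history, history[1:]))
    return flips / (len(history) - 1)

runs = {
    "test_login":    [True] * 10,               # stable pass
    "test_checkout": [True, False] * 5,         # classic flake
    "test_search":   [True] * 7 + [False] * 3,  # a real regression?
}
suspects = {name: flakiness_score(h) for name, h in runs.items()
            if flakiness_score(h) > 0.3}
print(suspects)  # only the alternating test is flagged
```

Note that the test which passed seven times and then failed three times in a row is not flagged: a sustained flip to red looks like a genuine regression, while rapid alternation looks like noise, and the two deserve very different responses.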
4. Are your team’s skills mapped to their bandwidth?
An ideal QA tester demonstrates a perfect balance of the following skills:
- Understands business processes and customer journeys
- Has a grasp of programming logic
- Is a proficient project manager and knows how to resolve conflicts
- Has a respect for QA and software development history
- Is a naturally curious bug-hunter
You may currently be lucky enough to work with a perfect QA tester or analyst, but not every VP is so fortunate. A good yardstick of QA effectiveness and efficiency is how well your team’s skill sets are mapped to their bandwidth and resources.
Think about how you want to approach resource allocation. In many cases, QA activities are driven by an amped up development team that is pushing out releases to keep up with business goals. This calls for all hands on deck to make sure that releases are bug-free, but without compromising that attention to detail that makes QA actually assure quality. Consider hiring extra QA testers when your team is overloaded, or when your team needs specialized skills such as vulnerability testing or domain expertise.
In other cases, if your application logic is particularly complex, you may have hired testers with a background in development. With a strong foundation in programming, they are well equipped to write test scripts. However, if writing scripts has not been their core competency for a while, are they the right resource for scripted test automation? Or would they be more effectively deployed on test maintenance?
5. Do your business users have visibility of your test strategy and implementation?
Who is the most important stakeholder in your test strategy? Is it the end user? The account team? The tester? The designer or developer?
It’s worth noting that business users are specifically tasked with understanding the customer and translating their human needs into technical requirements. There is an obvious benefit in linking users across business, design, development and testing – but it may not be reflected in your processes. Ask your teams how much visibility business users have of testing. Is business involved at all in developing the strategy?
If business users had a view of what your QA team is testing, would their feedback add more value to the test results? And how would it impact continuous development? Would it waste your team’s time or make your testing process more efficient? If business users were equipped with tools that let them create their own tests, would it drive up the quality of the final product that goes to market?
Imagine a model where the load of quality assurance could be distributed across all the stages of the SDLC. If non-technical business users had access to test design and implementation, could that release resources from your QA team to focus on more value-adding technical refinement?
To summarize, quality assurance in the age of continuous integration and continuous delivery is a matter of continuous improvement. The more lean and circular software development becomes, the more layers quality assurance cuts across. QA leaders have to push their teams to think beyond the testing phase, upskill in new technologies and incorporate the entire continuous lifecycle into their testing approaches. Book a demo with us to learn more about how you can use machine learning and AI to empower your QA team.