Estimating testing time: a few useful guidelines
Need to set up a test schedule for a new application? Here’s advice from the experts on how to do it.
How do you accurately estimate how many QA engineers you need for a project? Like many of you, Karl Becker, a project team lead, was trying to calculate an answer. “We’re a small team without a dedicated QA staff—we basically just passed the QA hat around,” says Becker. “Recently, we hired a dedicated QA person and we’re trying to figure out if we should hire more.”
It turns out the most important part of Becker’s question isn’t how many but when. There’s a growing consensus about the need to shift left, that is, to move QA much earlier in the process.
“It’s an old-school mind-set that you don’t need to spend money on QA resources until the very end of a project,” says Robert Jackson, a gaming industry veteran and VP of Mobile Publishing. “That no longer works. You’ve got to budget QA resources from the beginning of the project and allow enough time for multiple rounds of testing.”
Jackson creates his product development budgets with an 80/20 ratio in mind. “If you’re an experienced manager, you should be able to accurately predict about 80% of what you’ll need before you start development,” he says. “But you’ve also got to plan for surprises, so assume 20% of your plan will change.” To accommodate this, Jackson recommends, “Plan for additional resources at ‘crunch time.’ The last thing you want is to go to upper management and say, ‘I need more resources.’”
Correctly budgeting for additional staff is a function of project complexity. At a bare minimum, you should have two to three full-time QA team members for any project. “At alpha build, you want at least a couple of fully dedicated QA resources to do a complete sweep of the product and to debug every issue before beta,” warns Jackson, with a heavy emphasis on at least. Be aware as you budget that you may need to add specialists to your team before release, such as software/hardware compatibility testers and load testers.
For a complex project, Jackson suggests you begin with a ballpark figure of at least one QA person for every six developers. That’s especially true if the project is going to take over a year to develop and includes extensive database or back-end communication.
Start QA staffing early in the schedule
It is essential that QA is an integral part of development from the beginning, not just a proofreading step at the end. “The more QA is involved, the better your product is,” says Jackson. “Fixing issues in the final two weeks of development just doesn’t work. More often than not, the project will be coming in ‘hot,’ so make sure you plan for additional resources at the end of the development.”
To that end, Jackson strongly recommends assigning a QA lead to attend the very first development meeting and to continue as an integral part of the regular review process. “The earlier in the process you start QA, the more knowledgeable your QA team is. You want them to become as expert on your project as the designers. This way, they’ll know what resources they’ll need at each stage of the development process.”
There is one key requirement for this QA lead, according to Jackson: Don’t pick someone who’s too junior. Instead, he encourages managers to look at this role as an opportunity to grow new leaders. “Find someone who is creative, enthusiastic, and has good communication skills but is also senior enough to assess the landscape of the project—put them in that first developer’s meeting, mentor them, and let them thrive.”
Quantify needs for upper management
Obviously, one clue you need to add more QA staff is that they’re so overworked that they threaten to quit – or they do. But before it comes to that, is there a more quantified way to anticipate future needs?
To paraphrase Pirates of the Caribbean, “The [answer] is more like guidelines than actual rules.” So welcome aboard the Black Pearl, as we attempt to give you some metrics.
First, to reassure you that you’re not alone: Two years ago, the Consortium for IT Software Quality (CISQ) published a 44-page report on The Cost of Poor Quality Software. As the report states:
“On average, software developers make 100 to 150 errors for every thousand lines of code…. Even if only a small fraction—say 10 percent—of these errors are serious, then a relatively small application of 20,000 lines of code will have roughly 200 serious coding errors.”
However, that estimate is based on a textbook written in 1996. There have been other estimates, some putting the error rate as low as one per 1,000 lines. Nevertheless, most of those estimates are also based on work done over a decade ago.
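Whatever rate you settle on, the arithmetic in the CISQ illustration is easy to reproduce. A minimal sketch (the 100-errors-per-KLOC rate and 10% serious fraction are the report’s example figures, not benchmarks for your team):

```python
def estimated_serious_errors(lines_of_code, errors_per_kloc=100, serious_fraction=0.10):
    """Rough count of serious defects, following the CISQ-style illustration.

    Defaults are the report's example figures; substitute your own
    historical rates where you have them.
    """
    total_errors = (lines_of_code / 1000) * errors_per_kloc
    return total_errors * serious_fraction

# The report's example: a 20,000-line application
print(estimated_serious_errors(20_000))  # 200.0
```

Swapping in a lower rate, such as one error per 1,000 lines, shrinks the result proportionally, which is exactly why pinning down your own team’s historical rate matters so much.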
The ideal approach is to look at similar projects in your own company to understand your team’s error rates, which is why it’s important to document results. If no comparable projects exist, discreetly ask around. Obviously, there’s a limit to how helpful people are willing to be—you’re not going to call up your biggest competitor and say, “Hey, we’re also working on a dating app.” But if you dig through trade articles and online forums, you can likely fill in some reasonable numbers.
So far we’ve discussed one metric: What’s it going to take to get the product out the door? But Marc S. Martin, a partner at Perkins Coie, suggests an even more important metric to consider: What’s the worst thing your product can do to the end-user?
Set a target error rate based on the criticality of the product. “A line of code driving a pacemaker is different from keeping your calendar, so a standard can and should vary,” says Martin.
Ask yourself questions like these: If we screw up, can someone die? If we screw up, can someone lose their home? If we screw up, will someone be behind on all their bills and unable to eat? If we screw up, will we trigger a class action lawsuit that will bankrupt the company? These may not be questions you can answer by yourself; they may require discussions with upper management or even your legal counsel.
Once everyone’s signed off on the target error rate, you can create your own algorithm:
- Based on similar projects, this project will require ~n lines of code.
- Based on similar teams, the error rate is likely to be ~n per 1,000 lines.
- Based on similar projects, our senior QA engineers catch ~n errors in an eight-hour workday. (Remember, the rate drops if they go into heavy overtime.)
- Based on similar projects, our junior QA engineers catch ~n errors in an eight-hour workday. (Again, the rate drops if they go into heavy overtime.)
- This project can tolerate no more than an n% error rate.
- This project is budgeted to take no more than ~n.
- Therefore, we need at a minimum n senior staff and n junior staff. Ideally, we should have n+.
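The bullets above amount to a simple capacity calculation. Here is one way to sketch it; the model (errors spread evenly, catch rates holding steady, seniors counted first and juniors filling the gap) and every input value are illustrative assumptions, not recommendations:

```python
import math

def qa_staff_needed(loc, errors_per_kloc, tolerable_pct,
                    senior_catch_per_day, junior_catch_per_day,
                    schedule_days, seniors_available):
    """Back-of-the-envelope QA staffing estimate from the bullets above.

    Assumes errors are spread evenly and catch rates stay constant
    (no heavy overtime, per the caveats above).
    """
    # Expected total errors from project size and historical rate.
    total_errors = (loc / 1000) * errors_per_kloc
    # We must catch every error beyond the tolerable residual rate.
    must_catch = total_errors * (100 - tolerable_pct) / 100

    # Capacity of the senior engineers already on staff.
    senior_capacity = seniors_available * senior_catch_per_day * schedule_days
    remaining = max(0.0, must_catch - senior_capacity)

    # Fill the remaining gap with junior engineers.
    juniors_needed = math.ceil(remaining / (junior_catch_per_day * schedule_days))
    return total_errors, must_catch, juniors_needed

# Illustrative inputs only (the ~n placeholders above): 200k LOC,
# 10 errors/KLOC, a 5% tolerable rate, a 60-working-day QA window,
# 2 seniors catching 8 errors/day, juniors catching 5/day.
total, must_catch, juniors = qa_staff_needed(
    200_000, 10, tolerable_pct=5, senior_catch_per_day=8,
    junior_catch_per_day=5, schedule_days=60, seniors_available=2)
print(f"~{total:.0f} errors expected; must catch {must_catch:.0f}; "
      f"add {juniors} junior QA engineers")
```

The point isn’t the precise output; it’s that every input is a number you can defend (or revise) in front of management, which makes the resulting headcount a conversation rather than a guess.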
Naturally, there’s going to be a lot of guesstimating. Still, at least it quantifies something, so you can go to senior management and say, “At a rough estimate, based on similar projects, we’re looking at 700 serious errors. You’ve only given me the resources to catch 400 of them before we release. Are you comfortable with that?”
Fight hard for your staff. Writing in InfoWorld, Dr. Bill Curtis, CISQ executive director, summed up what’s at stake for upper management: “When losses from IT malfunctions hit 5 or 6 digits, IT managers are at risk. When losses hit 7 or 8 digits, IT and line-of-business executives are at risk. When losses hit 9 digits, C-level jobs are at risk.”
Everyone fears legacy systems; it’s like pulling a rogue thread on an old sweater that’s apt to unravel. But the longer you leave legacy software alone, the worse the situation becomes. As this white paper demonstrates, Functionize can help.