How to do a product evaluation for enterprise software
Before your company commits to a new enterprise application, such as a programming or test tool, it makes sense to determine if the software truly meets the business’s needs. Make your “proof of concept” project a success by following these guidelines.
Imagine that your company is considering the purchase of an expensive new application to help in the company's software development. The right people established the criteria and whittled down the candidates to a handful of applications. One product in particular appears to be a match. The vendor’s demo looks great, and the feature list makes it sound perfect for your needs. So far, so good.
The next step is for the management team to arrange for an evaluation license so the company can determine whether the software is worth the hefty investment. The boss asks you to be in charge of the proof of concept (PoC) project.
So, how do you go about it? It helps to have a plan beyond “kick the tires.”
Not every corporate purchase requires a PoC. If software is inexpensive, the easy answer is, “Buy a copy for a small team and see what they think.” However, the plan needs to be more than “poke at it randomly” if judging an application’s value requires that it be deployed company-wide, particularly if it needs nontrivial integration with other enterprise software or if the software is used differently across departmental communities.
Your mission: Determine if this application is worth the investment.
PoC team: Assemble!
The boss may have put you in charge of the PoC project, but it is unlikely to be a solo endeavor. You need the right people at the table. Keep your team as small as possible, but not too small; the two-pizza rule is relevant (if two pizzas can’t feed the whole team, it’s too big).
First, identify people who are qualified to lead the implementation and to determine the tool’s value. For example, a team evaluating a DevOps and CI/CD tool should include developers, operations/administrators, and release managers, says Dan Goerdt, president of Flexagon.
Aim for a blended team that can articulate the problem holistically, recommends Henry, a senior manager in data consulting. “Include people who work to resolve problems, not people who raise problems,” he suggests.
Not everyone on the team needs to test the software in an everyday manner. But it’s important to get advisors and representation from other departments, such as enterprise architecture, networking, and security. Asks Madan Thangavelu, senior engineering manager at Uber: “Who at our company is going to maintain the installation of this? Are they on board with it?”
It’s common for a PoC project to include managers who think in budget terms. But be sure that the evaluation process includes its hands-on users. You wouldn’t deploy an expense-reporting tool only with Accounting’s input, after all; you’d also find out whether the tool helps ordinary employees get reimbursements sooner.
Who is apt to resist adoption? If you are evaluating a tool that users dislike but the enterprise needs (such as one intended to enforce security, compliance, or process controls), decide which executive or manager will own the potential blowback, says Joe, a manager at a tech consulting firm. Don’t wait until the end of the project to get their perspective.
Enlist the vendor’s help
Choose one person to serve as the single point of contact with the vendor, says Jamie Kurt, technical solutions architect at Functionize. (If you’re reading this essay, the champion is probably you.) A single contact minimizes miscommunication and orchestrates the technical side of the evaluation process. The champion’s responsibilities include working with the vendor on preparation tasks before the PoC starts, keeping participants on task, and rounding up the end results into a report with the final recommendation.
Schedule dedicated working sessions where the customer and vendor work together to configure and execute the PoC scope and scenarios, suggests Goerdt.
Ask the vendor upfront about expected participation, adds Joe. If you need unique expertise – say, the vendor’s security boffin – arrange for it early in the process.
“The champion can help get all the preparation work done before the PoC starts,” Kurt adds. That ensures nobody is standing around, waiting on setup issues to be resolved, for instance, and thus delaying the evaluation project.
One reason for the close communication between vendor and PoC champion is to streamline special circumstances. “You are likely to have some legacy systems that need integration to make this work,” Joe points out. “Consider opening up some budget for these services. It will mean both you and your vendor put more priority into a trial and will get a more honest evaluation.”
Don't feel as though vendor contact is an oddity. According to the Evans Data Developer Marketing survey, 26% of developers immediately explore a vendor representative’s recommendation when the vendor offers tutorials or other suggestions, and nearly 88% at least consider a vendor’s recommendations.
It also makes sense to confirm the budget with your own management. Ensure that the purchase is tentatively approved, assuming the project succeeds, says Kurt. “There is nothing worse than getting through a successful PoC and then at the end you find out the money is not available.”
Establish metrics up front
“Clearly document scope and success criteria,” says Goerdt. “This sets the stage for everything else.” Make sure the objectives and success criteria are concrete and verifiable. If you don’t know which measurements indicate success or failure, how can you know whether you’ve achieved them?
One way to go about that is to detail the current pain points – the issues that the new tool is meant to solve – along with how they affect the business. They should dovetail with the “shopping list” of product features, including the must-haves and the nice-to-haves.
Restating the problem definition may be a good starting point for designing the metrics and test methodology. Doing so helps executives and team members to align on goals and metrics.
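One lightweight way to make success criteria verifiable is to write them down as explicit thresholds before testing begins. Here is a minimal sketch in Python; the metric names, targets, and measured values are hypothetical examples, not prescriptions:

```python
# Hypothetical success criteria for a tool PoC, agreed on before
# testing begins so the outcome is verifiable, not a vibe.
SUCCESS_CRITERIA = {
    # metric name: (target, must_have?) -- lower measured values pass
    "median_build_minutes":   (10.0, True),
    "failed_deploy_rate_pct": (2.0,  True),
    "setup_days_per_project": (1.0,  False),  # nice-to-have
}

def evaluate(measured: dict) -> None:
    """Compare measured PoC results against the agreed criteria."""
    for metric, (target, must_have) in SUCCESS_CRITERIA.items():
        value = measured[metric]
        label = "MUST" if must_have else "NICE"
        verdict = "pass" if value <= target else "FAIL"
        print(f"[{label}] {metric}: {value} (target <= {target}) -> {verdict}")

# Example: numbers collected during the PoC (invented for illustration).
evaluate({
    "median_build_minutes":   8.5,
    "failed_deploy_rate_pct": 1.4,
    "setup_days_per_project": 2.0,
})
```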
This also has a less-obvious advantage: It may serve as a politically expedient way to decline an unwanted tool. In some enterprise organizations (oh, not yours, surely) a vendor may capture an executive’s attention, causing the boss to say, “Evaluate this thing.” It can be valuable to revisit the options in a presentation so you can genuinely respond, “This solves a problem that we don’t have.”
By the end of the project, says Kurt, you need to give your management team an ROI calculation showing how long it will take for the new software’s benefits to offset its cost. “So it is critical that you have metrics on your current processes (such as staff resources or time spent with current tools) you can later compare to the same metrics for the new software.” If you don’t think through those metrics ahead of time, you can’t collect the data, much less offer the comparison.
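The ROI arithmetic itself is simple once those before-and-after metrics exist. A back-of-the-envelope sketch, with every figure invented for illustration:

```python
# Hypothetical payback calculation, assuming you captured baseline
# metrics before the PoC. Substitute your own measured numbers.
annual_license_cost = 120_000    # vendor quote, per year
one_time_rollout_cost = 40_000   # integration, training, etc.

hours_saved_per_week = 60        # measured: current tooling vs. PoC
loaded_hourly_rate = 95          # average cost of an engineer-hour

annual_savings = hours_saved_per_week * loaded_hourly_rate * 52
net_annual_benefit = annual_savings - annual_license_cost
payback_months = one_time_rollout_cost / (net_annual_benefit / 12)

print(f"Annual savings:     ${annual_savings:,.0f}")
print(f"Net annual benefit: ${net_annual_benefit:,.0f}")
print(f"Payback period:     {payback_months:.1f} months")
```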
Set a reasonable testing period
You don’t want a PoC to drag on forever. Make sure you create a schedule with deadlines, including a hard stop.
Dedicate people who have the time and the skillset to appropriately evaluate the software, and let them know its priority. “If you know of other projects coming up that will pull your resources away, then delay until they can focus,” adds Kurt.
Set the tool test period to two or three runs of the software’s business cycle, says Joe. For example, if the software serves a monthly sales cycle, you need two or three months; if it's a quarterly financial close, you may need six to nine months.
How to evaluate software
A PoC project is no different from doing a Minimum Viable Product (MVP) evaluation, says Henry. He starts with requirements:
- What are the pain points you feel right now?
- Why do you think the new tool will help solve them?
- What are the things that the current tool does that the new tool also needs to do?
- What additional contexts need to be considered, such as security or networking?
Then consider the non-functional requirements (NFRs):
- What is a realistic workload? That might include transactions per second, concurrency, or other load characteristics; a minimal measurement sketch follows below.
- How does the new tool work from a resiliency and availability perspective? Is it easy to engineer it for the availability service level agreements (SLAs) you need?
- What's the impact if the new tool goes down? (Compare that to how often the old tool goes down.) If the software fails, how do you escalate and mitigate quickly?
“What’s the smallest amount of work that allows you to look at the previous bullet points? There's your PoC,” Henry concludes.
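Putting a number on “realistic workload” doesn’t require a commercial load-testing suite; even a tiny script beats guessing. A minimal sketch that measures throughput and latency against an HTTP endpoint (the URL, request count, and concurrency are placeholders; substitute a representative operation from your own business cycle):

```python
# Minimal throughput/latency probe for a PoC workload test.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "https://poc.example.internal/api/health"  # hypothetical
REQUESTS = 200
CONCURRENCY = 20

def timed_call(_):
    start = time.perf_counter()
    with urlopen(ENDPOINT, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_call, range(REQUESTS)))
elapsed = time.perf_counter() - t0

print(f"Throughput:  {REQUESTS / elapsed:.1f} req/s")
print(f"p50 latency: {latencies[len(latencies) // 2] * 1000:.0f} ms")
print(f"p95 latency: {latencies[int(len(latencies) * 0.95)] * 1000:.0f} ms")
```

Run the same probe at the expected load and again at a multiple of it, and the scalability conversation below becomes concrete.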
Most tools are designed for generic use cases, but they can fall short in handling custom environments. As you design the evaluation plan, identify areas where your organization’s environment or processes are unique. Look at how the software works at its reasonable, expected load – and also at the edge cases.
For example, says Thangavelu, test for scale. “If all our employees use it, will it fail just because it cannot handle the load from all our employees using it at the same time? What will happen if we grow 10x; will it still be the software of choice? Will it cost us a fortune if we use this software more?”
That doesn’t mean you should only sweat over the hardest use case (unless it is mandatory), says Thangavelu. Validate the product for the 80% use case first.
Also, consider vendor lock-in. If, after a year, the company decides not to continue using this tool, what are the options to get data out of the system?
And, obviously, document everything.
Draw your conclusions
You probably need to make a presentation at the end of the PoC project, or at least publish a report about the evaluation. The report usually includes some sort of score or qualification based on the initial objectives and other findings, as well as a financial section covering licensing costs and a comparison to other options (such as developing a custom product in-house).
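For the score itself, a weighted scorecard keeps the conclusion tied to the criteria established up front. A minimal sketch; the categories, weights, and 0-5 ratings below are invented for illustration:

```python
# Hypothetical weighted scorecard for the final PoC report.
scorecard = [
    # (criterion,             weight, score 0-5)
    ("Must-have features",      0.35,  4.5),
    ("Integration effort",      0.20,  3.0),
    ("Performance at load",     0.20,  4.0),
    ("Usability feedback",      0.15,  3.5),
    ("Vendor support quality",  0.10,  4.0),
]

assert abs(sum(w for _, w, _ in scorecard) - 1.0) < 1e-9, "weights must sum to 1"

for criterion, weight, score in scorecard:
    print(f"{criterion:<24} weight={weight:.2f}  score={score:.1f}")

total = sum(weight * score for _, weight, score in scorecard)
print(f"\nWeighted total: {total:.2f} / 5.00")
```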
But don’t forget a key issue: Do users like it? “Go back to the people you interviewed for the functionality and explain to them how it will be in the new world,” says Thangavelu. “Are they excited? If not, find out why.”
Pay attention to your own emotional investment, too; don’t make a Yes decision for the wrong reason. A PoC is easy when you can fail fast – that is, when you immediately determine that the software doesn’t do what you want. But, as one CIO explains, “If the trial went on for a long time to discover the problems, then it's much harder to say No, because you already sunk a lot of time into it.”
Keep in mind that evaluating a product isn’t a multiple-choice quiz that requires a 100% test score. “The tools’ cultural impact is really important for us,” explains Jeri, an IT manager. “Does it enforce the parts of the culture that we care about most? A big part of this comes down to giving things a try and getting as much feedback as possible.”
If the tool aligns with the team’s goals, it may still be worth the investment – particularly if the vendor is willing to work with you. Among the questions to ask, suggests Thangavelu, is, “What is the cost of adding a new feature to the software? Is it an option and is it needed?”
Jeri has worked with vendors based on shared values. “It’s a very privileged position to be in. But for us, we hope to ultimately choose the tools that make sense to the people who work here, and to be opinionated about what to trade off, and what to optimize for.”
Ultimately, a PoC can be a success when the customer and vendor work closely to plan and execute the process. Identify the problems you need to solve. Then try to design the smallest possible amount of work—in a representative environment—to demonstrate whether this tool would solve that problem. Measure it against key business performance indicators. If you follow that path, the conclusion usually will be self-evident.
In case you wonder why we think this process is relevant: Functionize delivers an autonomous, AI cloud-based testing platform. We’d like to think you’ll enjoy this product overview, which should convince you that the software can make your team’s life better.