How to Deal with the Challenges of Cross-Browser Testing
A deep dive into cross-browser testing, including the challenges QA teams face, such as frequent browser updates and the difficulty of automation.
Cross-browser testing is poorly understood by too many developers, testers, and managers. If you’re a QA manager, it’s understandable if you have the urge to emit a primal scream at the mere mention of it. Also known as browser compatibility testing, this type of testing can be quite frustrating for your developers, a monotonous tedium for your testers, and seemingly devoid of any glamor for you. Too often, cross-browser testing is given lower priority during tight schedules. Some software teams have historically made it simple by limiting their liability to a specific browser, but this is now quite unacceptable. Users expect an app to work in any browser, on any operating system, on any device.
Unless you’re developing an app for strictly internal use, it’s sensible to face the music and support all major browsers—which means that your team will need to build for—and test against—all major browsers. Yes, that is very sensible and prudent. But remember: your testers have their plates full with functional, regression, and automated testing. They are going to balk at duplicating the entire test plan to accommodate all browser/OS combinations. You could take a risk-weighting approach, but if your team lacks cross-browser testing experience then it can take a long time to properly identify and assess the risks. If only your testers had superpowers!
While all users will view broad browser compatibility as an expectation, successful cross-browser verification is the result of much hard work done by development and QA teams. In this article, we’ll look at what makes this conceptually simple goal so difficult to achieve, and how Functionize addresses the serious challenges of cross-browser testing.
What is cross-browser testing?
Testing web applications across all major browser/OS platform combinations is a daunting task. Your team must test all the things! Well, almost all things. To serve customers and users well, meeting the expectation of a consistent experience is unavoidable. User confidence and retention depend heavily on glitch-free performance across all targeted browser versions, hardware and device types, screen resolutions, and operating systems.
“Testing leads to failure, and failure leads to understanding.”
— Burt Rutan, designer of the Voyager and SpaceShipOne specialty aircraft
Cross-browser testing must go well beyond visual verification to include the browser-related business logic and unseen functionality. This often encompasses client- and server-side code, and involves a variety of different measures:
- Code validation — Ensure that your JavaScript and CSS validates completely across all of your target browsers.
- User interface — Verify that all aspects of the UI are in close alignment with the requirements and specifications.
- Operation — Check for consistent operation throughout all pages and popups. This includes links, panels, tabs, and navigation elements.
- Performance — Look for differences in some browsers with respect to UI or processing performance.
- Mobile — Are there discrepancies in presentation across the mobile browsers that you are targeting? It’s important to test for consistency with regard to orientation and resolution.
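To make the orientation and resolution checks above concrete, here is a minimal Python sketch that generates the viewport matrix a test run would need to cover. The device names and portrait viewport sizes are illustrative assumptions, not a recommended device list:

```python
# Illustrative portrait viewport sizes per device class.
# These names and dimensions are assumptions for the sketch.
PORTRAIT_VIEWPORTS = {
    "phone-small": (360, 640),
    "phone-large": (414, 896),
    "tablet": (768, 1024),
}

def viewport_matrix(devices):
    """Yield (device, orientation, width, height) for both orientations."""
    for device, (w, h) in devices.items():
        yield (device, "portrait", w, h)
        # Landscape is simply the portrait size with width and height swapped.
        yield (device, "landscape", h, w)

matrix = list(viewport_matrix(PORTRAIT_VIEWPORTS))
# Each device contributes two entries, one per orientation.
```

Each entry in the matrix would then be fed to whatever resizes the browser window or emulated device before re-running the presentation checks.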
The challenges of browser compatibility testing
Functionize considers these to be the top three challenges to cross-browser testing:
- It’s impracticable to sequentially test all browser-version-OS combinations
- Vendors update browsers quite often
- Cross-browser testing is difficult to automate on your own
There are many browser-version-OS combinations
Some people still maintain the notion that because a web app works in one browser, it should work equally well in all browsers. It is high time to banish that notion for good. Each browser is a highly complex application, and each type will have its own distinctive features, behaviors, quirks, and bugs—and it will render pages differently from other browsers.
Let’s say that your product team targets Chrome, Safari, Firefox, Opera, and Internet Explorer on Windows, macOS, and Linux. To those with little experience at cross-browser testing, this may seem to be a formidable undertaking, but it’s actually quite manageable:
- macOS: 4 Browsers
- Windows: 4 Browsers
- Linux: 3 Browsers
That’s a total of 11 browser types.
Since many users enable automatic updates for each of the browsers they use, let’s tentatively assume it would be safe to limit browser coverage to the latest version. (Read below for the challenge on frequent browser updates.)
Let’s extend our scope to include the latest version of each browser on each of the latest three operating systems:
- Windows 8: 4 Browsers
- Windows 8.1: 4 Browsers
- Windows 10: 4 Browsers
- OS X El Capitan: 4 Browsers
- macOS Sierra: 4 Browsers
- macOS High Sierra: 4 Browsers
- Ubuntu 17.04: 3 Browsers
- Ubuntu 17.10: 3 Browsers
- Ubuntu 18.04: 3 Browsers
This increases the total to 33 combinations.
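The combination count above is easy to reproduce programmatically. The sketch below mirrors the example matrix—the browser lists per platform family are taken from this article’s scenario and are assumptions about your actual targets:

```python
from itertools import product

# OS versions and the browsers available on each platform family,
# mirroring the example matrix above (an assumption, not a mandate).
PLATFORMS = {
    ("Windows 8", "Windows 8.1", "Windows 10"):
        ("Chrome", "Firefox", "Opera", "Internet Explorer"),
    ("OS X El Capitan", "macOS Sierra", "macOS High Sierra"):
        ("Chrome", "Safari", "Firefox", "Opera"),
    ("Ubuntu 17.04", "Ubuntu 17.10", "Ubuntu 18.04"):
        ("Chrome", "Firefox", "Opera"),
}

combinations = [
    (os_version, browser)
    for os_versions, browsers in PLATFORMS.items()
    for os_version, browser in product(os_versions, browsers)
]
# 3*4 + 3*4 + 3*3 = 33 combinations
```

Enumerating the matrix like this also makes it trivial to trim it later—for example, filtering out legacy platforms your analytics show no one uses.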
For many web-app teams, this is already a substantial list. But we could add more combinations that may be necessary for some product development teams, such as combinations that would accommodate the remaining 32-bit operating systems, different Linux distributions, older operating system versions, major plug-ins, and other common factors.
For illustration, consider that each combination would require about an hour of testing for a mid-size app using conventional methods. If your testing environment is not automated for parallel cross-browser testing, it would be necessary to endure 33 hours of browser testing—prior to any bug fixes. Remember, that is for browser compatibility testing only.
This is a good time to raise some questions on priorities:
- Is it necessary to support all browsers and all operating systems?
- Should all of the code validate entirely on all browsers and platforms?
- To what extent should you automate browser compatibility testing?
- Are there any legacy platforms that your team must support?
Take time to carefully consider the answers to these questions after examining your user base analytics to ensure that your decisions won’t disrupt or exclude customers.
Frequent browser updates
Automatic updates and short release cycles mean that the major browsers are no longer static software. At least every 8 weeks, a new browser version is likely to become available to many users—most of whom will not even realize when the browser has been updated. A browser update will add more features, more quirks, and more bugs. Keep in mind that browser vendors release their updates on different schedules. If you are targeting more than two browsers, this means it will be necessary to update your browser suite and retest at least once every 4-6 weeks.
Does this read much like a horror story? It is and will remain so. That is, until you find a way to automate with a solid solution that can help you manage it all effectively.
Cross-browser testing is difficult to automate on your own
Another challenge is automation. To anyone who has never made the attempt to automate cross-browser testing, it may seem to be an easy solution. Test early, test often, take shortcuts if necessary, rinse, and repeat. But the reality of in-house browser compatibility testing automation is something that can sober you up very quickly.
When evaluating test automation tools, the pickings are very slim on cross-browser capability. Commonly, browser test automation focuses on web page functionality. While this can be achieved using tools like Selenium or BrowserStack, it takes considerable effort to do it properly—and those solutions require plenty of development work to achieve good results. Then, you have to put in additional effort to handle cross-browser testing.
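For teams rolling their own automation, a common pattern is to drive the same test through a matrix of Selenium capability dictionaries against a shared grid. The sketch below only builds the matrix and iterates it—the grid host is a hypothetical placeholder, and the actual `webdriver.Remote(...)` call is shown as a comment because it requires a running Selenium Grid:

```python
# Capability matrix for a hypothetical in-house Selenium Grid.
# Browser/platform names follow Selenium's capability conventions.
CAPABILITY_MATRIX = [
    {"browserName": "chrome", "platformName": "linux"},
    {"browserName": "firefox", "platformName": "linux"},
    {"browserName": "safari", "platformName": "mac"},
]

GRID_URL = "http://selenium-grid.internal:4444/wd/hub"  # hypothetical host

def run_across_browsers(test_fn, matrix=CAPABILITY_MATRIX):
    """Run one test function against every capability set in the matrix."""
    results = {}
    for caps in matrix:
        # In a real suite this would open a remote session, e.g.:
        #   driver = webdriver.Remote(command_executor=GRID_URL, options=...)
        # Here we simply hand the capabilities to the test function.
        key = (caps["browserName"], caps["platformName"])
        results[key] = test_fn(caps)
    return results

results = run_across_browsers(lambda caps: "passed")
```

Even this thin wrapper hints at the maintenance burden: someone has to keep the matrix, the grid, and every browser version on it current as vendors ship updates.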
Many testers haven’t given much thought to automating visual verification. While it’s possible to do this by detecting changes in layout screenshots, it is tedious and very complex. To mention only one of the problems, a screenshot is fully dependent on the resolution at which it is taken, as well as the distinctive UI elements that a browser presents.
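To illustrate why naive screenshot comparison is brittle, here is a minimal pixel-diff sketch over raw pixel grids. Real tools use image libraries and must first normalize resolution and mask browser chrome—precisely the difficulties noted above—so this is a simplified model, not a production approach:

```python
def diff_ratio(baseline, candidate):
    """Fraction of differing pixels between two equal-sized pixel grids."""
    if len(baseline) != len(candidate) or len(baseline[0]) != len(candidate[0]):
        # Screenshots taken at different resolutions cannot be compared
        # directly -- the core weakness of naive visual testing.
        raise ValueError("resolution mismatch")
    total = len(baseline) * len(baseline[0])
    changed = sum(
        1
        for row_a, row_b in zip(baseline, candidate)
        for px_a, px_b in zip(row_a, row_b)
        if px_a != px_b
    )
    return changed / total

# A 2x2 "screenshot" where one pixel changed between builds.
old = [[0, 0], [0, 0]]
new = [[0, 0], [0, 1]]
ratio = diff_ratio(old, new)  # one of four pixels differs
```

In practice a threshold on this ratio decides pass/fail, and picking a threshold that tolerates legitimate per-browser rendering differences without masking real defects is the hard part.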
With respect to cross-browser testing, the biggest omission in nearly all automation tools is the ability to maintain a solid browser version inventory, assimilate all of the latest browser features and quirks, and keep up with all the new browser updates from all vendors. Any tool that claims to automate browser compatibility verification must have these core abilities. Check out our complete guide on cross-browser testing tools to discover some of the best tools in the market.
Can we really test them all?
The necessity of testing many combinations of browsers, versions, operating systems can be daunting—but it need not be so. With Functionize, a user can automate browser testing on any combination of browser or operating system—and most mobile devices. There is a bit of configuration, but the benefit is that you’ll be able to quickly execute your entire test suite across all browser versions.
Broadly parallel browser testing
Parallelization is a very strong feature in Functionize. Parallel browser testing gives your team the ability to test a large number of browser-version-OS combinations simultaneously. With parallel execution, you’ll enjoy much shorter test execution times by running tests for all target browsers at the same time—instead of running them overnight in one long sequence. Even if your current need is to test against only two browser types or versions, running them in parallel halves your testing time, doubling your productivity. Very simply, you’ll discover those inevitable bugs more quickly, so the team can begin immediate work on remedies and you can ship faster.
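The speed-up from parallel execution can be sketched with nothing more than a thread pool. The per-browser suite here is a stub that sleeps briefly to stand in for real test time, so the timings are illustrative rather than a claim about any particular tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

BROWSERS = ["chrome", "firefox", "safari", "edge"]

def run_suite(browser):
    """Stub for running the whole test suite against one browser."""
    time.sleep(0.1)  # stands in for real test execution time
    return (browser, "passed")

start = time.perf_counter()
# One worker per target browser, so all suites run simultaneously.
with ThreadPoolExecutor(max_workers=len(BROWSERS)) as pool:
    results = dict(pool.map(run_suite, BROWSERS))
parallel_time = time.perf_counter() - start
# Four 0.1 s suites finish in roughly 0.1 s instead of 0.4 s sequentially.
```

The same fan-out applies whether the workers are threads on one machine or, as in hosted solutions, dedicated virtual machines per browser.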
Functionize greatly reduces the burden
Functionize includes a highly accurate and incisive cross-browser engine, which is supported by its unparalleled data collection apparatus. Functionize analyzes every element and remembers every single thing.
Easily add browsers and Functionize handles everything else.
For virtually any web application, Functionize employs machine learning to autonomously identify and track hundreds of selectors for every element and discerns any application anomalies in real-time. There’s no complex configuration or setup. It’s as simple as adding the target browser types to your testing account and clicking a button to begin testing.
Details on browser runtime errors and browser-specific anomalies
Functionize gracefully handles any browser runtime errors and attempts to resume testing. If the same error occurs repeatedly, an alert is sent to the tester with details on the point of failure. When a problem is found in one browser, Functionize cross-checks to verify whether the same problem occurs in any of the other browsers. Whether or not the problem is found elsewhere, detailed reporting is available on the outcome and the results.
Highly flexible and scalable
In concert with Google's nested virtualization, Functionize offers you the flexibility to scale on demand, whenever you determine that it’s necessary. At any time, you can switch between running single-browser tests or parallel tests that will execute simultaneously across all target browsers—in a matter of minutes. For each of our customers, an entire virtual machine is dedicated to each target browser to enable high-speed parallel execution.
Functionize software architects have an extensive amount of testing tool experience and extensive industry knowledge. So, we declare it with the highest confidence: With all other solutions and in-house automation efforts, maintaining a cross-browser testing suite requires significant manual labor for all non-trivial web applications. Functionize alleviates many browser cross-compatibility headaches, so it’s likely to exceed the expectations of industry veterans who’ve had a long struggle with conventional approaches to browser compatibility testing. Get in touch with us so that we can answer any questions about how to improve your cross-browser testing.