Kubernetes in Automated Software Testing
Kubernetes takes deployment and operation of applications one step closer to total automation. Learn more about Kubernetes automation.
Though many consider Kubernetes a technical developer topic, product and testing professionals should also examine how it relates to automated software testing.
Viewed from the cloud, Kubernetes automates the deployment and management of containers such as those created with Docker, a practice commonly known as “orchestration.”
Kubernetes takes deployment and operation of applications one step closer to total automation. If your objective is QA and testing, Kubernetes is the great overseer of the whole CI/CT/CD pipeline, because it effectively enables you to script every aspect of virtualization. For readers with conceptual knowledge of how containers work in this environment, this article examines Kubernetes’ role in automated testing, as well as why containers and virtual machines need another layer of scripting.
Kubernetes Signals Greater Testing Automation Integrations
Kubernetes’ impact on testing and advanced container orchestration heralds a maturation in the testing automation marketplace. In some ways, the evolution of testing automation is analogous to the transformation of GPS. GPS began with spartan handheld units that displayed basic positional data, such as longitude and latitude coordinates, alongside primitive maps. Years later, the majority of phones have integrated GPS, and its uses extend beyond navigation to features such as geotagging photos. Synthesizing different technologies signaled indisputable potential, but it took years of development and testing to determine the best way to interface their functionalities. A similar evolution continues in software application development tools, though it is far less visible to the market, and the benefit of investing in such tools is not as evident as it was with GPS. Kubernetes is a recent example of the trend to integrate developer tools and expand their benefits.
Life Without Kubernetes
Two arduous development tasks that inspired the evolution toward Kubernetes were setting up container clustering across multiple networks and automating deployment. Before Kubernetes, a long list of supporting services had to be scripted in-house, ad hoc, and at significant expense. These include:
- Load balancing
- Replicating applications
- Mounting storage
- Resource monitoring
- Distributing secrets
- Application operational status
- Logs and access
- Authentication
- Automatic scaling
- Naming and discovery
- Rolling updates
- Introspection and debugging
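In Kubernetes, several of the services above become declarative rather than hand-scripted. As a sketch (the names, image, and values here are illustrative, not from any particular deployment), a single Deployment manifest can request application replication, resource limits, and rolling updates:

```yaml
# Illustrative Deployment manifest: all names and values are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                  # application replication
  strategy:
    type: RollingUpdate        # rolling updates
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: nginx:1.25      # any containerized app image
        resources:
          requests:            # resource requests used for scheduling and monitoring
            cpu: 100m
            memory: 128Mi
```

Each bullet that once required ad hoc scripting maps to a few lines of declared intent, which Kubernetes continuously reconciles against the cluster's actual state.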
Kubernetes Load Balancing
As developer teams struggled to script the above services ad hoc, the need for a general solution arose, and diverse pieces began to assemble into what ultimately became Kubernetes. Consider, for example, how a deployed application is accessed from the Internet. The service facing the deployment has a virtual IP address within a Kubernetes cluster, but how do we expose it for access by other services and apps?
Kubernetes can configure load balancers automatically if it is running on Google Compute Engine, and in this manner the application becomes accessible. Otherwise, load balancing is time-consuming. Host machine ports can expose the app, but at the expense of negating other Kubernetes virtualization functionality and benefits: implementing ports on host machines leads to port conflicts when running multiple applications, and it complicates scaling clusters and replacing host machines. One solution is to set up a load balancer such as HAProxy, configured with a backend for each Kubernetes cluster. The Kubernetes clusters can then run inside a VPN on any cloud provider such as AWS, where an AWS Elastic Load Balancer routes web requests to the HAProxy cluster. It is easy to see, then, how such solutions arising in a developer community spread and eventually get absorbed into a mothership of tool integrators like Kubernetes, which is itself a spawn of the greatest absorber of all, Google.
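On a cloud provider that supports it, exposing a deployment can be as simple as declaring a Service of type LoadBalancer. The sketch below assumes a hypothetical app labeled `example-app`; the ports are illustrative:

```yaml
# Illustrative Service manifest: names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: example-app-lb
spec:
  type: LoadBalancer     # cloud provider provisions an external load balancer
  selector:
    app: example-app     # routes traffic to pods carrying this label
  ports:
  - port: 80             # port exposed externally
    targetPort: 8080     # port the container listens on
```

On bare metal without a cloud controller, the fallback is type `NodePort`, which exposes host machine ports and brings exactly the conflicts described above.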
Kubernetes Strategy
Automating the use of containers in app deployment is the core functionality of Kubernetes. Implementation starts by defining the nodes (the physical or virtual machines, such as AWS, Azure, or Google Cloud hosts) on which Kubernetes will manage your containers. Beyond nodes there are pods, the groups of one or more containers that Kubernetes orchestrates for you across the cluster.
Kubernetes spins up your app inside these containers and clusters; the agent that manages the containers on each node is known as the kubelet. A container behaves like a lightweight, abstracted virtual machine, and many can run simultaneously on a single physical device or virtual machine. Among the many advantages of containers is that they share the host kernel instead of carrying the device drivers and full guest OS that bog down a physical machine. Applications can be tested purely in cloud-based virtualized containers, eliminating the physical computers that would otherwise be necessary, reducing costs, and reclaiming office space. Docker is now the most popular provider of container technology, and from the developer’s point of view this concept is likely to replace virtual machines altogether.
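A pod, the smallest unit Kubernetes schedules onto a node, is itself declared in a short manifest. A minimal sketch, with a hypothetical app image, looks like this:

```yaml
# Minimal Pod manifest: the name and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: app-under-test
spec:
  containers:
  - name: app
    image: example/app:1.0   # illustrative image of the app under test
    ports:
    - containerPort: 8080    # port the app listens on inside the container
```

In practice you rarely create bare pods; higher-level objects such as Deployments generate and replace them, which is what makes replication and rolling updates automatic.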
In practice, containers are isolated environments that package an app together with the pieces of the OS it needs and run an instance of that app; a machine can therefore run as many containerized apps concurrently as its hardware can physically support.
To manifest this abstraction in the real world, enterprises can implement a web server like Nginx in a Docker container on a Linux-based server. Doing so requires considerably less scripting overhead than using traditional virtual machines (distinguishing containerization from full machine virtualization). Because the container packages the app with its runtime dependencies, the same containerized app runs unchanged on any compatible host. This example illustrates machine-independent virtualization of an app, which vastly improves performance and replication potential.
Google’s Container Management Tool
Every new solution is packaged with a new problem. Tracking containers is a complicated and essential problem when web resources are billed on CPU time and there are hundreds of apps running simultaneously; containerized and virtualized apps can be an expensive miracle. Kubernetes was conceived and designed to “orchestrate” containers, clusters, and deployments. One of the important features of Kubernetes is that it monitors “app health,” to use the new dramatic jargon. If a containerized app under test fails, Kubernetes will automatically reset the configuration and restart the app in a new container instance. Conversely, Kubernetes ensures that containerized apps shut down as scheduled to avoid overruns and the accidental replication that leads to wasted resources.
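The “app health” monitoring described above corresponds to Kubernetes probes and restart policies. A hedged sketch of a pod spec fragment (the image, path, and timings are illustrative assumptions):

```yaml
# Fragment of a pod spec: probe values and names are hypothetical.
containers:
- name: app-under-test
  image: example/app:1.0
  livenessProbe:              # kubelet restarts the container if this check fails
    httpGet:
      path: /healthz          # assumed health endpoint of the app under test
      port: 8080
    initialDelaySeconds: 10   # grace period before the first check
    periodSeconds: 5          # how often to check
restartPolicy: Always         # failed containers are restarted automatically
```

For testing, this means a crashed app under test is replaced in a fresh container instance without manual intervention, while scheduled teardown prevents the runaway billing the paragraph above warns about.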
Kubernetes in Testing
Now that development, testing, and deployment are fully integrated for all practical purposes, it is clear that progress in development strategy is matched by growth in QA and testing strategy. Kubernetes manifests this axiom with efficiency and alacrity. Likewise, as Functionize heralds the age of truly intelligent and autonomous software testing, its benefits will be reaped by developers, who are the first line of testing for their own code.
Combining related software development tools in one package is a trend that yields testing stacks in which tools such as Selenium, JMeter, Jenkins, and Cucumber are sewn together like Frankenstein’s monster. Until a truly intelligent solution is adopted, we can call this a necessary evil.
Microsoft and Amazon Web Services now support managed Kubernetes, which illustrates the gathering momentum behind automated containerization of app development. If Kubernetes automates containers today, then the next thing on the horizon should be a tool to automate Kubernetes.