In our previous blog, we began to set the scene for ‘Software Testing in the Cloud Era’, the pinnacle of which is ‘Chaos Engineering’. We also described it as ‘Cloud Scale Testing’, as it represents the fusion of hyperscale Cloud engineering and software testing practices.
In a blog for DevOps.com, Casey Rosenthal provides a brief introduction to Chaos Engineering and explores the key dynamics of the practice, in particular sharing a key insight: the difference between testing and experimentation. Testing is usually based on a very specific understanding of what is being tested and the anticipated outcomes, whereas, as the name suggests, Chaos Engineering is about experimenting with unknowns.
As we highlighted in the previous blog, pioneers like Netflix routinely instigate major outage scenarios to understand cascade effects and feed that learning back into their architecture.
Chaos Engineering represents the maturity pinnacle of Cloud engineering practices, and ultimately of software testing too. It would be unwise for any organisation to leap into it without an already well-developed Cloud capability, and the preceding steps form the maturity journey an organisation can undertake to evolve its testing function to its fullest potential.
For example, in the Cloud Era blog and other previous articles, we examine the fundamental intersection of testing and Cloud engineering: the domain of ‘infrastructure as code’ and the use of core building blocks like Kubernetes. When infrastructure provisioning is managed as software, software testing practices can be applied to it.
This sets the foundation for Chaos Engineering, namely expanding the scope of what testing addresses, evolving from testing only application code to validating the whole IT environment. In our blogs ‘Infrastructure as Code’ and Testing Infrastructure Code on AWS, we introduce the principal concepts and their application on platforms like AWS.
This includes tools like Terratest, a Go library that makes it easier to write automated tests for your infrastructure code. Yevgeniy Brikman explains it in this talk, providing a thorough walkthrough of how to write automated tests for infrastructure code, including code written for tools such as Terraform, Docker, Packer, and Kubernetes. Topics covered include unit tests, integration tests, end-to-end tests, dependency injection, test parallelism, retries and error handling, static analysis, property testing, and CI/CD for infrastructure code.
Matt Young describes the fundamental challenge of developing this capability in this blog, most notably cultivating an integrated toolchain that can handle all of these different layers and activities:
“Combining related software development tools in one package is a trend which results in a testing framework like Cypress for example, which is an extension of Selenium, JMeter, Jenkins, and Cucumber, sewn together like Frankenstein’s monster. Until a truly intelligent solution is adopted, we can claim that this is a necessary evil.”
This is where 2i is ideally positioned to support organisations seeking to progress along this path of testing maturity.
This journey requires a holistic understanding of a large, complex enterprise environment, including multiple technologies, departments and workflow interactions. 2i specialises in mapping this complexity and, from that, defining a DevOps blueprint that synthesises these elements to achieve faster throughput of successful code deployment.
2i can help organisations adopt these techniques and develop the qualities, practices and approaches their people need to form high-performing Agile Delivery Teams. As testing experts, we can help your organisation embed best practices throughout your DevOps life-cycle and infrastructure management.
Follow Us on LinkedIn
For more industry-leading insights and engagement with like-minded testing professionals, be sure to follow our 2i LinkedIn page.