As companies learn from each other’s tech disasters, the move to Agile has quickened. This has left traditional testing services in a tricky position.
In this case study piece, Adam Pettman breaks down how testing, an integral and pivotal part of the DevOps process, can be forgotten about, in some cases leading to major incidents, and why it should be implemented with more care and diligence by organisations that wish to launch their products smoothly.
Discover his thoughts on how these testing oversights happen and how you can stop them from slowing your business down.
The Disaster
It was April 2018 when, after more than 1.5 years of careful planning and preparation, TSB migrated billions of customer records onto a new IT system. The project was hailed as unprecedented, with more than 1,000 professionals participating in the run up to the migration.
But just 20 minutes after the switch, the first reports of issues started to come through. People’s life savings were missing from their accounts, or the values of purchases and past transactions were inaccurately displayed. Weeks of disruption caused enormous reputational and financial damage to the organisation. The failure cost the bank £330m, while 80,000 customers switched their accounts elsewhere.
And what was the root cause? A lack of rigorous testing.
This is by no means a unique example of a poorly executed IT transformation or migration project. Too often digital leaders, faced with pressures from senior leadership and hard-and-fast delivery deadlines, push ahead without an adequate time-frame dedicated to testing and the application of go-live criteria to prove production readiness. Where there are complex IT legacy systems involved, the risk is further compounded.
Tests are Pests
As the story of the TSB migration proves, testing is often perceived as an afterthought. It is the ‘troublemaker’ that comes in last, throwing up issues and causing delays to a project.
Testing is indeed designed to identify failures and inefficiencies, but, if conducted just before a go-live date, chances are it will flag issues that won’t get fixed because of time pressures.
The move to Agile has transformed this ‘last minute’ approach significantly – for the better. In Agile, a working product is delivered very quickly, with a succession of releases following iteratively. Testing of each release informs the development of the next version.
By embedding testing experts into Agile teams, testing is conducted in sprints throughout the software development life cycle, rather than at the last moment.
Testers in Agile teams generate feedback not only on test status, test progress, and product quality but also on process quality, driving the overall quality assurance and adding value to each stage of the software delivery.
Within an Agile team, every team member shares responsibility for product quality and performs test-related tasks (a ‘whole-team approach’). At the same time, testers within Agile teams need a broader skill set, including strong teamwork and communication skills.
Move Quick – Automate Lots!
The most critical enabler is making the most of your teams’ talent by eliminating time-sapping, repetitive tasks and reducing the need for traditional test resources.
There are several tasks that can be automated to enable an Agile approach, including most types of tests. By automating manual regression and smoke testing, testers’ time is freed up for more exploratory testing, and the delivery pipeline can move at an increased velocity. In an age where being first to market can make or break a product, automated testing allows you to arrive first without failing at the first sign of trouble.
Test automation requires an investment in people, technology and processes but it also requires discipline and expertise in testing that should be embedded into the approach and continuously improved.
Testers should be developing new automation skills and learning to use new tools and technology. They also need to work closely with developers to create the code that forms automated tests. These collaborative synergies in turn create more thorough tests that offer greater coverage.
Organisations of all sizes are generating significant ROIs as a result of test automation. For example, a large telecoms provider needed two weeks to run 8,000 tests using two full-time testers at a cost of roughly £7,000. After implementing automated testing, the same workload was completed overnight for the cost of a cup of coffee.
The implications for cost and speed of delivery in the long term are obvious, but there is another key consideration: the two testers now have more time to improve processes, add even more value to the delivery pipeline, and bring greater certainty of delivery in line with business expectations.
Dynamic Test Automation: The devil is in the data
Dynamic Test Automation is a prime example of test automation bringing additional benefits on top of cost savings and speed. Having QA engineers write the test code as the product is built ensures that incomplete, last-minute testing is never the blocker. Conventional test automation usually focuses on process, running through tests in a linear manner.
The dynamic approach instead creates test data and allows that data to dictate the flow of the testing. As the data arrives, it carries indicators that dictate which tests should run and what the expected outcome is, positive or negative. This ensures the solution is tested thoroughly, because any unknown dependencies are more likely to be revealed.
“If we always work through our automated tests in the order 1, 2, 3, 4, 5, we will never find out what happens if we test 1, 2, 3, 5, 4.”
This is much more likely to expose errors that would otherwise have been missed by traditional testers.
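To make the idea concrete, here is a minimal Python sketch of data-driven test dispatch; the field names, check names and data file are assumptions for illustration, not the actual implementation. Each data record names the checks to run and the expected outcome, and the records are shuffled so the execution order varies between runs.

```python
import json
import random

# Hypothetical checks keyed by the names the test data uses to request them.
CHECKS = {
    "validate_account": lambda r: r.get("balance", 0) >= 0,
    "validate_transaction": lambda r: "amount" in r,
}

def run_suite(records):
    random.shuffle(records)  # vary execution order to surface hidden dependencies
    for record in records:
        for check_name in record["checks"]:            # the data names the tests
            passed = CHECKS[check_name](record)
            expected_pass = record["expected"] == "positive"
            status = "PASS" if passed == expected_pass else "FAIL"
            print(f"{record['id']}:{check_name} -> {status}")

if __name__ == "__main__":
    with open("test_data.json") as f:                  # hypothetical data file
        run_suite(json.load(f))
```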
How do we create the Data that Drives the Tests?
Generating test data is often difficult and, in many cases, live data cannot be used within a test environment. Two approaches can work: anonymising live data or generating data from scratch.
Approach 1 – Anonymising
Client data can be anonymised by transforming live records into generic ones, ensuring that testing still covers realistic scenarios. Doing this at scale across relational databases is difficult, so the approach may not be applicable everywhere.
Example
Bob Smith, an accountant, becomes James Brown, a finance associate.
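A minimal Python sketch of this anonymisation step, using made-up field names and alias lists, might look like the following. Hashing the original name keeps the mapping consistent, which matters when the same person appears across related tables.

```python
import hashlib

# Hypothetical pools of generic replacement values.
GENERIC_NAMES = ["James Brown", "Priya Patel", "Alex Murray"]
GENERIC_ROLES = ["finance associate", "sales manager", "engineer"]

def anonymise(record):
    # Hash the original name so the same person always maps to the same alias,
    # keeping relational links between tables consistent.
    digest = int(hashlib.sha256(record["name"].encode()).hexdigest(), 16)
    return {
        **record,
        "name": GENERIC_NAMES[digest % len(GENERIC_NAMES)],
        "role": GENERIC_ROLES[digest % len(GENERIC_ROLES)],
    }

live = {"id": 101, "name": "Bob Smith", "role": "accountant", "balance": 2500}
print(anonymise(live))  # e.g. {'id': 101, 'name': 'James Brown', 'role': 'finance associate', ...}
```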
Approach 2 – Generating from Scratch
When generating test data from scratch, it is important to work closely with business stakeholders or Product Owners to ensure that all key scenarios are covered. Workshops to create rows of test data may seem time-consuming, but that time is recovered very quickly because it drastically reduces defects that would otherwise only be caught in User Acceptance Testing.
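As a rough illustration, a generator like the Python sketch below can turn scenario parameters agreed in those workshops into rows of test data. The field names and values here are assumptions, not any client’s actual schema.

```python
import csv
import itertools

# Hypothetical scenario parameters agreed with the Product Owner.
ACCOUNT_TYPES = ["current", "savings", "business"]
BALANCES = [0, 150.75, 1_000_000]

def generate_rows():
    # Cover every combination of the agreed parameters at least once.
    for i, (acct, balance) in enumerate(itertools.product(ACCOUNT_TYPES, BALANCES), start=1):
        yield {"id": i, "account_type": acct, "balance": balance}

with open("test_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "account_type", "balance"])
    writer.writeheader()
    writer.writerows(generate_rows())
```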
Break the Data
The two approaches above aim to show that a test should pass because the data is valid. The other side of this is when the data does not match the standards set out in the requirements, either because the data itself is inaccurate or because a scenario has not been followed correctly.
Adopting a generic way of breaking the underlying data offers a quick means of testing how the software handles exceptions, or whether it catches the issues at all.
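One way to do this, sketched in Python with hypothetical record fields, is to apply a small set of generic “breakers” (blank a value, inject junk, drop a field) to otherwise valid records and mark the result as expected to fail.

```python
import copy
import random

# Generic ways of corrupting a record; each takes the record and a field name.
BREAKERS = [
    lambda r, k: r.update({k: None}),             # missing value
    lambda r, k: r.update({k: "###INVALID###"}),  # wrong type / junk
    lambda r, k: r.pop(k, None),                  # field removed entirely
]

def break_record(record):
    broken = copy.deepcopy(record)
    key = random.choice(list(broken.keys()))
    random.choice(BREAKERS)(broken, key)
    broken["expected"] = "negative"  # tests fed this record should fail gracefully
    return broken

valid = {"id": 7, "account_type": "savings", "balance": 150.75}
print(break_record(valid))
```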
Applying the Theory – Case Study for an Eco-Tech Start Up
A theory isn’t worth anything until it’s been tested thoroughly. In this case, 2i partnered with a local start-up that is gaining international recognition for developing software with ambitious targets. Their entirely cloud-based infrastructure provided an ideal testing ground for Dynamic Test Automation.
2i engaged with everyone from the CTO to junior developers to embed UI automation into their delivery pipeline.
We built a bespoke Dynamic Test Automation solution on their cutting-edge tech stack.
Generate Test Data
Test data was generated from scratch using Python, producing a series of JSON files ready to upload through the App.
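The generator itself is not shown in the article, so the sketch below is only an illustration of the shape such a script might take; the field names and file naming are assumptions.

```python
import json
from pathlib import Path

def write_upload_files(out_dir="generated", count=5):
    """Write a batch of hypothetical JSON payloads ready to upload through the App."""
    Path(out_dir).mkdir(exist_ok=True)
    for i in range(1, count + 1):
        payload = {
            "submission_id": i,
            "site": f"site-{i:03d}",
            "readings": [{"metric": "energy_kwh", "value": 42.0 + i}],
        }
        path = Path(out_dir) / f"upload_{i:03d}.json"
        path.write_text(json.dumps(payload, indent=2))

if __name__ == "__main__":
    write_upload_files()
```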
Break the Data
2i created a Python script to randomly inject invalid values into the JSON files. These files are tagged in their file names as containing values that should cause tests to fail.
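Again as an illustration only (the real script is not shown), the corruption step might look something like this, with the “broken” tag carried in the output file name so the sad-path tests can pick those files up.

```python
import json
import random
from pathlib import Path

def break_files(src_dir="generated", dst_dir="broken"):
    """Corrupt a copy of each generated JSON file and tag it as broken in the name."""
    Path(dst_dir).mkdir(exist_ok=True)
    for src in Path(src_dir).glob("*.json"):
        data = json.loads(src.read_text())
        field = random.choice(list(data.keys()))
        data[field] = "###INVALID###"  # inject a value the App must reject
        dst = Path(dst_dir) / src.name.replace(".json", "_broken.json")
        dst.write_text(json.dumps(data, indent=2))

if __name__ == "__main__":
    break_files()
```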
UI Test Automation
Cypress was chosen as the automation framework to interact with the web app and drive the execution of the tests. All tests were written in a BDD (Behaviour Driven Development) format. Cucumber was used to support BDD, enabling test cases to be written using keywords while the implementation of each test is hidden. This helped business users create and understand high-level test cases.
Test cases were written to cover the happy and sad paths through the system. These check the functionality and end-to-end behaviour of the app, and also form part of the regression testing suite. The happy path is exercised with files containing valid data. Files tagged as “Broken” are used in the sad path, which checks that incorrect files cannot be submitted through the App.
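The actual feature files are not shown in the article; a hypothetical scenario pair in the keyword style described might read as follows.

```gherkin
Feature: Submitting data files through the App

  Scenario: Happy path - a valid file is accepted
    Given a generated file containing valid data
    When the file is uploaded through the App
    Then the submission is accepted and the data is displayed

  Scenario: Sad path - a broken file is rejected
    Given a file tagged as "Broken"
    When the file is uploaded through the App
    Then the App rejects the submission with a validation error
```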
Embedding in the Pipeline
The entire process of using Dynamic Test Automation to test the user interface of the App is embedded within the GitOps CI/CD pipeline. This ensures that the engineering team can continue to develop at an increasing velocity while having continued assurance that they are shipping a quality product.
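The article does not name the CI tool, so the GitHub Actions-style job below is purely an assumption of how the generation, breaking and Cypress steps could be chained on every push; the script names are hypothetical placeholders for the sketches above.

```yaml
# Hypothetical pipeline sketch: regenerate the test data, break copies of it,
# then run the Cypress/Cucumber suite against the App on every push.
name: ui-dynamic-tests
on: [push]
jobs:
  dynamic-ui-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python generate_test_data.py   # create valid JSON uploads (hypothetical script)
      - run: python break_test_data.py      # create "broken" counterparts (hypothetical script)
      - run: npm ci
      - run: npx cypress run                # execute the BDD test suite
```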
Where do we go from here?
Automation will continue to disrupt the quality assurance and testing industry, with more and more test cases being automated to increase the speed and extend the coverage of testing. Barriers to entry will continue to fall, with low-code or no-code drag and drop functionalities enabling simple development of automated tests. The efficiency and accuracy of testing will be further enhanced with the power of AI.
However, our experience shows that there will always be a need for quality assurance professionals and testers to provide expertise and oversight, improve the development process, and develop relevant scenarios and test case designs.
The perfect blend of strategic steering, carefully planned test cases and the right technology embedded in the delivery pipeline will define the quality assurance industry of the future.