Our previous blog Choosing the Right DevOps Metrics suggests an overall framework for monitoring the performance of the software development pipeline.
It’s helpful to also zoom in and look specifically at the function of testing within this pipeline, and to craft specific metrics there too.
Writing for DevOps.com, Alex Husar offers a great starting point for this. He begins with a key insight: testing can be guilty of focusing on how tests are performed rather than on the results they produce, so a team can score 100% test pass rates while the tests themselves are not rigorous enough to be meaningful.
Alex proposes five categories of metrics. The first two, user satisfaction and process performance (such as how long it takes to translate tasks into deployed code), could really be considered part of the broader DevOps framework.
With the other three he zooms in on testing specifically:
- Coverage metrics - the amount of code exercised by tests.
- Code quality metrics - an effective method for assessing quality, such as identifying legacy debt.
- Bug or incident metrics - an accurate system for handling and reporting incidents.
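As an illustration of the first category, here is a minimal sketch of how a line-coverage figure might be computed, assuming you already have counts of executed and total executable lines (real tools such as coverage.py derive these automatically):

```python
def coverage_percent(lines_executed: int, lines_total: int) -> float:
    """Code coverage: share of executable lines exercised by tests, as a percentage."""
    if lines_total == 0:
        return 0.0  # nothing to cover
    return 100.0 * lines_executed / lines_total

# e.g. a suite that hits 640 of 800 executable lines
print(coverage_percent(640, 800))  # 80.0
```

Note that a high percentage alone does not guarantee the tests make meaningful assertions, which is exactly Husar's point about results versus activity.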
What is important about this approach is that these metrics, while focused on testing, contribute to overall DevOps performance, yielding insights that can help improve both the quality and the throughput of software development.
Tricentis also provides a guide to developing testing metrics, offering a detailed system with the granular level of reporting suited to a large enterprise managing the complexity that comes with scale.
They break metrics down into two main types: Result Metrics, absolute measures of a completed activity or process, and Predictive Metrics, derived measures that act as early warning signs of an unfavourable result. They then populate these with 64 specific measures, such as the number of defects found in testing and the number of bugs found after shipping.
They also suggest metrics for coverage, quality and testing effectiveness, and, as they explain through various formulae, all of these data points can then be utilised to report on desired outcomes, including financial and business factors as well as software quality.
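One widely used effectiveness formula of this kind (a common industry definition, not necessarily Tricentis's exact one) combines the two defect counts mentioned above into a defect removal efficiency figure:

```python
def defect_removal_efficiency(found_in_test: int, found_after_release: int) -> float:
    """Percentage of all known defects that were caught before shipping."""
    total = found_in_test + found_after_release
    if total == 0:
        return 100.0  # no defects recorded at all
    return 100.0 * found_in_test / total

# 90 defects caught in testing, 10 escaped to production
print(defect_removal_efficiency(90, 10))  # 90.0
```

A falling value here is a predictive signal in Tricentis's sense: more bugs are escaping the test net before anyone has shipped a visibly bad release.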
This is particularly useful because large-enterprise CIOs need to quantify important business factors such as the total budget for testing and the cost per test, so that they can best manage the ROI and report to the board in a meaningful manner.
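A cost-per-test figure of this kind is simple arithmetic once budget and execution counts are tracked; the sketch below shows the idea (the figures are hypothetical):

```python
def cost_per_test(testing_budget: float, tests_executed: int) -> float:
    """Average cost of executing one test over a reporting period."""
    return testing_budget / tests_executed

# e.g. a 120,000 quarterly testing budget spread across 4,000 executed tests
print(cost_per_test(120_000, 4_000))  # 30.0
```

Tracked over time, a falling cost per test is one concrete way to show the board that automation investment is paying off.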
Furthermore, their approach addresses other complexity challenges, such as managing the changes made to large scale systems, where they define metrics such as the ‘Defect Injection Rate’ - the number of problems attributable to new changes made.
Knowing this number helps predict how many defects to expect per new change, enabling test teams to use retrospective meetings strategically to gauge their capacity for identifying and fixing defects introduced by new changes.
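The calculation behind this is straightforward; a minimal sketch, assuming the team records how many defects were traced back to changes and how many changes shipped:

```python
def defect_injection_rate(defects_from_changes: int, changes_shipped: int) -> float:
    """Defects attributable to new changes, per change shipped."""
    if changes_shipped == 0:
        return 0.0
    return defects_from_changes / changes_shipped

# 12 defects traced to 60 shipped changes
rate = defect_injection_rate(12, 60)
print(rate)  # 0.2

# forecast the defect load for a sprint planning 25 changes
forecast = rate * 25
```

That forecast is what lets a team sanity-check, in a retrospective, whether they have the capacity to absorb the defect load of their planned change volume.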
These insights can form the foundations of a high-performance software organisation.
In the State of DevOps 2021 report, more than 32,000 professionals contributed to research that identified four metrics for classifying teams as elite, high, medium or low performers based on their software delivery performance: deployment frequency, lead time for changes, mean time to restore, and change fail rate. Elite performers have such control over their environment that only 0-15% of their new changes cause failures, versus 16-30% for the others.
They also deploy new releases much more frequently and can do so in less than an hour. The metrics we described above provide the measurement and management framework needed to improve along these scales and mature from Low to Elite performance.
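To make the classification concrete, here is a rough sketch that checks a team against two of the four thresholds cited above (0-15% change fail rate, and delivery in under an hour); it is a simplification, since the full report scores all four metrics together:

```python
def is_elite(change_fail_rate_pct: float, lead_time_hours: float) -> bool:
    """Rough elite-performer check against two of the four DORA thresholds."""
    return change_fail_rate_pct <= 15 and lead_time_hours < 1

print(is_elite(10, 0.5))  # True  - low fail rate, sub-hour delivery
print(is_elite(20, 0.5))  # False - fail rate above the elite band
```

Even this crude check makes the point: a team cannot reach the elite band without the testing metrics above keeping its change fail rate under control.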
2i offers the DevOps expertise and consulting services to help organisations establish these frameworks, and our Test Automation service supports the implementation of the required technologies and practices to increase deployment throughput while also reducing error rates.