In our previous blog, we described ‘Algorithms for High-Performance DevOps’, a systematic formula for quantifying and improving software development throughput. Increasing that throughput is the principal goal of adopting a DevOps approach, delivering improvements such as faster time-to-market, greater responsiveness to customers, and predictable release timeframes.

Another great take on this comes from the DevOps vendor Tasktop, which describes a methodology of 'Flow Metrics' made up of four key variables:

  • Flow Velocity gauges whether value delivery is accelerating, also referred to as throughput.
  • Flow Time measures the time it takes for Flow Items to go from ‘work start’ to ‘work complete’, including both active and wait times.
  • Flow Efficiency is the ratio of active time vs. wait time out of the total Flow Time.
  • Flow Load monitors the number of Flow Items currently in progress (active or waiting) within a particular value stream.

These four Flow Metrics measure how value flows through a product's value stream. They are calculated over four Flow Items - units of work that matter to a business: features, defects, debt, and risk. Any task or effort a software delivery organisation undertakes can be categorised as one of these core Flow Items.
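
To make these definitions concrete, here is a minimal sketch in Python of how Flow Time, Flow Efficiency and Flow Load could be computed. The FlowItem structure, item names and timings are invented for illustration and are not Tasktop's data model; real figures would come from a work-tracking system.

```python
from dataclasses import dataclass

# Hypothetical Flow Item record; the type is one of feature, defect, debt or risk.
@dataclass
class FlowItem:
    name: str
    item_type: str        # "feature", "defect", "debt" or "risk"
    active_hours: float   # time spent actively being worked on
    wait_hours: float     # time spent queued between work steps
    completed: bool

items = [
    FlowItem("customer login page", "feature", active_hours=16, wait_hours=40, completed=True),
    FlowItem("payment rounding bug", "defect", active_hours=6, wait_hours=18, completed=True),
    FlowItem("framework upgrade", "debt", active_hours=24, wait_hours=30, completed=False),
]

for item in items:
    flow_time = item.active_hours + item.wait_hours     # Flow Time: 'work start' to 'work complete'
    flow_efficiency = item.active_hours / flow_time     # Flow Efficiency: active share of Flow Time
    print(f"{item.name}: flow time {flow_time}h, efficiency {flow_efficiency:.0%}")

# Flow Load: the number of Flow Items still in progress (active or waiting) in this value stream
flow_load = sum(1 for item in items if not item.completed)
print(f"Flow Load: {flow_load} item(s) in progress")
```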

This methodology is identical in principle to the Theory of Constraints approach we described, a management science developed in manufacturing to speed up the throughput of production lines. ‘Flow Velocity’ likewise seeks to quantify this throughput, gauging whether value delivery is accelerating by counting the number of Flow Items of each type completed over a particular period of time.
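
Flow Velocity then reduces to a count per Flow Item type over a reporting window. The sketch below assumes a simple, hypothetical completion log rather than any particular tool's records:

```python
from collections import Counter
from datetime import date

# Hypothetical completion log: (Flow Item type, date the item was completed)
completed_items = [
    ("feature", date(2021, 3, 2)),
    ("feature", date(2021, 3, 9)),
    ("defect",  date(2021, 3, 11)),
    ("risk",    date(2021, 3, 20)),
    ("feature", date(2021, 4, 1)),   # falls outside the March reporting window
]

window_start, window_end = date(2021, 3, 1), date(2021, 3, 31)

# Flow Velocity: number of Flow Items of each type completed within the period
flow_velocity = Counter(
    item_type
    for item_type, done_on in completed_items
    if window_start <= done_on <= window_end
)
print(dict(flow_velocity))  # {'feature': 2, 'defect': 1, 'risk': 1}
```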

Organisations need the ability to measure the entire system - end-to-end - to understand how value flows and where it is constrained, and most importantly, to correlate those metrics with desired business outcomes. This approach allows for continuous optimisation in the pursuit of delivering greater and greater value to the organisation, faster.

Tasktop describes the central challenge and the need for a new, better approach to capturing and reporting on productivity:

"While IT frequently collects and presents an abundance of technical metrics regularly, quite often they measure the process and not the outcome. Also referred to as ‘proxy metrics’, these metrics include things like story-points delivered, commits per project, test case coverage, build success rate, release duration, and deployments per day. Good measures of a process are helpful to monitor the performance of a specific activity, but they usually don’t help IT and business leaders optimise the system’s performance as a whole, from customer request to delivery. Furthermore, optimising something that is not the system’s bottleneck is actually inefficient and counter-productive."

Value Stream Mapping - Transforming Team Workflow

Tasktop also describes the use of Value Stream Mapping to chart and understand the critical steps in a specific process and to easily quantify the time taken and volume handled at each stage, identifying key constraints such as slow handoff interactions.

As they explain in this blog, the disconnects between systems and departments are typically the main source of errors, lost time and delays. These handoffs generate considerable wasted time and effort, slowing the DevOps Flow.
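
As a rough illustration of what such a map captures, the sketch below models a value stream as a list of stages with measured active and wait times (all stage names and durations are hypothetical), totals the lead time, and flags the handoff with the longest queue as the likely constraint:

```python
# Hypothetical value stream: (stage, active hours, wait hours before the next handoff)
value_stream = [
    ("analysis",          8, 24),
    ("development",      40, 72),   # waiting on a shared test environment
    ("testing",          16, 48),
    ("release approval",  2, 96),   # manual sign-off queue
    ("deployment",        1,  4),
]

total_active = sum(active for _, active, _ in value_stream)
total_wait = sum(wait for _, _, wait in value_stream)
lead_time = total_active + total_wait

# The handoff with the longest wait is the first candidate constraint to address.
constraint = max(value_stream, key=lambda stage: stage[2])

print(f"Lead time: {lead_time}h (active {total_active}h, waiting {total_wait}h)")
print(f"Overall flow efficiency: {total_active / lead_time:.0%}")
print(f"Likely constraint: handoff after '{constraint[0]}' ({constraint[2]}h of waiting)")
```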

Central to realising the benefits of a DevOps transformation is embracing and implementing new team and working models, not just new technologies.

DevOps also encompasses organisational and team practices, referring to the fusion of the previously distinct departmental functions of software development and IT operations, a separation that often leads to the kinds of challenges that silos usually create. It sets out to break down the artificial boundaries that proliferate in large, hierarchical organisations, and instead to self-organise around a 'delivery pipeline' of the work required to deploy code faster and with fewer errors.

IT Revolution, one of the leading experts in the field, captures these challenges and describes the transformation to new models very effectively, citing the ‘Inverse Conway Manoeuvre’ as the technique for designing DevOps team flows.

Conway published revealing research in the 1960s showing that organisational performance is directly related to the hierarchical department structures into which management choose to organise their teams. For example, cost-centric functional approaches, such as grouping software development and IT operations into their own departments, result in local optimisations but long overall lead times, caused by the bottlenecks that arise from slow handoffs between them.

Agile DevOps teams have instead focused on the end-to-end process required to deliver new software and organised around it, implementing ‘Business Capability Teams’ - multi-discipline teams that work together across the entire lifecycle. Martin Fowler closes the loop, describing how the approach goes hand in hand with the new Cloud Native software architecture, and Scott Prugh of AMC explores this transformation in detail in this slide deck.

2i Services

2i can assist organisations in adopting these techniques and in developing the qualities, practices and approaches your people need to form high-performing Agile Delivery Teams. As testing experts, we can help your organisation embed best practices throughout your DevOps lifecycle and infrastructure management.

This requires a holistic understanding of a large, complex enterprise environment, including its multiple technologies, departments and workflow interactions. 2i specialises in mapping this complexity and, from that, defining a DevOps blueprint that synthesises these elements to achieve faster throughput of successful code deployments.