Charlie Ponsonby
Co-founder & CEO, Plandek
Value stream metrics to drive software delivery productivity
The increased cost of capital and a riskier economic environment in 2023 are changing how organisations think about technology investment.
As belts are tightened, CTOs are under pressure to do more with less and to provide clear evidence that their software delivery teams are delivering value as efficiently and predictably as possible.
As such, it is no surprise that Value Stream Management is seen as ‘the next big thing’ in DevOps. The concept is simple and borrowed from lean management principles: software delivery teams identify and map their key value streams (which in Agile terms often correspond to products), then use metrics and related process improvements to optimise the efficiency of those value streams and so accelerate the delivery of value to customers.
Metrics-led technology delivery – the size of the prize
Experience at Plandek across private and public organisations globally shows that significant improvement in delivery productivity is possible in a relatively short period (2-6 months).
Examples include:
- A 400% improvement in the Deployment Frequency of new features.
- A 55% reduction in Time to Value (the time taken to deliver an increment of software).
- An 80% improvement in Sprint accuracy – the key determinant of delivering new features in a predictable way.
- A 54% reduction in Escaped Defects to ensure that software quality improves whilst velocity is increased.
- A 20% improvement in Throughput relative to time spent.
These are substantive improvements in delivery effectiveness rather than marginal increases. They are achievable by any organisation willing to focus sustained effort on becoming metrics-led.
About Plandek
Plandek is an intelligent analytics and performance platform to help software delivery teams deliver valuable software faster and more predictably.
Plandek enables technology teams to track and drive their improvement and share understandable KPIs with stakeholders interested in accelerating value creation and improving delivery efficiency.
Plandek works by mining data from delivery teams’ toolsets (such as issue tracking, code repos and CI/CD tools) to provide actionable and intelligent insight across the end-to-end software delivery process.
Plandek is recognised as a top global vendor in the DevOps Value Stream Management space by Gartner and Forrester and is used by private and public organisations globally to optimise their technology delivery and accelerate R&D ROI.
For more information, please visit www.plandek.com
Key success factors in adopting a metrics-led approach
We have observed five characteristics of organisations that have achieved very significant improvement through data-led software delivery:
- The sponsorship and drive of technology leadership.
- The organisation of delivery around products or value streams.
- The recognition that this is NOT about measuring individuals in a ‘Big Brother is watching you’ way – instead, it is about measuring and optimising an end-to-end PROCESS (the software delivery lifecycle), from design all the way through to delivery to live.
- The selection of a ‘balanced scorecard’ (to borrow a management cliché) of software delivery metrics around which the organisation can align to optimise the process.
- The provision of intelligent insight and customised dashboards for Product Managers and Team Leads to empower them to drive the continuous improvement required across the SDLC to realise the improved outcomes.
Indeed, it is our firm belief that the metrics-led approach must be ‘loved by teams and relied on by managers’ to succeed.
A top-down approach that does not bring the whole organisation with it will not succeed in driving improvement and may result in the unintended consequences of alienated engineering teams and distrusted/ignored metrics.
Key metrics to drive software delivery productivity
When selecting the ‘balanced scorecard’ of metrics around which to align (which we call ‘North Star’ metrics), it is helpful to think about answering two critical questions related to efficient value delivery:
- Is our technology team focused on our highest-priority, highest-value initiatives – i.e. is it aligned with our strategic goals and roadmap?
- And if so, is our technology team delivering as efficiently as possible?
Strategic alignment metrics
Delivery analytics tools like Plandek are designed to make the first question above easy to answer. Tech leadership can consistently track key strategic alignment metrics such as:
- The proportion of time, resource and cost expended across key identified value streams or product areas, and how this is trending over time – i.e. are we expending effort on the things that matter? (This split is illustrated in the sketch after this list.)
- The balance between effort expended on technical debt and bug fixing versus building new features.
- The proportion of backlog accounted for by different value streams/strategic priorities and how this is trending over time – to see if we are keeping on top of our roadmap in critical areas.
- The relationship between value delivered and complexity (story points) delivered, and how this in turn relates to quality (defects), so that leadership can ensure effort is not wasted on complex work that does not deliver value efficiently or that generates quality issues.
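To make the first of these splits concrete, below is a minimal sketch of how an effort share by value stream might be derived from a simple issue-tracker export. It is an illustration under assumed field names (value_stream, story_points, completed) rather than a real Plandek or Jira schema.

```python
from collections import defaultdict
from datetime import date

# Hypothetical export of completed work items from an issue tracker.
# Field names and value-stream labels are assumptions for illustration only.
completed_items = [
    {"value_stream": "Payments", "story_points": 5, "completed": date(2023, 4, 12)},
    {"value_stream": "Onboarding", "story_points": 3, "completed": date(2023, 4, 20)},
    {"value_stream": "Payments", "story_points": 8, "completed": date(2023, 5, 3)},
    {"value_stream": "Tech debt", "story_points": 2, "completed": date(2023, 5, 9)},
]

def effort_share_by_stream(items):
    """Return each value stream's share of total completed story points."""
    totals = defaultdict(int)
    for item in items:
        totals[item["value_stream"]] += item["story_points"]
    grand_total = sum(totals.values())
    return {stream: points / grand_total for stream, points in totals.items()}

for stream, share in sorted(effort_share_by_stream(completed_items).items()):
    print(f"{stream}: {share:.0%} of completed story points")
```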
In addition to considering whether our delivery capability is aligned with our strategic priorities (which is essential to maximise delivery ROI), we can also consider the second question posed above: whether our technology team is delivering as effectively as possible.
Delivery productivity metrics
A typical ‘North Star’ metrics dashboard brings together a set of delivery capability metrics that objectively assess overall Agile DevOps maturity and delivery effectiveness.
As such, they represent operational KPIs that are relatable to the delivery and engineering team and which ultimately determine delivery productivity. Example metrics include:
Value Delivered
Value Delivered (R&D versus other) is a core measure of value output for organisations that measure the value of R&D effort: a fundamental indicator of an organisation’s ability to deliver software and of how that ability is changing over time.
Plandek enables companies to see where R&D resource is being allocated – the proportion of tech effort and value delivered against core roadmap work (R&D) versus, for example, unplanned work, infrastructure improvement and technical debt.
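As a rough illustration of the idea (not Plandek’s own calculation), the sketch below derives a monthly R&D share of effort from completed work items tagged with an assumed work-category field; the category labels are illustrative only.

```python
from collections import defaultdict
from datetime import date

# Hypothetical completed work items, each tagged with a work category.
# The categories ("roadmap", "unplanned", "tech_debt") are illustrative.
items = [
    {"category": "roadmap", "story_points": 5, "completed": date(2023, 4, 11)},
    {"category": "tech_debt", "story_points": 3, "completed": date(2023, 4, 18)},
    {"category": "roadmap", "story_points": 8, "completed": date(2023, 5, 2)},
    {"category": "unplanned", "story_points": 2, "completed": date(2023, 5, 6)},
]

def monthly_rnd_share(work_items, rnd_categories=("roadmap",)):
    """Share of story points spent on R&D/roadmap work, per calendar month."""
    rnd, total = defaultdict(int), defaultdict(int)
    for item in work_items:
        month = item["completed"].strftime("%Y-%m")
        total[month] += item["story_points"]
        if item["category"] in rnd_categories:
            rnd[month] += item["story_points"]
    return {month: rnd[month] / total[month] for month in sorted(total)}

for month, share in monthly_rnd_share(items).items():
    print(f"{month}: {share:.0%} of effort on R&D/roadmap work")
```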
Lead Time to Value
Lead Time is a core agile software delivery metric which tracks an organisation’s ability to deliver software early and often and provides a solid foundation upon which to assess where further investment will realise value most efficiently.
The concept of Lead Time is borrowed from lean manufacturing and captures the overall time to deliver an increment of software from initial idea through to deployment to live – i.e. the complete end-to-end software delivery life cycle (SDLC). As such, it is probably the first metric that the C-Suite should ask for to understand better how effectively a technology team is delivering.
The shorter the Lead Time, the higher the ‘velocity’ of the delivery team and the tighter the feedback loops. Hence, the quicker the organisation receives new features and can respond to customer needs. Again, this is a vital KPI when assessing technology delivery efficiency.
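For illustration, Lead Time can be approximated from two timestamps per work item: when the idea entered the backlog and when it reached live. The sketch below is a simplified example (not Plandek’s implementation) and reports the median and 85th percentile in days, since lead-time distributions tend to be skewed.

```python
from datetime import datetime
from statistics import median

# Hypothetical work items with the two timestamps that bound Lead Time.
# Field names are assumptions for illustration only.
work_items = [
    {"created": datetime(2023, 4, 3), "deployed": datetime(2023, 4, 14)},
    {"created": datetime(2023, 4, 5), "deployed": datetime(2023, 4, 28)},
    {"created": datetime(2023, 4, 10), "deployed": datetime(2023, 4, 21)},
]

def lead_times_in_days(items):
    """Elapsed days from backlog entry to deployment, per work item."""
    return [(i["deployed"] - i["created"]).total_seconds() / 86400 for i in items]

def nearest_rank_percentile(values, pct):
    """Simple nearest-rank percentile; adequate for a metric sketch."""
    ordered = sorted(values)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

days = lead_times_in_days(work_items)
print(f"Median lead time: {median(days):.1f} days")
print(f"85th percentile:  {nearest_rank_percentile(days, 85):.1f} days")
```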
Deployment Frequency
Deployment Frequency is another fundamental measure of Agile software delivery (or Agile DevOps maturity). A core objective of Agile delivery is the ability to rapidly develop and deploy small software increments to live. Deployment Frequency tracks that base competence and is a powerful metric around which to focus effort at all levels in the delivery organisation. Hence, it is another key KPI for a ‘balanced scorecard’ of technology delivery capability.
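A minimal sketch of the calculation, assuming a list of production deployment dates mined from a CI/CD tool, simply counts deployments per ISO week:

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment events; in practice these would be
# mined from CI/CD pipeline data. Dates are illustrative.
deployments = [
    date(2023, 5, 1), date(2023, 5, 2), date(2023, 5, 4),
    date(2023, 5, 9), date(2023, 5, 11), date(2023, 5, 15),
]

def deployments_per_week(deploy_dates):
    """Count deployments per ISO (year, week)."""
    weeks = Counter((d.isocalendar()[0], d.isocalendar()[1]) for d in deploy_dates)
    return {f"{year}-W{week:02d}": count for (year, week), count in sorted(weeks.items())}

for week, count in deployments_per_week(deployments).items():
    print(f"{week}: {count} deployments")
```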
Escaped Defects
Escaped Defects (defects found in live rather than caught before release) are a simple but effective measure of overall software delivery quality. They can be tracked in several ways, but most approaches involve tracking defects by criticality/priority.
Any analysis of delivery efficiency should include consideration of Escaped Defect rates as it is undesirable to nominally increase velocity and ‘productivity’ whilst reducing quality.
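As a simple illustration (using hypothetical field names rather than a real defect-tracker schema), Escaped Defects can be counted by filtering defect records to those found in production and breaking them down by priority:

```python
from collections import Counter
from datetime import date

# Hypothetical defect records; "found_in" marks where each defect was detected.
# Field names and priority labels are assumptions for this sketch.
defects = [
    {"priority": "P1", "found_in": "production", "raised": date(2023, 5, 2)},
    {"priority": "P3", "found_in": "staging", "raised": date(2023, 5, 4)},
    {"priority": "P2", "found_in": "production", "raised": date(2023, 5, 9)},
    {"priority": "P2", "found_in": "production", "raised": date(2023, 6, 1)},
]

def escaped_defects_by_priority(records):
    """Count defects that escaped to production, broken down by priority."""
    escaped = (d for d in records if d["found_in"] == "production")
    return Counter(d["priority"] for d in escaped)

for priority, count in sorted(escaped_defects_by_priority(defects).items()):
    print(f"{priority}: {count} escaped defects")
```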
Throughput
Throughput can be measured in several ways (e.g. story points or tickets) and is the best proxy measure of ‘output’ in the broadest sense. It can then be compared with input metrics such as time, headcount (of various types) and input costs (e.g. staffing cost).
Many engineering teams are reluctant to measure Throughput relative to proxy measures of input, but at a time of cost pressure (or when investment is increasing), it is a critical measure to consider.
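A minimal sketch of the idea follows, normalising monthly ticket Throughput by engineer headcount; the figures and field names are illustrative, and headcount is only one possible input proxy.

```python
# Hypothetical monthly figures: completed tickets (from the issue tracker)
# and engineer headcount (from resourcing data). Numbers are illustrative.
monthly = [
    {"month": "2023-04", "tickets_completed": 92, "engineers": 14},
    {"month": "2023-05", "tickets_completed": 105, "engineers": 15},
    {"month": "2023-06", "tickets_completed": 99, "engineers": 15},
]

def throughput_per_engineer(rows):
    """Throughput normalised by a simple input proxy (headcount)."""
    return {row["month"]: row["tickets_completed"] / row["engineers"] for row in rows}

for month, ratio in throughput_per_engineer(monthly).items():
    print(f"{month}: {ratio:.1f} tickets completed per engineer")
```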
Sprint Target Completion
‘Scrum Teams’ (also known as squads) and ‘Sprints’ are the basic building blocks of Scrum Agile software delivery. If Scrum Teams consistently deliver their Sprint goals (a ‘Sprint’ typically involving a two-week increment of work), Agile software delivery becomes relatively predictable.
On the other hand, if Scrum teams fail to deliver their planned sprint goals, then it becomes impossible to predict delivery outcomes across multiple teams and longer periods. Scrum team predictability (often referred to as ‘dependability’) is, therefore, a critical success criterion in Agile software delivery and should be assessed as part of a delivery productivity review.
High-performing Scrum teams will consistently have Sprint Target Completion rates over 85%.
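For illustration, Sprint Target Completion can be computed per sprint as the proportion of committed story points actually completed, compared against the 85% benchmark above. The sketch below uses hypothetical sprint records; teams may equally count committed tickets rather than points.

```python
# Hypothetical sprint records: story points committed at sprint planning
# versus points actually completed. Field names are illustrative.
sprints = [
    {"name": "Sprint 41", "committed_points": 34, "completed_points": 31},
    {"name": "Sprint 42", "committed_points": 30, "completed_points": 28},
    {"name": "Sprint 43", "committed_points": 32, "completed_points": 24},
]

TARGET = 0.85  # the >85% benchmark referenced above

for sprint in sprints:
    rate = sprint["completed_points"] / sprint["committed_points"]
    flag = "on target" if rate >= TARGET else "below target"
    print(f"{sprint['name']}: {rate:.0%} of committed points delivered ({flag})")
```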
The above are some examples of relevant metrics to consider when tracking delivery productivity. The key is to take a balanced set of metrics that consider software delivery holistically as a complex process. Many other metrics are commonly used (and can be surfaced through Plandek) to assess a technology delivery capability, including the DORA metrics and Flow metrics.
Final thoughts on adopting metrics to drive delivery efficiency
When the pressure is on, there is a natural tendency to de-prioritise improvement initiatives to focus on firefighting. Our experience shows that this is the most likely way to perpetuate or exacerbate the current issues.
The investment of time and effort in implementing a data-led approach to improved delivery efficiency is never regretted. It can reap rewards within a quarter.
It’s similar to never getting around to making that new hire that you don’t have time to interview for. It’s a false economy to put it into the ‘tomorrow pile’!