Opening the technology ‘black box’: Software delivery KPIs that the C-suite should ask for
Organisations increasingly rely on their technology ‘department’ to deliver technology initiatives (e.g. websites, apps, billing systems etc.) that are critical to their future.
Despite this reliance, most senior managers outside technology have only a limited understanding of how technology projects (i.e. software) are delivered. As a result, they don’t know what questions to ask, dialogue remains constrained, and misunderstandings and frustrations proliferate.
This short paper aims to open the little-understood ‘black box’ that is technology delivery to enable a better relationship between technology teams and their senior internal stakeholders. As such, it raises two key questions:
- How can senior leadership outside technology better understand the technology delivery process?
- What simple metrics can be shared that will help internal customers outside technology better understand the efficacy of their software delivery colleagues?
This short paper answers both questions and should help improve the relationship between technology teams and their internal customers – a relationship that matters more than ever in a world where effective technology delivery is increasingly critical to success.
Some key background – ‘Agile’ software delivery
In the old days, technology teams delivered software projects in a similar way to construction teams. This methodology is known as ‘Waterfall’.
Owing to its similarity to a traditional house-build methodology, it is an approach that is easy for non-technology folks to understand and is summarised in the graphic below.
The problem is that the Waterfall methodology has some major drawbacks for software delivery, the key one being that projects are often delivered months – or even years – after they were first designed. Hence, these projects often do not meet the constantly changing needs of the internal customer. In some cases, the people who signed off the original spec may no longer even work in the business.
The creation of the Agile Manifesto
As a result, the technology world looked for a better way to function, and the ‘Agile Manifesto’ was born in 2001. It was conceived at a ski resort in Utah and written down as a short manifesto of four values, supported by 12 principles.
It addresses the key drawbacks of Waterfall and has been a hugely successful concept. It is now adopted (in some shape or form) by around 90% of organisations globally and is the de facto standard for software delivery.
It is critical that non-technology senior execs understand it: without that understanding, it is virtually impossible to have a sensible discussion with a technology delivery team.
The key principles of Agile are very different to the Waterfall methodology:
- Instead of one big team, we will split into efficient and motivated teams – or ‘squads’ – of 6-10 people
- Instead of delivering in one long increment of many weeks, we will work in short ‘sprints’ of around 2 weeks (so-called ‘Scrum Agile’)
- Instead of little contact with the client and testing at the end, we will test as we go and constantly engage with the client
- Instead of defining everything in detail up front, we will take on board changing circumstances and customer needs by defining additional increments as we go
- Instead of releasing everything in one go at the end, we will release ‘little and often’ – regular increments as we progress.
Hence, the core commitment in the Agile Manifesto is: “…the early and continuous delivery of valuable software.”
The key watch-out when going Agile
Agile software delivery has brought some huge benefits, hence its global adoption. However, it is not without its drawbacks – particularly at scale.
Its very nature means it can breed frustration with internal customers who are used to the Waterfall methodology’s perceived (and often illusory) clarity, which in theory promises a finished product on a particular date.
So the key watch-out that internal stakeholders need to understand is that:
‘In Agile, projects are not scoped in detail upfront, so committing to a delivery date of a ‘finished product’ is very difficult – instead, internal clients should expect regular delivery of small increments of progress.’
As a result, if a CMO asks, ‘Will we get the new app live in time for the start of Christmas trading?’ many Agile software teams will struggle to answer.
So what is a sensible set of metrics (KPIs) that internal clients should expect from Agile delivery teams so that they better understand each other?
Simple metrics the C-Suite should ask for
Fortunately, the Agile delivery process is easily measurable, as there is a rich digital footprint in the toolsets used across the process – from pre-development, through development, integration & deployment, and out into live software management.
As such, it is possible with Value Stream Management tools like Plandek to surface a set of metrics that track the end-to-end software delivery process and give the internal client a much clearer understanding of likely delivery timing, as well as confidence in the effectiveness of the technology delivery team itself.
Our six selected metrics focus on simple measures that reflect the core aims of Agile software delivery and are easy to understand inside and outside technology.
1. Lead Time
Lead Time is a core Agile software delivery metric which tracks an organisation’s ability to deliver software early and often. The concept of Lead Time is borrowed from lean manufacturing.
Lead Time refers to the overall time to deliver an increment of software, from initial idea through to deployment to live – i.e. the complete end-to-end software delivery life cycle (SDLC). As such, it is probably the first metric that the C-Suite should ask for to better understand how effectively a technology team is delivering.
The shorter the Lead Time, the higher the ‘velocity’ of the delivery team and hence the sooner the organisation receives new features.
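For readers who want to see the arithmetic, the sketch below illustrates how Lead Time might be computed from Ticket timestamps. The data is entirely hypothetical – in practice, tools such as Plandek mine these timestamps automatically from the delivery toolsets.

```python
from datetime import datetime
from statistics import median

# Hypothetical Tickets: (created date, deployed-to-live date).
tickets = [
    ("2024-01-02", "2024-01-10"),
    ("2024-01-03", "2024-01-20"),
    ("2024-01-05", "2024-01-12"),
]

def lead_time_days(created: str, deployed: str) -> int:
    """End-to-end days from idea (Ticket created) to deployment to live."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(deployed, fmt) - datetime.strptime(created, fmt)).days

lead_times = [lead_time_days(c, d) for c, d in tickets]

# The median is usually preferred over the mean, as a few long-running
# Tickets can badly skew an average.
print(median(lead_times))  # -> 8
```

Tracking the trend in median Lead Time, rather than a single snapshot, is what reveals whether the team is genuinely delivering ‘early and often’.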
2. Deployment Frequency
Deployment Frequency is another fundamental measure of Agile software delivery. A core objective of Agile delivery is the ability to develop and deploy to live small software increments rapidly.
As such, Deployment Frequency is another key KPI for the C-Suite: it tracks that base competence and is a powerful metric around which to focus effort at all levels in the delivery organisation.
3. Flow Efficiency
Flow Efficiency looks at the amount of time Tickets (being worked on by the technology team) spend in an ‘active’ versus an ‘inactive’ status.
Clearly, the less time they spend in an ‘inactive’ status, the more efficient the end-to-end process and the quicker software will be delivered. Typical opportunities to remove inactive bottlenecks include time spent with Tickets awaiting definition (e.g. Sizing) and Tickets awaiting QA (testing).
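The calculation itself is simple, as the sketch below shows. The workflow status names and hours are hypothetical – real workflows vary by team, and tools like Plandek derive these durations from Ticket status-change history.

```python
# Hypothetical hours a single Ticket spent in each workflow status.
ticket_status_hours = {
    "In Progress": 10,      # active
    "Awaiting Sizing": 16,  # inactive (a common bottleneck)
    "In QA": 6,             # active
    "Awaiting QA": 24,      # inactive (another common bottleneck)
}

ACTIVE_STATUSES = {"In Progress", "In QA"}

active = sum(h for s, h in ticket_status_hours.items() if s in ACTIVE_STATUSES)
total = sum(ticket_status_hours.values())

# Flow Efficiency = active time as a share of total elapsed time.
flow_efficiency = active / total
print(f"{flow_efficiency:.0%}")  # -> 29%
```

In this example, Tickets spend less than a third of their elapsed time being actively worked on – the ‘Awaiting’ statuses are where the improvement opportunity lies.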
4. Delivered Story Points
Delivered Story Points is often considered a problematic metric, owing to inconsistencies in how story points are estimated and how much effort they represent. However, as a basic measure of output – and of how that output changes over time – it is an effective measure of the technology team’s ‘throughput’, i.e. how much work is completed over a given period.
5. Escaped defects
Escaped Defects is a simple but effective measure of overall software delivery quality. It can be tracked in a number of ways, but most involve tracking defects by criticality/priority.
6. Sprint Target Completion
‘Scrum Teams’ (also known as squads) and ‘Sprints’ are the basic building blocks of Scrum Agile software delivery. If Scrum Teams consistently deliver their Sprint goals (a ‘Sprint’ typically being a two-week increment of work), Agile software delivery becomes relatively predictable.
On the other hand, if Scrum Teams fail to deliver their planned sprint goals, it becomes impossible to predict delivery outcomes across multiple teams and longer time periods. Scrum Team predictability (often referred to as ‘dependability’) is, therefore, a critical success criterion in Agile software delivery.
Sprint Target Completion is the basic measure of a Scrum Team’s ability to hit their self-imposed sprint goals – and hence their dependability. It is a simple metric: the percentage of the Tickets committed to at the start of a sprint that are completed by the end of that sprint.
High-performing Scrum Teams will consistently achieve Sprint Target Completion rates in excess of 85%.
When these simple Agile delivery metrics are viewed together, the C-Suite gets a well-balanced view of how effectively the technology team is delivering. The key is to look for improvement over time, as continuous improvement is another key principle of the Agile Manifesto.
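The Sprint Target Completion calculation can be sketched as follows. The Ticket IDs are hypothetical, and real implementations may measure by story points rather than Ticket count:

```python
# Hypothetical sprint: Tickets committed at sprint start vs completed by sprint end.
committed = {"T-101", "T-102", "T-103", "T-104", "T-105",
             "T-106", "T-107", "T-108", "T-109", "T-110"}
done = {"T-101", "T-102", "T-103", "T-104", "T-105",
        "T-106", "T-107", "T-108", "T-109"}

# Completed Tickets that were part of the original commitment, as a share
# of that commitment (scope added mid-sprint is excluded).
completion_rate = len(done & committed) / len(committed)

print(f"{completion_rate:.0%}")  # -> 90%
```

A team posting 90% here would sit comfortably above the 85% threshold that high-performing Scrum Teams consistently achieve.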
Technology teams have many other responsibilities that these metrics do not cover (e.g. managing the resilience of live systems). However, in terms of understanding their core raison d’être – which is ‘building new stuff’ (i.e. building software) – these metrics are an excellent place to start.
These software delivery metrics represent a balanced scorecard that addresses the key elements of Agile software delivery. Simply discussing them opens up a meaningful dialogue between stakeholders and the technology team in an area that is almost always vital for the future success of the organisation.
Plandek works by mining data from toolsets used by delivery teams (such as Jira, Git, CI/CD tools and Slack) to provide end-to-end delivery metrics, analytics and reporting to optimise software delivery predictability, risk management and process improvement.
Plandek is a global leader in this fast-growing field, recognised by Gartner as one of the top nine global vendors in its DevOps Value Stream Management Market Guide (published in September 2020).
Plandek is based in London and works with clients globally to apply predictive data analytics to deliver software more effectively.