The starting point
Organisations increasingly rely on their technology ‘department’ to deliver technology initiatives (e.g. websites, apps, billing systems) that are critical to their future.
Despite this reliance on technology teams to deliver, most senior managers outside technology have a limited understanding of the process of delivering technology projects (i.e. software). As a result, they don’t know what questions to ask, dialogue can remain constrained, and misunderstandings and frustrations may proliferate.
This short paper aims to open the little understood ‘black box’ that is technology delivery to enable a better relationship between technology teams and their senior internal stakeholders. As such, it raises two key questions:
- How can senior leadership outside technology better understand the technology delivery process?
- What simple metrics can be shared, that help internal customers outside technology better understand the effectiveness of their software delivery colleagues?
This short paper answers both questions and should help improve the relationship between technology teams and their internal customers – a relationship that matters more than ever in a world where success increasingly depends on effective technology delivery.
Some key background – ‘Agile’ software delivery
In the ‘old days’ technology teams delivered software projects in a similar way to construction teams. This methodology is known as ‘Waterfall’. Owing to its similarity to a traditional house build methodology, it is an approach that is easy for non-technology folks to understand and is summarised in the graphic below.
The problem is that the waterfall methodology has some major drawbacks for software delivery – the key one being that projects are often delivered months or years after they were first designed and hence frequently fail to meet the constantly changing needs of the internal customer. Indeed, the people who signed off the original spec may no longer even work in the business…
As a result, the technology world looked for a better way and the ‘Agile Manifesto’ was born in 2001. It was conceived in a ski resort in Utah and was written down as a short manifesto setting out four values and 12 supporting principles. https://agilemanifesto.org/
It addresses the key drawbacks of waterfall and has been hugely successful: it is now adopted (in some shape or form) by around 90% of organisations globally, making it the de facto standard for software delivery.
It is critical that non-technology senior execs understand it, as without that understanding it is virtually impossible to have a sensible discussion with a technology delivery team.
The key principles of Agile are very different to the waterfall methodology:
- Instead of one big team, we will split into efficient and motivated teams or ‘squads’ of 6-10 people
- Instead of delivering in one long increment of many weeks, we will work in short ‘sprints’ of around 2 weeks (so called ‘Scrum Agile’)
- Instead of little contact with the client and testing at the end, we will test as we go and constantly engage with the client; and crucially
- Instead of defining everything in detail up front, we will work more incrementally, taking on board changing circumstances and customer needs and only define a small increment upfront and keep defining additional increments as we go
- As such instead of releasing everything to live in one go at the end, we will release ‘little and often’ – regular increments as we progress.
Hence the core commitment in the Agile Manifesto: “…the early and continuous delivery of valuable software.”
Agile – The key watch-out
Agile software delivery has brought huge benefits (hence its global adoption) – but it is not without its drawbacks, particularly at scale. Its very nature can breed frustration among internal customers used to the perceived – and often illusory – clarity of the waterfall methodology, which in theory delivered a finished product on a particular date.
So the key watch-out that internal stakeholders need to understand is that:
‘In Agile, projects are not scoped in detail upfront, so committing to a delivery date of a ‘finished product’ is very difficult – instead internal clients should expect regular delivery of small increments of progress.’
As a result, if a CMO asks “Will we get the new app live in time for the start of Christmas trading?” – many Agile software teams will struggle to answer.
So what is a sensible set of metrics (KPIs) that internal clients should expect from Agile delivery teams so that they better understand each other?
Simple metrics for shared understanding – the metrics the C-Suite should ask for
Fortunately the Agile delivery process is easily measurable as there is a rich digital footprint in the tool-sets used across the process – from pre-development; development; integration & deployment; and out into live software management.
As such, it is possible with Value Stream Management tools like Plandek to surface a set of metrics that track the end-to-end software delivery process and give the internal client a much clearer understanding of likely delivery timing and confidence in the effectiveness of the technology delivery team.
Our six selected metrics focus on simple measures that reflect the core aims of Agile software delivery and are easy to understand inside and outside technology.
Lead Time is a core Agile software delivery metric which tracks an organisation’s ability to deliver software early and often. The concept of Lead Time is borrowed from lean manufacturing.
Lead Time refers to the overall time to deliver an increment of software from initial idea through to deployment to live – i.e. the complete end-to-end software delivery life cycle (SDLC). As such it is probably the first metric that the C-Suite should ask for to better understand how effectively a technology team is delivering.
The shorter the Lead Time, the higher the ‘velocity’ of the delivery team and hence the sooner the organisation will receive new features.
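As a sketch of how Lead Time is typically calculated (the ticket data and date format here are hypothetical – in practice a tool like Plandek mines these timestamps from the delivery toolset):

```python
from datetime import datetime

def lead_time_days(created: str, deployed: str) -> int:
    """Days elapsed from the ticket being raised to its deployment to live."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(deployed, fmt) - datetime.strptime(created, fmt)).days

# Hypothetical tickets: (date idea was raised, date deployed to live)
tickets = [("2024-01-02", "2024-01-12"), ("2024-01-05", "2024-01-25")]
lead_times = [lead_time_days(c, d) for c, d in tickets]
average_lead_time = sum(lead_times) / len(lead_times)
print(average_lead_time)  # 15.0 days across the two tickets
```

Teams usually track the average (or median) Lead Time per sprint or per month, looking for the trend to fall over time.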
Deployment Frequency is another fundamental measure of Agile software delivery. A core objective of Agile delivery is the ability to develop and deploy to live small software increments rapidly.
Deployment Frequency tracks that base competence and is a powerful metric around which to focus effort at all levels in the delivery organisation. Hence it is another key KPI for the C-Suite.
Flow Efficiency looks at the proportion of time tickets (being worked on by the technology team) spend in an ‘active’ versus ‘inactive’ status. Clearly, the less time they spend in an ‘inactive’ status, the more efficient the end-to-end process and the quicker software will be delivered. Typical opportunities to remove inactive bottlenecks include time tickets spend awaiting definition (e.g. sizing) and awaiting QA (testing).
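The calculation itself is straightforward once time-in-status data is available from the workflow tool. A minimal sketch, assuming a hypothetical breakdown of hours per status and an assumed choice of which statuses count as ‘active’ (this should match the team’s own workflow definitions):

```python
def flow_efficiency(status_hours: dict) -> float:
    """Fraction of a ticket's total cycle time spent in 'active' statuses.

    status_hours maps a workflow status name to the hours the ticket
    spent in that status. Which statuses are 'active' is an assumption
    here and will vary by team.
    """
    active_statuses = {"In Progress", "In Review"}  # assumed 'active' set
    active = sum(h for s, h in status_hours.items() if s in active_statuses)
    total = sum(status_hours.values())
    return active / total if total else 0.0

# Hypothetical ticket: most elapsed time is spent waiting, not working
ticket = {"To Do": 40, "In Progress": 16, "Awaiting QA": 30, "In Review": 4}
print(round(flow_efficiency(ticket), 2))  # 0.22 – i.e. 22% flow efficiency
```

Low percentages like this are common in practice, which is exactly why Flow Efficiency is a useful lens: it points directly at the waiting states to attack first.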
Delivered Story Points
Delivered Story Points is often considered a problematic metric due to the potential inconsistencies in the calculation of story points and how much effort they represent. However, as a basic measure of output and how that is changing over time, it is the most effective measure of ‘throughput’ of the technology team – i.e. how much work they are completing over a time period.
Escaped Defects is a simple but effective measure of overall software delivery quality. It can be tracked in a number of ways, but most involve tracking defects by criticality/priority.
When these simple Agile delivery metrics are viewed together, the C-Suite can get a good balanced view of how effectively the technology team is delivering. The key is to see improvements over time – as continuous improvement is another key principle of the Agile Manifesto.
Sprint Target Completion
‘Scrum Teams’ (also known as squads) and ‘Sprints’ are the basic building blocks of Scrum Agile software delivery. If Scrum Teams consistently deliver their Sprint goals (a ‘Sprint’ typically involving a two-week increment of work), Agile software delivery becomes relatively predictable.
On the other hand, if Scrum teams fail to deliver their planned sprint goals, then it becomes impossible to predict delivery outcomes across multiple teams and longer time periods. Scrum team predictability (often referred to as ‘dependability’) is therefore a critical success criterion in Agile software delivery.
Sprint Target Completion is the basic measure of a Scrum Team’s ability to hit their self-imposed sprint goals – and hence their dependability. It is a simple metric calculated as the percentage of tickets completed within a sprint from the tickets that started the sprint.
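The definition above can be sketched directly in code. The ticket identifiers below are hypothetical; the calculation simply compares the set of tickets that started the sprint with the set completed by its end:

```python
def sprint_target_completion(started: set, completed: set) -> float:
    """Percentage of the tickets that started the sprint which were completed in it."""
    if not started:
        return 0.0
    return 100 * len(started & completed) / len(started)

# Hypothetical two-week sprint
started = {"APP-1", "APP-2", "APP-3", "APP-4"}
completed = {"APP-1", "APP-2", "APP-3", "APP-5"}  # APP-5 was pulled in mid-sprint
print(sprint_target_completion(started, completed))  # 75.0
```

Note that work added mid-sprint (APP-5 here) does not raise the score – the metric deliberately measures delivery against the goals the team set itself at sprint planning.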
High-performing Scrum teams will consistently have Sprint Target Completion rates in excess of 85%.
Technology teams have many other responsibilities that these metrics do not cover (e.g. managing the resilience of live systems) – but in terms of understanding their core raison d’être, which is ‘building new stuff’ (i.e. building software), these metrics are an excellent place to start.
They represent a balanced scorecard that addresses the key elements of Agile software delivery and just discussing them starts to open a meaningful dialogue between stakeholder and technology team in an area that is almost always vital for the future success of the organisation.
Plandek works by mining data from toolsets used by delivery teams (such as Jira, Git, CI/CD tools and Slack), to provide end-to-end delivery metrics, analytics and reporting to optimise software delivery predictability, risk management and process improvement.
Plandek is a global leader in this fast-growing field, recognised by Gartner as a top nine global vendor in their DevOps Value Stream Management Market Guide (published in Sept 2020).
Plandek is based in London and works with clients globally to apply predictive data analytics to deliver software more effectively.