4. Overall delivery health and alignment (‘North Star’ metrics)
At the outset, it is important to agree on a set of metrics that sets the direction for the whole delivery capability and creates a shared view of ‘what good looks like’ in software delivery.
The starting point for these ‘North Star’ metrics may be the popular DORA metrics or the Flow metrics. However, in our experience, a more customised set is often best: one that better reflects the current delivery capability, the stage of Agile DevOps maturity and the particular objectives of the technology leadership team.
It is also important to surface, at the Manager and Team levels, the ‘determinant’ metrics that drive improvement in the selected North Star metrics. For example, Deployment Frequency (a DORA metric) may be selected as a North Star metric, but there is little point in measuring it unless you also track and drive improvement in the related determinant DevOps metrics, such as Change Failure Rate (another DORA metric), Mean Build Time, Mean Failed Build Time and Mean Time to Recover from Build Failures.
When selecting ‘North Star’ metrics, it is helpful to keep two key questions in mind:
- Is our technology team focused on our highest priority/value-creating initiatives? i.e. is it aligned with our strategic goals?
- And if so, is our technology team delivering as effectively as possible?
Examples of suitable ‘North Star’ metrics that give an overall picture of delivery health and alignment to strategic goals are shown in Figure 4 below.
Lead Time to Value
Lead Time to Value (also known as Lead Time) is perhaps the core Agile delivery metric, as it measures the total time taken (in days) to deliver an increment of software, from design through to delivery to live. It therefore measures the SDLC (software delivery lifecycle) in its entirety and is a broader metric than Cycle Time, which measures the time taken from Dev start to delivery to live. It is a key metric because the fundamental aim of Agile software delivery is to deliver software ‘early and often’.
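As an illustration, the distinction between Lead Time and Cycle Time can be sketched in Python. This is a minimal sketch, not Plandek's implementation; the function names and dates are invented:

```python
from datetime import date

def lead_time_days(design_start: date, released: date) -> int:
    """Lead Time to Value: total days from design start to delivery to live."""
    return (released - design_start).days

def cycle_time_days(dev_start: date, released: date) -> int:
    """Cycle Time: the narrower measure, from Dev start to delivery to live."""
    return (released - dev_start).days

# Hypothetical story: designed 1 March, Dev starts 8 March, released 20 March
lt = lead_time_days(date(2024, 3, 1), date(2024, 3, 20))   # 19 days
ct = cycle_time_days(date(2024, 3, 8), date(2024, 3, 20))  # 12 days
```

Lead Time always spans at least as many days as Cycle Time, since it includes the design phase that Cycle Time excludes.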
Deployment Frequency
Deployment Frequency is another core Agile delivery metric as it measures the time between new deployments to live. It hence measures the ability of a team to deliver software ‘early and often’ and is a good overall measure of Agile DevOps maturity. Many studies have shown a correlation between customer satisfaction (NPS scores) and Deployment Frequency – the more regularly you deploy, the happier your customers.
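Measured as the time between deployments, this metric can be computed from a list of deployment timestamps. A minimal sketch, with invented timestamps:

```python
from datetime import datetime
from statistics import mean

def mean_days_between_deployments(deploy_times: list[datetime]) -> float:
    """Mean gap (in days) between consecutive deployments to live."""
    ordered = sorted(deploy_times)
    gaps = [(b - a).total_seconds() / 86400 for a, b in zip(ordered, ordered[1:])]
    return mean(gaps)

# Hypothetical deployments on 1, 3, 8 and 10 May
deploys = [datetime(2024, 5, d) for d in (1, 3, 8, 10)]
mean_days_between_deployments(deploys)  # gaps of 2, 5 and 2 days -> 3.0
```

A falling mean gap means a rising Deployment Frequency, i.e. the team is delivering ‘early and often’ more consistently.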
Escaped Defects
Escaped Defects is a good ‘North Star’ metric as it tracks the overall quality of the software delivered in terms of bugs that have been spotted in the live environment. It is important to closely track quality relative to Deployment Frequency and Lead Time to ensure that quality is not suffering as velocity increases.
Sprint Target Completion
Sprint Target Completion is another excellent ‘North Star’ metric for any delivery team following scrum Agile. It tracks the ability of individual teams (squads) to reliably complete their sprint goals. This is a key measure of the dependability of an organisation’s delivery capability. If teams cannot reliably deliver their (self-imposed) targets for short sprints (typically ten working days), then it is impossible to predict delivery milestones across multiple teams and longer time periods.
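The metric can be sketched as the share of a sprint's committed tickets that were actually completed. The ticket IDs below are invented, and real tools may weight by story points instead:

```python
def sprint_target_completion(committed: set[str], completed: set[str]) -> float:
    """Share of the sprint's committed tickets that were completed (0-1)."""
    if not committed:
        return 0.0
    return len(committed & completed) / len(committed)

rate = sprint_target_completion(
    committed={"T-1", "T-2", "T-3", "T-4"},
    completed={"T-1", "T-2", "T-3", "T-9"},  # T-9 was unplanned work
)
# rate == 0.75: three of the four committed tickets were delivered
```

Note that unplanned work completed mid-sprint (T-9 here) does not count towards the target, which is what makes the metric a test of dependability rather than raw throughput.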
Mean Time to Restore Service (MTTR)
Mean Time to Restore Service (MTTR) is one of the four DORA metrics and is an important measure of an organisation’s ability to track outages, resolve bugs and restore service after a system failure. This is clearly a key customer-centric metric.
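Given a record of when each failure began and when service was restored, the calculation is a simple mean. A minimal sketch with invented incident times:

```python
from datetime import datetime
from statistics import mean

def mttr_hours(incidents: list[tuple[datetime, datetime]]) -> float:
    """Mean hours from service failure to full restoration."""
    return mean((restored - failed).total_seconds() / 3600
                for failed, restored in incidents)

# Two hypothetical incidents: one restored in 2 hours, one in 4
incidents = [
    (datetime(2024, 6, 1, 9, 0), datetime(2024, 6, 1, 11, 0)),
    (datetime(2024, 6, 5, 14, 0), datetime(2024, 6, 5, 18, 0)),
]
mttr_hours(incidents)  # 3.0 hours
```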
Completed Tickets by key workstream
Completed Tickets by workstream or Value Stream is a key measure of a delivery team’s strategic alignment – i.e. whether resources are focused on the strategic priorities of the business (see Question 1 on page 11). Clearly, the aim is to maximise time spent on the things that matter, rather than losing time to non-strategic priorities and unplanned work.
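A simple view of this alignment is the percentage of completed tickets falling into each workstream. A sketch, assuming tickets are tagged with a workstream field (the tags below are invented):

```python
from collections import Counter

def throughput_by_workstream(tickets: list[dict]) -> dict[str, float]:
    """Percentage of completed tickets per workstream / Value Stream."""
    counts = Counter(t["workstream"] for t in tickets)
    total = sum(counts.values())
    return {ws: 100 * n / total for ws, n in counts.items()}

done = [
    {"id": "T-1", "workstream": "Checkout"},
    {"id": "T-2", "workstream": "Checkout"},
    {"id": "T-3", "workstream": "Checkout"},
    {"id": "T-4", "workstream": "Unplanned"},
]
throughput_by_workstream(done)  # {'Checkout': 75.0, 'Unplanned': 25.0}
```

Reviewing this split over time makes it visible whether delivery effort is drifting away from the stated strategic priorities.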
Value Creation vs. Technical Debt
A related Strategic Alignment metric is Value Creation vs. Technical Debt. As the name suggests, this tracks the proportion of resources and throughput (e.g. completed tickets) related to building new features relative to technical debt and bugs.
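As a proportion of throughput, the split can be sketched as follows; the ticket `type` labels are an assumption, since real trackers categorise work items in different ways:

```python
def value_vs_debt_split(completed: list[dict]) -> dict[str, float]:
    """Proportion of completed tickets building new features
    vs. servicing technical debt and bugs."""
    value = sum(1 for t in completed if t["type"] == "feature")
    total = len(completed)
    return {
        "value_creation": value / total,
        "tech_debt_and_bugs": (total - value) / total,
    }

split = value_vs_debt_split([
    {"id": "T-1", "type": "feature"},
    {"id": "T-2", "type": "feature"},
    {"id": "T-3", "type": "feature"},
    {"id": "T-4", "type": "tech-debt"},
    {"id": "T-5", "type": "bug"},
])
# split == {'value_creation': 0.6, 'tech_debt_and_bugs': 0.4}
```

There is no universally correct ratio, but a sustained drift towards debt and bugs is usually a signal worth investigating.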
Analytics tools like Plandek also enable you to track your selected ‘North Star’ metrics and the relationships between them, making it possible to check that a focus on one metric is not adversely affecting another.
The example below shows a ‘North Star’ dashboard metric overlay, which considers the relationship between Value Delivered (in Value Points), Complexity (measured in Story Points) and Escaped Defects. The aim is to maximise value delivered whilst minimising defects, and to ensure that time is not expended building ‘complex stuff’ that does not in fact deliver real value.