16. Case Study 3: Improving software delivery predictability

The client context: This highly successful provider of SaaS accounting software uses Plandek as a key element of their Value Stream Management across their distributed software delivery teams in multiple locations. Technology leadership and the delivery teams themselves use Plandek’s customisable dashboards to track and continuously improve end-to-end delivery metrics.

A key focus for the client is the dependability of its scrum teams. Enabling teams to reliably meet their sprint goals over each two-week sprint is seen as a key building block of dependable software delivery. With multiple teams working on complex product workstreams over many weeks, any individual team's failure to regularly meet its sprint delivery goals would quickly result in highly unpredictable programme increment outputs.

Using a variety of delivery and engineering metrics available within the Plandek platform, the teams drove a number of process improvement initiatives and improved sprint delivery accuracy from c70% to >80% over a six-month period (a relative increase of c15%).

 

Introduction – the client’s context

The client operates a scrum Agile software delivery capability onshore and offshore across multiple workstreams.

The company is metrics-led and adopted Plandek as the group-wide solution to surface end-to-end software delivery metrics across all teams, greatly improving visibility between teams and thereby helping to:

  • reduce software delivery risk (improve delivery dependability);
  • improve software delivery productivity and quality;
  • demonstrate the success of their Agile transformation with a balanced scorecard of improving metrics over time.

 

Plandek was first adopted in early 2020, and the Plandek Customer Success team have since worked closely with the client to help them create and embed customised dashboards for all teams (squads), Delivery Managers and technology leadership.

Scrum team dependability, reflected in a team's ability to reliably deliver its sprint goals over a two-week sprint, was quickly identified as a high priority, as critical delivery milestones (linked to key commercial business objectives) were approaching.

The question was therefore raised: “Which metrics should we look at during our sprint retrospectives and daily stand-ups to help ensure that we meet our sprint goals?”

For Scrum teams, there are a few key areas that determine both short- and long-term success. The client’s aim was not only to meet their current sprint goals but also to build and maintain healthy patterns of work and collaboration that will lead to future success.

Working alongside the Plandek Customer Success team, the key questions asked about sprint performance were:

  • Are we able to meet our commitments/goals reliably?
  • Is our work flowing smoothly throughout the sprint?
  • Are there any risks emerging that may impact our ability to meet our sprint goals?
  • Has this sprint improved our overall delivery performance?

 

Below we explore each of these questions further and outline how the client used Plandek to track a set of simple sprint metrics that helped answer them and improve the dependability of all their scrum teams.

 

Meeting Sprint Commitments

The client selected three key metrics from the Plandek metric library to track teams’ overall sprint accuracy: Sprint Completion, Sprint Target Completion, and Sprint Work Added Completion.

Perhaps the most important of the three, Sprint Target Completion, looks at the scope you agreed to during sprint planning and tracks how much was completed, showing you how effective the team is at establishing the right priorities and delivering them.

Sprint Work Added Completion focuses only on work that was added to a sprint after it started (which is a very common problem for scrum teams), whilst Sprint Completion looks at the whole picture, regardless of whether work was planned for the sprint or added afterwards.

Figure 44: Three key sprint completion metrics
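To make the three definitions concrete, here is a minimal Python sketch of how such completion rates could be computed from sprint ticket data. The field names (`planned`, `done`) are illustrative assumptions, not Plandek's actual data model.

```python
# Minimal sketch of the three sprint completion metrics.
# Ticket fields ("planned", "done") are illustrative, not Plandek's data model.

def completion_rate(tickets):
    """Percentage of tickets completed (None if the list is empty)."""
    if not tickets:
        return None
    return 100 * sum(t["done"] for t in tickets) / len(tickets)

def sprint_metrics(tickets):
    planned = [t for t in tickets if t["planned"]]    # scope agreed at sprint planning
    added = [t for t in tickets if not t["planned"]]  # work added after the sprint started
    return {
        "Sprint Completion": completion_rate(tickets),           # the whole picture
        "Sprint Target Completion": completion_rate(planned),    # planned scope only
        "Sprint Work Added Completion": completion_rate(added),  # added scope only
    }

tickets = [
    {"key": "ABC-1", "planned": True, "done": True},
    {"key": "ABC-2", "planned": True, "done": False},
    {"key": "ABC-3", "planned": False, "done": True},
]
print(sprint_metrics(tickets))
# {'Sprint Completion': ~66.7, 'Sprint Target Completion': 50.0, 'Sprint Work Added Completion': 100.0}
```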

In our experience across multiple clients, Sprint Target Completion rates lower than 80% can start to cause serious dependability problems, especially in Scaled Agile environments with many teams and Programme Increments to navigate. Indeed, predicting the delivery status at the end of a single PI becomes extremely difficult if multiple teams are involved and many are consistently not meeting their Sprint Target Completion.

This can happen for many different reasons, but very often it is the result of consistent problems with ticket sizing, which can be addressed through objective review in sprint retros.

Alternatively, if you use Sprint Goals to define specific objectives above and beyond the work planned in the Sprint, you may also find this to be a useful metric for your retrospectives.

Figure 45: Example Sprint Goals Delivered Metric

Delivering efficiently within a sprint

Client teams also adopted a number of ‘determinant’ sprint metrics that together can significantly improve overall sprint accuracy (as tracked by Sprint Target Completion). Clearly, in sprints, you only have a couple of weeks to deliver a specific scope of work, so it’s critical that:

  1. work flows smoothly throughout the sprint;
  2. bottlenecks/delays are spotted and addressed immediately, and
  3. user/PO feedback is provided to the team as quickly as possible so any issues can be resolved within the sprint.


The client selected Ticket Timeline (see below) from the Plandek metric library to track how their work was flowing throughout the Sprint and easily spot any delays or bottlenecks emerging over the two-week period that may have potentially put their commitments at risk. As such, it proved an incredibly useful metric in sprint retros and daily stand-ups.

Figure 46: Example Ticket Timeline analysis
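Underlying a timeline view like this is simple time-in-status data. As an illustrative sketch (the transition format here is hypothetical, not Plandek's API), the raw durations could be derived from a ticket's status history like this:

```python
from datetime import datetime

# Illustrative sketch: derive time-in-status from a ticket's status
# transitions - the raw ingredient behind a Ticket Timeline view.
# The transition format is hypothetical, not Plandek's actual API.

def time_in_status(transitions, now):
    """transitions: list of (timestamp, new_status) tuples, oldest first."""
    durations = {}
    for (start, status), (end, _) in zip(transitions, transitions[1:] + [(now, None)]):
        durations[status] = durations.get(status, 0) + (end - start).total_seconds() / 3600
    return durations  # hours spent in each status

transitions = [
    (datetime(2024, 5, 1, 9), "To Do"),
    (datetime(2024, 5, 2, 9), "In Progress"),
    (datetime(2024, 5, 3, 9), "Awaiting QA"),
]
print(time_in_status(transitions, now=datetime(2024, 5, 6, 9)))
# {'To Do': 24.0, 'In Progress': 24.0, 'Awaiting QA': 72.0}  <- emerging bottleneck
```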

Identifying risk and mitigating it

Whilst the Ticket Timeline above proved a powerful way of surfacing the impact of risks on delivery, there are a number of other metrics we recommend to combat common challenges that teams face. The client's Team Leads adopted a range of these metrics (based on their own preferences) to closely track performance over time.

 

Moving goalposts

Whilst the client embraced changing priorities, too much change within an active sprint will compromise a team's ability to deliver effectively (and should raise questions about the planning process). With Ticket Scope, the client's Team Leads tracked key tickets being added to or removed from a sprint, which proved valuable in retros and stand-ups.

Figure 47: Example Sprint Scope graphic

Figure 47 shows a typical sprint with a manageable number of tickets being added and removed throughout its duration, reflecting the agility of a mature scrum team. However, it was common for multiple tickets to be added late in the sprint, at which point 'agility' became 'conflicting priorities and potential inefficiency'.
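A rough sketch of how such scope churn could be counted from sprint change events (again, a hypothetical event format rather than Plandek's data model):

```python
from datetime import datetime

# Illustrative sketch: count scope changes after a sprint starts.
# The event format is hypothetical, not Plandek's actual data model.

def scope_churn(events, sprint_start):
    """events: dicts like {'ticket': ..., 'action': 'added'|'removed', 'at': datetime}."""
    added = [e["ticket"] for e in events if e["action"] == "added" and e["at"] > sprint_start]
    removed = [e["ticket"] for e in events if e["action"] == "removed" and e["at"] > sprint_start]
    return {"added_after_start": added, "removed_after_start": removed}

events = [
    {"ticket": "ABC-4", "action": "added", "at": datetime(2024, 5, 3)},
    {"ticket": "ABC-2", "action": "removed", "at": datetime(2024, 5, 8)},
]
print(scope_churn(events, sprint_start=datetime(2024, 5, 1)))
# {'added_after_start': ['ABC-4'], 'removed_after_start': ['ABC-2']}
```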

 

Unplanned bugs

New bugs/defects, particularly those originating in Production, can derail teams very quickly, so it is important to track their arrival. Even if bugs are not immediately resolved, the triage process may (and often does) distract teams from their core focus of delivering sprint work.

As a result, client teams filtered for critical bugs (e.g. P1 and P2) and distinguished between bugs originating from production and those from the QA/UAT process.

Figure 48: Example timeline of unplanned bugs
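The filtering itself is straightforward; as an illustrative sketch with hypothetical field names:

```python
# Illustrative sketch: isolate critical bugs by priority and origin.
# Field names are hypothetical, not Plandek's data model.
bugs = [
    {"key": "BUG-1", "priority": "P1", "origin": "production"},
    {"key": "BUG-2", "priority": "P3", "origin": "QA/UAT"},
    {"key": "BUG-3", "priority": "P2", "origin": "production"},
]

critical_production = [
    b["key"] for b in bugs
    if b["priority"] in {"P1", "P2"} and b["origin"] == "production"
]
print(critical_production)  # ['BUG-1', 'BUG-3']
```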

Keeping the ‘big picture’ in mind

We recommend that every organisation has a set of ‘North Star’ metrics that they use to measure their overall delivery effectiveness and agility. The client adopted this approach with a simple set of four agile metrics championed by technology leadership to give the entire delivery organisation a set of key metrics around which to align.

The client found that sprint retrospectives provided a great opportunity to reflect on how the work delivered in that sprint has contributed to the overall progress against these ‘North Star’ metrics.

Two of these ‘North Star’ metrics were Lead Time and Cycle Time, chosen by technology leadership as they reflect one of Agile’s core values: the “early and continuous delivery of valuable software”.

During its retrospective, each team reflected on how the sprint's deliverables had affected the trend over time and examined where there were opportunities to improve in future sprints.

Figure 49: Example graphic showing Cycle Time variance over time

The Cycle Time metric in Figure 49 refers only to the development cycle time and therefore excludes the additional time taken to integrate, test and deploy to live. The more complete view of the end-to-end delivery process is captured by the Lead Time metric, which is ultimately the more representative measure of true agility, though it is less suited to a scrum team as it takes into account delivery stages beyond the team's control.
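As a rough illustration of the distinction (the exact start and end anchors vary by team and are assumptions here):

```python
from datetime import datetime

# Illustrative sketch of the Lead Time vs Cycle Time distinction.
# Timestamps and the choice of start/end anchors are assumptions.

def hours_between(start, end):
    return (end - start).total_seconds() / 3600

created      = datetime(2024, 5, 1, 9)   # ticket raised
dev_started  = datetime(2024, 5, 3, 9)   # development begins
dev_finished = datetime(2024, 5, 7, 9)   # development complete
deployed     = datetime(2024, 5, 10, 9)  # live in production

cycle_time = hours_between(dev_started, dev_finished)  # development only
lead_time  = hours_between(created, deployed)          # end-to-end delivery

print(f"Cycle Time: {cycle_time:.0f}h, Lead Time: {lead_time:.0f}h")
# Cycle Time: 96h, Lead Time: 216h
```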

 

Where can we improve?

The client also adopted Flow Efficiency as a ‘North Star’ metric as it requires teams to focus on areas where process inefficiencies may lie that adversely affect Lead and Cycle Times (and hence velocity).

Teams could see precisely where they were spending the most inactive time, e.g. ‘Awaiting QA’, ‘To Do’, ‘Awaiting sign-off’, and then agree on some focused actions to reduce this waste in future sprints.

It is not uncommon for teams to have a Flow Efficiency of less than 20%, meaning that over 80% of the team’s Cycle Time is taken up with tickets in potentially avoidable ‘inactive’ statuses.

Figure 50: Example Flow Efficiency graphic
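Flow Efficiency is simply active time as a share of total cycle time. A minimal sketch, reusing the time-in-status durations from the earlier example and assuming which statuses count as 'active':

```python
# Illustrative sketch: Flow Efficiency as active time over total cycle time.
# Which statuses count as "active" is a team-level assumption.
ACTIVE_STATUSES = {"In Progress", "In Review"}

def flow_efficiency(durations):
    """durations: hours spent in each status, e.g. from time_in_status() above."""
    total = sum(durations.values())
    active = sum(h for status, h in durations.items() if status in ACTIVE_STATUSES)
    return 100 * active / total if total else None

durations = {"To Do": 30, "In Progress": 20, "Awaiting QA": 40, "In Review": 10}
print(f"{flow_efficiency(durations):.0f}%")  # 30% (70% of cycle time is inactive)
```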

Bringing it all together

We believe that the metrics above should form the backbone of any team's sprint retrospectives, and indeed they proved very effective in increasing the client's overall sprint accuracy: Sprint Target Completion rose from c70% to >80% over the six-month period of continuous improvement.

However, they are not the only metrics you may want to consider. Teams will face different challenges over time and may have different self-improvement initiatives in flight during their sprints, so any metrics you are using to track these should also be included.

In terms of what you might review in your retrospective versus your stand-ups, we believe the answer is simple: the same! If the metrics you choose for a retrospective reflect success for your team, then the stand-up is simply a good opportunity to check progress against your targets so that you can ensure success, intervening if and where necessary.