
15. Case Study 2: Building an insight-driven delivery organisation

The client

The client is one of Europe’s technology-led travel success stories, operating in twelve countries across Europe and in the US.

The client values an insight-led approach to software delivery and uses Plandek as a key element of its DevOps Value Stream Management across all its software delivery teams. Plandek’s customised dashboards are used across the delivery organisation to provide metrics, analytics and reporting to underpin a robust Continuous Improvement process. The process is led by individual Team Leads and managed and sponsored by technology leadership.

This metrics-led approach to continuously improving the software delivery process has been highly successful, with major improvements seen in key metrics over the last 24 months.

4 Key Takeaways

  1. A continuous improvement initiative underpinned by a simple set of ‘North Star’ metrics that teams understand and trust can deliver rapid, sustainable and significant improvement in software delivery outcomes at scale.
  2. Plandek is an ideal BI tool to provide the necessary end-to-end software delivery metrics to underpin the collective effort to deliver rapid improvement in delivery outcomes.
  3. Over the past 24 months, using Plandek to underpin a robust continuous improvement process, the client has: reduced Cycle Time by 75%; reduced hotfixes in Production by 54%; doubled commit frequency by engineers; and increased deployments per day (per pipeline) by 15%.
  4. The improvements seen were only possible because teams trust the quality of the metrics and analytics: Plandek enables them to see the ‘provenance’ of each metric (how it is calculated) and to configure metrics to match their precise team circumstances (via Filtering functionality).


Creating a hierarchy of simple metrics that everyone understands

Plandek can surface a myriad of metrics. The Plandek Customer Success team worked closely with the client to identify a simple set of ‘North Star’ metrics (selected from this broader potential metrics set) around which to set their delivery goals.

At the client, the ‘North Star’ metrics were carefully selected to be meaningful when aggregated and illustrative of effective Agile software delivery.


These North Star metrics were adopted by the technology leadership team as key priorities and cascaded across the delivery organisation – along with a set of determinant metrics that drive improvement in these critical North Star metrics (KPIs).

As such, key players across the delivery organisation have their own Plandek customised dashboards with the determinant metrics relevant to their area. These key players include Team Leads, Delivery Managers, Scrum Masters and DevOps/Engineering Managers.

The sections below consider the most popular determinant metrics used by the client to drive continuous improvement in the North Star KPIs.


Driving continuous improvement in Cycle Time

As a ‘North Star’ metric, Cycle Time was quickly adopted as a key focus for delivery teams. The Plandek network of dashboards allowed each team to closely analyse its own Cycle Time and understand where in the cycle there was an opportunity to drive down time to value.

As per Figure 39 below, the Plandek Cycle Time metric view allows teams to understand the time spent in each ticket status within the development cycle. The flexible analytics capability and powerful filtering allow analysis by Status, Issue Type, and Epic (and any other standard or custom ticket field), all plotted over any time range required.

Figure 39: Example Plandek Cycle Time metric view
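To make the time-in-status idea concrete, here is a minimal Python sketch of how time spent in each ticket status can be derived from a ticket’s status-change history. The event data, field names and statuses are purely illustrative assumptions, not Plandek’s actual schema or implementation.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical status-change history for one ticket: (timestamp, new status).
# These names and timestamps are invented for illustration only.
events = [
    ("2024-01-01T09:00", "In Progress"),
    ("2024-01-03T09:00", "In Review"),
    ("2024-01-04T09:00", "In QA"),
    ("2024-01-05T09:00", "Done"),
]

def hours_per_status(events):
    """Sum the hours a ticket spent in each status between transitions."""
    totals = defaultdict(float)
    parsed = [(datetime.fromisoformat(ts), status) for ts, status in events]
    # Each status lasts from its own transition until the next transition.
    for (start, status), (end, _next_status) in zip(parsed, parsed[1:]):
        totals[status] += (end - start).total_seconds() / 3600
    return dict(totals)

print(hours_per_status(events))
# {'In Progress': 48.0, 'In Review': 24.0, 'In QA': 24.0}
```

Summing these per-status totals across a team’s tickets, and slicing by issue type or epic, gives the kind of breakdown the metric view above presents.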

Working with the Plandek Customer Success team, scrum teams used Plandek to identify the key determinant metrics that would have the biggest impact on reducing Cycle Time without impacting quality or requiring additional resource allocation.

Analysis showed three metrics that could unlock significant shortening of Cycle Times across almost all scrum teams. These were:

  • Flow Efficiency (which looks at the proportion of time tickets spend in an ‘active’ versus ‘inactive’ status)
  • Mean Time to Resolve Pull Requests (hrs)
  • First Time Pass Rate (%).
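The Flow Efficiency calculation above reduces to a simple ratio: active time divided by total elapsed time. The sketch below shows that arithmetic in Python; which statuses count as ‘active’ is a team-level assumption here, not something fixed by Plandek.

```python
# Statuses treated as 'active' -- an assumption each team would configure.
ACTIVE = {"In Progress", "In Review"}

def flow_efficiency(hours_by_status):
    """Active hours divided by total hours across all statuses, as a percentage."""
    total = sum(hours_by_status.values())
    active = sum(h for status, h in hours_by_status.items() if status in ACTIVE)
    return 100.0 * active / total if total else 0.0

# Illustrative per-status totals for one ticket (hours).
sample = {"In Progress": 20.0, "Awaiting QA": 30.0, "In Review": 10.0}
print(flow_efficiency(sample))  # 50.0
```

A ticket that spends half its elapsed time waiting (here, 30 of 60 hours in ‘Awaiting QA’) scores 50%, which is exactly the kind of inactive-state opportunity the Flow Efficiency analysis is designed to expose.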

Each scrum team and related Scrum Masters and Delivery Managers updated their Plandek dashboards to surface these critical metrics so that they could be tracked and analysed in daily stand-ups, sprint retrospectives and management review meetings.

The Flow Efficiency analysis (see Figure 40 below) enables Team Leads to isolate and analyse each ‘inactive’ status in the workflow and consider if there is scope to reduce or eliminate it. The analysis shows the relative size of each ‘inactive’ status opportunity in terms of time spent in the inactive state and the volume of tickets affected.

Figure 40: Example Flow Efficiency metric within the Plandek dashboard

Typical opportunities to remove inactive bottlenecks included time spent with tickets awaiting definition (e.g. sizing) and tickets awaiting QA. Where waits for QA were considered excessive, Delivery Managers reconsidered the QA resources allocated to the team.

Mean Time to Resolve Pull Requests (MTRPR) was also found to be a key bottleneck and hence, a potential area to save time and reduce overall Cycle Time. Very significant variations in time to resolve PRs were seen between teams and individuals, with waits of over 50 hours not uncommon.

Plandek enables drill-down to understand variances by code repository and destination branch (see Figure 41 below). This enabled quick identification of the biggest bottlenecks and targeted intervention, with the result that MTRPR was reduced by an average of 50%. This had a very significant impact on overall Cycle Time.

Figure 41: Example Mean Time to Resolve Pull Request metric within the Plandek dashboard
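The drill-down by repository described above amounts to grouping pull requests and averaging their resolution times per group. A minimal Python sketch, using invented repository names and durations rather than real client data:

```python
from statistics import mean

# Hypothetical pull-request records: (repository, hours from opening to merge).
prs = [
    ("checkout-service", 6.0),
    ("checkout-service", 70.0),  # a long-lived PR: the kind of outlier to investigate
    ("search-api", 4.0),
    ("search-api", 8.0),
]

def mtrpr_by_repo(prs):
    """Mean Time to Resolve Pull Requests (hours), grouped by repository."""
    groups = {}
    for repo, hours in prs:
        groups.setdefault(repo, []).append(hours)
    return {repo: mean(hours_list) for repo, hours_list in groups.items()}

print(mtrpr_by_repo(prs))
# {'checkout-service': 38.0, 'search-api': 6.0}
```

Grouping this way makes the variation between repositories (and, with a different key, between teams or destination branches) immediately visible, which is what turns a single MTRPR average into a targeted intervention.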

Increasing deployment frequency and reducing failed builds

In keeping with Agile principles, Deployment Frequency is a ‘North Star’ metric and, hence, a consistent focus for the delivery organisation at the client. Plandek’s end-to-end view of the delivery process enables delivery teams to closely track deployment frequency and track and manage the bottlenecks that may be slowing the frequency of deployments.

DevOps practitioners can track a range of metrics, including the Number of Builds, Build Failure Rate, Deployment Cycle Time and Flakiest Files (which identifies fragile source code files in the codebase that can be targeted for refactoring).

The client increased deployments per day (per pipeline) by 15% through a better understanding of the root cause of Build Failures and Deployment Cycle Time using Plandek.


Increasing throughput and value delivered

Delivery Team Leads and Managers adopted a range of determinant metrics that help track and drive the delivery of value.

These included Stories Delivered by Epic, Lead Time for Stories and Epic, and Delivered Value Points. Mean Build Time and Deployments by Pipeline were also used to track and improve the rate of deployment of value.

Teams use the Plandek drill-down functionality (and the ability to review individual tickets within Jira) to continually review progress and unlock bottlenecks.


Improving the quality of delivery: reducing hotfixes in production

A key thrust of the client’s insight-led approach is to use trend data to identify where improvements can be made quickly. Quality is a consistent focus – both the security and quality of the delivery process itself and the quality of the software delivered.

Historical data in Plandek revealed trends in the overall Hot Fix Rate (sample data illustrated below as Escaped Defect Rate) and the opportunity to reduce time spent fixing P1 (high priority) bugs to improve the customer experience and reduce time diverted from feature development.

Plandek’s customisable dashboards enabled each team to focus on their own P1 resolution time and to better manage the backlog of unresolved P1 and P2 bugs and the time taken to resolve key hotfixes.

The net result was a more disciplined approach to bug resolution across teams, with the result that hotfixes in production were reduced by 54% over a 12-month period.

Figure 42: Example quality metrics: P1 Resolution Time and Unresolved Bugs

The client adopts a broad view of ‘quality’ to include both the software output and the quality and security of the delivery process itself. As such, teams also track and manage Commits without a Pull Request and Commits without a Ticket Reference. See Figure 43.

The former ensures that all code is peer-reviewed before being merged (an important security requirement) – and the latter ensures a clear linkage between committed code and Jira tickets for security compliance.

Figure 43: Example delivery process quality metrics

Metrics-led Continuous Improvement in software delivery – buy-in and trust

The experience with the client shows the power of applying a metrics-led philosophy across an Agile software delivery capability. As described, Team Leads led significant improvement across a range of critical Agile metrics, including Cycle Time, Escaped Defects, Deployment Frequency, and Commit Frequency by engineers.

Key factors in the success of the approach included:

  • The identification and communication of a simple set of ‘North Star’ metrics around which the delivery organisation aligns;
  • The use of Plandek to surface determinant metrics in real-time at all levels within the delivery hierarchy (across Board, team, workstream, PI, tribe, etc.);
  • Collective buy-in and trust in the metrics from the Team Lead upwards. This was critical and was made possible as a result of the total transparency of the Plandek metric presentation.

Experience shows that if Team Leads cannot see exactly how metrics are calculated, and cannot verify that they reflect their team’s context, they will question and ultimately reject the metrics – especially if a metric appears erratic or heavily negative.

Plandek is unique in its ability to show the ‘provenance’ of each metric and to allow individual teams to configure each metric in a way that reflects their particular circumstances, using the powerful Filter functionality. This is ultimately critical to the overall success of the initiative.