Building an insight-driven delivery organisation with Plandek

Plandek, November 12, 2020

Using Plandek to reduce Cycle Time by 75% and increase deployment frequency by 15%

The client

The client is one of Europe’s technology-led travel success stories, operating in twelve countries across Europe and the US.

The client values an insight-led approach to software delivery and uses Plandek as a key element of its DevOps Value Stream Management across all its software delivery teams. Plandek’s customised dashboards are used across the delivery organisation to provide the metrics, analytics and reporting that underpin a robust Continuous Improvement process. The process is led by individual Team Leads and managed and sponsored by technology leadership.

This metrics-led approach to continuously improving the software delivery process has been highly successful, with major improvements seen in key metrics over the last 24 months.

4 Key Takeaways

  1. A continuous improvement initiative underpinned by a simple set of ‘North Star’ metrics that teams understand and trust can deliver rapid, sustainable and significant improvement in software delivery outcomes at scale.
  2. Plandek is an ideal BI tool for providing the end-to-end software delivery metrics needed to underpin a collective effort to rapidly improve delivery outcomes.
  3. Over the past 24 months, using Plandek to underpin a robust continuous improvement process, the client has:
    • Reduced Cycle Time by 75%
    • Reduced hot-fixes in Prod by 54%
    • Doubled commit frequency by Engineers
    • Increased deployments per day (per pipeline) by 15%
  4. The improvements seen were only possible because teams trust the quality of the metrics and analytics: Plandek enables them to see the ‘provenance’ of each metric (how it is calculated) and to configure metrics to match their precise team circumstances (via its Filtering functionality).

The result has been a sustained transformation, with a balanced scorecard of improving metrics over time.

Plandek was first adopted in 2018 and the Plandek Customer Success team have since worked closely with the client to help them create and embed customised dashboards for all teams (squads), Delivery Managers and technology leadership.

Teams have adopted a simple set of metrics around which to drive their continuous improvement effort – and Plandek has been embedded in the daily and weekly work practices (e.g. stand-ups and sprint retros).

Getting started with Plandek – using metrics to understand the health of your delivery capability

Plandek works by mining data from the toolsets used by delivery teams (such as Jira, Git, CI/CD tools and Slack) to provide end-to-end delivery metrics and analytics that help optimise software delivery dependability, risk management and process improvement.

Mining data from multiple toolsets used across the SDLC creates a unique perspective, enabling Plandek to identify bottlenecks and opportunities for improvement throughout the design, development, integration, test, and deployment processes.

Creating a hierarchy of simple metrics that everyone understands

Plandek can surface a myriad of metrics. The Plandek Customer Success team worked closely with the client to identify a simple set of ‘North Star’ metrics (selected from this broader potential metrics set) around which to set its delivery goals.

At the client, the ‘North Star’ metrics were carefully selected to be meaningful when aggregated and illustrative of effective Agile software delivery.

These North Star metrics were adopted by the technology leadership team as key priorities.

Driving continuous improvement in Cycle Time

As a ‘North Star’ metric, Cycle Time was quickly adopted as a key focus for delivery teams.  The Plandek network of dashboards allowed each team to closely analyse their own Cycle Time and understand where in the Cycle there was an opportunity to drive down time to value.

As per figure 1 below, the Plandek Cycle Time metric view allows teams to understand the time spent on each ticket status within the development cycle. The flexible analytics capability and powerful filtering allow analysis by Status, Issue Type, Epic (and any other standard or custom ticket field) all plotted over any time range required.

Figure 1. Example Plandek Cycle Time metric view
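
For readers curious how a view like this is built up from raw ticket data, the minimal Python sketch below (an illustration only, not Plandek’s implementation) totals the time a single ticket spends in each status from a Jira-style list of status transitions. The statuses, timestamps and field layout are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical status transitions for a single ticket, as might be extracted
# from a Jira changelog: (timestamp, from_status, to_status).
transitions = [
    ("2020-10-01T09:00", "Open", "In Progress"),
    ("2020-10-02T15:00", "In Progress", "Awaiting QA"),
    ("2020-10-05T10:00", "Awaiting QA", "In QA"),
    ("2020-10-06T12:00", "In QA", "Done"),
]

def time_in_status(transitions):
    """Total time the ticket spent in each status it passed through."""
    totals = defaultdict(timedelta)
    # Each interval runs from one transition to the next; the status for the
    # interval is the status the ticket moved *into* at the start of it.
    for (start, _, status), (end, _, _) in zip(transitions, transitions[1:]):
        totals[status] += datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return dict(totals)

for status, spent in time_in_status(transitions).items():
    print(f"{status}: {spent}")
```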

Working with the Plandek Customer Success team, scrum teams used Plandek to identify the key determinant metrics that would have the biggest impact on reducing Cycle Time without impacting quality or requiring additional resource allocation.

The analysis showed three metrics that could unlock significant shortening of Cycle Times across almost all scrum teams.  These were:

  1. Flow Efficiency (which looks at the proportion of time tickets spend in an ‘active’ versus ‘inactive’ status)
  2. Mean Time to Resolve Pull Requests (hrs)
  3. First Time Pass Rate (%).

Each scrum team and related Scrum Masters and Delivery Managers updated their Plandek dashboards to surface these critical metrics so that they could be tracked and analysed in daily stand-ups, sprint retrospectives, and management review meetings.

The Flow Efficiency analysis (see Figure 2 below) enables Team Leads to isolate and analyse each ‘inactive’ status in the workflow and consider whether there is scope to reduce or eliminate it. The analysis shows the relative size of each ‘inactive’ status opportunity in terms of time spent in the inactive state and the volume of tickets affected.

Figure 2. Example Flow Efficiency metric within Plandek dashboard
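
Conceptually, Flow Efficiency is the share of a ticket’s elapsed time spent in ‘active’ statuses. As a rough illustration only (which statuses count as ‘active’ is a team-level choice, and the durations below are invented), the calculation might look like this:

```python
from datetime import timedelta

# Hypothetical per-status durations for one ticket (e.g. the output of the
# time-in-status sketch above). The 'active' status list is an assumption.
status_durations = {
    "In Progress": timedelta(hours=30),
    "Awaiting QA": timedelta(hours=67),  # inactive: the ticket is simply waiting
    "In QA": timedelta(hours=26),
}
ACTIVE_STATUSES = {"In Progress", "In QA"}

active = sum((d for s, d in status_durations.items() if s in ACTIVE_STATUSES), timedelta())
total = sum(status_durations.values(), timedelta())

flow_efficiency = active / total  # dividing two timedeltas yields a plain float
print(f"Flow Efficiency: {flow_efficiency:.0%}")  # about 46% in this example
```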

Typical opportunities to remove inactive bottlenecks included time spent with tickets awaiting definition (e.g. Sizing) and tickets awaiting QA. Where waits for QA were considered excessive, Delivery Managers reconsidered the QA resource allocated to the team.

Mean Time to Resolve Pull Requests (MTRPR) was also found to be a key bottleneck and hence a potential area to save time and reduce overall Cycle Time. Very significant variations in time to resolve PRs were seen between teams and individuals, with waits of over 50 hours not uncommon.

Plandek enables drill-down to understand variances by code repository and destination branch (see Figure 3 below). This enabled quick identification of the biggest bottlenecks and targeted intervention, with the result that MTRPR was reduced dramatically, by an average of c.50%. This had a very significant impact on overall Cycle Time.

Figure 3. Example Mean Time to Resolve Pull Request metric within Plandek dashboard
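
To illustrate the idea behind this drill-down, the sketch below groups hypothetical pull-request records by repository and destination branch and reports the mean resolution time in hours. The repository names, branches and timestamps are invented; in practice the data would come from the Git hosting tool’s API rather than a hard-coded list.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical pull-request records: repo, destination branch, opened/resolved times.
pull_requests = [
    {"repo": "checkout-service", "base": "main",
     "opened": "2020-10-01T09:00", "resolved": "2020-10-01T17:00"},
    {"repo": "checkout-service", "base": "main",
     "opened": "2020-10-02T10:00", "resolved": "2020-10-05T09:00"},
    {"repo": "search-api", "base": "develop",
     "opened": "2020-10-03T11:00", "resolved": "2020-10-03T15:00"},
]

def mean_time_to_resolve(prs):
    """Mean PR resolution time in hours, grouped by (repo, destination branch)."""
    hours = defaultdict(list)
    for pr in prs:
        opened = datetime.fromisoformat(pr["opened"])
        resolved = datetime.fromisoformat(pr["resolved"])
        hours[(pr["repo"], pr["base"])].append((resolved - opened).total_seconds() / 3600)
    return {key: mean(vals) for key, vals in hours.items()}

for (repo, base), hrs in sorted(mean_time_to_resolve(pull_requests).items()):
    print(f"{repo} -> {base}: {hrs:.1f}h")
```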

Improving quality: reducing hot-fixes in production

A key thrust of the client’s insight-led approach is to use trend data to quickly identify where improvements can be made.   Quality is a consistent focus – both the security and quality of the delivery process itself and the quality of the software delivered.

Historical data in Plandek revealed trends in the overall Hot Fix Rate (sample data illustrated below as Escaped Defect Rate) and the opportunity to reduce time spent fixing P1 (high priority) bugs, to improve the customer experience and reduce time diverted from feature development.

Figure 4. Example Escaped Defect Metric view
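
As a simple illustration of the underlying calculation (the precise definition of an ‘escaped’ defect is configurable per team, and the data below is invented), an escaped defect rate can be computed as the share of bugs raised in a period that were found in production:

```python
from collections import Counter

# Hypothetical bug records: (month raised, environment where the bug was found).
# Here we simply treat bugs found in production as 'escaped'.
bugs = [
    ("2020-08", "production"), ("2020-08", "staging"), ("2020-08", "staging"),
    ("2020-09", "production"), ("2020-09", "staging"),
    ("2020-10", "staging"),    ("2020-10", "staging"), ("2020-10", "staging"),
]

totals, escaped = Counter(), Counter()
for month, environment in bugs:
    totals[month] += 1
    if environment == "production":
        escaped[month] += 1

for month in sorted(totals):
    rate = escaped[month] / totals[month]
    print(f"{month}: escaped defect rate {rate:.0%}")
```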

Plandek’s customisable dashboards enabled each team to focus on their own P1 resolution time and to better manage the backlog of Unresolved P1 and P2 bugs and time to resolve key hot fixes.

The net result was a more disciplined approach to bug resolution across teams, which reduced hot fixes in production by 54% over a 12-month period.

Figure 5. Example quality metrics: P1 Resolution Time and Unresolved Bugs

The client adopts a broad view of ‘quality’ that includes both the software output and the quality and security of the delivery process itself. As such, teams also track and manage Commits without a Pull Request and Commits without a Ticket Reference. See Figure 6.

The former ensures that all code is peer-reviewed before being merged (an important security requirement) – and the latter ensures a clear link between committed code and Jira tickets, for security compliance.

Figure 6. Example delivery process quality metrics
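
Both checks are conceptually simple to derive from commit data. The sketch below flags hypothetical commits whose message lacks a Jira-style ticket key, and commits with no associated pull request; the commit records, ticket-key pattern and PR field are illustrative assumptions rather than Plandek’s implementation.

```python
import re

# Hypothetical commit records (hash, message, associated PR number or None),
# e.g. as extracted from `git log` plus the Git hosting platform's API.
commits = [
    {"sha": "a1b2c3d", "message": "TRAVEL-1042 add fare cache", "pr": 311},
    {"sha": "d4e5f6a", "message": "quick fix for failing build", "pr": None},
    {"sha": "b7c8d9e", "message": "TRAVEL-1051 tighten input validation", "pr": 312},
]

TICKET_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # Jira-style key, e.g. TRAVEL-1042

without_ticket = [c["sha"] for c in commits if not TICKET_KEY.search(c["message"])]
without_pr = [c["sha"] for c in commits if c["pr"] is None]

print(f"Commits without a ticket reference: {without_ticket}")
print(f"Commits without a pull request:     {without_pr}")
```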

Increasing deployment frequency and reducing failed builds

In keeping with Agile principles, Deployment Frequency is a ‘North Star’ metric and hence a consistent focus for the delivery organisation at the client. Plandek’s end-to-end view of the delivery process enables delivery teams to closely track deployment frequency and to identify and manage the bottlenecks that may be slowing it.

DevOps practitioners can track a range of metrics including Number of Builds, Build Failure Rate, Deployment Cycle Time and Flakiest Files (which identifies fragile source code files in the codebase that can be targeted for refactoring).

Figure 7. Example Build Failure Rate metric
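
To give a feel for how these DevOps metrics are derived, the sketch below computes a build failure rate and the average number of successful deployments per day per pipeline from a small set of invented CI/CD events; in practice this data would be mined from the CI/CD tool itself.

```python
from collections import Counter, defaultdict
from datetime import date

# Hypothetical CI/CD events: (pipeline, day, kind, outcome). Names are illustrative.
events = [
    ("web", date(2020, 10, 1), "build", "success"),
    ("web", date(2020, 10, 1), "build", "failure"),
    ("web", date(2020, 10, 1), "deploy", "success"),
    ("web", date(2020, 10, 2), "deploy", "success"),
    ("api", date(2020, 10, 1), "build", "success"),
    ("api", date(2020, 10, 2), "deploy", "success"),
]

# Build Failure Rate: failed builds as a share of all builds.
builds = [e for e in events if e[2] == "build"]
build_failure_rate = sum(e[3] == "failure" for e in builds) / len(builds)
print(f"Build failure rate: {build_failure_rate:.0%}")

# Deployment frequency: successful deployments per day, per pipeline.
deploys_per_day = defaultdict(Counter)
for pipeline, day, kind, outcome in events:
    if kind == "deploy" and outcome == "success":
        deploys_per_day[pipeline][day] += 1

for pipeline, per_day in deploys_per_day.items():
    avg = sum(per_day.values()) / len(per_day)
    print(f"{pipeline}: {avg:.1f} deployments/day")
```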

The client increased deployments per day (per pipeline) by 15% through a better understanding of the root-cause of Build Failures and Deployment Cycle Time using Plandek.

Metrics-led Continuous Improvement in software delivery – buy-in and trust

The experience at the client shows the power of applying a metrics-led philosophy across an Agile software delivery capability. As described, Team Leads led significant improvement across a range of critical Agile metrics, including Cycle Time, Escaped Defects, Deployment Frequency and Commit Frequency by engineers.

Key factors in the success of the approach included:

  1. The identification and communication of a simple set of ‘North Star’ metrics around which the delivery organisation aligns
  2. The use of Plandek to surface determinant metrics in real time at all levels within the delivery hierarchy (across Board, team, workstream, PI, tribe etc)
  3. Collective buy-in and trust in the metrics from Team Leads upwards. This was critical, and was made possible by the total transparency of Plandek’s metric presentation.

Experience shows that if Team Leads cannot see exactly how metrics are calculated, and whether they reflect their team’s context, they will question and ultimately reject the metrics – especially if a metric appears erratic or heavily negative.

Plandek is unique in its ability to show the ‘provenance’ of each metric and to allow individual teams to configure each metric in the way that reflects their particular circumstances, using the powerful Filter functionality. This has proved critical to the overall success of the initiative.
