Improve dependability, velocity and quality


Case study: Implementing end-to-end software delivery analytics and reporting at a global supplier of automotive data and intelligence


The client operates in over 50 countries and provides the world’s most timely, accurate and up-to-date automotive information on vehicle specifications, pricing, sales and registrations. As a highly innovative provider of data and analytics, it is technology-led, with strategically critical software delivery teams located in the US and Europe.

The client uses Plandek as a key element of its DevOps Value Stream Management across all its software delivery teams – to provide complete visibility across all delivery teams and track and drive improvements in delivery effectiveness over time.

As such Plandek is used in two ways: by the delivery teams themselves to continuously self-improve; and by technology leadership to track improvement over time and communicate that improvement to the broader senior leadership team via reporting.


5 Key Takeaways

1. Plandek is an ideal BI tool to provide end-to-end software delivery metrics, analytics and reporting for total visibility across large and complex Agile software delivery capabilities

2. Using Plandek, clients can select a simple set of ‘North Star’ metrics to track and communicate the progress of their Agile transformation. Such metrics are meaningful when aggregated and simple enough to be understood by non-technology stakeholders (e.g. the C-suite)

3. The software delivery team can align around these same metrics and drive rapid, sustainable and significant improvement in software delivery velocity, dependability and quality

4. Improvements in delivery effectiveness are only possible when Team Leads ‘buy in’ to the end-to-end metrics and treat them as KPIs to be discussed (and acted upon) at team meetings, stand-ups and sprint retros

5. Plandek is designed to facilitate this buy-in at the team level. Each team has their own customised dashboard and Plandek enables them to see the ‘provenance’ of the metric (how it is calculated) and to configure metrics to match their precise team circumstances (via Filtering functionality).


Introduction – the client’s context

The client operates a large and distributed scrum Agile software delivery capability across multiple workstreams. The company is metrics-led and adopted Plandek as the global solution to surface end-to-end software delivery metrics across all teams, in order to:

1. improve delivery effectiveness; and crucially

2. demonstrate the success of their Agile transformation with a balanced scorecard of improving metrics over time that could be regularly communicated to the senior leadership team and stakeholders.

Plandek was first adopted in 2019 and the Plandek Customer Success team have since worked closely with the client to help them create and embed customised dashboards for all scrum teams, Delivery Managers and technology leadership.

Teams have adopted a simple set of metrics around which to drive their continuous improvement effort – and Plandek has been embedded in the daily and weekly work practices (e.g. stand-ups and sprint retros).


Using Plandek to provide visibility and reporting across the DevOps Value Stream

Plandek is designed to provide an end-to-end view across the software delivery process or ‘DevOps Value Stream’ (see Gartner ‘DevOps Value Stream Management Platforms Market Guide’, Sept 2020).

As such Plandek provides a metrics, analytics and reporting capability for technology leadership. It provides total visibility across complex, scaled Agile, distributed, multi-workstream delivery environments. This has three immediate benefits:

1. It reduces delivery risk as technology leadership has a better understanding of previously hidden team-level impacts on delivery

2. It enables a culture of metrics-led continuous improvement, where teams adopt their own metrics and targets and self-improve over time (in keeping with the overall delivery goals set by technology leadership); and

3. It provides technology leadership with a reporting structure and a set of meaningful, aggregated metrics that can be used to communicate the health and improvement in software delivery capability over time. Importantly, these meaningful metrics are easily understandable by a non-technology audience (e.g. the C-Suite and internal stakeholders).

Plandek works by mining data from toolsets used by delivery teams (such as Jira, Git, CI/CD tools and Slack), to provide end-to-end delivery metrics/analytics.

Mining data from multiple toolsets used across the SDLC creates a unique perspective, enabling Plandek to identify bottlenecks and opportunities for improvement throughout the design, development, integration, test and deployment processes.


‘North Star’ metrics – implementing end-to-end software delivery analytics and reporting

The Plandek Customer Success team worked closely with the client sponsor to identify a simple set of ‘North Star’ metrics, selected from the broader Plandek metric library, around which to set the client’s delivery goals.

The ‘North Star’ metrics were carefully selected to be meaningful when aggregated and illustrative of effective Agile software delivery.

These metrics formed the basis of a new reporting framework for C-Suite and relevant internal stakeholders – enabling technology leadership to demonstrate the effectiveness of their ongoing Agile transformation, through a set of quantitative metrics easily understandable outside the technology organisation.

Plandek enables these North Star metrics to be tracked (and interrogated via powerful drill-down capability) in customised dashboards provided to the client’s technology leadership team. As such, these dashboards have become a key tool in the client’s ongoing Value Stream Management.

Figure 1. Example ‘North Star’ metric dashboard for effective Agile delivery governance


Driving continuous improvement in delivery dependability

The Plandek Customer Success team worked with the client to roll out a cascade of customised dashboards across the delivery organisation – from technology leadership, across Delivery and Engineering management, and into the scrum teams.

This ensured that Plandek is used not only as a reporting/governance tool for technology leadership but also as an embedded continuous improvement tool at the team level.

Client teams selected three key metrics from the Plandek metric library to track their overall sprint accuracy: Sprint Completion, Sprint Target Completion, and Sprint Work Added Completion.

Perhaps the most important of the three, Sprint Target Completion looks at the scope agreed to during sprint planning and tracks how much was completed, showing how effective the team is at establishing the right priorities and delivering them.

Sprint Work Added Completion focuses only on work that was added to a sprint after it started (which is a very common problem for scrum teams), whilst Sprint Completion looks at the whole picture, regardless of whether work was planned for the sprint or added afterwards.
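As a sketch of how these three completion rates relate to one another, the following is an illustrative calculation over simplified sprint ticket data (the field names and data shape are hypothetical examples, not Plandek's actual schema):

```python
# Illustrative calculation of the three sprint completion metrics.
# Ticket fields here are hypothetical, not Plandek's internal schema.

def completion_rate(tickets):
    """Share of tickets in the list that were completed (as a %)."""
    if not tickets:
        return 0.0
    done = sum(1 for t in tickets if t["completed"])
    return 100.0 * done / len(tickets)

def sprint_metrics(tickets):
    planned = [t for t in tickets if not t["added_mid_sprint"]]
    added = [t for t in tickets if t["added_mid_sprint"]]
    return {
        # Scope agreed at sprint planning that was delivered
        "sprint_target_completion": completion_rate(planned),
        # Work added after the sprint started that was delivered
        "sprint_work_added_completion": completion_rate(added),
        # The whole picture: planned and added work together
        "sprint_completion": completion_rate(tickets),
    }

tickets = [
    {"completed": True, "added_mid_sprint": False},
    {"completed": True, "added_mid_sprint": False},
    {"completed": False, "added_mid_sprint": False},
    {"completed": True, "added_mid_sprint": True},
    {"completed": False, "added_mid_sprint": True},
]
print(sprint_metrics(tickets))
```

In this example the team completed two of three planned tickets (a Sprint Target Completion of roughly 67%, below the 80% level discussed below), while mid-sprint additions dragged overall Sprint Completion down to 60%.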

Figure 2. Three key sprint completion metrics – example data

In our experience across multiple clients, Sprint Target Completion rates lower than 80% can start to cause serious dependability problems, especially in Scaled Agile environments with many teams and Programme Increments to navigate. Indeed, predicting the delivery status at the end of a single PI becomes extremely difficult if multiple teams are involved and many are consistently not meeting their Sprint Target Completion.

Client teams also adopted a number of ‘determinant’ sprint metrics that together significantly improve overall sprint accuracy (as tracked by Sprint Target Completion). Clearly, in sprints, you only have a couple of weeks to deliver a specific scope of work, so it’s critical that:

1. work flows smoothly throughout the sprint,

2. bottlenecks/delays are spotted and addressed immediately, and

3. user/PO feedback is provided to the team as quickly as possible so any issues can be resolved within the sprint.

The client selected Ticket Timeline (see below) from the Plandek metric library to track how work was flowing throughout the sprint and easily spot any delays or bottlenecks emerging over the two-week period that might put their commitments at risk. As such, it proved an incredibly useful metric in sprint retros and daily stand-ups.

Figure 3. Example Ticket Timeline analysis


Driving continuous improvement in delivery velocity

Cycle Time was quickly adopted as a key focus for delivery teams. The Plandek network of dashboards allowed each team to closely analyse their own Cycle Time and understand where in the Cycle there is an opportunity to drive down time to value.

As per Figure 4 below, the Plandek Cycle Time metric view allows teams to understand the time spent in each ticket status within the development cycle. The flexible analytics capability and powerful filtering allow analysis by Status, Issue Type, and Epic (and any other standard or custom ticket field) all plotted over any time range required.
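A time-in-status breakdown of this kind can be derived from a ticket's status-change history (for example, a Jira changelog). The sketch below uses hypothetical statuses and data, and is not a description of Plandek's internals:

```python
from datetime import datetime

# Illustrative: derive hours spent in each status from a ticket's
# chronological status-change history. Statuses and timestamps are
# hypothetical examples.

def time_in_status(transitions, done_at):
    """transitions: list of (entered_at, status) in chronological order.
    Returns hours spent in each status up to completion."""
    durations = {}
    for (start, status), (end, _next) in zip(
        transitions, transitions[1:] + [(done_at, None)]
    ):
        hours = (end - start).total_seconds() / 3600
        durations[status] = durations.get(status, 0.0) + hours
    return durations

history = [
    (datetime(2023, 5, 1, 9), "In Progress"),
    (datetime(2023, 5, 2, 9), "Awaiting QA"),  # inactive wait
    (datetime(2023, 5, 3, 9), "In QA"),
]
print(time_in_status(history, done_at=datetime(2023, 5, 3, 17)))
```

The ticket's Cycle Time is then simply the sum of the per-status durations, and the breakdown shows where in the cycle the time was actually spent.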

Figure 4. Example Plandek Cycle Time metric view

Working with the Plandek Customer Success team, the scrum teams used Plandek to identify key determinant metrics that would have the biggest impact on reducing Cycle Time without impacting quality or requiring additional resource allocation.

Flow Efficiency was adopted as a critical metric at the team level to unlock significant shortening of Cycle Times across almost all scrum teams.

Flow Efficiency analyses the proportion of time tickets spend in an ‘active’ versus ‘inactive’ status.
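That proportion can be sketched as follows. Which statuses count as ‘active’ is a team-level choice, so the status set below is an illustrative assumption, not Plandek's definition:

```python
# Illustrative Flow Efficiency calculation: the share of a ticket's
# total cycle time spent in 'active' statuses. The ACTIVE set is an
# example only - teams define their own active/inactive statuses.

ACTIVE = {"In Progress", "In Review", "In QA"}

def flow_efficiency(time_per_status):
    """time_per_status: mapping of status -> hours spent in it."""
    total = sum(time_per_status.values())
    if total == 0:
        return 0.0
    active = sum(h for s, h in time_per_status.items() if s in ACTIVE)
    return 100.0 * active / total

print(flow_efficiency({
    "In Progress": 24.0,
    "Awaiting QA": 24.0,  # inactive wait - an improvement target
    "In QA": 8.0,
}))
```

Here 32 of 56 hours are active, a Flow Efficiency of roughly 57%; eliminating the ‘Awaiting QA’ wait is what lifts the figure.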

The Flow Efficiency analysis in Plandek (see Figure 5 below), enables Team Leads to isolate and analyse each ‘inactive’ status in the workflow and consider if there is scope to reduce or eliminate it. The analysis shows the relative size of each ‘inactive’ status opportunity in terms of time spent in the inactive state and the volume of tickets affected.

Typical opportunities to remove inactive bottlenecks included time spent with tickets awaiting definition (e.g. Sizing) and tickets awaiting QA. Where waits for QA were considered excessive, Delivery Managers reconsidered the QA resource allocated to the team.

Figure 5. Example Flow Efficiency metric within Plandek dashboard

Deployment Frequency is another velocity-related metric deemed critical by the client and the metric is tracked using Plandek at an aggregate level as a ‘North Star’ measure of increasing agility over time. It is also closely tracked at the branch and workflow level to identify bottlenecks and opportunities to increase the frequency of deployment.

Figure 6. Example analysis of deployment frequency by workflow
Figure 6. Example analysis of deployment frequency by workflow


Driving continuous improvement in quality

Quality is a consistent focus for the client – both the security and quality of the delivery process itself and the quality of the software delivered. Historical data in Plandek revealed trends in the overall Escaped Defect Rate and the opportunity to reduce time spent fixing P1 (high priority) bugs, to improve the customer experience and reduce time diverted from feature development.
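Definitions of Escaped Defect Rate vary; one common formulation, shown here as an illustrative sketch rather than Plandek's exact calculation, is the share of defects that escape to production rather than being caught pre-release:

```python
# Illustrative Escaped Defect Rate: share of all defects found in
# production rather than caught pre-release. This is one common
# formulation - not necessarily Plandek's exact definition.

def escaped_defect_rate(escaped, caught_pre_release):
    total = escaped + caught_pre_release
    return 100.0 * escaped / total if total else 0.0

# e.g. 6 production bugs vs 54 caught in testing
print(escaped_defect_rate(6, 54))
```

Tracked over time, a falling rate indicates defects are being caught earlier, before they reach customers.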

Figure 7. Example Escaped Defect Metric view

Plandek’s customisable dashboards enabled each team to focus on their own P1 resolution time and to better manage the backlog of Unresolved P1 and P2 bugs.

The net result was a more disciplined approach to bug resolution across teams with the result that hotfixes in production were reduced significantly over time.

Figure 8. Example quality metrics: P1 Resolution Time and Unresolved Bugs

The Plandek Customer Success team worked with the client to adopt a broad view of quality to include both the software output and the quality and security of the delivery process itself. As such, teams also track and manage metrics such as Commits without a Pull Request and Commits without a Ticket Reference. See Figure 9.

The former ensures that all code is peer-reviewed before being merged (an important security requirement) – and the latter ensures a clear linkage between committed code and Jira tickets, for transparency.

Figure 9. Example delivery process quality metrics


Metrics-led continuous improvement in software delivery – buy-in and trust

The client experience shows the power of applying a metrics-led philosophy across an Agile software delivery capability. Team Leads led significant improvement across a range of critical Agile metrics, including Cycle Time, Flow Efficiency, Escaped Defects and Deployment Frequency.


Key factors in the success of the approach included:

1. The identification and communication of a simple set of ‘North Star’ metrics around which the delivery organisation aligns

2. The use of Plandek to surface determinant metrics in real-time at all levels within the delivery hierarchy (Board, team, workstream, PI, tribe, etc.)

3. Collective buy-in and trust in the metrics from Team Leads upwards. This was critical and was made possible as a result of the total transparency of the Plandek metric presentation.

Experience shows that if Team Leads cannot see exactly how metrics are calculated and that they reflect their team’s context – they will question and ultimately reject the metrics – especially if the metric appears erratic or heavily negative.

Plandek is unique in its ability to show the ‘provenance’ of each metric and to allow individual teams to configure each metric in a way that reflects their particular circumstances, using the powerful Filter functionality. This is ultimately critical to the overall success of the initiative.

Ready to get started?

Try Plandek for free or book a demo with our team