

Example Plandek Metrics

Plandek’s powerful self-config capability enables users to create a myriad of Agile, delivery, engineering and DevOps metrics. Here we list some of our favourites.

What value are we delivering to our customers?

Value delivered is often represented in one of three ways:

  • Story points
  • Value points (perhaps £s)
  • Value tickets (e.g. stories or tasks)

Whilst some clients may use a more bespoke solution, it’s vital to have a clear view of what value is being delivered in terms that the business can understand.

Deployments (count and frequency) are becoming an increasingly important benchmark for delivery organisations, primarily because they show:

  • Delivery of valuable software to the business
  • Agility to meet changing business demand
  • DevOps maturity

They also form the basis of measurement for those clients who are focused on assessing DevOps maturity using the DORA framework.

Lead time is one of the most important metrics for an Agile organisation, as it shows the total time taken for a new idea to be delivered.

Whilst Cycle time is the focus of many engineering teams, the business “feels” Lead Time, as it’s the measure of how long it has taken for the full team to deliver new demand.

Showing this at both the Epic and Story level is essential, as they form the backbone of conversations with the business. Often the Epic will enable them to realise the full business goal, whilst the stories are the actual “containers” for incremental value delivered on the way to the full realisation of benefits.
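As a minimal sketch of the calculation, lead time is simply the elapsed time from a ticket being raised to it being delivered. The function and the ISO date format below are illustrative assumptions, not Plandek’s internal implementation:

```python
from datetime import datetime

def lead_time_days(created: str, delivered: str) -> int:
    """Days from an idea being raised to its delivery.

    `created` and `delivered` are hypothetical ISO dates pulled
    from your ticketing tool (e.g. Jira created/resolved fields).
    """
    fmt = "%Y-%m-%d"
    return (datetime.strptime(delivered, fmt) - datetime.strptime(created, fmt)).days

# A story raised on 1 March and shipped on 15 March has a 14-day lead time.
print(lead_time_days("2024-03-01", "2024-03-15"))  # 14
```

The same calculation applies at the Epic level; only the start and end events differ.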

How efficiently are we delivering software?

A breakdown of cycle time by status (shown in the top left) enables teams not only to understand how long it takes from dev start to production (9 days in the example), but also where in that process they spend the most time. This enables teams to focus their efforts on the right areas of the delivery process in order to drive material improvements.
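One way to sketch this breakdown is to walk a ticket’s status-transition history and sum the time spent in each status. The event shape and status names below are assumptions for illustration:

```python
from collections import defaultdict
from datetime import datetime

def time_in_status(transitions):
    """Sum days spent in each workflow status.

    `transitions` is a hypothetical list of (date, new_status) events,
    oldest first, as you might extract from a Jira changelog.
    """
    fmt = "%Y-%m-%d"
    totals = defaultdict(int)
    # Each status lasts from its own transition until the next one.
    for (t1, status), (t2, _) in zip(transitions, transitions[1:]):
        days = (datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)).days
        totals[status] += days
    return dict(totals)

events = [("2024-05-01", "In Dev"), ("2024-05-04", "In Review"),
          ("2024-05-06", "In QA"), ("2024-05-10", "Done")]
print(time_in_status(events))  # {'In Dev': 3, 'In Review': 2, 'In QA': 4}
```

Here the 9-day total from dev start to production splits into 3 days in dev, 2 in review and 4 in QA, immediately pointing at QA as the first place to look.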


One area that is almost always ripe for improvement is in the Pull Request (PR) process. Understanding the time it takes from Pull Request to merge to master (or whatever the default branch is) is critical to improving cycle times.
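A simple sketch of PR cycle time, assuming you can export opened/merged timestamps from your source-control tool (the field names and data below are hypothetical):

```python
from datetime import datetime

def pr_cycle_hours(opened: str, merged: str) -> float:
    """Hours from a pull request being opened to it being merged
    into the default branch (hypothetical ISO timestamps)."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(merged, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

# Two example PRs: one merged same day, one after a full day.
prs = [("2024-05-01T09:00", "2024-05-01T17:00"),
       ("2024-05-02T10:00", "2024-05-03T10:00")]
avg = sum(pr_cycle_hours(o, m) for o, m in prs) / len(prs)
print(avg)  # 16.0
```

Tracking this average over time shows whether review practices are speeding up or slowing down overall cycle time.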

First Time Pass Rate is often seen as an engineering measure; however, we see it as a far greater reflection of an agile team’s ability to collaborate, communicate and support each other in the delivery of new features. The failure of a new feature to progress without regression is often due to breakdowns in communication and understanding between team members (as opposed to the code quality itself).
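As a rough sketch, First Time Pass Rate can be computed as the share of tickets that reached Done without ever being bounced back to an earlier status. The `bounced` flag below is an assumed field you would derive from a ticket’s status history:

```python
def first_time_pass_rate(tickets):
    """Share of tickets that progressed to Done without regression.

    Each ticket is a hypothetical record with a `bounced` flag, i.e.
    whether it was ever moved back to an earlier workflow status.
    """
    passed = sum(1 for t in tickets if not t["bounced"])
    return passed / len(tickets)

tickets = [{"key": "APP-1", "bounced": False},
           {"key": "APP-2", "bounced": True},
           {"key": "APP-3", "bounced": False},
           {"key": "APP-4", "bounced": False}]
print(first_time_pass_rate(tickets))  # 0.75
```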

Speeding transitions helps teams measure how often tickets are being “shepherded”, i.e. not updated correctly at the time, so that someone later clicks through the workflow statuses in quick succession to get a ticket to the right status.

Given boards are the central place that teams organise themselves, it’s critical that the tickets are all in the right place so that decisions made are based on the correct information.
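A sketch of how “speeding” might be detected: flag any transition that follows the previous one within a short window, suggesting someone clicked a ticket through several statuses at once. The 60-second threshold and event shape are illustrative assumptions:

```python
from datetime import datetime

def speeding_transitions(transitions, threshold_secs=60):
    """Count workflow transitions made within `threshold_secs` of the
    previous one. `transitions` is a hypothetical list of
    (timestamp, status) events, oldest first."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    times = [datetime.strptime(t, fmt) for t, _ in transitions]
    return sum(1 for a, b in zip(times, times[1:])
               if (b - a).total_seconds() <= threshold_secs)

events = [("2024-05-01T09:00:00", "In Dev"),
          ("2024-05-07T16:00:00", "In Review"),  # normal update
          ("2024-05-07T16:00:20", "In QA"),      # clicked straight through
          ("2024-05-07T16:00:35", "Done")]       # clicked straight through
print(speeding_transitions(events))  # 2
```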

How dependable are we as a capability and set of teams?

For scrum teams, meeting sprint commitments is central to delivering on larger commitments, as they form the basis of any larger delivery plan. As such, a team should be able to meet their planned target for the sprint a high majority of the time, and in doing so will enable the larger plan to be delivered. You should track not only how much of your planned sprint work was completed, but also what happened to the incomplete work.
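A minimal sketch of both halves of that measure: the completion percentage, plus the specific tickets that slipped. The ticket keys and point values are hypothetical:

```python
def sprint_completion(planned, completed):
    """Percentage of planned story points completed in a sprint, and
    the tickets that slipped.

    `planned` maps hypothetical ticket keys to story points committed
    at sprint start; `completed` is the set of keys that reached Done.
    """
    done = sum(pts for key, pts in planned.items() if key in completed)
    slipped = {k: v for k, v in planned.items() if k not in completed}
    return 100 * done / sum(planned.values()), slipped

planned = {"APP-1": 5, "APP-2": 3, "APP-3": 8, "APP-4": 2}
pct, slipped = sprint_completion(planned, {"APP-1", "APP-2", "APP-4"})
print(round(pct, 1), slipped)  # 55.6 {'APP-3': 8}
```

The slipped tickets are the starting point for the “what happened to the incomplete work” conversation.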

Many organisations use specific objectives in Jira to deliver high-level goals within sprints or more broadly in a PI or quarter. This is quite specific to clients, but nonetheless a key metric we see across many organisations.

How healthy is our backlog?

One of the key metrics for a dev team is the number of story points ready for development. Typically, teams should aim to have no less than one sprint’s worth of points, but we recommend that they have two sprints (or just slightly more).

On the left you can see how much work is currently groomed and ready for dev, and how that has trended over time for the various teams.
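The “sprints’ worth” rule of thumb reduces to a simple ratio of ready points to velocity. The figures below are hypothetical:

```python
def sprints_of_runway(ready_points, velocity):
    """How many sprints the groomed backlog would feed, given the
    team's average velocity (both figures hypothetical)."""
    return ready_points / velocity

# 45 groomed points against a velocity of 20 points per sprint.
runway = sprints_of_runway(ready_points=45, velocity=20)
print(round(runway, 2))  # 2.25 - just over the recommended two sprints
```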


In order to assess a team’s ability to support demand and generate work, understanding the time it takes to maintain the backlog of SPs ready for dev is essential.

The time to design stories metric not only reflects the ability of teams to respond to shortfalls in work; the breakdown by design status also enables teams to identify areas where they can speed up the design process.

We have two views recommended for this:

Backlog distribution: ignoring the number of stories in a backlog, it’s important to view how they are generally distributed so you can understand if you have stale tickets sitting in particular places.

Distribution by count: Similar to the above, but this time introducing the number of stories so that you can size the issue and also see how an increase/decrease in stories impacts distribution.
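Both views can be sketched from the same data: group backlog stories by status, then report each status as a count and as a share. The status names below are illustrative assumptions:

```python
from collections import Counter

def backlog_distribution(tickets):
    """Distribution of backlog stories across workflow statuses, as
    (count, percentage-share) pairs. Statuses are hypothetical."""
    counts = Counter(t["status"] for t in tickets)
    total = sum(counts.values())
    return {s: (n, round(100 * n / total)) for s, n in counts.items()}

tickets = ([{"status": "Refinement"}] * 6
           + [{"status": "Ready for Dev"}] * 3
           + [{"status": "Design"}] * 1)
print(backlog_distribution(tickets))
# {'Refinement': (6, 60), 'Ready for Dev': (3, 30), 'Design': (1, 10)}
```

The shares give the distribution view; the counts size the issue.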

What risks do we have that may impact what or how we are delivering?


Escaped defects enable teams to keep track of issues that are reproducible in UAT and that (in theory, although not always in practice) should have been caught before a release to production. These may help improve testing, whether manual or automated, as part of the QA and release process.


Having a view of your backlog of “critical” bugs is important, as it presents a view of the potential impact on the operation of software in production. This may be a view of bugs ahead of production or bugs that currently exist in production (both are important to track).

It is equally important to understand the resolution time for any class of bug that is deemed critical (e.g. a P1 or P2 bug). This is a key metric for business users: teams need to be responsive to key bugs, whilst at the same time not being so responsive that other delivery commitments are negatively impacted (e.g. sprint target completion and cycle time).
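A sketch of resolution time split by priority class, assuming bug records with a precomputed resolution duration (all fields and figures hypothetical):

```python
from collections import defaultdict

def mean_resolution_days(bugs):
    """Average days to resolve bugs, grouped by priority.

    Each bug is a hypothetical record with `priority` and
    `resolution_days` fields derived from your tracker.
    """
    grouped = defaultdict(list)
    for b in bugs:
        grouped[b["priority"]].append(b["resolution_days"])
    return {p: sum(v) / len(v) for p, v in grouped.items()}

bugs = [{"priority": "P1", "resolution_days": 1},
        {"priority": "P1", "resolution_days": 3},
        {"priority": "P2", "resolution_days": 6}]
print(mean_resolution_days(bugs))  # {'P1': 2.0, 'P2': 6.0}
```

Watching P1/P2 resolution times alongside sprint completion helps spot when bug responsiveness starts eating into planned delivery.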

It’s common for most engineering teams to require that all merges to master (or the default branch) include a pull request, yet many merges occur without this basic aspect of code review. It’s critical to understand when and how often this happens so it can be corrected going forward.
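One way to sketch this check is to scan merge-commit messages on the default branch for a pull-request reference. The pattern below assumes GitHub’s default “Merge pull request #N” message; other platforms need a different pattern:

```python
import re

def merges_without_pr(merge_messages):
    """Flag merge commits to the default branch that don't reference a
    pull request. Assumes GitHub-style 'Merge pull request #N'
    messages - adjust the pattern for other platforms."""
    pr_pattern = re.compile(r"Merge pull request #\d+")
    return [m for m in merge_messages if not pr_pattern.search(m)]

messages = ["Merge pull request #101 from feature/login",
            "Merge branch 'hotfix' into master",
            "Merge pull request #102 from feature/search"]
print(merges_without_pr(messages))  # ["Merge branch 'hotfix' into master"]
```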


Understanding the commit activity across repos and by engineer is key. Many clients gamify this metric by encouraging more commits of smaller size, thereby minimising the risk profile of any single commit being too large. It’s also important to understand the general commit behaviour of teams and individual developers.

All commits should refer back to the ticket that justified the change in the first place. Teams should keep track of ghost commits, which have no clear traceability back to the rationale and history of the change.
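A sketch of ghost-commit detection: look for a ticket key in each commit message. The regex below assumes Jira-style keys such as APP-123; adapt it to your own conventions:

```python
import re

def ghost_commits(commit_messages, key_pattern=r"\b[A-Z][A-Z0-9]+-\d+\b"):
    """Return commit messages with no Jira-style ticket key (e.g.
    APP-123). The default pattern is an assumption - adjust it to
    match your ticketing tool's key format."""
    pattern = re.compile(key_pattern)
    return [m for m in commit_messages if not pattern.search(m)]

messages = ["APP-101 add login form", "fix typo", "APP-102 search endpoint"]
print(ghost_commits(messages))  # ['fix typo']
```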

How are our sprints progressing and what can we learn from the last one?

Boards are great for helping teams organise, but they aren’t great at telling the whole story for a sprint. Sprint Flow is an incredibly helpful metric that shows the day-to-day progress of all tickets from “To do” to “Done”, enabling you to spot and address key blockers, bottlenecks and other scope changes over the course of the sprint (or in a retrospective).

One of the biggest threats to a team’s sprint success is having the goalposts shifted whilst they are in full flight. With Sprint Scope, teams can track any changes to the scope of a sprint (new tickets added, removed tickets, changing estimates, etc.) in order to ensure they stay on track and meet their commitments.
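The essence of scope tracking is a set comparison between what was committed at sprint start and what the sprint contains now. The ticket keys below are hypothetical:

```python
def scope_change(committed, current):
    """Compare the ticket set committed at sprint start with the
    current sprint contents to surface added and removed scope
    (hypothetical ticket keys)."""
    added = sorted(set(current) - set(committed))
    removed = sorted(set(committed) - set(current))
    return added, removed

committed = {"APP-1", "APP-2", "APP-3"}
current = {"APP-1", "APP-3", "APP-9"}
print(scope_change(committed, current))  # (['APP-9'], ['APP-2'])
```

Estimate changes on tickets that stayed in the sprint would need a similar before/after comparison on story points.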

Find out more