By Charlie Ponsonby

Originally published on InfoQ

We work with Agile teams of all shapes and sizes, and predictability is front-of-mind for almost all of them – because the words “Agile” and “predictable” don’t always go hand in hand.

So how can development teams maintain their agility and improve their delivery predictability, so that when stakeholders ask the predictable question “Are we on schedule?”, they can give a sensible answer?

Typical Agile team forecasting approaches

Product-based Agile software development teams that deliver small increments very regularly may spend little time worrying about forecasting.

But Agile teams are often established to deliver major milestones that the business expects at a certain point and for an agreed budget, so they need to forecast effectively or risk being accused of being “Agile and late!”

In our experience, Agile teams’ forecasting tends to be pretty inaccurate and is often based only on a simple observation of backlog, velocity and word-of-mouth reassurance from the teams themselves.

In our opinion a really meaningful forecast requires a broader set of empirical data reflecting all the potential sources of project delay.

* There is a separate debate as to whether an Agile software development methodology is appropriate in a project context like this, but that is for another day.

“The Logical Six” – the six sources of project delay

Logic dictates that there are six possible reasons why a project is late – the so-called “Logical Six”.  Three of the Logical Six are in direct control of the technology team:

  1. The size and complexity of the task is underestimated.
  2. The planned group of appropriate engineers is not available.
  3. The delivery team is not delivering as productively as anticipated.

And the other three are in control of the business sponsors interacting with the technology team.  These are:

  4. Unclear requirements definition – internal clients are not clear enough about what they actually want.
  5. Scope change – the business moves the goalposts (changed/new requirements or changed priorities).
  6. Ongoing input – the development process is delayed by a lack of stakeholder input where/when required.

In our view, you will never really be able to accurately forecast and improve your delivery predictability unless you collect metrics which track all of these six levers.

Only then will you really understand whether a project is likely to be “late” and what needs to be done to get it back on track.

The "Logical Six" – the six ultimate sources of project delay

The “Logical Six” – the six ultimate sources of project delay

Challenging your teams’ forecasting with analysis of the delivery metrics that matter

So, what are the metrics that relate to the six sources of project delay – and so are critical to delivery predictability and improved forecasting accuracy?

The table below shows our favourite metrics in each of the areas.  We encourage Delivery Managers to focus on these when working with the Delivery Team Leads to create more realistic forecasts of delivery timing.

In summary, the metrics are:

  1. People availability – clearly key.  If we don’t have the engineers that we anticipated, we will be late.
  2. Team productivity, relating to:
    1. Productive time – another critical metric, tracking the proportion of time engineers can focus on writing new features
    2. Process efficiency – friction in the development process can undermine the best-laid delivery plans, so really understanding the trends in, and causes of, this friction is key
    3. Velocity and time to value – understanding how our throughput and time to value have varied as the project progresses is yet another determinant variable in our forecasting
  3. Estimation accuracy – if we are adopting a Scrum-based approach, sprint completion gives a very good indicator of our forecasting capability.  If we cannot hit our two-weekly sprint goals, we are unlikely to be effective at estimating effort and forecasting further into the future
  4. Requirements definition, stakeholder input and scope change – these can be tracked using quant engineer feedback collected from collaboration hubs like Slack.  This is something we use a lot internally to improve our forecasting, as it uses quant insight from the people actually doing the work.  It often adds confidence to an otherwise theoretical delivery forecast and sheds light on three of the Logical Six (requirements definition, stakeholder input and genuine scope change).

Key metrics to track the Logical Six levers of project delay

Available Engineering Resource
Trend metrics:
  • Active Engineers (vs. plan)
Relevance: Clearly key – shows whether we have the planned resource in place to deliver the work.

Productive Time
Trend metrics:
  • % time spent on new features
  • % time spent on upkeep
  • % time lost to non-product-related activity
Relevance: Key to understand how this has trended over time.  If we are expending more energy on non-productive tasks, this will clearly impact our progress going forward.

Process Efficiency
Trend metrics:
  • Flow Efficiency (%)
  • Rework (days)
  • WIP per developer
Relevance: These metrics analyse the “friction” in the development process and how it has trended over time.  Declining Flow Efficiency is a problem that can often be addressed, so it is a key metric in forecast mitigation.  Rework shows trends in the accumulated time spent reworking tickets that fail QA – another form of friction that may be mitigated (e.g. by assisting engineers new to the code base).

NB: In our view, any metric collected at the individual level needs to be viewed in context by people directly involved in the project. Such metrics can be taken out of context (to damaging effect) if circulated more broadly.

Velocity and Time to Value
Trend metrics:
  • Feature Tickets Completed
  • Cycle Time (days)
  • Lead Time (days)
Relevance: Velocity metrics are problematic, but a detailed understanding of trends in tickets completed (and story points/value points per ticket) is key when challenging forecasts.  Critical too is an understanding of changes in Cycle and Lead Times: if they are lengthening, accurate forecasting is tricky.

Sprint Accuracy
Trend metrics:
  • Overall Completion Rate (%)
  • Sprint Overall Completion vs. Sprint Target Completion (%)
Relevance: Inability to meet two-weekly sprint goals makes forecasting over longer periods very difficult.  These metrics are therefore critical to forecasting accuracy.

Quant. Engineer Feedback
Trend metrics:
  • Team morale
  • Sprint effectiveness
  • Quality of ticket and requirements definition
  • Quality of business sponsor input
  • Effort identified as agreed scope change
Relevance: Some metrics platforms enable the real-time polling of engineers through collaboration hubs. This provides quant data on engineers’ views on morale and process efficiency; on the impact of business sponsors’ requirements definition and ongoing input; and on Team Lead feedback on stories added by business stakeholders beyond the original scope.
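To make these definitions concrete, below is a minimal sketch of how a couple of the table’s metrics might be computed from ticket data. The record fields and numbers are illustrative assumptions, not a real Jira or Plandek schema.

```python
# Minimal sketch: computing Flow Efficiency and Cycle Time from ticket data.
# "active_days" is time actually worked on a ticket; "elapsed_days" is
# calendar time from work started to done. All values are illustrative.
tickets = [
    {"id": "T-1", "active_days": 3, "elapsed_days": 8},
    {"id": "T-2", "active_days": 2, "elapsed_days": 4},
    {"id": "T-3", "active_days": 5, "elapsed_days": 20},
]

def flow_efficiency(tickets):
    """Share of total elapsed time that was actively worked (value-adding)."""
    active = sum(t["active_days"] for t in tickets)
    elapsed = sum(t["elapsed_days"] for t in tickets)
    return active / elapsed

def avg_cycle_time(tickets):
    """Mean elapsed days from work started to done."""
    return sum(t["elapsed_days"] for t in tickets) / len(tickets)

print(f"Flow efficiency: {flow_efficiency(tickets):.0%}")         # 31%
print(f"Average cycle time: {avg_cycle_time(tickets):.1f} days")  # 10.7 days
```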

Collecting the delivery metrics that matter

The key delivery metrics require surfacing data from a myriad of sources, including workflow management tools, code repos and CI/CD tools – as well as collecting quant feedback from the engineering team themselves (via collaboration hubs).

The complexity of the data and the multiple sources make this sort of data collection very time-consuming to do manually; doing it at scale really requires an end-to-end delivery metrics platform.

Delivery metrics platforms are available which consist of a data layer to collate and compile metrics from multiple data sources and a flexible UI layer to enable the creation of custom dashboards to surface the metrics in the desired format.

Using Root Cause RAG reporting to combine your delivery forecast and mitigation plan

If we use metrics to track and analyse the Logical Six drivers of project progress, we will get a much clearer picture of real project progress.

The improved forecast and related mitigations can be presented together in a Root Cause Red, Amber and Green (RAG) Progress Report.

Root Cause RAG reports are far more effective than traditional RAG progress reporting, which often sheds very little light on why a project (with a required go-live date) is actually behind schedule and what needs to be done to bring it back on track.

In contrast to a traditional RAG approach, the Root Cause RAG Report (see the example below) clearly shows:

  1. Our latest delivery forecast
  2. The delivery metrics that support our forecast
  3. Our mitigations (based around the Logical Six levers that drive project timing) – e.g. the need to increase productive time by reducing time diverted to upkeep; the need to improve Flow Efficiency by addressing the blockages in the dev process (e.g. QA wait time); or the need for improved stakeholder input (as shown in the quant engineer feedback)
  4. Allocated responsibilities (across the development teams and stakeholders) to deliver the identified mitigations
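As a sketch of how such a report might be assembled mechanically, the snippet below classifies each Logical Six lever Red, Amber or Green against thresholds. The metric names, readings and cut-offs are illustrative assumptions, not Plandek’s actual rules.

```python
# Minimal sketch of deriving a Root Cause RAG status per Logical Six lever.
# All metric names, readings and thresholds below are illustrative.

def rag(value, amber, red, higher_is_worse=True):
    """Classify a metric reading as Red, Amber or Green against thresholds."""
    if higher_is_worse:
        return "Red" if value >= red else "Amber" if value >= amber else "Green"
    return "Red" if value <= red else "Amber" if value <= amber else "Green"

# lever: (current reading, amber threshold, red threshold, higher_is_worse)
levers = {
    "People availability (engineers vs plan)": (0.70, 0.90, 0.75, False),
    "Productive time (% on new features)":     (0.45, 0.55, 0.40, False),
    "Flow Efficiency (%)":                     (0.25, 0.35, 0.20, False),
    "Sprint Overall Completion (%)":           (0.60, 0.80, 0.65, False),
    "Scope change (% effort added)":           (0.18, 0.10, 0.20, True),
}

for lever, (value, amber, red, worse) in levers.items():
    print(f"{lever:45s} {value:>4.0%} -> {rag(value, amber, red, worse)}")
```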

Done well, Root Cause RAG Reports can be a really effective means of presenting our (more accurate) forecasts in a way that stakeholders can understand, and so can be an important step in reducing lateness and bringing the technology team and the internal client much closer together.

As discussed, however, this relies on an understanding of the metrics that actually determine project lateness, and on a means of collecting them.

 

Example Root Cause RAG Report

About the Author:

Charlie Ponsonby started his career as an economist in the developing world, before moving to Andersen Consulting. He was Marketing Director at Sky until 2007, before leaving to found and run Simplifydigital in 2007. Simplifydigital was three times in the Sunday Times Tech Track 100 and grew to become the UK’s largest broadband comparison service. It was acquired by Dixons Carphone plc in April 2016. Ponsonby co-founded Plandek with Dan Lee in October 2017. Plandek is an end-to-end delivery metrics analytics and BI platform. It mines data from toolsets used by delivery teams (such as Jira, Git and CI/CD tools), to provide end-to-end delivery metrics to optimise software delivery forecasting, risk management and delivery effectiveness. Plandek is used by clients globally including Reed Elsevier, News Corporation, Autotrader.ca and Secret Escapes.


By Charlie Ponsonby, Co-CEO Plandek

We work with a great variety of organisations at various points in their Agile transformations.  Whilst the move to Agile has driven tangible and lasting benefit in almost all of these organisations – the great majority have experienced problems and unintended consequences along the way.

One issue that we hear very often (particularly in large commercial organisations) is the difficulty of reconciling Agile’s decentralised, iterative approach with internal clients used to agreeing budgets (up-front) and expecting outcomes delivered at certain points in time.

Products over Projects

An important principle of Agile is to align stable teams around designing and building particular products – so that they have end-to-end responsibility for designing, building and maintaining the technology and become increasingly expert over time.  This is inherently sensible and prevents temporary, project-based teams being thrown together to build something, only to be reassigned after “launch”.

Effective product-based teams aim to iterate and deploy improvements increasingly rapidly in keeping with the core Agile goal of the “early and continuous delivery of valuable software”.

As a result, forecasting becomes less important, as the business expects small, frequent increments to an existing application over time – rather like painting the Forth Bridge, the job that is famously never finished…


Projects over Products?

Following the popular writing of Martin Fowler and others, product-based teams have become the preferred option for many Agile organisations – but in certain situations a product-based methodology can have major drawbacks.

A typical example would be the build of a new application that the business requires at a certain time and for an agreed budget (signed off upfront) – a very common scenario in large organisations.

Under these circumstances a product-based Agile methodology can cause serious problems, as it is not designed to forecast the cost and timing of delivery.

This inability to forecast cost and timing of delivery almost inevitably causes serious problems with stakeholders anxious to receive progress updates and working software at the planned point in time, and at the planned budget.

So how can an Agile team better predict delivery timing and cost – even if the Agile methodology that they are adopting is not suited to accurate forecasting?


The concept of a Delivery Risk Profile

Forecasting techniques within Agile teams are often rudimentary.  Burndown charts extrapolate current velocity against the known backlog to estimate the completion date of the outstanding effort.  There are two basic problems with this:

  1. as Agile teams estimate ticket size/effort on an ongoing basis, there will inevitably be un-estimated backlog that is not included in the forecast
  2. extrapolating current velocity may be highly misleading (for any number of reasons affecting the team’s ability to deliver going forward).
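To make these two problems concrete, here is a minimal sketch of a naive linear burndown forecast and a risk-adjusted variant. All numbers and adjustment factors are illustrative assumptions.

```python
# Minimal sketch of the naive linear burndown forecast described above, and
# why it can mislead. All numbers and adjustment factors are illustrative.

estimated_backlog = 240   # story points already estimated
velocity_per_sprint = 30  # recent average story points per sprint

naive_sprints = estimated_backlog / velocity_per_sprint
print(f"Naive forecast: {naive_sprints:.1f} sprints")  # 8.0

# Problem 1: un-estimated backlog. Assume ~20% of the eventual backlog
# carries no estimate yet and will size similarly to the estimated work.
unestimated_share = 0.20
full_backlog = estimated_backlog / (1 - unestimated_share)

# Problem 2: velocity drift. Assume trends (e.g. declining flow efficiency)
# suggest ~10% slower delivery going forward.
adjusted_velocity = velocity_per_sprint * 0.9

adjusted_sprints = full_backlog / adjusted_velocity
print(f"Risk-adjusted forecast: {adjusted_sprints:.1f} sprints")  # 11.1
```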

However, the good news is that if we collect and track a set of delivery metrics over time, we can start to put together a more informed view of forecast delivery timing and cost, expressed within a Delivery Risk Profile.

Plandek is the world’s leading BI platform to surface and analyse end-to-end delivery metrics.   It mines data from toolsets used by delivery teams (such as Jira, Git, and CI/CD tools), to provide end-to-end delivery metrics to optimise software delivery forecasting, risk management and delivery effectiveness.

Analysis of this balanced scorecard of metrics gives a more measured view of likely future delivery time and cost.  This analysis is presented in a Delivery Risk Profile.

The Delivery Risk Profile is made up of metrics in five key areas:

  1. Backlog analysis – it’s key to understand the quality and age of the current backlog and to get a better feel for the likely size of the unestimated element of the backlog
  2. Estimation and sprint accuracy – a critical determinant of timing accuracy. If teams are unable to estimate effectively and to deliver to their sprint goals, then longer-term delivery targets become far more unreliable
  3. Process efficiency – often very poorly understood, but vital to understanding likely future velocity
  4. Throughput and velocity – clearly key, but even more interesting when trends by individual team are carefully considered
  5. Talent availability and engagement – again often poorly understood but critical in maintaining quality and velocity – as delivery is ultimately all about the talent.

Key metrics within the Delivery Risk Profile

The graphic below shows key metrics that, when considered together, give a much more informed view of likely future delivery timing and hence cost – as they strongly determine future velocity.

The Plandek analytics platform collects these metrics across all teams and projects and enables Delivery Managers to get a much more complete picture of likely future velocity – expressed in a quantitative Delivery Risk Profile.

Typically, trends in the metrics are analysed by team to build the profile of risk.  This Risk Profile then enables the Delivery Manager to refine the delivery forecast.

A more accurate delivery forecast is therefore synthesised from three different sources – the teams themselves; linear burndown estimates; and the balanced scorecard of metrics known to directly impact future velocity.
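As a minimal sketch of that synthesis, the snippet below blends the three sources into one forecast. The forecasts and weights are illustrative assumptions; in practice a Delivery Manager would set the weights based on how trustworthy each source has proved historically.

```python
# Sketch: synthesising one forecast from the three sources named above.
# The forecasts and weights are illustrative assumptions.
forecasts_in_sprints = {
    "team estimate":         9.0,   # word-of-mouth from the teams themselves
    "linear burndown":       8.0,   # backlog divided by current velocity
    "risk-adjusted metrics": 11.0,  # balanced-scorecard view
}
weights = {
    "team estimate":         0.3,
    "linear burndown":       0.2,
    "risk-adjusted metrics": 0.5,
}

blended = sum(forecasts_in_sprints[k] * weights[k] for k in weights)
print(f"Blended forecast: {blended:.1f} sprints")  # 9.8
```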

Key delivery metrics used to create a Delivery Risk Profile

 

Case study – applying a Delivery Risk Profile to reduce unplanned go-live delays by 50%

We work with a number of data companies, one of which has been particularly successful at applying the Plandek metrics set to build an effective risk profile, which they have used to very significantly improve their go-live forecasting.

Their teams are organised around a product-led strategy and are therefore not ideally suited to delivering time-dependent “projects”.  In this instance, stakeholders need careful management to ensure that there is as much visibility as possible of progress and of the forecast timing of major milestones.

Before the Plandek metrics set was tracked and analysed, unplanned go-live delays were common.  Delivery Managers relied on linear burndown charts and word-of-mouth updates from scrum teams.

Plandek was implemented, and delivery Team Leads and Delivery Managers started to track and manage to a simple set of delivery metrics.  Trends were analysed over time for each scrum team to build a Delivery Risk Profile (as described above).

The immediate effect was a greater focus on:

  1. Delivering sprints more accurately (measured via Sprint Overall Completion (%) and Sprint Target Completion (%))
  2. The forecasting process – with word-of-mouth forecasts from the teams more rigorously debated and refined in the light of trends in the key risk profile metrics.

The net result was a very significant improvement in forecasting accuracy.  Over a six-month period, unplanned go-live delays reduced by 50%.  This greatly strengthened the relationship between the technology delivery team and the business stakeholders.

The case is summarised in the table below.

A European data business – Understanding the delivery team’s risk profile to improve delivery forecasting accuracy

 


 

Agile metrics for self-improvement

(Agile && metrics) ? Can agile metrics help developers and teams improve?

The journey to becoming Agile can sometimes be tricky. In this article, discover nine critical success factors that make Agile metrics work for teams. What questions should you and your team be asking yourself in order to focus on self-improvement, reliability, efficiency, and high-quality code delivery?

By Colin Eatherton

Article originally published on JAXenter.

 

Inevitably, most teams get to the stage where they need to adopt a more Agile delivery process. This is not just a sign of maturity. It’s a sign that the software they are developing is being used, is deemed useful, and is receiving feedback and change requests so that it continues to improve.

My team is in a unique position. We are striving to improve delivery as we develop a tool that strives to help teams do the same. In other words, we use our own tool to improve the delivery of it!


In my experience, the journey to becoming more Agile can be tricky. Each team has its own goals and ideas about how to get there. All teams, however, need to be able to reflect on their progress, measure how effective their current strategy is, and gain more visibility of the wider landscape. Of course, this is easier said than done.

Bottom-up is best

The topic of which metrics Agile teams can trust to reliably help them measure progress – or whether to use them at all – is both fascinating and contentious. Many people associate metrics with a top-down management style, which is the opposite of the decentralised, empowered and self-determining team philosophy that Agile promotes.

During a one-to-one meeting with my team lead, I asked him which metrics he felt I should focus on. He explained that the only ones worth looking at were those that the whole team agreed would help improve delivery. When it came to my own self-improvement goals, he said I should select metrics myself.

As a rule, the more metrics are applied from the top down, the less effective they are. (This is not to say, however, that there aren’t valuable metrics that can indicate progress at a higher level.)

Using Agile metrics for team improvement

Self-improvement is a key Agile principle. On the face of it, it’s a pretty simple process. First, you identify what you want to improve. Next, establish ways to measure the attributes that contribute to improvement. Then measure and reflect. Therefore you will always need a reliable way to track progress.


My team chose Agile metrics that focus on various attributes of delivery, quality and value. For example, we measure Lead Time (from the time a ticket is created in Jira to its production deployment) and the number of escaped bugs. We’ve created a dashboard in our own software around these attributes so we can measure and act on them daily, or as part of a retrospective. Our dashboards help guide us and qualify decisions we make around team, process, and delivery improvement to ensure we continually head in the right direction. We can also opt to see individual contributions to these metrics. For instance, I have chosen to create a view of metrics that only I can see, so I can measure my own personal output.
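As a minimal sketch (with hypothetical field names, not our actual schema), those two measures might be computed like this:

```python
from datetime import datetime

# Minimal sketch of the two measures above. The fields are hypothetical; in
# practice "created" comes from Jira and "deployed" from the CI/CD tooling.
tickets = [
    {"created": "2024-03-01", "deployed": "2024-03-12", "escaped_bugs": 0},
    {"created": "2024-03-04", "deployed": "2024-03-20", "escaped_bugs": 2},
]

def lead_time_days(ticket):
    """Lead Time: from ticket creation in Jira to production deployment."""
    created = datetime.fromisoformat(ticket["created"])
    deployed = datetime.fromisoformat(ticket["deployed"])
    return (deployed - created).days

avg_lead = sum(lead_time_days(t) for t in tickets) / len(tickets)
escaped = sum(t["escaped_bugs"] for t in tickets)
print(f"Average Lead Time: {avg_lead:.1f} days")  # 13.5 days
print(f"Escaped bugs: {escaped}")                 # 2
```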

Using Agile metrics for delivery cycle improvement

As part of our cycle rituals, our team is responsible for making sure our scope is realistic. To support this, we use Agile metrics to ensure that the combined complexity, time and effort of our tasks match the overall time available and the team’s abilities. We measure scope using story points. We also built, and now use, a ‘Sprint Report’ facility. This allows us to see a breakdown of the sprint’s overall completion, including the target completion and work added to the sprint after it started. It also includes ‘Sprint-specific dashboards’ that use metrics like ‘Completed Tickets’ to calculate the amount of work developers can reliably complete during a sprint (aka their ‘velocity’).
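The arithmetic behind such a sprint report is simple; here is a minimal sketch with illustrative numbers (the field names are assumptions, not our actual schema):

```python
# Minimal sketch of the sprint-report arithmetic described above.
# Field names and numbers are illustrative assumptions.
sprint = {
    "committed_points": 40,  # scope agreed at sprint planning
    "added_points": 8,       # work added after the sprint started
    "completed_points": 36,  # everything finished by sprint end
}

overall_scope = sprint["committed_points"] + sprint["added_points"]
overall_completion = sprint["completed_points"] / overall_scope

# Approximation: assumes completed work counts against the original
# commitment first.
target_completion = (
    min(sprint["completed_points"], sprint["committed_points"])
    / sprint["committed_points"]
)

print(f"Sprint Overall Completion: {overall_completion:.0%}")  # 75%
print(f"Sprint Target Completion:  {target_completion:.0%}")   # 90%
```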


 

Nine critical success factors that make Agile metrics work for teams

As I said before, Agile metrics for team improvement can be contentious. They open up a lot of heated discussions and to varying degrees benefit from a wider understanding of context and narrative. So we discuss them and apply the following tenets to help find common ground:

Simplicity

Complicated metrics run counter to the Agile spirit. We like to define our journey as specifically as we can, answering simple questions with easy-to-understand metrics that support them.

Consensus

The metrics need to be selected by the development team and serve a common aim shared by project members from the Scrum Master to the technology leader.

Relevance

You shouldn’t measure anything unrelated to your journey’s destination. Each project follows a different set of milestones so may need different metrics. However, as there is only one final destination, some carefully-selected metrics should be applicable across all teams. Less can be more, so when we build out a dashboard together in our team meetings, we try to concentrate on only a handful of metrics at a time.

Significance

Software delivery metrics are often outcome-based. Although legitimate, there’s a risk of tracking only symptoms and not root causes. The ‘Cycle Time’ metric, for example, shows how long work is taking rather than why. Descriptive metrics like these should also include details of the variables that impact the outcomes. For example, alongside Cycle Time you could show an analysis of the bottlenecks. To improve, we want to uncover root causes and identify behaviour gains we can make together – we need to tell the full story.
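As a minimal sketch of pairing Cycle Time with its root causes, the snippet below breaks elapsed time down by workflow status to expose the bottleneck; the statuses and durations are illustrative assumptions.

```python
# Sketch: breaking Cycle Time down by workflow status to expose the
# bottleneck. Statuses and average durations are illustrative assumptions.
time_in_status_days = {
    "In Development": 2.1,
    "Waiting for QA": 4.6,
    "In QA":          1.4,
    "Code Review":    0.9,
}

cycle_time = sum(time_in_status_days.values())
bottleneck = max(time_in_status_days, key=time_in_status_days.get)
share = time_in_status_days[bottleneck] / cycle_time

print(f"Cycle Time: {cycle_time:.1f} days")                     # 9.0 days
print(f"Bottleneck: {bottleneck} ({share:.0%} of cycle time)")  # 51%
```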

Right sources

We need to analyse data from the sources that our developers genuinely engage with in their everyday work. These include workflow management software like Jira; code repositories like GitHub, Bitbucket, TFS or GitLab; code quality tools like SonarQube; time tracking systems like Harvest or Tempo; and continuous delivery tools like Jenkins and GoCD.

Automation

If analysing metrics takes significant cognitive effort or time to collate, we tend to lose patience and abandon the effort. The metrics need to complement processes, not slow them down.

Near real-time

Agile metrics delivered in near real-time fundamentally drive improvement as they can be discussed in daily stand-ups and sprint retrospectives.

The human factor

Software development is a process (almost) completely driven by people. This means it should be possible to source information and get to the root cause of issues very fast. Typically, feedback is collected in person, in stand-ups and retros. In theory this should work well, but it can also hide issues that participants don’t want to communicate openly. This is especially true in changing, distributed teams with a mix of full-time employees and contractors. To address this, and to provide context and narrative around our metrics, we incorporate feedback into our tool. For example, when tickets get closed, we get the chance to provide feedback via Slack on how the ticket went and on its requirements. These prompts also give us a feel for how a ticket has performed post-dev as it continues (hopefully!) past QA.
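Here is a minimal sketch of such a prompt, using Slack’s chat.postMessage Web API; the token handling, channel and message wording are assumptions for illustration.

```python
import requests

# Minimal sketch: prompting an engineer for ticket feedback in Slack when a
# ticket closes. chat.postMessage is Slack's real Web API method; the token
# handling, channel and wording here are illustrative assumptions.
SLACK_TOKEN = "xoxb-..."  # bot token; keep in a secret store in practice

def prompt_for_feedback(ticket_id, channel):
    """Post a short feedback prompt to the team's channel."""
    resp = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json={
            "channel": channel,
            "text": (f"{ticket_id} just closed – how clear were its "
                     "requirements, 1 (unclear) to 5 (very clear)?"),
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: prompt_for_feedback("T-42", "#team-delivery")
```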

Actionable

Metrics only make sense if teams can act upon and improve them. Without constant support, many teams will not have the time or inclination to drive their own self-improvement as they strive to meet their short-term delivery objectives. Active stewardship by the technology leadership team can make a huge difference.

The limits of home-grown solutions

Since for many developers (Agile && metrics) don’t seem to get along, it’s no surprise that few analytics tools are available to measure Agile delivery effectiveness. Now that Agile is mainstream, however, there is an urgency to measure – so many teams started to build their own tools. This worked well on a small scale but hit the wall when projects and teams grew.

There are several other problems with home-grown tools. Most notably, they allow teams to tweak calculations and tell an overly flattering story. Also, the time it takes to build your own tool can be a big distraction from planned work. Fortunately, new solutions are now emerging that work in line with the principles listed above.

Agile metrics for self-improvement

If you are still not convinced about using Agile metrics for teams, I recommend testing them on yourself. Most find that when they do this, the metrics become a reassurance or even a confidence boost. For example, a younger colleague of mine was struggling with his programming confidence. He found metrics to be very helpful because they showed him objective proof of improvement.

For my part, one way I often use Agile metrics is to provide insights during a retrospective. To measure how I’m improving over time, I track metrics for the tickets I’ve completed, the number of story points completed, and the number of returns I’ve had from QA. Crucially, this also helps me remember the tickets I’ve worked on and how they went. Like most developers, I tend to switch focus once a ticket passes and can find it hard to retain the details when it’s time to review a cycle or perform a project post-mortem.


You will of course come up with your own questions (and related Agile metrics), but I have found that asking them regularly genuinely helps self-improvement.

Whether you decide to use them yourself or for your team, (Agile && metrics) return true. In my experience, people want similar things and work well together in helping deliver on the key Agile principle of self-improvement. Try it out!

About the author

Colin Eatherton

Colin Eatherton is Team Lead and Developer at Agile metrics software company, Plandek. Colin develops the front end of Plandek’s UI using web technologies including JavaScript, HTML, CSS and Ruby. In this role Colin works with a cross-functional development team, stakeholders and Plandek’s product manager to understand and analyse delivery requirements, and implement solutions. His team applies Agile methods and metrics to continuously improve processes and outcomes.
