By Charlie Ponsonby
Originally published on InfoQ
- Agile teams are often involved in delivering major milestones that the business expects at a certain point and for an agreed budget, so will need to forecast or risk being accused of being “Agile and late”
- Logic dictates that there are six possible reasons why a project is late – the “Logical Six”. Three in the control of the technology team: underestimation of effort; lack of available talent; and lack of team productivity. Three in the control of sponsors: unclear requirements; scope change; and lack of required ongoing input.
- There are metrics that relate to these six potential sources of delay – it is critical to measure these to improve forecasting accuracy.
- These metrics require surfacing from multiple data sources and are therefore hard to see without an end-to-end delivery metrics/analytics platform
- These metrics can then be used to create a Root Cause Red/Amber/Green (RAG) Progress Report – to share with sponsors a more accurate forecast and clear mitigations, with allocated responsibilities to deliver the identified mitigations.
We work with Agile teams of all different shapes and sizes and predictability is a theme that is front-of-mind for almost all – as the words “Agile” and “predictable” don’t always go hand in hand …
So how can development teams maintain their agility and improve their delivery predictability, so that when stakeholders ask the predictable question "Are we on schedule?", they can give a sensible answer?
Typical Agile team forecasting approaches
Product-based Agile software development teams delivering small increments very regularly may spend little time worrying about forecasting.
But often Agile teams are established to deliver major milestones that the business expects at a certain point and for an agreed budget, so will need to forecast effectively or risk being accused of being “Agile and late!”
In our experience, Agile teams’ forecasting tends to be pretty inaccurate and is often based only on a simple observation of backlog, velocity and word-of-mouth reassurance from the teams themselves.
In our opinion a really meaningful forecast requires a broader set of empirical data reflecting all the potential sources of project delay.
* There is a separate debate as to whether an Agile software development methodology is appropriate in a “project“ context like this, but that is for another day.
“The Logical Six” – the six sources of project delay
Logic dictates that there are six possible reasons why a project is late – the so-called “Logical Six”. Three of the Logical Six are in direct control of the technology team:
- the size and complexity of the task is underestimated
- the planned group of appropriate engineers are not available
- the delivery team is not delivering as productively as anticipated.
And the other three are in control of the business sponsors interacting with the technology team. These are:
- Unclear requirements definition – internal clients are not clear enough about what they actually want
- Scope change – the business moves the goal posts (changed/new requirements or changed priorities)
- Ongoing input – the development process is delayed by a lack of stakeholder input where/when required.
In our view, you will never really be able to accurately forecast and improve your delivery predictability unless you collect metrics which track all of these six levers.
Only then will you really understand whether a project is likely to be “late” and what needs to be done to get it back on track.
The “Logical Six” – the six ultimate sources of project delay
Challenging your teams’ forecasting with analysis of the delivery metrics that matter
So, what are the metrics that relate to the six sources of project delay – and so are critical to delivery predictability and improved forecasting accuracy?
The table below shows our favourite metrics in each of the areas. We encourage Delivery Managers to focus on these when working with the Delivery Team Leads to create more realistic forecasts of delivery timing.
In summary, the metrics are:
- People availability – clearly key. If we don’t have the engineers that we anticipated, we will be late.
- Team productivity relating to:
- Productive time – another critical metric considering the proportion of time engineers have to focus on writing new features
- Process efficiency – friction in the development process can undermine the best laid delivery plans. So really understanding trends in and the causes of this friction is key
- Velocity and time to value – understanding how our throughput and time to value has varied as the project progresses is yet another determinant variable in our forecasting
- Estimation Accuracy – if we are adopting a Scrum-based approach, sprint completion gives a very good indicator of our forecasting capability. If we cannot hit our two-weekly sprint goals, we are unlikely to be effective at estimating effort and forecasting further into the future
- Requirements definition, stakeholder input and scope change can be tracked using Quant Engineer Feedback collected from collaboration hubs like Slack. This is something we use a lot internally to improve our forecasting as it uses quant insight from the people actually doing the work. It often adds confidence to an otherwise theoretical delivery forecast and sheds light on three of the Logical Six (requirements definition, stakeholder input and genuine scope change).
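To make the estimation-accuracy idea concrete, sprint completion rate can be computed directly from committed versus completed story points. The sketch below is a minimal illustration; the sprint figures are invented for the example:

```python
def sprint_completion(committed_points, completed_points):
    """Sprint completion rate as a percentage of committed story points delivered."""
    return 100 * completed_points / committed_points

# Illustrative recent sprints: (committed, completed) story points
sprints = [(40, 34), (42, 30), (38, 35)]
rates = [sprint_completion(c, d) for c, d in sprints]
print([round(r) for r in rates])  # → [85, 71, 92]
```

A consistently low or erratic completion rate over recent sprints is the signal that longer-range forecasts built on the same estimates should be challenged.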
Key Metrics to track the Logical Six levers of project delay

| Metric area | Key metrics | Why it matters |
|---|---|---|
| Available Engineering Resource | Active Engineers (v plan) | Clearly key – shows whether we have the planned resource in place to deliver the work. |
| Productive Time | % time spent on New Features; % time spent on Upkeep; % time lost to non-product related activity | Key to understand how this has trended over time. If we are expending more energy on non-productive tasks, this will clearly impact our progress going forward. |
| Process Efficiency | Flow Efficiency (%); Rework (days) | These metrics analyse the "friction" in the development process and how it has trended over time. Declining Flow Efficiency is a problem that can often be addressed, so it is a key metric in forecast mitigation. Rework shows trends in accumulated time spent reworking tickets that fail QA – another form of friction that may be mitigated (e.g. by assisting engineers new to the code base). |
| Velocity and Time to Value | Feature Tickets Completed; Cycle Time (days); Lead Time (days) | Velocity metrics are problematic, but a detailed understanding of trends in tickets completed (and story points/value points per ticket) is key when challenging forecasts. Critical too is an understanding of changes in Cycle and Lead Times: if they are lengthening, accurate forecasting is tricky. |
| Sprint Accuracy | Overall Completion Rate (%); Sprint Overall Completion v Sprint Target Completion (%) | Inability to meet two-weekly sprint goals makes forecasting over longer periods very difficult. These metrics are therefore critical to forecasting accuracy. |
| Quant Engineer Feedback | Team morale; Sprint effectiveness; Quality of ticket and requirements definition; Quality of business sponsor input; Effort identified as agreed scope change | Some metrics platforms enable the real-time polling of engineers through collaboration hubs. This provides quant data on engineers' views on morale and process efficiency; on the impact of business sponsors' requirements definition and ongoing input; and Team Lead feedback on stories added by business stakeholders that are additional to the original scope. |

NB: In our view, any metric collected at individual level needs to be viewed in context by people directly involved in the project. Such metrics can be taken out of context (to damaging effect) if circulated more broadly.
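The flow metrics in the table above can be derived from ticket status-transition timestamps. The sketch below is a minimal example assuming hypothetical ticket records exported from a workflow tool; the field names are invented for illustration:

```python
from datetime import datetime

def days_between(start, end):
    """Elapsed days between two ISO dates."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

def flow_metrics(ticket):
    """Compute cycle time, lead time and flow efficiency for one ticket.

    `ticket` is a hypothetical record with created/started/done dates and
    the total days the ticket spent actively being worked on (vs waiting).
    """
    cycle_time = days_between(ticket["started"], ticket["done"])
    lead_time = days_between(ticket["created"], ticket["done"])
    # Flow efficiency = active work time as a share of total cycle time
    flow_efficiency = ticket["active_days"] / cycle_time if cycle_time else 0.0
    return cycle_time, lead_time, flow_efficiency

ticket = {"created": "2024-01-02", "started": "2024-01-08",
          "done": "2024-01-18", "active_days": 4}
cycle, lead, eff = flow_metrics(ticket)
print(cycle, lead, round(eff, 2))  # → 10 16 0.4
```

A flow efficiency of 0.4 here means the ticket spent 60% of its cycle time waiting rather than being worked on – exactly the kind of friction the table flags as addressable.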
Collecting the delivery metrics that matter
The key delivery metrics require surfacing data from a myriad of sources, including workflow management tools, code repos and CI/CD tools – as well as collecting quant feedback from the engineering team themselves (via collaboration hubs).
The complexity of the data and multiple sources make this sort of data collection very time consuming to do manually and really requires an end-to-end delivery metrics platform to do at scale.
Delivery metrics platforms are available which consist of a data layer to collate and compile metrics from multiple data sources and a flexible UI layer to enable the creation of custom dashboards to surface the metrics in the desired format.
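The data-layer idea can be sketched as a set of extractors, one per source, whose outputs are collated into a single record a dashboard UI could consume. The extractors below are stubs with invented field names, standing in for real API integrations:

```python
# Stubbed extractors standing in for real integrations (workflow tools,
# code repos, CI/CD, collaboration hubs). In practice each would call the
# relevant tool's API; the metric names here are illustrative only.
def from_workflow_tool():
    return {"sprint_completion_pct": 72, "cycle_time_days": 9.5}

def from_code_repo():
    return {"active_engineers": 11, "planned_engineers": 14}

def from_engineer_polls():
    return {"morale": 3.4, "requirements_clarity": 2.8}  # 1-5 scale

def build_dashboard_payload():
    """Collate metrics from all sources into one dashboard-ready record."""
    payload = {}
    for extractor in (from_workflow_tool, from_code_repo, from_engineer_polls):
        payload.update(extractor())
    return payload

print(build_dashboard_payload())
```

The point of the sketch is the shape of the problem: each metric lives in a different system, so without a collation layer like this (manual or automated) the end-to-end picture never appears in one place.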
Using Root Cause RAG reporting to combine your delivery forecast and mitigation plan
If we use metrics to track and analyse the Logical Six drivers of project progress, we will get a much clearer picture of real project progress. By this we mean:
- a more realistic delivery forecast
- and clear mitigations that we can focus on, if the forecast is seen as behind schedule.
The improved forecast and related mitigations can be presented together in a Root Cause Red, Amber and Green (RAG) Progress Report.
Root Cause RAG reports are far more effective than traditional RAG progress reporting, which often sheds very little light on why a project (with a required go-live date) is actually behind schedule and what needs to be done to bring it back on track.
In contrast to a traditional RAG approach, the Root Cause RAG Report (see the example below) clearly shows:
- Our latest delivery forecast
- The delivery metrics that support our forecast
- Our mitigations (based around the Logical Six levers that drive project timing) – e.g. the need to increase productive time by reducing time diverted to upkeep; the need to improve Flow Efficiency by addressing the blockages in the dev process (e.g. QA wait time); or the need for improved stakeholder input (as shown in the quant engineer feedback)
- Allocated responsibilities (across the development teams and stakeholders) to deliver the identified mitigations
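One way to mechanise the Root Cause RAG idea is to map each lever's metric against agreed thresholds. The sketch below is a simplified illustration; the metric values and thresholds are invented, and real thresholds would be agreed with stakeholders:

```python
def rag_status(value, green_at, amber_at, higher_is_better=True):
    """Map a metric value to Red/Amber/Green against agreed thresholds."""
    if not higher_is_better:
        # Flip the comparison for metrics where lower is better (e.g. rework)
        value, green_at, amber_at = -value, -green_at, -amber_at
    if value >= green_at:
        return "Green"
    if value >= amber_at:
        return "Amber"
    return "Red"

# Illustrative metric values and thresholds for some of the Logical Six levers
report = {
    "People availability (% of plan)": rag_status(78, 90, 75),
    "Sprint completion rate (%)":      rag_status(85, 85, 70),
    "Flow efficiency (%)":             rag_status(22, 40, 25),
    "Rework (days this sprint)":       rag_status(6, 2, 5, higher_is_better=False),
}
for lever, status in report.items():
    print(f"{lever}: {status}")
```

Because each status is tied to a specific lever, a Red line item points straight at its mitigation (e.g. Red flow efficiency → address QA wait time) rather than at an undifferentiated "project is Red".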
Done well, Root Cause RAG Reports can be a really effective means of presenting our (more accurate) forecasts in a way that stakeholders can understand and therefore can be an important step in reducing lateness and bringing the technology team and the internal client much closer together.
As discussed, however, this relies on an understanding of the metrics that actually determine project lateness and a means of collecting those metrics.
Example Root Cause RAG Report
About the Author:
Charlie Ponsonby started his career as an economist in the developing world, before moving to Andersen Consulting. He was Marketing Director at Sky until 2007, before leaving to found and run Simplifydigital in 2007. Simplifydigital was three times in the Sunday Times Tech Track 100 and grew to become the UK's largest broadband comparison service. It was acquired by Dixons Carphone plc in April 2016. Ponsonby co-founded Plandek with Dan Lee in October 2017. Plandek is an end-to-end delivery metrics analytics and BI platform. It mines data from toolsets used by delivery teams (such as Jira, Git and CI/CD tools) to provide end-to-end delivery metrics to optimise software delivery forecasting, risk management and delivery effectiveness. Plandek is used by clients globally, including Reed Elsevier, News Corporation, Autotrader.ca and Secret Escapes.