By Charlie Ponsonby, Co-CEO Plandek
Governance and risk management are increasingly active research areas in Agile software delivery – particularly in large-scale organisations. Moving to an effective Agile methodology is a major strategic decision. It takes a huge amount of time and effort, and inevitably questions are asked (from the C-suite down) about its effectiveness and reliability for critical software delivery initiatives.
Moreover, Agile, by its very nature, involves decentralising responsibility to small self-determining teams working in a more organic (agile) way than would be the case in a more traditional waterfall environment. This decentralised model (which is quite rightly at the heart of the Agile philosophy) can make understanding software delivery risk difficult without effective metrics in place.
As a result, we often hear exasperation with existing RAG (Red, Amber, Green) progress reports – with workstreams classified as “Green” for weeks in a row, before flipping to “Red” with apparently no warning!
This short post discusses the analytics and metrics that can be applied to prevent such surprises, by giving delivery managers a much better understanding of the underlying risks within their software delivery teams (their delivery capability).
Delivery Capability Risk
For the purposes of this discussion, we define “delivery risk” as the risk of delivering software increments:
- later than expected; and/or
- of worse quality than expected; and/or
- requiring more effort/resource than anticipated.
Understanding software delivery risk in totality is a complex task, as a range of internal and external factors drive delivery risk. This post focuses on a key internal risk that is directly controllable by the delivery team, which we term Delivery Capability Risk (DCR).
The concept of DCR is summarised in the graphic below. There is a great range of Enterprise Agile Planning solutions that help you manage your delivery journeys (programmes). They track scope, effort and apparent progress. What they are unable to do is really understand how effectively the teams writing and releasing the software are working together.
DCR analysis lifts the bonnet (hood) on your delivery capability to understand the very real risks sitting across the teams responsible for design, development, testing, builds and deployment.
In our view, only when you fully understand these delivery capability risks, can you have a real understanding of broader delivery risk.
Understanding Delivery Capability Risk in complex IT programme management
There is a set of metrics that can quite accurately track delivery capability risk (DCR), but they are tricky to surface without specialist BI solutions like Plandek.
Plandek, for example, works by mining data from the toolsets used by delivery teams (such as Jira, Git, CI/CD tools and Slack) to surface the metrics critical to identifying and managing DCR.
It creates a balanced set of metrics that determine delivery capability risk, using both quant data from the underlying toolsets such as Jira, Git, etc. – and also from the engineers themselves, via constant polling through Slack or other collaboration hubs.
The metrics fall into five logical categories which, when synthesised together, give an accurate measure of DCR when tracked over time. These categories are:
- Backlog health analysis – metrics and analytics to understand, as far as possible, the state of the team’s backlog, especially as it relates to the current and next programme cycle;
- Talent – quant metrics to understand your delivery teams’ morale and views on process effectiveness (collected via polling on collaboration hubs);
- Process efficiency and transparency – metrics that reveal the effectiveness of the end-to-end delivery process (e.g. Flow Efficiency and Lead Time analysis) and expose bottlenecks and friction in the process (see the sketch after this list);
- Throughput and time to value – metrics showing the volume of work being produced and the time taken to deliver across the end-to-end SDLC;
- Delivery (sprint) accuracy – metrics showing teams’ ability to meet their own sprint goals (for Scrum teams), a key determinant of the likelihood of delivering over longer time periods (e.g. Programme Increments).
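To make the process-efficiency category concrete, the sketch below computes a Lead Time and a simple Flow Efficiency figure from one ticket’s status history. It is a minimal illustration only: the status names, timestamps and data structure are invented, and a real implementation would pull the changelog from Jira (or a similar tool) rather than hard-code it.

```python
from datetime import datetime

# Hypothetical status history for one ticket, as might be mined from a Jira
# changelog: (status, entered_at). All statuses and timestamps are invented.
ACTIVE_STATUSES = {"In Progress", "In Review", "In QA"}  # time actively worked
# anything else ("To Do", "Blocked", "Ready for QA", ...) counts as wait time

history = [
    ("To Do",        datetime(2023, 1, 2, 9, 0)),
    ("In Progress",  datetime(2023, 1, 4, 9, 0)),
    ("Blocked",      datetime(2023, 1, 5, 9, 0)),
    ("In Progress",  datetime(2023, 1, 6, 9, 0)),
    ("Ready for QA", datetime(2023, 1, 6, 17, 0)),
    ("In QA",        datetime(2023, 1, 9, 9, 0)),
    ("Done",         datetime(2023, 1, 10, 9, 0)),
]

def lead_time_and_flow_efficiency(history):
    """Lead time = total elapsed hours; flow efficiency = active / total."""
    active = total = 0.0
    for (status, start), (_, end) in zip(history, history[1:]):
        hours = (end - start).total_seconds() / 3600
        total += hours
        if status in ACTIVE_STATUSES:
            active += hours
    return total, (active / total if total else 0.0)

lead_time, flow_eff = lead_time_and_flow_efficiency(history)
print(f"Lead time: {lead_time:.0f}h, flow efficiency: {flow_eff:.0%}")
```

Tracked per team and per sprint, a falling flow-efficiency trend of this kind is exactly the sort of hidden risk signal these categories are designed to surface.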
Examples of these metrics are shown in Figure 2 below.
This balanced scorecard of capability risk metrics adds a new dimension to overall programme risk management.
As Figure 2 shows, these metrics are principally designed for use in an Agile delivery context (with concepts of Cycle Times, Sprint Completion etc), but many can also be applied in a hybrid “Scrumfall” context (often adopted by larger organisations to deliver major projects).
- Metrics relating to backlog health are clearly key in any context (and reveal hidden risk);
- real-time understanding of engineer morale and engineer feedback as regards the delivery process are also critical leading indicators of (hidden) delivery risk; and so too are
- changes in time spent (and the efficiency of) fixing bugs and technical debt.
These are all “under the bonnet” metrics that when viewed together, give the experienced Delivery Manager a view on the health of the delivery “engine” – is it firing on all cylinders, or running on empty…?
Applying delivery capability risk to overall project risk management frameworks
Programme management techniques typically map the various workstreams to understand interdependencies and the critical path.
These techniques create well-organised Gantt charts showing the theoretical progress of the project relative to planned milestones. However, what these techniques cannot do is effectively track the health of the underlying technology delivery capability.
For example, the Gantt chart may show that we have just hit a key milestone, but the health/stress of the underlying delivery team may paint a very different picture. It may show that the milestone was achieved in an unsustainable way (low morale, declining process efficiency, increasing technical debt, etc.) – and hence that the team is unlikely to hit the next one.
This is why an understanding of delivery capability risk (i.e. understanding the health of the underlying delivery “engine”) can be a vital extra dimension in complex IT programme management.
This is indeed why Plandek is used as a delivery risk management tool to be applied in conjunction with existing Enterprise Agile Planning tools (such as Jira, Jira Align, Rally etc).
Introduction to Plandek
Plandek is the leading Agile metrics BI and Analytics platform to help large Agile technology teams better manage delivery risk and improve their effectiveness.
Plandek works by mining the data history from dev teams’ toolsets (e.g. Jira, Git) to reveal and track metrics that are highly predictive of improved Agile project outcomes. These metrics are often not visible or accurate with current toolsets. Plandek is working with a wide range of organisations, to track and actively manage these Agile metrics, to manage software delivery risk and improve performance.
By Charlie Ponsonby
Originally published on InfoQ
- Agile teams are often involved in delivering major milestones that the business expects at a certain point and for an agreed budget, so will need to forecast or risk being accused of being “Agile and late”
- Logic dictates that there are six possible reasons why a project is late – the “Logical Six”. Three in the control of the technology team: underestimation of effort; lack of available talent; and lack of team productivity. Three in the control of sponsors: unclear requirements; scope change; and lack of required ongoing input.
- There are metrics that relate to these six potential sources of delay – it is critical to measure these to improve forecasting accuracy.
- These metrics require surfacing from multiple data sources and are therefore hard to see without an end-to-end delivery metrics/analytics platform
- These metrics can then be used to create a Root Cause Red/Amber/Green (RAG) Progress Report – to share with sponsors a more accurate forecast and clear mitigations, with allocated responsibilities to deliver the identified mitigations.
We work with Agile teams of all different shapes and sizes and predictability is a theme that is front-of-mind for almost all – as the words “Agile” and “predictable” don’t always go hand in hand …
So how can development teams maintain their agility and improve their delivery predictability, so that when stakeholders ask the predictable question “Are we on schedule?“, they can give a sensible answer?
Typical Agile team forecasting approaches
Product-based Agile software development teams delivering small increments very regularly may spend little time worrying about forecasting.
But often Agile teams are established to deliver major milestones that the business expects at a certain point and for an agreed budget, so will need to forecast effectively or risk being accused of being “Agile and late!”
In our experience, Agile teams’ forecasting tends to be pretty inaccurate and is often based only on a simple observation of backlog, velocity and word-of-mouth reassurance from the teams themselves.
In our opinion a really meaningful forecast requires a broader set of empirical data reflecting all the potential sources of project delay.
* There is a separate debate as to whether an Agile software development methodology is appropriate in a “project“ context like this, but that is for another day.
“The Logical Six” – the six sources of project delay
Logic dictates that there are six possible reasons why a project is late – the so-called “Logical Six”. Three of the Logical Six are in direct control of the technology team:
- the size and complexity of the task is underestimated
- the planned group of appropriate engineers are not available
- the delivery team is not delivering as productively as anticipated.
And the other three are in the control of the business sponsors interacting with the technology team. These are:
- Unclear requirements definition – internal clients are not clear enough about what they actually want
- Scope change – the business moves the goal posts (changed/new requirements or changed priorities)
- Ongoing input – the development process is delayed by a lack of stakeholder input where/when required.
In our view, you will never really be able to accurately forecast and improve your delivery predictability unless you collect metrics which track all of these six levers.
Only then will you really understand whether a project is likely to be “late” and what needs to be done to get it back on track.
Challenging your teams’ forecasting with analysis of the delivery metrics that matter
So, what are the metrics that relate to the six sources of project delay – and so are critical to delivery predictability and improved forecasting accuracy?
The table below shows our favourite metrics in each of the areas. We encourage Delivery Managers to focus on these when working with the Delivery Team Leads to create more realistic forecasts of delivery timing.
In summary, the metrics are:
- People availability – clearly key. If we don’t have the engineers that we anticipated, we will be late.
- Team productivity, relating to:
  - Productive time – another critical metric considering the proportion of time engineers have to focus on writing new features
  - Process efficiency – friction in the development process can undermine the best-laid delivery plans, so really understanding trends in, and the causes of, this friction is key
  - Velocity and time to value – understanding how our throughput and time to value have varied as the project progresses is yet another determinant variable in our forecasting
- Estimation Accuracy – if we are adopting a Scrum-based approach, sprint completion gives a very good indicator of our forecasting capability. If we cannot hit our two-weekly sprint goals, we are unlikely to be effective at estimating effort and forecasting further into the future
- Requirements definition, stakeholder input and scope change can be tracked using Quant Engineer Feedback collected from collaboration hubs like Slack. This is something we use a lot internally to improve our forecasting as it uses quant insight from the people actually doing the work. It often adds confidence to an otherwise theoretical delivery forecast and sheds light on three of the Logical Six (requirements definition, stakeholder input and genuine scope change).
Key Metrics to track the Logical Six levers of project delay

| Metric area | Why it matters |
| --- | --- |
| Available Engineering Resource | Clearly key – shows whether we have the planned resource in place to deliver the work. |
| Productive Time | Key to understand how this has trended over time. If we are expending more energy on non-productive tasks, clearly this is going to impact our progress going forward. |
| Process Efficiency | These metrics analyse the “friction” in the development process and how this has trended over time. Declining Flow Efficiency is a problem that can often be addressed, so it is a key metric in forecast mitigation. Rework shows trends in accumulated time spent reworking tickets that fail QA. This is another form of friction that may be mitigated (e.g. by assisting engineers new to the code base). |
| Velocity and Time to Value | Velocity metrics are problematic, but a detailed understanding of trends in tickets completed (and story points/value points per ticket) is key when challenging forecasts. Critical too is an understanding of changes in Cycle and Lead Times. If they are lengthening, accurate forecasting is tricky. |
| Sprint Accuracy | Inability to meet two-weekly sprint goals makes forecasting over longer periods very difficult. These metrics are therefore critical to forecasting accuracy. |
| Quant. Engineer Feedback | Some metrics platforms enable the real-time polling of engineers through collaboration hubs. This provides quant data on requirements definition, stakeholder input and scope change. |

NB: In our view, any metric collected at individual level needs to be viewed in context by people directly involved in the project. Such metrics can be taken out of context (to damaging effect) if circulated more broadly.
Collecting the delivery metrics that matter
The key delivery metrics require surfacing data from a myriad of sources, including workflow management tools, code repos and CI/CD tools – as well as collecting quant feedback from the engineering team themselves (via collaboration hubs).
The complexity of the data and multiple sources make this sort of data collection very time consuming to do manually and really requires an end-to-end delivery metrics platform to do at scale.
Delivery metrics platforms are available which consist of a data layer to collate and compile metrics from multiple data sources and a flexible UI layer to enable the creation of custom dashboards to surface the metrics in the desired format.
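As a rough illustration of what the data-layer half of such a platform does, the sketch below joins ticket records (as might be exported from a workflow tool) with commit records (as might come from a code repository) into a single per-ticket view. All the field names, and the convention of matching commits to tickets via the ticket key in the commit message, are assumptions made for the example rather than a description of any particular product.

```python
from collections import defaultdict

# Hypothetical extracts: in practice these would come from the Jira REST API
# and a Git host's API rather than from in-memory lists.
tickets = [
    {"key": "APP-101", "status": "Done",        "story_points": 5},
    {"key": "APP-102", "status": "In Progress", "story_points": 3},
]
commits = [
    {"sha": "a1b2c3", "message": "APP-101 add login form",      "files_changed": 4},
    {"sha": "d4e5f6", "message": "APP-101 fix review comments", "files_changed": 1},
    {"sha": "0a1b2c", "message": "APP-102 initial schema",      "files_changed": 7},
]

def build_ticket_view(tickets, commits):
    """Collate work-tracking and code data into one record per ticket."""
    commits_by_key = defaultdict(list)
    for commit in commits:
        for ticket in tickets:
            if ticket["key"] in commit["message"]:
                commits_by_key[ticket["key"]].append(commit)
    return [
        {**ticket,
         "commit_count": len(commits_by_key[ticket["key"]]),
         "files_changed": sum(c["files_changed"]
                              for c in commits_by_key[ticket["key"]])}
        for ticket in tickets
    ]

for row in build_ticket_view(tickets, commits):
    print(row)
```

The UI layer then sits on top of records like these, letting teams chart the metrics they care about without re-querying each tool by hand.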
Using Root Cause RAG reporting to combine your delivery forecast and mitigation plan
If we use metrics to track and analyse the Logical Six drivers of project progress, we will get a much clearer picture of real project progress. By this we mean:
- a more realistic delivery forecast
- and clear mitigations that we can focus on, if the forecast is seen as behind schedule.
The improved forecast and related mitigations can be presented together in a Root Cause Red, Amber and Green (RAG) Progress Report.
Root Cause RAG reports are far more effective than traditional RAG progress reporting, which often sheds very little light on why a project (with a required go-live date) is actually behind schedule and what needs to be done to bring it back on track.
In contrast to a traditional RAG approach, the Root Cause RAG Report (see the example below) clearly shows:
- Our latest delivery forecast
- The delivery metrics that support our forecast
- Our mitigations (based around the Logical Six levers that drive project timing) – e.g. the need to increase productive time by reducing time diverted to upkeep; the need to improve Flow Efficiency by addressing the blockages in the dev process (e.g. QA wait time); or the need for improved stakeholder input (as shown in the quant engineer feedback)
- Allocated responsibilities (across the development teams and stakeholders) to deliver the identified mitigations
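One way to think about such a report is as a small, structured data set generated from the metrics rather than written by hand. The sketch below shows one possible representation; the field names and the simple “worst lever wins” overall-RAG rule are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class LeverStatus:
    lever: str       # one of the "Logical Six"
    rag: str         # "Red", "Amber" or "Green", judged from the metrics
    evidence: str    # the metric trend supporting the rating
    mitigation: str  # the agreed action
    owner: str       # who is responsible for delivering the mitigation

@dataclass
class RootCauseRAGReport:
    forecast_date: str
    levers: list = field(default_factory=list)

    def overall_rag(self):
        # Illustrative rule: the report is only as green as its worst lever.
        order = {"Green": 0, "Amber": 1, "Red": 2}
        return max(self.levers, key=lambda l: order[l.rag]).rag

report = RootCauseRAGReport(
    forecast_date="2023-06-30",
    levers=[
        LeverStatus("Productive time", "Amber",
                    "Share of time on new features fell from 60% to 45%",
                    "Reduce time diverted to upkeep", "Delivery Team Lead"),
        LeverStatus("Ongoing stakeholder input", "Red",
                    "Engineer feedback flags slow answers to open questions",
                    "Weekly stakeholder triage of open questions", "Product Sponsor"),
    ],
)
print(report.overall_rag())  # -> Red
```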
Done well, Root Cause RAG Reports can be a really effective means of presenting our (more accurate) forecasts in a way that stakeholders can understand and therefore can be an important step in reducing lateness and bringing the technology team and the internal client much closer together.
As discussed however, it relies on an understanding of the metrics that actually determine project lateness and a means of collecting those metrics.
About the Author:
Charlie Ponsonby started his career as an economist in the developing world, before moving to Andersen Consulting. He was Marketing Director at Sky until 2007, before leaving to found and run Simplifydigital in 2007. Simplifydigital was three times in the Sunday Times Tech Track 100 and grew to become the UK’s largest broadband comparison service. It was acquired by Dixons Carphone plc in April 2016. Ponsonby co-founded Plandek with Dan Lee in October 2017. Plandek is an end-to-end delivery metrics analytics and BI platform. It mines data from toolsets used by delivery teams (such as Jira, Git and CI/CD tools), to provide end-to-end delivery metrics to optimise software delivery forecasting, risk management and delivery effectiveness. Plandek is used by clients globally including Reed Elsevier, News Corporation, Autotrader.ca and Secret Escapes.
By Charlie Ponsonby, Co-CEO Plandek
We work with a great variety of organisations at various points in their Agile transformations. Whilst the move to Agile has driven tangible and lasting benefit in almost all of these organisations – the great majority have experienced problems and unintended consequences along the way.
One issue that we hear very often (particularly in large commercial organisations) is the difficulty of reconciling Agile’s decentralised, iterative approach with internal clients used to agreeing budgets (up-front) and expecting outcomes delivered at certain points in time.
Products over Projects
An important principle of Agile is to align stable teams around designing and building particular products – so that they have end-to-end responsibility for designing, building and maintaining the technology and become increasingly expert over time. This is inherently sensible and prevents temporary project based teams being thrown together to build something, only to be reassigned after “launch”.
Effective product-based teams aim to iterate and deploy improvements increasingly rapidly in keeping with the core Agile goal of the “early and continuous delivery of valuable software”.
As a result, forecasting becomes less important as the business expects small, frequent increments to an existing application over time. Rather like painting the Forth bridge – the job that is famously never finished…
Projects over Products?
Following the popular writing of Martin Fowler and others, product-based teams have become the preferred option for many Agile organisations – but in certain situations a product-based methodology can have major drawbacks.
A typical example would be the build of a new application that the business requires at a certain time and for an agreed budget (signed-off upfront). This is a very common scenario in large organisations. Typical examples might include:
- regulatory changes forcing an upgrade or change to a legacy system by a certain date
- the launch of a new app or digital platform required in time for a certain date or milestone (e.g. Christmas trading).
Under these circumstances a product-based Agile methodology can cause serious problems as the methodology:
- is designed for small, regular iterations – to deliver value increments early and often
- does not therefore involve detailed project scoping upfront. Tasks may be T-shirt sized, but no detailed estimation of effort is likely to be undertaken upfront;
- as such, forecasting the cost and timing of delivery is highly problematic (and not in keeping with the aims/purpose of the product-based methodology).
This inability to forecast cost and timing of delivery almost inevitably causes serious problems with stakeholders anxious to receive progress updates and working software at the planned point in time, and at the planned budget.
So how can an Agile team better predict delivery timing and cost – even if the Agile methodology that they are adopting is not suited to accurate forecasting?
The concept of a Delivery Risk Profile
Forecasting techniques within Agile teams are often rudimentary. Burndown charts offer a linear extrapolation of current velocity against the known backlog, to estimate when the outstanding work will be completed (sketched in code after the list below). There are two basic problems with this:
- as Agile teams estimate ticket size/effort on an ongoing basis, there will inevitably be un-estimated backlog that is not included in the forecast
- extrapolating current velocity may be highly misleading (for any number of reasons affecting the team’s ability to deliver going forward).
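The naive calculation behind a burndown-style forecast looks something like the sketch below. The figures are invented, and the two caveats above are precisely what it cannot see.

```python
import math

# Hypothetical inputs for a linear burndown extrapolation.
estimated_backlog_points = 180            # only the tickets the team has sized
recent_sprint_velocities = [32, 28, 35]   # points completed in recent sprints
sprint_length_weeks = 2

avg_velocity = sum(recent_sprint_velocities) / len(recent_sprint_velocities)
sprints_remaining = math.ceil(estimated_backlog_points / avg_velocity)

print(f"Average velocity: {avg_velocity:.1f} points/sprint")
print(f"Forecast: {sprints_remaining} sprints "
      f"(~{sprints_remaining * sprint_length_weeks} weeks) to clear the sized backlog")
# Caveat 1: un-estimated backlog is simply absent from estimated_backlog_points.
# Caveat 2: avg_velocity assumes the recent past continues unchanged.
```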
However, the good news is that if we collect and track a set of delivery metrics over time, we can start to put together a more informed view of forecast delivery timing and cost, expressed within a Delivery Risk Profile.
Plandek is the world’s leading BI platform to surface and analyse end-to-end delivery metrics. It mines data from toolsets used by delivery teams (such as Jira, Git, and CI/CD tools), to provide end-to-end delivery metrics to optimise software delivery forecasting, risk management and delivery effectiveness.
Analysis of this balanced scorecard of metrics gives a more measured view of likely future delivery time and cost. This analysis is presented in a Delivery Risk Profile.
The Delivery Risk Profile is made up of metrics in five key areas:
- Backlog analysis – it’s key to understand the quality and age of the current backlog and to get a better feel of the likely size of the unestimated element of the backlog
- Estimation and sprint accuracy – a critical determinant of timing accuracy. If teams are unable to estimate effectively and to deliver to their sprint goals, then longer-term delivery targets become far more unreliable
- Process efficiency – often very poorly understood, but vital to understanding likely future velocity
- Throughput and velocity – clearly key, but even more interesting when trends by individual team are carefully considered
- Talent availability and engagement – again often poorly understood but critical in maintaining quality and velocity – as delivery is ultimately all about the talent.
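As a purely illustrative sketch (not a description of Plandek’s actual model), the snippet below shows how trends in these five areas might be rolled up into a simple per-team risk rating. The area names, trend scores and thresholds are all assumptions made for the example.

```python
# Hypothetical trend scores per area for one team, where positive means
# improving and negative means deteriorating (e.g. change over three sprints).
team_trends = {
    "backlog health": -0.10,
    "estimation and sprint accuracy": -0.25,
    "process efficiency": 0.05,
    "throughput and velocity": -0.05,
    "talent availability and engagement": -0.15,
}

def area_rating(trend, amber_at=-0.10, red_at=-0.20):
    """Map a trend score to a RAG rating using illustrative thresholds."""
    if trend <= red_at:
        return "Red"
    if trend <= amber_at:
        return "Amber"
    return "Green"

profile = {area: area_rating(trend) for area, trend in team_trends.items()}
overall = ("Red" if "Red" in profile.values()
           else "Amber" if "Amber" in profile.values() else "Green")

for area, rating in profile.items():
    print(f"{area:>36}: {rating}")
print(f"{'overall delivery risk':>36}: {overall}")
```

A Delivery Manager would then use a profile like this to challenge, rather than replace, the burndown and word-of-mouth forecasts described below.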
Key metrics within the Delivery Risk Profile
The graphic below shows key metrics that, when considered together, give a much more informed view of likely future delivery timing and hence cost – as they are strong determinants of future velocity.
The Plandek analytics platform collects these metrics across all teams and projects and enables Delivery Managers to get a much more complete picture of likely future velocity – expressed in a quantitative Delivery Risk Profile.
Typically trends in the metrics are analysed by team to build the profile of risk. This Risk Profile then enables the Delivery Manager to refine:
- linear burndown chart forecasts; and
- verbal forecasts provided by the teams themselves.
A more accurate delivery forecast is therefore synthesised from three different sources – the teams themselves; linear burndown estimates; and the balanced scorecard of metrics known to directly impact future velocity.
Case study – applying a Delivery Risk Profile to reduce unplanned go-live delays by 50%
We work with a number of data companies, one of which has been particularly successful at applying the Plandek metrics set to build an effective risk profile, which they have used to very significantly improve their go-live forecasting.
Their teams are organised around a product-led strategy and therefore not ideally suited to delivering time-dependent “projects”. In this instance, stakeholders need careful management to ensure that there is as much visibility as possible as regards progress and forecast timing of delivery of major milestones.
Before the Plandek metrics set was tracked and analysed – unplanned go-live delays were common. Delivery Managers relied on linear burndown charts and word-of-mouth updates from scrum teams.
Plandek was implemented and Delivery Team Leads and Delivery Managers started to track and manage to a simple set of delivery metrics. Trends were analysed over time for each scrum team to build a Delivery Risk Profile (as described above).
The immediate effect was a greater focus on:
- Delivering sprints more accurately (measured via Sprint Overall Completion (%) and Sprint Target Completion (%))
- The forecasting process – with word-of-mouth forecasts from the teams more rigorously debated and refined in the light of trends in the key risk profile metrics.
The net result was a very significant improvement in forecasting accuracy. Over a 6 month period, unplanned go-live delays reduced by 50%. This greatly strengthened the relationship between the technology delivery team and the business stakeholders.
The case is summarised in the table below.
A European data business – Understanding the delivery team’s risk profile to improve delivery forecasting accuracy
Agile metrics for self-improvement
(Agile && metrics) ? Can agile metrics help developers and teams improve?
The journey to becoming Agile can sometimes be tricky. In this article, discover nine critical success factors that make Agile metrics work for teams. What questions should you and your team be asking yourself in order to focus on self-improvement, reliability, efficiency, and high-quality code delivery?
By Colin Eatherton
Article originally published on JAXenter.
Inevitably, most teams get to the stage where they need to adopt a more Agile delivery process. This is not just a sign of maturity. It’s a sign that the software they are developing is being used, is deemed useful, and is receiving feedback and change requests so that it continues to improve.
My team is in a unique position. We are striving to improve delivery as we develop a tool that strives to help teams do the same. In other words, we use our own tool to improve the delivery of it!
In my experience, the journey to becoming more Agile can be tricky. Each team has its own goals and ideas about how to get there. All teams, however, need to be able to reflect on their progress, measure how effective their current strategy is, and gain more visibility of the wider landscape. Of course, this is easier said than done.
Bottom-up is best
The topic of which metrics Agile teams can trust to reliably help them measure progress – or whether to use them at all – is both fascinating and contentious. Many people associate metrics with a top-down management style, which is the opposite of the decentralised, empowered and self-determining team philosophy that Agile promotes.
During a one-to-one meeting with my team lead, I asked him which metrics he felt I should focus on. He explained that the only ones worth looking at were those that the whole team agreed would help improve delivery. When it came to my own self-improvement goals, he said I should select metrics myself.
As a rule, the more metrics are applied from the top down, the less effective they are. (This is not to say, however, that there aren’t valuable metrics that can indicate progress at a higher level.)
Using Agile metrics for team improvement
Self-improvement is a key Agile principle. On the face of it, it’s a pretty simple process. First, you identify what you want to improve. Next, establish ways to measure the attributes that contribute to improvement. Then measure and reflect. Therefore you will always need a reliable way to track progress.
My team chose Agile metrics that focus on various attributes of delivery, quality and value. For example, we measure Lead Time from the time a ticket is created in Jira to its production deployment, and the number of escaped bugs. We’ve created a dashboard in our own software around these attributes so we can measure, integrate and affect them daily, or as part of a retrospective. Our dashboards help guide us and qualify decisions we make around team, process, and delivery improvement to ensure we continually head in the right direction. We can also opt to see individual contributions to these metrics. For instance, I have chosen to create a view of metrics that only I can see, so I can measure my own personal output.
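Stripped of the dashboarding, that lead-time-and-escaped-bugs measurement reduces to arithmetic like the sketch below. The field names are hypothetical stand-ins for whatever a real extract from Jira and the deployment tooling would provide.

```python
from datetime import datetime
from statistics import median

# Hypothetical records combining Jira creation dates with production
# deployment times; field names and values are invented for illustration.
tickets = [
    {"key": "APP-201", "created": datetime(2023, 3, 1),
     "deployed_to_production": datetime(2023, 3, 9),  "escaped_bug": False},
    {"key": "APP-202", "created": datetime(2023, 3, 2),
     "deployed_to_production": datetime(2023, 3, 20), "escaped_bug": True},
    {"key": "APP-203", "created": datetime(2023, 3, 6),
     "deployed_to_production": datetime(2023, 3, 13), "escaped_bug": False},
]

lead_times_days = [(t["deployed_to_production"] - t["created"]).days for t in tickets]
escaped_bugs = sum(t["escaped_bug"] for t in tickets)

print(f"Median lead time: {median(lead_times_days)} days")
print(f"Escaped bugs this period: {escaped_bugs}")
```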
Using Agile metrics for delivery cycle improvement
As part of our cycle rituals, our team is responsible for making sure our scope is realistic. To support this, we use Agile metrics to ensure that the sum complexity, time and effort of our tasks match the overall time available and the team’s abilities. We measure the scope using story points. We also built and now use a ‘Sprint Report’ facility. This allows us to see a breakdown of the sprint’s overall completion, including the target completion and work added to the sprint after it started. It also includes ‘Sprint-specific dashboards’ that use metrics like ‘Completed Tickets’ to calculate the amount of work developers can reliably complete during a sprint (aka their ‘velocity’).
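A sprint report of this kind ultimately boils down to a handful of ratios. The sketch below shows the arithmetic with invented numbers, using one common reading of the terms: target completion covers only the work committed when the sprint began, while overall completion also counts scope added after the start.

```python
# Invented sprint numbers; the definitions are one common interpretation and
# may differ from how any particular tool labels these figures.
committed_at_start = 40        # story points committed when the sprint began
added_after_start = 12         # scope added once the sprint was under way
completed_from_committed = 30  # committed points actually finished
completed_from_added = 6       # added points actually finished

target_completion = completed_from_committed / committed_at_start
overall_completion = ((completed_from_committed + completed_from_added)
                      / (committed_at_start + added_after_start))
scope_added = added_after_start / committed_at_start

print(f"Sprint target completion:  {target_completion:.0%}")   # 75%
print(f"Sprint overall completion: {overall_completion:.0%}")  # 69%
print(f"Scope added after start:   {scope_added:.0%}")         # 30%
```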
Nine critical success factors that make Agile metrics work for teams
As I said before, Agile metrics for team improvement can be contentious. They open up a lot of heated discussions and to varying degrees benefit from a wider understanding of context and narrative. So we discuss them and apply the following tenets to help find common ground:
Complicated metrics run counter to the Agile spirit. We like to define our journey as specifically as we can, answering simple questions with easy-to-understand metrics that support them, such as:
- Are we effective at delivering value?
- What is our lead time?
- How long is our bug backlog?
- Are our sprints going well?
- Are we completing all tickets in a sprint?
- Are we adding lots of scope to sprints?
- Is team morale high?
- Does the team understand the business value of the work?
- Does the team feel autonomous?
The metrics need to be selected by the development team and serve a common aim shared by project members from the Scrum Master to the technology leader.
You shouldn’t measure anything unrelated to your journey’s destination. Each project follows a different set of milestones so may need different metrics. However, as there is only one final destination, some carefully-selected metrics should be applicable across all teams. Less can be more, so when we build out a dashboard together in our team meetings, we try to concentrate on only a handful of metrics at a time.
Software delivery metrics are often outcome-based. Although legitimate, there’s a risk of tracking only symptoms and not root causes. The ‘Cycle Times’ metric, for example, shows how long work is taking rather than why. Descriptive metrics like these should also include details of the variables that impact the outcomes. For example, alongside Cycle Time you could show an analysis of the bottlenecks. To improve we want to uncover root causes and identify behaviour gains we can make together – we need to tell a full story.
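A minimal version of that “why, not just how long” breakdown is sketched below: instead of reporting a single cycle time, it totals the time tickets spend in each workflow status so that the bottleneck stage stands out. The status names and durations are invented.

```python
from collections import defaultdict

# Hypothetical per-ticket time (in hours) spent in each workflow status,
# as could be derived from a Jira changelog.
ticket_status_hours = [
    {"In Progress": 16, "Waiting for QA": 30, "In QA": 6},
    {"In Progress": 24, "Waiting for QA": 40, "In QA": 8},
    {"In Progress": 10, "Waiting for QA": 25, "In QA": 5},
]

totals = defaultdict(float)
for ticket in ticket_status_hours:
    for status, hours in ticket.items():
        totals[status] += hours

cycle_time_total = sum(totals.values())
for status, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{status:>15}: {hours:4.0f}h ({hours / cycle_time_total:.0%} of cycle time)")
# Here "Waiting for QA" dominates, pointing at QA wait time as the bottleneck
# rather than development itself - the "why" behind a lengthening cycle time.
```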
Right sources – we need to analyse data from those sources that our developers genuinely engage with in their everyday work. These include workflow management software like Jira; code repositories like GitHub and Bitbucket, TFS or Gitlab; code quality tools like Sonarqube; time tracking systems like Harvest or Tempo; and continuous delivery tools like Jenkins and GoCD.
If analysing metrics takes significant cognitive effort or time to collate, we tend to lose patience and abandon the effort. The metrics need to complement processes, not slow them down.
Agile metrics delivered in near real-time fundamentally drive improvement as they can be discussed in daily stand-ups and sprint retrospectives.
The human factor
Software development is a process (almost) completely driven by people. This means it should be possible to source information and get to the root-cause of issues very fast. Typically feedback is collected in person, in stand-ups and retros. In theory this should work well, but it can also hide issues that participants don’t want to openly communicate. This is especially true in changing, distributed teams with a mix of full-time employees and contractors. To address this and provide us with context and narrative around our metrics, we incorporate feedback into our tool. For example, when tickets get closed – we get the chance to provide feedback on how the ticket went and its requirements via Slack. These prompts also give us a feel for how a ticket has performed post dev as it continues (hopefully!) past QA.
Metrics only make sense if teams can act upon and improve them. Without constant support, many teams will not have the time or inclination to drive their own self-improvement as they strive to meet their short-term delivery objectives. Active stewardship by the technology leadership team can make a huge difference.
The limits of home-grown solutions
Since for many developers (Agile && metrics) don’t seem to get along, it’s no surprise that few analytics tools have been available to measure Agile delivery effectiveness. However, now that Agile is mainstream, there is an urgency to measure – and with so few tools available, many teams started to build their own. This worked well on a small scale but hit a wall when projects and teams grew.
There are several other problems with home-grown tools. Most notably, they allow teams to tweak calculations and tell an overly flattering story. Also, the time it takes to build your own tool can be a big distraction from planned work. Fortunately, new solutions are now emerging that work in line with the principles listed above.
Agile metrics for self-improvement
If you are still not convinced about using Agile metrics for teams, I recommend testing them on yourself. Most find that when they do this, the metrics become a reassurance or even a confidence boost. For example, a younger colleague of mine was struggling with his programming confidence. He found metrics to be very helpful because they showed him objective proof of improvement.
For my part, one way I often use Agile metrics is to provide insights during a retrospective. To measure how I’m improving over time, I track metrics for the tickets I’ve completed, the number of story points completed, and the number of returns I’ve had from QA. Crucially, this also helps me remember the tickets I’ve worked on and how they went. Like most developers, I tend to switch focus once a ticket passes and can find it hard to retain the details when it’s time to review a cycle or perform a project post mortem.
You will of course come up with your own, but I have found these example questions (and related Agile metrics) can help self-improvement:
- Where has my time been spent? I find it interesting to look back on this. The time I actually spend on high-priority work drives velocity, productivity and helps me estimate delivery dates with more confidence.
- How actively am I contributing towards ticket creation to improve quality?
- How much impact have I had? How much work did I get done and what was it?
- Am I delivering high-quality code?
- How reliable am I?
- How efficient am I at delivering value?
Whether you decide to use them for yourself or for your team, (Agile && metrics) returns true. In my experience, the two want similar things and work well together in helping deliver on the key Agile principle of self-improvement. Try it out!