The following article was posted on InfoQ: https://www.infoq.com/articles/metrics-agile-teams/
We are fortunate to have the opportunity to work with a great variety of engineering teams – from those in start-ups to very large, distributed enterprises.
Although definitions of “engineering excellence” vary in these different contexts, all teams aspire to it. They also share the broad challenge of needing to balance the “day job” of delivering high quality, high value outcomes against the drive to continually improve.
Continuous Improvement (CI) inherently requires metrics against which to measure progress. These need to be balanced and meaningful (i.e. deterministic of improved outcomes). This creates two immediate issues:
We view CI as vital in healthy and maturing Agile environments. Hence metrics to underpin this process are also vital. However, CI should be owned and driven by the teams themselves so that teams become self-improving. Ergo, CI programmes become SI (Self-Improvement) programmes.
This article focuses on how teams can implement a demonstrably effective SI programme even in the fastest-moving and most resource-constrained Agile environments, so that they remain self-managing, deliver value quickly, and continue to improve at the same time.
The concept of CI has been around for a long time. It was applied perhaps most famously in a business context in Japan and became popularised with Masaaki Imai’s 1986 book “Kaizen: the Key to Japan’s Competitive Success.”
The CI principle complements core Agile principles. Indeed, the Agile Manifesto states:
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.
There are two key themes here – firstly, CI and secondly, that CI is driven by the teams themselves (SI). This raises the question of what role leadership should take in this improvement process.
Our evidence shows that the size of the prize is very significant. Well implemented SI programmes can deliver significant and sustained improvement in metrics that underpin your time to value (TTV) – for example:
However, achieving these goals is hard and requires sustained effort. Technology leadership needs to give teams the tools (and encouragement) to own and drive the self-improvement process. Without constant support, many teams will not have the time or inclination to drive their own self-improvement as they strive to meet their short-term delivery objectives.
The principle of team Self-Improvement (SI) is simple and powerful, but very hard to deliver effectively. It requires four important things:
Agile teams are almost always busy and resource-constrained. As a result, the intention of always improving (in a structured and demonstrable way) often loses out to the pressures of the day job – delivering to the evolving demands of the business.
In our experience, successful SI requires coordination and stewardship by the technology leadership team, whilst empowering teams to own and drive the activities that result in incremental improvement. Therefore this needs to be in the form of a structured, long-term and well implemented SI programme.
Self-Improvement needs a serious commitment from the leadership team within engineering to provide teams with the tools they need to self-improve.
This will not be possible if the organisation lacks the BI tools to provide the necessary metrics and reporting over the full delivery lifecycle. Firstly, the reporting found within common workflow management tools like Jira is not optimised to provide the level of reporting that many teams require for an effective SI programme. Secondly, teams use a number of tools across the delivery cycle, which often results in data existing in silos rather than being integrated to reflect a full view of end-to-end delivery.

Teams should seek out BI tools that address these challenges. The right tools will give product and engineering teams meaningful metrics and reporting around which to build robust SI programmes.
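To make the idea of an integrated, end-to-end view concrete, here is a minimal sketch (not any particular vendor's implementation) that joins a work-tracker export with a deployment-pipeline export into a single record per work item. The file names and column names are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical exports: the file and column names are illustrative assumptions.
tickets = pd.read_csv("tracker_issues.csv", parse_dates=["created", "resolved"])
deploys = pd.read_csv("pipeline_deployments.csv", parse_dates=["deployed_at"])

# Join on the issue key so each work item carries its deployment timestamp,
# giving one end-to-end record per item instead of two siloed views.
end_to_end = tickets.merge(
    deploys[["issue_key", "deployed_at"]], on="issue_key", how="left"
)

# Ideation-to-deployment time in days (NaN where the item is not yet deployed).
end_to_end["idea_to_deploy_days"] = (
    (end_to_end["deployed_at"] - end_to_end["created"]).dt.total_seconds() / 86400
)

print(end_to_end[["issue_key", "idea_to_deploy_days"]].head())
```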
As mentioned in the intro, selecting and agreeing metrics is often the most contentious issue. Many programmes fail simply because teams could not agree or gain buy-in on meaningful sets of metrics or objectives.
By its very nature, Agile encourages a myriad of different methodologies and workflows which vary by team and company. However, this does not mean that it's impossible to achieve consensus on metrics for SI.
We believe the trick is to keep metrics simple and deterministic. Complex metrics will not be commonly understood and can be hard to measure consistently, which can lead to distrust. And deterministic metrics are key as improving them will actually deliver a better outcome.
As an example, you may measure Lead Time as an overall proxy for Time to Value, but Lead Time is a measure of the outcome. It's also important to measure the things that drive/determine Lead Time: levers that teams can actively manage in order to drive improvements in the overarching metric (e.g. determinant metrics like Flow Efficiency).
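To illustrate the outcome/determinant distinction, the following sketch computes Lead Time and Flow Efficiency for a single work item from its status-change history. The workflow states, the choice of which states count as "active", and the data itself are illustrative assumptions; teams should substitute their own definitions.

```python
from datetime import datetime, timedelta

# One work item's status history: (status, entered_at). Status names and
# timestamps are invented for illustration; teams define their own states.
history = [
    ("To Do",       datetime(2024, 3, 1, 9, 0)),
    ("In Progress", datetime(2024, 3, 4, 10, 0)),
    ("Blocked",     datetime(2024, 3, 6, 10, 0)),
    ("In Progress", datetime(2024, 3, 8, 9, 0)),
    ("Done",        datetime(2024, 3, 11, 17, 0)),
]
ACTIVE_STATES = {"In Progress"}  # states where value-adding work happens

# Outcome metric: total elapsed time from first to last status change.
lead_time = history[-1][1] - history[0][1]

# Determinant metric: share of the lead time spent in active states.
active = timedelta()
for (status, start), (_, end) in zip(history, history[1:]):
    if status in ACTIVE_STATES:
        active += end - start

flow_efficiency = active / lead_time
print(f"Lead time: {lead_time.days} days, flow efficiency: {flow_efficiency:.0%}")
```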
The deterministic metrics we advocate are designed to underpin team SI, in order to steadily improve Agile engineering effectiveness.
The (determinant) metrics are grouped into six key areas. These are:
From these six areas, we believe these are some of the most common and meaningful metrics around which a team can build an effective self-improvement programme:
In our experience, a highly effective Agile SI programme can be built around these metric sets. We've also found that an integrated, single view of the full delivery cycle across the right tools, underpinned by these core metrics, reveals key areas that can be optimised, i.e. low-hanging fruit that can materially improve Time to Value.
Metrics should be available in near real-time to the teams, with minimal effort. If teams have to collect data manually, the overall initiative is likely to fail.
When all team members have a near real-time view of the metrics that they’ve signed up to, these become a core part of daily stand-ups and sprint retrospective reviews.
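As a rough sketch of what "near real-time with minimal effort" can look like in practice, the snippet below pulls recently resolved issues from a Jira-style REST search endpoint and derives cycle times automatically. The base URL, credentials, JQL, field names, and timestamp handling are assumptions that would need adapting to your own instance and workflow tools.

```python
import requests
from datetime import datetime

# Assumed Jira-style endpoint and placeholder credentials; adapt to your tooling.
BASE_URL = "https://your-company.atlassian.net"
AUTH = ("ci-bot@your-company.com", "api-token")

def recent_cycle_times(project: str, days: int = 14) -> list[float]:
    """Fetch issues resolved in the last `days` days and return cycle times in days."""
    resp = requests.get(
        f"{BASE_URL}/rest/api/2/search",
        params={
            "jql": f"project = {project} AND resolved >= -{days}d",
            "fields": "created,resolutiondate",
            "maxResults": 200,
        },
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    times = []
    for issue in resp.json()["issues"]:
        # Truncate to seconds for a simple parse; real timestamps carry
        # millisecond and offset suffixes that may need fuller handling.
        created = datetime.fromisoformat(issue["fields"]["created"][:19])
        resolved = datetime.fromisoformat(issue["fields"]["resolutiondate"][:19])
        times.append((resolved - created).total_seconds() / 86400)
    return times

# Example: refresh the figure before stand-up with no manual data entry.
if __name__ == "__main__":
    ct = recent_cycle_times("TEAM")
    if ct:
        print(f"Median cycle time, last 14 days: {sorted(ct)[len(ct) // 2]:.1f} days")
```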
The aim is not to compare these metrics across teams – instead the key aim is to track improvement over time within the team itself. Leadership teams need to remain outcome focused, whilst enabling and empowering teams to identify and make incremental improvements that will improve those outcomes.
Team SI is unlikely to take place consistently and sustainably across teams, without committed leadership. The SI programme needs to be formally established on a monthly cycle of team target-setting, implementation, review, and celebration of success (see below).
Team Leaders and Scrum Masters need to strike the right balance of sponsoring, framing and guiding the programme with giving teams the time and space they need to realise improvements.
SI is designed to be a positive and motivating process, and it is vital that it is perceived as such. A key element of this is remembering to celebrate success. It's easy to "gamify" SI and find opportunities to recognise and reward the most-improved teams, competence leaders, centres of excellence, and so on.
Questions often arise around target setting and agreeing what success looks like. Some organisations opt only to track individual teams' improvement over time (and deliberately not make comparisons between teams). Others find benchmarks useful and divide them into three categories:
The SI programme leader/sponsor can view progress against these benchmarks and look back over the duration of the programme to view the rate of improvement.
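One simple way to quantify that rate of improvement is to fit a linear trend to a team's monthly metric snapshots, as in the sketch below. The snapshot values are invented purely to illustrate the calculation; any metric tracked monthly (cycle time, escaped defects, flow efficiency) could be substituted.

```python
# Illustrative monthly snapshots of a team's median cycle time (days);
# the values are invented purely to show the calculation.
snapshots = [14.0, 12.5, 12.8, 11.0, 10.2, 9.5]  # months 1..6 of the programme

def monthly_trend(values: list[float]) -> float:
    """Least-squares slope: average change per month (negative = cycle time improving)."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

slope = monthly_trend(snapshots)
print(f"Cycle time is changing by {slope:+.2f} days per month "
      f"({snapshots[0]:.1f} -> {snapshots[-1]:.1f} over the programme)")
```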
The following article was originally posted on JAXenter: https://jaxenter.com/second-age-agile-159373.html
As all LOTR fans will know, the Second Age in Middle Earth lasted 3,441 years – from the defeat of Morgoth to the first defeat of Sauron. Lots happened in Middle Earth during the period and many chickens came home to roost (metaphorically speaking).
In many ways, Agile is entering a similar age. It's been more than 15 years since the Agile Manifesto was conceived and adoption has been very rapid. It is estimated that 88 percent of all US businesses use some form of Agile methodology in some part of their technology operations.
As such, Agile finds itself approaching the top of the “S” of the adoption curve (see below). As with all innovations approaching late-adoption maturity, the honeymoon period is over and businesses working in Agile are under increasing pressure to demonstrate that their Agile transformations are successful and adding real business benefits.
Technology teams are very familiar with measuring output, performance and quality and are not short of quant data. Surprisingly, however, there are very few BI solutions available that aim to measure the overall effectiveness of Agile software development teams across the full delivery cycle – from ideation to deployment.
The solutions out there today tend to focus on one element within the overall Agile process – e.g. code quality tools (focused on coding itself) and workflow management plug-ins that look at certain aspects of the development process, yet often exclude pre-development and post-development stages.
Indeed, the "Agile metrics platforms" or "Agile BI" sector is so embryonic that analysts like Gartner do not yet track it. The closest related sector that Gartner analyses is "Enterprise Agile Planning Tools", which, although related, is focused on planning rather than the efficiency and quality of the output.
Fortunately, newer solutions are emerging that aim to answer this unmet need. To create a balanced set of Agile metrics that track overall effectiveness, look for systems that ingest and analyse data from the variety of tools that software development teams use in their everyday work.
It is reasonable to assume that all Agile transformations broadly aim to deliver against the Agile Manifesto’s number one objective: the early and continuous delivery of value. As the Manifesto states:
“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”.
The Manifesto’s subsequent aims support this guiding principle and can be summarised as:
The challenge, and the key question, is how to measure these objectives and demonstrate that "Agile is working". This opens up the contentious subject of Agile metrics.
Why are Agile metrics contentious? There are many protagonists within large technology teams. Each has their own distinct views as to:
This makes selecting Agile metrics extremely important. Unless the process involves the key protagonists (from teams to delivery managers to heads of engineering) the metrics may not be accepted or trusted. In those circumstances, there is little point in collecting metrics, as teams will not drive to improve them and show the desired progress.
This is our take on a meaningful set of metrics for Agile development teams to track and demonstrate improvement over time.
As the table shows, some of the metrics are used by the team only and will not be compared across teams. Some can be aggregated across teams in order to give managers an overall view of progress.
These metrics are by no means definitive and readers will doubtless disagree with some. Since they have been shown to be deterministic of outcomes, however, they provide a very useful starting point for development teams in this 'Second Age of Agile'.