We are fortunate to have the opportunity to work with a great variety of engineering teams – from those in start-ups to very large, distributed enterprises.
Although definitions of “engineering excellence” vary in these different contexts, all teams aspire to it. They also share the broad challenge of needing to balance the “day job” of delivering high quality, high value outcomes against the drive to continually improve.
Continuous Improvement (CI) inherently requires metrics against which to measure progress. These need to be balanced and meaningful (i.e. deterministic of improved outcomes). This creates two immediate issues:
First – metrics (and indeed the concept of measurement) are contentious. What is the ideal “balanced scorecard”? Is there even such a thing?
Second – the Agile philosophy is predicated on decentralised, empowered and self-managing teams, which runs counter to the concept of top-down measurement.
We view CI as vital in healthy and maturing Agile environments. Hence metrics to underpin this process are also vital. However, CI should be owned and driven by the teams themselves so that teams become self-improving. Ergo, CI programmes become SI (Self-Improvement) programmes.
This article focuses on how teams can implement a demonstrably effective SI programme – even in the fastest-moving and most resource-constrained Agile environments – so that they remain self-managing, deliver value quickly, and continue to improve at the same time.
The Size of the Prize
The concept of CI has been around for a long time. It was applied perhaps most famously in a business context in Japan and became popularised with Masaaki Imai’s 1986 book “Kaizen: the Key to Japan’s Competitive Success.”
The CI principle complements core Agile principles well. Indeed, the Agile Manifesto states:
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.
There are two key themes here – firstly, CI itself and secondly, that CI is driven by the teams themselves (SI). This raises the question of what role leadership should take in the improvement process.
Our evidence shows that the size of the prize is very significant. Well implemented SI programmes can deliver significant and sustained improvement in metrics that underpin your time to value (TTV) – for example:
10%+ velocity improvement
10%+ improvements in flow efficiency
15%+ reduction in return rates and time spent reworking tickets (returned from QA)
30%+ improvement in sprint completion accuracy (Scrum Agile)
Greatly improved team collaboration and team wellness.
However, achieving these goals is hard and requires sustained effort. Technology leadership needs to give teams the tools (and encouragement) to own and drive the self-improvement process. Without constant support, many teams will not have the time or inclination to drive their own self-improvement as they strive to meet their short-term delivery objectives.
The tools needed for effective Agile team Self-Improvement
The principle of team Self-Improvement (SI) is simple and powerful, but very hard to deliver effectively. It requires four important things:
A serious, long-term commitment and sponsorship from both the leadership team and the teams/squads themselves, with effort and resources sustained over a prolonged period to realise iterative improvement
An agreed, objective set of metrics to track progress – making sure that these metrics are actually the right ones, i.e. deterministic of the desired outcome
A means for teams to easily track these metrics and set targets (with targets calibrated against internal and external benchmarks)
An embedded process within teams to make the necessary changes; celebrate success and move on.
Agile teams are almost always busy and resource-constrained. As a result, the intention of always improving (in a structured and demonstrable way) often loses out to the pressures of the day job – delivering to the evolving demands of the business.
In our experience, successful SI requires coordination and stewardship by the technology leadership team, whilst empowering teams to own and drive the activities that result in incremental improvement. This is best delivered as a structured, long-term and well-implemented SI programme.
Implementing an effective team Self-Improvement programme
Self-Improvement needs a serious commitment from the leadership team within engineering to provide teams with the tools they need to self-improve.
This will not be possible if the organisation lacks the BI tools to provide the necessary metrics and reporting over the full delivery lifecycle. Firstly, the reporting found within common workflow management tools like Jira is not optimised to provide the level of reporting that many teams require for an effective SI programme. Secondly, teams use a number of tools across the delivery cycle, which often results in data sitting in silos, not integrated to reflect a full view of end-to-end delivery.
Teams should seek out BI tools that address these challenges. The right tools will give product and engineering teams meaningful metrics and reporting around which to build robust SI programmes.
Metrics for SI
As mentioned in the intro, selecting and agreeing metrics is often the most contentious issue. Many programmes fail simply because teams cannot agree, or gain buy-in, on a meaningful set of metrics or objectives.
By its very nature, Agile encourages a myriad of different methodologies and workflows which vary by team and company. However, this does not mean that it’s impossible to achieve consensus on metrics for SI.
We believe the trick is to keep metrics simple and deterministic. Complex metrics will not be commonly understood and can be hard to measure consistently, which can lead to distrust. Deterministic metrics are key because improving them actually delivers a better outcome.
As an example – you may measure Lead Times as an overall proxy of Time to Value, but Lead Time is a measure of the outcome. It’s also important to measure the things that drive/determine Lead Times, levers that teams can actively manage in order to drive improvements in the overarching metric (e.g. determinant metrics like Flow Efficiency).
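To make the Lead Time / Flow Efficiency distinction concrete, here is a minimal sketch of how both can be derived from a ticket’s status history. The `Transition` structure and the status names are illustrative assumptions, not a reference to any particular tool:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Statuses counted as "active" work — these names are assumptions;
# real workflows vary by team and tool.
ACTIVE_STATUSES = {"In Progress", "In Review", "In QA"}

@dataclass
class Transition:
    status: str           # status the ticket entered
    entered_at: datetime  # when it entered that status

def lead_time_and_flow_efficiency(transitions, done_at):
    """Lead Time = elapsed time from first status to done.
    Flow Efficiency = time spent in active statuses / Lead Time."""
    steps = transitions + [Transition("Done", done_at)]
    active = timedelta()
    for current, nxt in zip(steps, steps[1:]):
        if current.status in ACTIVE_STATUSES:
            active += nxt.entered_at - current.entered_at
    lead = done_at - transitions[0].entered_at
    return lead, active / lead  # timedelta / timedelta -> float
```

A ticket that spends one day queued, one day in progress, two days waiting and one day in QA has a five-day Lead Time but only 40% Flow Efficiency – and the waiting time, not the coding time, is usually the lever teams can attack.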
The deterministic metrics we advocate are designed to underpin team SI, in order to steadily improve Agile engineering effectiveness.
The (determinant) metrics are grouped into six key areas.
The key enabler – best practice and tool use
A key push-back is often that tool usage (e.g. Jira) is so inconsistent that the data collected from within it is not meaningful (the old adage of “garbage in, garbage out”).
However, there are some simple disciplines – themselves measurable – that greatly improve data quality.
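As an illustration, such hygiene disciplines can themselves be scored automatically. The ticket fields below (estimate, description, epic link) are assumptions – map them onto whatever your workflow tool actually records:

```python
# Minimal sketch of "hygiene" checks over exported tickets.
# Field names are assumptions, not any tool's real schema.
def hygiene_score(tickets):
    """Fraction of tickets passing all hygiene checks (1.0 = all clean)."""
    def is_clean(t):
        return (
            t.get("estimate") is not None   # sized before work started
            and bool(t.get("description"))  # has a non-empty description
            and t.get("epic") is not None   # linked to an epic/initiative
        )
    if not tickets:
        return 1.0
    return sum(is_clean(t) for t in tickets) / len(tickets)
```

Tracked over time, a score like this turns “garbage in, garbage out” from an objection into a measurable discipline of its own.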
In addition to focusing on best practice “hygiene” metrics, teams can build their self-improvement initiatives around five further determinant metric sets…
Sprint disciplines and consistent delivery of sprint goals (Scrum Agile)
Proportion of time spent/velocity/efficiency of writing new features (productive coding)
Quality and failure rates and therefore…
Proportion of time spent/efficiency of bug fixing and re-work
Teamwork, team wellness and the ability to collaborate effectively.
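As a sketch of how one of these metric sets might be quantified, sprint completion accuracy (from the sprint disciplines above) can be computed from committed versus completed story points; the cap at 100% is an illustrative choice, not a standard:

```python
# Illustrative sketch: sprint completion accuracy from story points.
def sprint_completion_accuracy(committed_points, completed_points):
    """Share of committed points completed in the sprint, capped at 100%."""
    if committed_points == 0:
        return 0.0  # nothing committed: no meaningful accuracy
    return min(completed_points / committed_points, 1.0)
```

Capping at 100% keeps the metric about forecasting discipline rather than rewarding teams for sandbagging their commitments and over-delivering.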
From these six areas, we believe these are some of the most common and meaningful metrics around which a team can build an effective self-improvement programme:
In our experience, a highly effective Agile SI programme can be built around these metric sets. We’ve also found that an integrated, single view of the full delivery cycle, underpinned by these core metrics, reveals key areas that can be optimised – low-hanging fruit that can materially improve Time to Value.
Metrics should be available in near real-time to the teams, with minimal effort. If teams have to collect data manually, the overall initiative is likely to fail.
A sample SI Dashboard
When all team members have a near real-time view of the metrics that they’ve signed up to, these become a core part of daily stand-ups and sprint retrospective reviews.
The aim is not to compare these metrics across teams – instead the key aim is to track improvement over time within the team itself. Leadership teams need to remain outcome focused, whilst enabling and empowering teams to identify and make incremental improvements that will improve those outcomes.
Running the SI programme
Team SI is unlikely to take place consistently and sustainably across teams without committed leadership. The SI programme needs to be formally established on a monthly cycle of team target-setting, implementation, review, and celebration of success (see below).
Team Leaders and Scrum Masters need to strike the right balance between sponsoring, framing and guiding the programme and giving teams the time and space they need to realise improvements.
SI is designed to be a positive and motivating process – and it is vital that it is perceived as such. A key element of this is remembering to celebrate success. It’s easy to “gamify” SI and find opportunities to recognise and reward the most-improved teams, competence leaders, centres of excellence, and so on.
Questions often arise around target setting and agreeing what success looks like. Some organisations opt only to track individual teams’ improvement over time (and deliberately not make comparisons between teams). Others find benchmarks useful and divide them into three categories:
Internal benchmarks (e.g. measures taken from the most mature Agile teams and centres of excellence within the organisation)
External competitor/comparator benchmarks – some tools provide anonymised benchmarks across all metrics from similar organisations
Agile best-practice benchmarks – these are often hard to achieve but are obvious targets as the SI programme develops.
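One simple way to operationalise these three tiers is to compare each team metric against a target per tier. The benchmark values below are purely illustrative assumptions, not real data from any organisation or tool:

```python
# Illustrative benchmark tiers for one metric — the numbers are
# assumptions for the sketch, not real benchmarks.
BENCHMARKS = {
    "flow_efficiency": {
        "internal": 0.35,       # most mature internal teams
        "external": 0.30,       # anonymised comparator median
        "best_practice": 0.40,  # Agile best-practice target
    },
}

def benchmark_status(metric, value):
    """For each tier, report whether the team meets or beats the target."""
    tiers = BENCHMARKS[metric]
    return {tier: value >= target for tier, target in tiers.items()}
```

A team at 32% flow efficiency would clear the external comparator tier but not yet the internal or best-practice tiers – a natural sequence of targets as the SI programme matures.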
The SI programme leader/sponsor can view progress against these benchmarks and look back over the duration of the programme to view the rate of improvement.
The philosophy of Continuous Improvement is central to Agile. The Agile Manifesto states:
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.
Continuous Improvement (CI) should not be imposed and driven top-down – instead it should be led by Agile teams themselves, so Self-Improvement (SI) is a more suitable term.
The evidence shows that Agile engineering teams practicing SI perform significantly better than Agile teams simply focused on their immediate delivery priorities.
However, effective and sustained SI is hard. It requires:
formal sponsorship by technology leadership, in the form of recognition and a suitable framework to manage the long-term process;
a set of meaningful and agreed Agile metrics that underpin the process of SI and track performance improvement over time; and, crucially,
a means to surface these metrics in near real time, with minimal or no effort required from the teams themselves.
The following article was published in JaxEnter.
As all LOTR fans will know, the Second Age in Middle Earth lasted 3,441 years – from the defeat of Morgoth to the first defeat of Sauron. Lots happened in Middle Earth during the period and many chickens came home to roost (metaphorically speaking).
In many ways, Agile is entering a similar age. It’s been more than 15 years since the Agile Manifesto was conceived and adoption has been very rapid. It is estimated that 88 percent of US businesses use some form of Agile methodology in some part of their technology operations.
As such, Agile finds itself approaching the top of the “S” of the adoption curve (see below). As with all innovations approaching late-adoption maturity, the honeymoon period is over and businesses working in Agile are under increasing pressure to demonstrate that their Agile transformations are successful and adding real business benefits.
The lack of Agile metrics platforms
Technology teams are very familiar with measuring output, performance and quality and are not short of quant data. Surprisingly, however, there are very few BI solutions available that aim to measure the overall effectiveness of Agile software development teams across the full delivery cycle – from ideation to deployment.
The solutions out there today tend to focus on one element within the overall Agile process – e.g. code quality tools (focused on the coding itself), and workflow management plug-ins that look at certain aspects of the development process yet often exclude the pre-development and post-development stages.
Indeed, the “Agile metrics platforms” or “Agile BI” sector is so embryonic that analysts like Gartner do not yet track it. The closest related sector that Gartner analyses is “Enterprise Agile Planning Tools”, which, although related, is focused on planning rather than the efficiency and quality of the output.
Fortunately, newer solutions are emerging that vie to answer this unmet need. To create a balanced set of Agile metrics that track overall effectiveness, look for new systems that ingest and analyse data from the variety of tools that software development teams use in their everyday work.
What should you measure?
It is reasonable to assume that all Agile transformations broadly aim to deliver against the Agile Manifesto’s number one objective: the early and continuous delivery of value. As the Manifesto states:
“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”.
The Manifesto’s subsequent aims support this guiding principle and can be summarised as:
Flexibility – actively including changing requirements and involving stakeholders
Frequent deployment of working software
Self-organising, small and motivated teams working at a sustainable pace
Simple, good design
Constant self-assessment and self-improvement.
The challenge and key question is how do you measure these objectives and demonstrate that “Agile is working”? This opens up the contentious subject of Agile metrics.
Navigating the politics of Agile metrics
Why are Agile metrics contentious? There are many protagonists within large technology teams. Each has their own distinct views as to:
whether measurement (including comparison across teams) is desirable and/or possible
which metrics are meaningful at the team level
which metrics are meaningful when aggregated across teams and workstreams
This makes selecting Agile metrics extremely important. Unless the process involves the key protagonists (from teams to delivery managers to heads of engineering) the metrics may not be accepted or trusted. In those circumstances, there is little point in collecting metrics, as teams will not drive to improve them and show the desired progress.
Meaningful Agile metrics
This is our take on a meaningful set of metrics for Agile development teams to track and demonstrate improvement over time.
As the table shows, some of the metrics are used by the team only and will not be compared across teams. Some can be aggregated across teams in order to give managers an overall view of progress.
Team view Agile metric set
These metrics are by no means definitive and readers will doubtless disagree with some. Since they have been shown to be deterministic of outcomes, however, they provide a very useful starting point for development teams in this ‘Second Age of Agile’.