By Charlie Ponsonby, Co-CEO Plandek

Introduction

The UK bank TSB migrated to a new IT platform in April 2018, following its acquisition by Sabadell in 2015.  The new platform went live in a “big bang” launch on 20-22 April 2018, when around five million customers were transferred to it.  Customers immediately experienced problems accessing their online and mobile banking services, TSB received 33,000 complaints within 10 days, and the platform was described as “unstable and almost unusable” post launch.  The platform was improved over time, but TSB suffered very significant reputational and commercial damage.

The law firm Slaughter and May was commissioned to investigate the causes of the failure and published its findings in October 2019.  It concluded that:

  1. The new platform was not ready to support TSB’s 5m customers when launched
  2. SABIS (which built and hosts the platform) was not ready to operate the new platform at the point of launch.

This was despite:

  1. there being seemingly robust programme management disciplines in place; and
  2. there being seemingly strong governance and risk management, with a “three-line defence” of the Programme team, the Risk Oversight team and Internal Audit all involved in risk management.

So why did the £325m project fail to deliver immediately post launch?  And could a better understanding of the development capability (and the hidden risk within it) have helped?

Slaughter and May’s diagnostic – a focus on programme management failure

It appears from the Slaughter and May report that TSB adopted a fairly traditional “waterfall” software delivery methodology, in which the project was estimated and planned upfront, with defined milestones building towards a big-bang migration.

The migration date was originally intended for November 2017, before being re-scheduled to April 2018.  Both times, however, the planning process was described as “right to left” rather than “left to right” – i.e. TSB started with the end-date that it desired and used this as the key INPUT into the planning process.  A “left to right” process, by contrast, would produce an end-date as an OUTPUT of the planning process.

The net effect of this “right to left” approach appears to be that the time allowed was inadequate and the project was rushed (critical load testing, for example, was compressed).  As such, the failure is characterised as one of programme management approach and execution.

 Understanding delivery capability risk in complex IT programme management

The overriding theme of Slaughter and May’s diagnostic seems to be that TSB was rushing towards an unachievable go-live date.  There was huge stress on the software delivery team and corners were cut as time ran out – and the net result was a disastrous go-live.

So, two critical questions are:

I believe the answer to both questions is an emphatic “yes” (if you have the tools to collect and analyse the relevant metrics).

Plandek is such a tool.  It provides a complete set of end-to-end delivery metrics and analytics, to understand and mitigate software delivery capability risk.

It works by mining data from the toolsets used by delivery teams (such as Jira, Git, CI/CD tools and Slack) to surface critical metrics that optimise software delivery forecasting, risk management and capability improvement.

As such, it creates a balanced set of metrics that determine delivery capability risk, combining quantitative data from the underlying toolsets (Jira, Git etc.) with input from the engineers themselves, gathered via regular polling through Slack or other collaboration hubs.
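To make the idea concrete, here is a minimal Python sketch (not Plandek’s implementation – the field names and dates are invented for illustration) of how cycle and lead times might be derived from ticket timestamps exported from a workflow tool such as Jira:

```python
from datetime import datetime

# Illustrative ticket records, as they might be exported from a workflow tool.
# Field names ("created", "in_progress", "done") are assumptions for this sketch.
tickets = [
    {"key": "PAY-101", "created": "2018-01-08", "in_progress": "2018-01-10", "done": "2018-01-16"},
    {"key": "PAY-102", "created": "2018-01-09", "in_progress": "2018-01-15", "done": "2018-01-22"},
    {"key": "PAY-103", "created": "2018-01-11", "in_progress": "2018-01-12", "done": "2018-01-25"},
]

def days_between(start: str, end: str) -> int:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

for t in tickets:
    lead_time = days_between(t["created"], t["done"])        # idea raised -> delivered
    cycle_time = days_between(t["in_progress"], t["done"])   # work started -> delivered
    print(f'{t["key"]}: lead time {lead_time}d, cycle time {cycle_time}d')

# Averages (and their trend over time) give a simple health signal for the delivery "engine".
avg_lead = sum(days_between(t["created"], t["done"]) for t in tickets) / len(tickets)
avg_cycle = sum(days_between(t["in_progress"], t["done"]) for t in tickets) / len(tickets)
print(f"Average lead time {avg_lead:.1f}d, average cycle time {avg_cycle:.1f}d")
```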

Figure 1. Example end-to-end software delivery metrics that determine delivery (capability) risk

This balanced scorecard of capability risk metrics adds a new dimension to overall programme risk management.

As Figure 1 shows, these metrics are principally designed for use in an Agile delivery context (with concepts of Cycle Times, Sprint Completion etc), but many can also be applied in a hybrid “Scrumfall” context (often adopted by larger organisations to deliver major projects).

For example:

These are all “under the bonnet” metrics that when viewed together, give the experienced Delivery Manager a view on the health of the delivery “engine” – is it firing on all cylinders, or running on empty…?

Applying delivery capability risk to overall project risk management frameworks

In the case of TSB, Slaughter and May noted “material limitations in the tracking and monitoring of the programme’s progress”.  The project seemed to have relied on quite traditional programme management techniques to map the various workstreams and understand interdependencies and the critical path.

These techniques create well-organised Gantt charts showing the theoretical progress of the project relative to the planned timeframe.  What these techniques cannot do, however, is effectively track the health of the underlying technology delivery capability.

In other words, the Gantt chart may show that we have just hit a key milestone, but an understanding of the health and stress of the underlying delivery team may paint a very different picture.  It may show that the milestone was achieved in an unsustainable way (low morale, declining process efficiency, increasing technical debt etc.) – and hence that the team is unlikely to hit the next milestone.

This is why an understanding of delivery capability risk (i.e. understanding the health of the underlying delivery “engine”) can be a vital extra dimension in complex IT programme management.

Would it save companies such as TSB from such costly mistakes?  I think it is unlikely in the case of TSB, as their problems seem to have stemmed from unrealistic “right to left” planning, placing a near impossible burden on the IT team to deliver.

That said, it may have identified (and clearly quantified) the stress in TSB’s delivery capability, which in turn would have raised warning flags to the Risk Oversight and Internal Audit committees.  As such, it may have put more doubt in the minds of those taking the key decisions, earlier in the process.  So perhaps it might just have saved TSB from a costly mistake…

 

This article was written by Charlie Ponsonby.  The views, opinions and information expressed in this article are the author’s own and do not necessarily reflect those of Plandek Ltd.

 


By Charlie Ponsonby

Originally published on InfoQ


 

We work with Agile teams of all different shapes and sizes and predictability is a theme that is front-of-mind for almost all – as the words “Agile” and “predictable” don’t always go hand in hand …

So how can development teams maintain their agility and improve their delivery predictability – so that when stakeholders ask the inevitable question “Are we on schedule?”, they can give a sensible answer?

Typical Agile team forecasting approaches

Product-based Agile software development teams delivering small increments very regularly may spend little time worrying about forecasting.

But often Agile teams are established to deliver major milestones that the business expects at a certain point and for an agreed budget, so will need to forecast effectively or risk being accused of being “Agile and late!”

In our experience, Agile teams’ forecasting tends to be pretty inaccurate and is often based only on a simple observation of backlog, velocity and word-of-mouth reassurance from the teams themselves.

In our opinion a really meaningful forecast requires a broader set of empirical data reflecting all the potential sources of project delay.

* There is a separate debate as to whether an Agile software development methodology is appropriate in a project context like this, but that is for another day.

“The Logical Six” – the six sources of project delay

Logic dictates that there are six possible reasons why a project is late – the so-called “Logical Six”.  Three of the Logical Six are in direct control of the technology team:

  1. the size and complexity of the task is underestimated
  2. the planned group of appropriate engineers are not available
  3. the delivery team is not delivering as productively as anticipated.

And the other three are in control of the business sponsors interacting with the technology team.  These are:

  1. Unclear requirements definition – internal clients are not clear enough about what they actually want
  2. Scope change – the business moves the goal posts (changed/new requirements or changed priorities)
  3. Ongoing input – the development process is delayed by a lack of stakeholder input where/when required.

In our view, you will never really be able to accurately forecast and improve your delivery predictability unless you collect metrics which track all of these six levers.

Only then will you really understand whether a project is likely to be “late” and what needs to be done to get it back on track.

The "Logical Six" – the six ultimate sources of project delay

The “Logical Six” – the six ultimate sources of project delay

Challenging your teams’ forecasting with analysis of the delivery metrics that matter

So, what are the metrics that relate to the six sources of project delay – and so are critical to delivery predictability and improved forecasting accuracy?

The table below shows our favourite metrics in each of the areas.  We encourage Delivery Managers to focus on these when working with the Delivery Team Leads to create more realistic forecasts of delivery timing.

In summary, the metrics are:

  1. People availability – clearly key.  If we don’t have the engineers that we anticipated, we will be late.
  2. Team productivity relating to:
    1. Productive time – another critical metric, tracking the proportion of time engineers have available to focus on writing new features
    2. Process efficiency – friction in the development process can undermine the best-laid delivery plans, so really understanding trends in (and the causes of) this friction is key
    3. Velocity and time to value – understanding how our throughput and time to value have varied as the project progresses is yet another determinant variable in our forecasting
  3. Estimation Accuracy – if we are adopting a Scrum-based approach, sprint completion gives a very good indicator of our forecasting capability.  If we cannot hit our two-weekly sprint goals, we are unlikely to be effective at estimating effort and forecasting further into the future
  4. Requirements definition, stakeholder input and scope change can be tracked using Quant Engineer Feedback collected from collaboration hubs like Slack.  This is something we use a lot internally to improve our forecasting as it uses quant insight from the people actually doing the work.  It often adds confidence to an otherwise theoretical delivery forecast and sheds light on three of the Logical Six (requirements definition, stakeholder input and genuine scope change).

Key Metrics to track the Logical Six levers of project delay

Available Engineering Resource
Metrics to track:
  • Active Engineers (v plan)
Relevance: clearly key – shows whether we have the planned resource in place to deliver the work.

Productive Time
Metrics to track:
  • % Time spent on New Features
  • % Time spent on Upkeep
  • % Time lost to non-product related activity
Relevance: key to understand how this has trended over time.  If we are expending more energy on non-productive tasks, this will clearly impact our progress going forward.

Process Efficiency
Metrics to track:
  • Flow Efficiency (%)
  • Rework (days)
  • WIP/developer
Relevance: these metrics analyse the “friction” in the development process and how it has trended over time.  Declining Flow Efficiency is a problem that can often be addressed, so it is a key metric in forecast mitigation.  Rework shows trends in accumulated time spent reworking tickets that fail QA – another form of friction that may be mitigated (e.g. by assisting engineers new to the code base).

NB: In our view, any metric collected at individual level needs to be viewed in context by people directly involved in the project.  Such metrics can be taken out of context (to damaging effect) if circulated more broadly.

Velocity and Time to Value
Metrics to track:
  • Feature Tickets Completed
  • Cycle Time (days)
  • Lead Time (days)
Relevance: velocity metrics are problematic, but a detailed understanding of trends in tickets completed (and story points/value points per ticket) is key when challenging forecasts.  Critical too is an understanding of changes in Cycle and Lead Times: if they are lengthening, accurate forecasting is tricky.

Sprint Accuracy
Metrics to track:
  • Overall Completion Rate (%)
  • Sprint Overall Completion v Sprint Target Completion (%)
Relevance: inability to meet two-weekly sprint goals makes forecasting over longer periods very difficult.  These metrics are therefore critical to forecasting accuracy.

Quant. Engineer feedback
Metrics to track:
  • Team morale
  • Sprint effectiveness
  • Quality of ticket and requirements definition
  • Quality of business sponsor input
  • Effort identified as agreed scope change
Relevance: some metrics platforms enable the real-time polling of engineers through collaboration hubs.  This provides quant data on engineers’ views of morale and process efficiency, on the impact of business sponsors’ requirements definition and ongoing input, and on Team Lead feedback about stories added by business stakeholders that are additional to the original scope.
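As a rough illustration of the “friction” metrics in the table above, the sketch below computes Flow Efficiency (active time as a share of total elapsed time) and accumulated Rework days from hypothetical per-ticket figures.  It is a simplification for explanation only, not how any particular platform calculates these metrics.

```python
# Hypothetical per-ticket data: total elapsed days in the workflow, days actively
# worked (as opposed to waiting/blocked), and days spent reworking QA failures.
tickets = [
    {"key": "CORE-11", "elapsed_days": 10, "active_days": 4, "rework_days": 1},
    {"key": "CORE-12", "elapsed_days": 6,  "active_days": 5, "rework_days": 0},
    {"key": "CORE-13", "elapsed_days": 14, "active_days": 5, "rework_days": 3},
]

total_elapsed = sum(t["elapsed_days"] for t in tickets)
total_active = sum(t["active_days"] for t in tickets)
total_rework = sum(t["rework_days"] for t in tickets)

# Flow Efficiency: share of elapsed time in which work was actively progressed.
flow_efficiency = total_active / total_elapsed
print(f"Flow efficiency: {flow_efficiency:.0%}")   # 47% with these invented figures
print(f"Accumulated rework: {total_rework} days")
```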

Collecting the delivery metrics that matter

Surfacing the key delivery metrics requires data from a myriad of sources, including workflow management tools, code repos and CI/CD tools – as well as quant feedback from the engineering team themselves (via collaboration hubs).

The complexity of the data and multiple sources make this sort of data collection very time consuming to do manually and really requires an end-to-end delivery metrics platform to do at scale.

Delivery metrics platforms are available which consist of a data layer to collate and compile metrics from multiple data sources and a flexible UI layer to enable the creation of custom dashboards to surface the metrics in the desired format.

Using Root Cause RAG reporting to combine your delivery forecast and mitigation plan

If we use metrics to track and analyse the Logical Six drivers of project progress, we will get a much clearer picture of real project progress. By this we mean:

The improved forecast and related mitigations can be presented together in a Root Cause Red, Amber and Green (RAG) Progress Report.

Root Cause RAG reports are far more effective than traditional RAG progress reporting, which often sheds very little light on why a project (with a required go-live date) is actually behind schedule and what needs to be done to bring it back on track.

In contrast to a traditional RAG approach, the Root Cause RAG Report (see the example below) clearly shows:

  1. Our latest delivery forecast
  2. The delivery metrics that support our forecast
  3. Our mitigations (based around the Logical Six levers that drive project timing) – e.g. the need to increase productive time by reducing time diverted to upkeep; the need to improve Flow Efficiency by addressing the blockages in the dev process (e.g. QA wait time); or the need for improved stakeholder input (as shown in the quant engineer feedback)
  4. Allocated responsibilities (across the development teams and stakeholders) to deliver the identified mitigations

Done well, Root Cause RAG Reports can be a really effective means of presenting our (more accurate) forecasts in a way that stakeholders can understand and therefore can be an important step in reducing lateness and bringing the technology team and the internal client much closer together.

As discussed, however, this relies on an understanding of the metrics that actually determine project lateness, and a means of collecting those metrics.
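One way to picture the mechanics of a Root Cause RAG Report is the sketch below, which derives a RAG status for each Logical Six lever from a simple threshold on its tracked metric.  The metric names, values and thresholds are invented for illustration; in practice they would be agreed per programme.

```python
# Illustrative Root Cause RAG derivation: each lever gets a status from a simple
# threshold on its tracked metric. All figures are invented for this sketch.
levers = {
    "People availability": {"metric": "active engineers vs plan (%)", "value": 78,  "green": 95,  "amber": 85},
    "Productive time":     {"metric": "% time on new features",       "value": 55,  "green": 60,  "amber": 45},
    "Process efficiency":  {"metric": "flow efficiency (%)",          "value": 35,  "green": 50,  "amber": 40},
    "Estimation accuracy": {"metric": "sprint completion (%)",        "value": 88,  "green": 90,  "amber": 75},
    "Stakeholder input":   {"metric": "engineer feedback score /5",   "value": 3.9, "green": 4.0, "amber": 3.0},
    "Scope change":        {"metric": "% effort on added scope",      "value": 12,  "green": 10,  "amber": 20},
}

def rag(value, green, amber):
    # For most levers higher is better; scope change is the exception (lower is better).
    if green >= amber:
        return "GREEN" if value >= green else ("AMBER" if value >= amber else "RED")
    return "GREEN" if value <= green else ("AMBER" if value <= amber else "RED")

for lever, m in levers.items():
    status = rag(m["value"], m["green"], m["amber"])
    print(f'{lever:<22} {m["metric"]:<32} {m["value"]:>5}  {status}')
```

Each RED or AMBER lever would then carry a mitigation and a named owner in the report itself.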

 

Example Root Cause RAG Report


About the Author:

Charlie Ponsonby started his career as an economist in the developing world, before moving to Andersen Consulting. He was Marketing Director at Sky until 2007, before leaving to found and run Simplifydigital in 2007. Simplifydigital was three times in the Sunday Times Tech Track 100 and grew to become the UK’s largest broadband comparison service. It was acquired by Dixons Carphone plc in April 2016. Ponsonby co-founded Plandek with Dan Lee in October 2017. Plandek is an end-to-end delivery metrics analytics and BI platform. It mines data from toolsets used by delivery teams (such as Jira, Git and CI/CD tools) to provide end-to-end delivery metrics to optimise software delivery forecasting, risk management and delivery effectiveness. Plandek is used by clients globally including Reed Elsevier, News Corporation, Autotrader.ca and Secret Escapes.


 

By Charlie Ponsonby, Co-CEO Plandek

We work with a great variety of organisations at various points in their Agile transformations.  Whilst the move to Agile has driven tangible and lasting benefit in almost all of these organisations – the great majority have experienced problems and unintended consequences along the way.

One issue that we hear very often (particularly in large commercial organisations) is the difficulty of reconciling Agile’s decentralised, iterative approach with internal clients used to agreeing budgets (up-front) and expecting outcomes delivered at certain points in time.

Products over Projects

An important principle of Agile is to align stable teams around designing and building particular products – so that they have end-to-end responsibility for designing, building and maintaining the technology and become increasingly expert over time.  This is inherently sensible and prevents temporary project based teams being thrown together to build something, only to be reassigned after “launch”.

Effective product-based teams aim to iterate and deploy improvements increasingly rapidly in keeping with the core Agile goal of the “early and continuous delivery of valuable software”.

As a result, forecasting becomes less important as the business expects small, frequent increments to an existing application over time – rather like painting the Forth Bridge, the job that is famously never finished…


Projects over Products?

After the popular writing of Martin Fowler and others, product-based teams have become the desired option for many Agile organisations – but in certain situations a product-based methodology can have major drawbacks.

A typical example would be the build of a new application that the business requires at a certain time and for an agreed budget (signed-off upfront).  This is a very common scenario in large organisations.  Typical examples might include:

Under these circumstances a product-based Agile methodology can cause serious problems as the methodology:

This inability to forecast cost and timing of delivery almost inevitably causes serious problems with stakeholders anxious to receive progress updates and working software at the planned point in time, and at the planned budget.

So how can an Agile team better predict delivery timing and cost – even if the Agile methodology that they are adopting is not suited to accurate forecasting?


The concept of a Delivery Risk Profile

Forecasting techniques within Agile teams are often rudimentary.  Burndown charts offer a linear extrapolation of current velocity against the known backlog, to estimate the completion date of the outstanding effort.  There are two basic problems with this (illustrated in the sketch after the list):

  1. as Agile teams estimate ticket size/effort on an ongoing basis, there will inevitably be un-estimated backlog that is not included in the forecast
  2. extrapolating current velocity may be highly misleading (for any number of reasons affecting the team’s ability to deliver going forward).
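A simplified sketch, with invented numbers, of why the naive extrapolation misleads and how the two problems above change the answer:

```python
# Naive burndown forecast: remaining estimated backlog divided by recent velocity.
estimated_backlog_points = 240        # points on tickets that have been estimated
recent_velocity_per_sprint = 40       # points completed per two-week sprint (recent average)

naive_sprints = estimated_backlog_points / recent_velocity_per_sprint
print(f"Naive forecast: {naive_sprints:.1f} sprints remaining")         # 6.0 sprints

# Problem 1: un-estimated backlog. If roughly 30% of backlog tickets carry no estimate
# yet (an invented figure), the real remaining effort is larger than the burndown shows.
unestimated_share = 0.30
adjusted_backlog = estimated_backlog_points / (1 - unestimated_share)   # ~343 points

# Problem 2: extrapolating current velocity. If the last three sprints trended 44, 40
# and 36 points, assuming the decline continues gives a lower velocity than the average.
trend_velocity = 36 - 4               # last observed velocity minus the per-sprint decline

adjusted_sprints = adjusted_backlog / trend_velocity
print(f"Adjusted forecast: {adjusted_sprints:.1f} sprints remaining")   # ~10.7 sprints
```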

However, the good news is that if we collect and track a set of delivery metrics over time, we can start to put together a more informed view of forecast delivery timing and cost, expressed within a Delivery Risk Profile.

Plandek is the world’s leading BI platform to surface and analyse end-to-end delivery metrics.   It mines data from toolsets used by delivery teams (such as Jira, Git, and CI/CD tools), to provide end-to-end delivery metrics to optimise software delivery forecasting, risk management and delivery effectiveness.

Analysis of this balanced scorecard of metrics gives a more measured view of likely future delivery time and cost.  This analysis is presented in a Delivery Risk Profile.

The Delivery Risk Profile is made up of metrics in five key areas:

  1. Backlog analysis – it’s key to understand the quality and age of the current backlog and to get a better feel of the likely size of the unestimated element of the backlog
  2. Estimation and sprint accuracy – a critical determinant of timing accuracy. If teams are unable to estimate effectively and to deliver to their sprint goals, then longer-term delivery targets become far more unreliable
  3. Process efficiency – often very poorly understood, but vital to understanding likely future velocity
  4. Throughput and velocity – clearly key, but even more interesting when trends by individual team are carefully considered
  5. Talent availability and engagement – again often poorly understood but critical in maintaining quality and velocity – as delivery is ultimately all about the talent.
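As a sketch of how these five areas might be rolled up into a single team-level indicator (the scores, weights and bands below are illustrative assumptions, not a published Plandek formula):

```python
# Each area scored 1 (low risk) to 5 (high risk) for a team, based on the trend in
# its underlying metrics. Scores and weights here are invented for illustration.
area_scores = {
    "backlog_quality_and_age": 3,
    "estimation_and_sprint_accuracy": 4,
    "process_efficiency": 2,
    "throughput_and_velocity": 3,
    "talent_availability_and_engagement": 4,
}
weights = {
    "backlog_quality_and_age": 0.15,
    "estimation_and_sprint_accuracy": 0.30,
    "process_efficiency": 0.20,
    "throughput_and_velocity": 0.20,
    "talent_availability_and_engagement": 0.15,
}

risk_index = sum(area_scores[a] * weights[a] for a in area_scores)  # 1 = low risk, 5 = high risk
band = "HIGH" if risk_index >= 3.5 else "MEDIUM" if risk_index >= 2.5 else "LOW"
print(f"Delivery risk index: {risk_index:.2f} ({band})")            # 3.25 (MEDIUM) here
```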

Key metrics within the Delivery Risk Profile

The graphic below shows key metrics that when considered together give a much more informed view of likely future delivery timing and hence cost – as they are heavily deterministic of future velocity.

The Plandek analytics platform collects these metrics across all teams and projects and enables Delivery Managers to get a much more complete picture of likely future velocity – expressed in a quantitative Delivery Risk Profile.

Typically trends in the metrics are analysed by team to build the profile of risk.  This Risk Profile then enables the Delivery Manager to refine:

A more accurate delivery forecast is therefore synthesised from three different sources – the teams themselves; linear burndown estimates; and the balanced scorecard of metrics known to directly impact future velocity.

Key delivery metrics used to create a Delivery Risk Profile


 

Case study – applying a Delivery Risk Profile to reduce unplanned go-live delays by 50%

We work with a number of data companies, one of which has been particularly successful at applying the Plandek metrics set to build an effective risk profile, which they have used to very significantly improve their go-live forecasting.

Their teams are organised around a product-led strategy and are therefore not ideally suited to delivering time-dependent “projects”.  In this instance, stakeholders need careful management to ensure that there is as much visibility as possible regarding progress and the forecast timing of delivery of major milestones.

Before the Plandek metrics set was tracked and analysed – unplanned go-live delays were common.  Delivery Managers relied on linear burndown charts and word-of-mouth updates from scrum teams.

Plandek was implemented, and delivery Team Leads and Delivery Managers started to track and manage to a simple set of delivery metrics.  Trends were analysed over time for each scrum team to build a Delivery Risk Profile (as described above).

The immediate effect was a greater focus on:

  1. Delivering sprints more accurately (measured via Sprint Overall Completion (%) and Sprint Target Completion (%))
  2. The forecasting process – with word-of-mouth forecasts from the teams more rigorously debated and refined in the light of trends in the key risk profile metrics.

The net result was a very significant improvement in forecasting accuracy.  Over a 6 month period, unplanned go-live delays reduced by 50%.  This greatly strengthened the relationship between the technology delivery team and the business stakeholders.

The case is summarised in the table below.

A European data business – Understanding the delivery team’s risk profile to improve delivery forecasting accuracy


 


You can also check out Charlie’s article on the importance of metrics to Agile teams on InfoQ, which recently trended globally.

 

Agile metrics for self-improvement

(Agile && metrics) ? Can agile metrics help developers and teams improve?

The journey to becoming Agile can sometimes be tricky. In this article, discover nine critical success factors that make Agile metrics work for teams. What questions should you and your team be asking yourself in order to focus on self-improvement, reliability, efficiency, and high-quality code delivery?

By Colin Eatherton

Article originally published on JAXenter.

 

Inevitably, most teams get to the stage where they need to adopt a more Agile delivery process. This is not just a sign of maturity. It’s a sign that the software they are developing is being used, is deemed useful, and is receiving feedback and change requests so that it continues to improve.

My team is in a unique position. We are striving to improve delivery as we develop a tool designed to help teams do the same. In other words, we use our own tool to improve the delivery of it!


In my experience, the journey to becoming more Agile can be tricky. Each team has its own goals and ideas about how to get there. All teams, however, need to be able to reflect on their progress, measure how effective their current strategy is, and gain more visibility of the wider landscape. Of course, this is easier said than done.

Bottom-up is best

The topic of which metrics Agile teams can trust to reliably help them measure progress – or whether to use them at all – is both fascinating and contentious. Many people associate metrics with a top-down management style, which is the opposite of the decentralised, empowered and self-determining team philosophy that Agile promotes.

During a one-to-one meeting with my team lead, I asked him which metrics he felt I should focus on. He explained that the only ones worth looking at were those that the whole team agreed would help improve delivery. When it came to my own self-improvement goals, he said I should select metrics myself.

As a rule, the more metrics are applied from the top down, the less effective they are. (This is not to say, however, that there aren’t valuable metrics that can indicate progress at a higher level.)

Using Agile metrics for team improvement

Self-improvement is a key Agile principle. On the face of it, it’s a pretty simple process. First, you identify what you want to improve. Next, establish ways to measure the attributes that contribute to improvement. Then measure and reflect. Therefore you will always need a reliable way to track progress.


My team chose Agile metrics that focus on various attributes of delivery, quality and value. For example, we measure Lead Time from the moment a ticket is created in Jira to its production deployment, as well as the number of escaped bugs. We’ve created a dashboard in our own software around these attributes so we can measure, integrate and affect them daily, or as part of a retrospective. Our dashboards help guide us and qualify the decisions we make around team, process, and delivery improvement to ensure we continually head in the right direction. We can also opt to see individual contributions to these metrics. For instance, I have chosen to create a view of metrics that only I can see, so I can measure my own personal output.

Using Agile metrics for delivery cycle improvement

As part of our cycle rituals, our team is responsible for making sure our scope is realistic. To support this, we use Agile metrics to ensure that the sum complexity, time and effort of our tasks match the overall time available and the team’s abilities. We measure the scope using story points. We also built and now use a ‘Sprint Report’ facility. This allows us to see a breakdown of the sprint’s overall completion, including the target completion and work added to the sprint after it started. It also includes ‘Sprint-specific dashboards’ that use metrics like ‘Completed Tickets’ to calculate the amount of work developers can reliably complete during a sprint (aka their ‘velocity’).
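The arithmetic behind such a report is straightforward; here is a simplified sketch with invented figures (not the actual Sprint Report implementation):

```python
# Invented sprint data: story points committed at sprint start, points added after
# the sprint started, and points completed by the end.
committed_at_start = 50
added_after_start = 12
completed_total = 46
completed_from_original_commitment = 40

overall_scope = committed_at_start + added_after_start
overall_completion = completed_total / overall_scope                          # all work in the sprint
target_completion = completed_from_original_commitment / committed_at_start  # original commitment only

print(f"Overall completion: {overall_completion:.0%}")   # 74%
print(f"Target completion:  {target_completion:.0%}")    # 80%
print(f"Scope added after start: {added_after_start / committed_at_start:.0%} of original commitment")  # 24%
```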


 

Nine critical success factors that make Agile metrics work for teams

As I said before, Agile metrics for team improvement can be contentious. They open up a lot of heated discussions and to varying degrees benefit from a wider understanding of context and narrative. So we discuss them and apply the following tenets to help find common ground:

Simplicity

Complicated metrics run counter to the Agile spirit. We like to define our journey as specifically as we can, answering simple questions with easy-to-understand metrics that support them, like:

Consensus

The metrics need to be selected by the development team and serve a common aim shared by project members from the Scrum Master to the technology leader.

Relevance

You shouldn’t measure anything unrelated to your journey’s destination. Each project follows a different set of milestones so may need different metrics. However, as there is only one final destination, some carefully-selected metrics should be applicable across all teams. Less can be more, so when we build out a dashboard together in our team meetings, we try to concentrate on only a handful of metrics at a time.

Significance

Software delivery metrics are often outcome-based. Although legitimate, there’s a risk of tracking only symptoms and not root causes. The ‘Cycle Times’ metric, for example, shows how long work is taking rather than why. Descriptive metrics like these should also include details of the variables that impact the outcomes. For example, alongside Cycle Time you could show an analysis of the bottlenecks. To improve we want to uncover root causes and identify behaviour gains we can make together – we need to tell a full story.

Right sources

We need to analyse data from the sources that our developers genuinely engage with in their everyday work. These include workflow management software like Jira; code repositories like GitHub, Bitbucket, TFS or GitLab; code quality tools like SonarQube; time tracking systems like Harvest or Tempo; and continuous delivery tools like Jenkins and GoCD.

Automation

If analysing metrics takes significant cognitive effort or time to collate, we tend to lose patience and abandon the effort. The metrics need to complement processes, not slow them down.

Near real-time

Agile metrics delivered in near real-time fundamentally drive improvement as they can be discussed in daily stand-ups and sprint retrospectives.

The human factor

Software development is a process (almost) completely driven by people. This means it should be possible to source information and get to the root cause of issues very fast. Typically, feedback is collected in person, in stand-ups and retros. In theory this should work well, but it can also hide issues that participants don’t want to communicate openly. This is especially true in changing, distributed teams with a mix of full-time employees and contractors. To address this and provide us with context and narrative around our metrics, we incorporate feedback into our tool. For example, when tickets get closed, we get the chance to provide feedback via Slack on how the ticket went and on its requirements. These prompts also give us a feel for how a ticket has performed post-dev as it continues (hopefully!) past QA.
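As an illustration of the mechanism only – this is not our production integration, and the channel name, prompt wording and token handling are assumptions – a ticket-close feedback prompt could be posted to Slack with the public slack_sdk client along these lines:

```python
import os
from slack_sdk import WebClient  # pip install slack-sdk

# Hypothetical prompt posted when a ticket is closed, asking for quick feedback on
# requirements quality and scope change. Channel and wording are illustrative only.
client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def prompt_ticket_feedback(ticket_key: str, channel: str) -> None:
    client.chat_postMessage(
        channel=channel,
        text=(
            f"{ticket_key} has just been closed. On a scale of 1-5, how clear were "
            "the requirements, and did the scope change after work started?"
        ),
    )

prompt_ticket_feedback("PAY-101", "#team-payments")
```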

Actionable

Metrics only make sense if teams can act upon and improve them. Without constant support, many teams will not have the time or inclination to drive their own self-improvement as they strive to meet their short-term delivery objectives. Active stewardship by the technology leadership team can make a huge difference.

The limits of home-grown solutions

For many developers, (Agile && metrics) don’t seem to get along, so it’s no surprise that few analytics tools are available to measure Agile delivery effectiveness. However, now that Agile is mainstream, there is urgency to measure. With few off-the-shelf tools available, many teams started to build their own. This worked well on a small scale but hit a wall when projects and teams grew.

There are several other problems with home-grown tools. Most notably, they allow teams to tweak calculations and tell an overly flattering story. Also, the time it takes to build your own tool can be a big distraction from planned work. Fortunately, new solutions are now emerging that work in line with the principles listed above.

Agile metrics for self-improvement

If you are still not convinced about using Agile metrics for teams, I recommend testing them on yourself. Most find that when they do this, the metrics become a reassurance or even a confidence boost. For example, a younger colleague of mine was struggling with his programming confidence. He found metrics to be very helpful because they showed him objective proof of improvement.

For my part, one way I often use Agile metrics is to provide insights during a retrospective. To measure how I’m improving over time, I track metrics for the tickets I’ve completed, the number of story points completed, and the number of returns I’ve had from QA. Crucially, this also helps me remember the tickets I’ve worked on and how they went. Like most developers, I tend to switch focus once a ticket passes and can find it hard to retain the details when it’s time to review a cycle or perform a project post-mortem.


You will of course come up with your own, but I have found these example questions (and related Agile metrics) can help self-improvement:

Whether you decide to use them yourself or for your team, (Agile && metrics) return true. In my experience, people want similar things and work well together in helping deliver on the key Agile principle of self-improvement. Try it out!

About the author

Colin Eatherton

Colin Eatherton is Team Lead and Developer at Agile metrics software company, Plandek. Colin develops the front end of Plandek’s UI using web technologies including JavaScript, HTML, CSS and Ruby. In this role Colin works with a cross-functional development team, stakeholders and Plandek’s product manager to understand and analyse delivery requirements, and implement solutions. His team applies Agile methods and metrics to continuously improve processes and outcomes.


The following article was originally published on InfoQ: https://www.infoq.com/articles/metrics-agile-teams/

The importance of metrics to Agile teams

We are fortunate to have the opportunity to work with a great variety of engineering teams – from those in start-ups to very large, distributed enterprises.

Although definitions of “engineering excellence” vary in these different contexts, all teams aspire to it. They also share the broad challenge of needing to balance the “day job” of delivering high quality, high value outcomes against the drive to continually improve.

Continuous Improvement (CI) inherently requires metrics against which to measure progress. These need to be balanced and meaningful (i.e. deterministic of improved outcomes). This creates two immediate issues:

We view CI as vital in healthy and maturing Agile environments. Hence metrics to underpin this process are also vital. However, CI should be owned and driven by the teams themselves so that teams become self-improving. Ergo, CI programmes become SI (Self-Improvement) programmes.

This article focuses on how teams can implement a demonstrably effective SI programme, even in the fastest moving and most resource-constrained Agile environments, so that they remain self-managing, deliver value quickly, and continue to improve at the same time.

The Size of the Prize

The concept of CI has been around for a long time. It was applied perhaps most famously in a business context in Japan and became popularised with Masaaki Imai’s 1986 book “Kaizen: the Key to Japan’s Competitive Success.”

The CI principle is highly complementary to core Agile principles. Indeed, the Agile Manifesto states:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

There are two key themes here – firstly, CI, and secondly, that CI is driven by the teams themselves (SI). This raises the question of what role leadership should take in this improvement process.

Our evidence shows that the size of the prize is very significant. Well implemented SI programmes can deliver significant and sustained improvement in metrics that underpin your time to value (TTV) – for example:

However, achieving these goals is hard and requires sustained effort. Technology leadership needs to give teams the tools (and encouragement) to own and drive the self-improvement process. Without constant support, many teams will not have the time or inclination to drive their own self-improvement as they strive to meet their short-term delivery objectives.

The tools needed for effective Agile team Self-Improvement

The principle of team Self-Improvement (SI) is simple and powerful, but very hard to deliver effectively. It requires four important things:

  1. A serious long-term commitment and sponsorship from both the leadership team and the teams/squads themselves, with effort and resources committed over a prolonged period of time to realise iterative improvement
  2. An agreed, objective set of metrics to track progress – making sure that these metrics are actually the right ones, i.e. deterministic of the desired outcome
  3. A means for teams to easily track these metrics and set targets (with targets calibrated against internal and external benchmarks)
  4. An embedded process within teams to make the necessary changes; celebrate success and move on.

Agile teams are almost always busy and resource-constrained. As a result, the intention of always improving (in a structured and demonstrable way) often loses out to the pressures of the day job – delivering to the evolving demands of the business.

In our experience, successful SI requires coordination and stewardship by the technology leadership team, whilst empowering teams to own and drive the activities that result in incremental improvement. It therefore needs to take the form of a structured, long-term and well-implemented SI programme.

Implementing an effective team Self-Improvement programme

Self-Improvement needs a serious commitment from the leadership team within engineering to provide teams with the tools they need to self-improve.

This will not be possible if the organisation lacks the BI tools to provide the necessary metrics and reporting over the full delivery lifecycle. Firstly, the reporting found within common workflow management tools like Jira is not optimised to provide the level of reporting that many teams require for an effective SI programme. Secondly, teams use a number of tools across the delivery cycle, which often results in data existing in siloes and not integrated to reflect a full view of end-to-end delivery.
Teams should seek out BI tools that address these challenges. The right tools will give product and engineering teams meaningful metrics and reporting around which to build robust SI programmes.

Metrics for SI

As mentioned in the intro, selecting and agreeing metrics is often the most contentious issue. Many programmes fail simply because teams could not agree or gain buy-in on meaningful sets of metrics or objectives.

By its very nature, Agile encourages a myriad of different methodologies and workflows, which vary by team and company. However, this does not mean that it’s impossible to achieve consensus on metrics for SI.

We believe the trick is to keep metrics simple and deterministic. Complex metrics will not be commonly understood and can be hard to measure consistently, which can lead to distrust. And deterministic metrics are key as improving them will actually deliver a better outcome.

As an example – you may measure Lead Times as an overall proxy of Time to Value, but Lead Time is a measure of the outcome. It’s also important to measure the things that drive/determine Lead Times, levers that teams can actively manage in order to drive improvements in the overarching metric (e.g. determinant metrics like Flow Efficiency).
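A small sketch of that distinction, with invented sprint-level figures – Lead Time is the outcome we report, while Flow Efficiency is a lever the team can act on, and tracking them side by side shows whether moving the lever is actually moving the outcome:

```python
# Invented sprint-level figures: as flow efficiency (the determinant) improves,
# lead time (the outcome) should fall. Tracking both pairs the lever with the result.
sprints = [
    {"sprint": "S1", "flow_efficiency": 0.30, "lead_time_days": 21},
    {"sprint": "S2", "flow_efficiency": 0.35, "lead_time_days": 18},
    {"sprint": "S3", "flow_efficiency": 0.42, "lead_time_days": 15},
    {"sprint": "S4", "flow_efficiency": 0.48, "lead_time_days": 12},
]

for s in sprints:
    print(f'{s["sprint"]}: flow efficiency {s["flow_efficiency"]:.0%}, lead time {s["lead_time_days"]}d')

improvement = sprints[0]["lead_time_days"] - sprints[-1]["lead_time_days"]
print(f"Lead time improved by {improvement} days as flow efficiency rose.")
```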

The deterministic metrics we advocate are designed to underpin team SI, in order to steadily improve Agile engineering effectiveness.

The (determinant) metrics are grouped into six key areas. These are:

  1. The key enabler – best practice and tool use

A key push-back is often that tool usage (e.g. Jira) is so inconsistent that the data collected from within it is not meaningful (the old adage of “garbage in, garbage out”). However, there are some simple disciplines, which can themselves be measured, that greatly improve data quality. In addition to focusing on these best practice “hygiene” metrics, teams can build their self-improvement initiatives around five further determinant metric sets:

  2. Sprint disciplines and consistent delivery of sprint goals (Scrum Agile)
  3. Proportion of time spent/velocity/efficiency of writing new features (productive coding)
  4. Quality and failure rates, and therefore…
  5. Proportion of time spent/efficiency of bug fixing and re-work
  6. Teamwork, team wellness and the ability to collaborate effectively.

From these six areas, we believe these are some of the most common and meaningful metrics around which a team can build an effective self-improvement programme:

In our experience, a highly effective Agile SI programme can be built around these metric sets. We’ve also found that an integrated view of the full delivery cycle across the right tools, in a single view and underpinned by these core metrics, reveals key areas that can be optimised, i.e. low-hanging fruit that can materially improve Time to Value.

Metrics should be available in near real-time to the teams, with minimal effort. If teams have to collect data manually, the overall initiative is likely to fail.

A sample SI Dashboard

When all team members have a near real-time view of the metrics that they’ve signed up to, these become a core part of daily stand-ups and sprint retrospective reviews.

The aim is not to compare these metrics across teams – instead the key aim is to track improvement over time within the team itself. Leadership teams need to remain outcome focused, whilst enabling and empowering teams to identify and make incremental improvements that will improve those outcomes.

Running the SI programme

Team SI is unlikely to take place consistently and sustainably across teams, without committed leadership. The SI programme needs to be formally established on a monthly cycle of team target-setting, implementation, review, and celebration of success (see below).

Team Leaders and Scrum Masters need to strike the right balance of sponsoring, framing and guiding the programme with giving teams the time and space they need to realise improvements.

SI is designed to be a positive and motivating process – and it is vital that it is perceived as such. A key element of this is remembering to celebrate success. It’s easy to “gamify” SI and find opportunities to recognise and reward the most-improved teams, competence leaders, centres of excellence, and so on.

Target setting

Questions often arise around target setting and agreeing what success looks like. Some organisations opt only to track individual teams’ improvement over time (and deliberately not make comparisons between teams). Others find benchmarks useful and divide them into three categories:

  1. Internal benchmarks (e.g. measures taken from the most mature Agile teams and centres of excellence within the organisation)
  2. External competitor/comparator benchmarks – some tools provide anonymised benchmarks across all metrics from similar organisations
  3. Agile best-practice benchmarks – these are often hard to achieve but are obvious targets as the SI programme develops.

The SI programme leader/sponsor can view progress against these benchmarks and look back over the duration of the programme to view the rate of improvement.

In summary, an effective team SI programme requires:


  1. formal sponsorship by technology leadership in the form of recognition and a suitable framework to manage the long-term process; and crucially
  2. a set of meaningful and agreed Agile metrics that underpin the process of SI and track performance improvement over time; and
  3. a means to surface these metrics in near real time, with minimum/no effort involved for the teams themselves.

The following article was originally published in JAXenter:

https://jaxenter.com/second-age-agile-159373.html

As all LOTR fans will know, the Second Age in Middle Earth lasted 3,441 years – from the defeat of Morgoth to the first defeat of Sauron. Lots happened in Middle Earth during the period and many chickens came home to roost (metaphorically speaking).

In many ways, Agile is entering a similar age. It’s been more than 15 years since the Agile Manifesto was conceived and adoption has been very rapid. It is estimated that 88 percent of all US businesses are involved in some form of Agile methodology in some part of their technology operations.

As such, Agile finds itself approaching the top of the “S” of the adoption curve (see below). As with all innovations approaching late-adoption maturity, the honeymoon period is over and businesses working in Agile are under increasing pressure to demonstrate that their Agile transformations are successful and adding real business benefits.

The lack of Agile metrics platforms

Technology teams are very familiar with measuring output, performance and quality and are not short of quant data. Surprisingly, however, there are very few BI solutions available that aim to measure the overall effectiveness of Agile software development teams across the full delivery cycle – from ideation to deployment.

The solutions out there today tend to focus on one element within the overall Agile process – e.g. code quality tools (focused on coding itself), and workflow management plug-ins that look at certain aspects of the development process yet often exclude pre-development and post-development stages.

Indeed, the “Agile metrics platforms” or “Agile BI” sector is so embryonic that analysts like Gartner do not yet track it. The closest related sector that Gartner analyses is “Enterprise Agile Planning Tools”, which, although related, is focused on planning rather than the efficiency and quality of the output.

Fortunately, newer solutions are emerging that vie to answer this unmet need. To create a balanced set of Agile metrics that track overall effectiveness, look for systems that ingest and analyse data from the variety of tools that software development teams use in their everyday work.

What should you measure?

It is reasonable to assume that all Agile transformations broadly aim to deliver against the Agile Manifesto’s number one objective: the early and continuous delivery of value. As the Manifesto states:

“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”.

The Manifesto’s subsequent aims support this guiding principle and can be summarised as:

The challenge and key question is how do you measure these objectives and demonstrate that “Agile is working”? This opens up the contentious subject of Agile metrics.

Navigating the politics of Agile metrics

Why are Agile metrics contentious? There are many protagonists within large technology teams. Each has their own distinct views as to:

This makes selecting Agile metrics extremely important. Unless the process involves the key protagonists (from teams to delivery managers to heads of engineering) the metrics may not be accepted or trusted. In those circumstances, there is little point in collecting metrics, as teams will not drive to improve them and show the desired progress.

Meaningful Agile metrics

This is our take on a meaningful set of metrics for Agile development teams to track and demonstrate improvement over time.

As the table shows, some of the metrics are used by the team only and will not be compared across teams. Some can be aggregated across teams in order to give managers an overall view of progress.

Team view Agile metric set

These metrics are by no means definitive and readers will doubtless disagree with some. Since they have been shown to be deterministic of outcomes, however, they provide a very useful starting point for development teams in this ‘Second Age of Agile’.

Plandek, the rapidly growing SaaS provider of Agile BI, has recently closed a $3.3m funding round led by Perscitus LLP and a group of experienced private investors and family offices.  This comes a year after an initial fundraise of $2.7m in January 2018.

Plandek was co-founded in 2017 by Dan Lee (founder of Globrix) and Charlie Ponsonby (founder of Simplifydigital).  It has struck a chord with its unique take on Agility metrics for software development teams, accessed in near real-time via the Plandek dashboard.

Most large enterprises now apply an Agile methodology to some or all of their software development.  However, measuring the efficiency of Agile software development teams remains a contentious subject.  Plandek is unique in providing a library of meaningful Agility metrics suitable for all levels within the technology organisation – from the teams themselves, to the CIO.

Plandek is growing very rapidly and is already working with global organisations such as Reed Elsevier, Worldpay and News Corporation – as well as a growing portfolio of technology-led growth businesses such as Arcus Global and Secret Escapes in the UK.

Charlie Ponsonby, Co-CEO of Plandek, welcomed another vote of confidence from investors, saying: “Having refined our go-to-market strategy in 2018, we are delighted to have the investment required to accelerate our growth, focused on larger enterprise clients across the UK, Europe and North America”.

———————————————————————————————–

For enquiries please contact Charlie Ponsonby: cponsonby@plandek.com

February 2019 – Plandek debuts new UK advertising campaign

Plandek, the rapidly growing SaaS provider of Agile BI, analytics and reporting, debuted its first UK advertising campaign in Canary Wharf, London, in February.

The “FrAgile” campaign dramatizes the challenge of implementing large-scale Agile software development methodologies in large organisations.  It alludes to the need for meaningful metrics to measure progress in the journey towards Agile “engineering excellence”.  Without meaningful metrics down to the team level, teams are unable to self-diagnose and self-improve their processes – with the result that Agile can become FrAgile.

The digital outdoor campaign is focused on the Greater London area and supports a direct marketing campaign targeted at large enterprise CTOs and CIOs.

Charlie Ponsonby, Co-CEO of Plandek commented:

“We are delighted to be launching our first commercials in the UK market.  Plandek is really striking a chord with CTOs under pressure to ensure that their Agile software delivery teams deliver great results. And the campaign dramatizes how Plandek’s innovative Agile BI platform can help meet that challenge.”

Plandek (www.plandek.com) is an Agile BI, analytics and reporting platform that helps technology teams deliver Agile software development projects more productively and predictably.

Plandek’s big data platform mines the data history from dev teams’ toolsets (e.g. Jira, Git) to reveal and track levers – right down to team and individual level – that are highly predictive of project productivity and predictability, in order to significantly improve Agile project outcomes.

Plandek is based in London and global clients include Reed Elsevier, News Corporation and Worldpay.