
3. Key watch-outs when applying software delivery metrics

The role of metrics in the ‘philosophy’ of Agile software delivery

As mentioned above, the Agile Manifesto is based on some core democratic principles that empower individuals and teams to be self-determining and to define their own work schedules and processes.

As such, a core element of this Agile culture has been a healthy scepticism of top-down metrics and analytics. Many Agile practitioners have tended to view metrics as potentially flawed in two key ways:

  • they are somehow contrary to the ‘spirit’ of self-determination and the individual;
  • they are often unrepresentative, inaccurate and too easily gamed.

Hence, despite the current explosion of interest in software delivery metrics, scepticism about them remains prevalent in parts of many organisations, and embedding metrics in Agile delivery organisations can therefore be challenging.

To succeed, it needs:

  1. Strong technology leadership and sponsorship of a data-led approach to software delivery.

  2. The ability to easily surface meaningful and accurate metrics in near real-time, which teams and individuals understand and trust.

  3. A framework and methodology to embed metrics across the delivery organisation and shift behaviour so that teams can set their own targets and self-improve around the metrics in question.

‘Top-down’ versus ‘Bottom-up’ metrics in Agile software delivery – the key role of the team!

In our experience (across clients of all sizes and stages of Agile DevOps maturity), ‘top-down’ metrics frameworks, conceived and led by management, fail to deliver much business benefit for three reasons:

  1. Top-down measurement and the feeling that ‘Big Brother is watching’ is completely contrary to the spirit and values of Agile and can be extremely demotivating to engineering teams.

  2. Whilst top-down metrics may initially give management greater visibility across and within teams, the data very quickly becomes inaccurate unless the teams themselves are very involved in configuring and managing the metrics in question.

  3. There is little point in managers reviewing metrics that the teams themselves have not engaged with: managers will see no improvement in those metrics, because improvement is in the hands of the teams.

So, we are strong advocates of a balanced, ‘top-down meets bottom-up’ approach to software delivery metrics. This means:

  • teams should be very involved in the metric selection and configuration – hence the mantra that such metrics should be ‘loved by teams and relied on by managers’;
  • there is a balance struck between ‘North Star’ metrics (the key metrics that technology leadership would like all teams to align around) – and team-level metrics selected by the teams themselves;
  • teams should therefore have the ability to choose metrics that suit their individual circumstances and objectives;
  • teams should have their own metrics dashboards that they use in daily stand-ups and retros as their own trusted metrics increasingly become part of their daily routines; and
  • the business benefit is delivered, as the teams themselves use the metrics to self-improve over time (at the team level).

Value Stream Management platforms may include software delivery metrics to track and manage value delivery within value streams. Unfortunately, these metrics frameworks tend to be top-down in nature, with a series of aggregated metrics (like Lead Time to Value) that are collected and reviewed by management.

Plandek is an exception in that it is a complete software delivery metrics solution – that combines the ability to provide Value Stream metrics with a strong focus on team metrics, team involvement and team metrics dashboards.
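An aggregated metric like Lead Time to Value ultimately boils down to the elapsed time between two timestamped events per work item. A minimal sketch of that calculation, using hypothetical ticket data (in practice the timestamps would come from a workflow tool and a deployment pipeline, not be hard-coded), might look like:

```python
from datetime import datetime
from statistics import median

# Hypothetical ticket records: real data would come from a workflow
# tool (e.g. Jira) joined with deployment events, not hard-coded dicts.
tickets = [
    {"key": "APP-101", "created": "2024-03-01T09:00:00", "deployed": "2024-03-08T17:30:00"},
    {"key": "APP-102", "created": "2024-03-03T10:15:00", "deployed": "2024-03-06T12:00:00"},
    {"key": "APP-103", "created": "2024-03-04T08:45:00", "deployed": "2024-03-12T16:20:00"},
]

def lead_time_days(ticket):
    """Elapsed days between ticket creation and deployment to production."""
    start = datetime.fromisoformat(ticket["created"])
    end = datetime.fromisoformat(ticket["deployed"])
    return (end - start).total_seconds() / 86400

durations = [lead_time_days(t) for t in tickets]
print(f"Median lead time: {median(durations):.1f} days")
```

Even this toy version shows why configuration matters: the choice of start and end events (ticket creation versus ‘work started’, first deployment versus ‘live in production’) changes the metric materially, and that choice must match each team’s actual workflow.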

Data sources for effective software delivery and analytics

Software delivery is a complex, interrelated process. Useful analytics requires an end-to-end view across Pre-Development, Development, Integration, Deployment and Live Management. As an example, Figure 3 below shows the core Plandek data integrations.

Figure 3: Multiple systems integrations required for an end-to-end view of the software delivery process

Collating, flattening and analysing data from these disparate sources is complex. It can be done manually or by applying a generic BI tool like Tableau, but it is resource-intensive and prone to failures/errors as the underlying systems constantly change.
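To give a flavour of the collation involved, the sketch below joins hypothetical workflow items with commits from a code repository, using the ticket key embedded in each commit message. The field names, data and join heuristic are illustrative assumptions, not any particular tool’s API:

```python
from collections import defaultdict

# Illustrative only: real data would be fetched from each tool's API
# (workflow tracker, code repository, CI/CD system), and field names
# vary by tool and configuration.
workflow_items = [
    {"key": "APP-7", "status": "Done"},
    {"key": "APP-9", "status": "In Progress"},
]
commits = [
    {"sha": "a1b2c3", "message": "APP-7 fix login redirect"},
    {"sha": "d4e5f6", "message": "APP-7 add regression test"},
    {"sha": "0789ab", "message": "APP-9 draft new billing flow"},
]

# Flatten the two sources into one view, joined on the ticket key.
commits_by_key = defaultdict(list)
for c in commits:
    ticket_key = c["message"].split()[0]  # naive: assumes the key leads the message
    commits_by_key[ticket_key].append(c["sha"])

for item in workflow_items:
    item["commits"] = commits_by_key.get(item["key"], [])
    print(item)
```

Each extra source (builds, deployments, incidents) adds another join like this, each with its own identifiers and edge cases, which is why hand-rolled pipelines tend to break as the underlying systems change.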

As such, a specialist end-to-end software delivery analytics tool like Plandek is very often the only viable solution.


Data security

Data security is critical in software delivery analytics, which requires access to sensitive, business-critical systems (including, for example, proprietary project information held within workflow management tools and source code held in code repos). Hence, information security is always a key priority.

Most delivery analytics BI solutions are cloud-based, so much care is needed in selecting a data-secure solution. Plandek is used by infosec-sensitive organisations as it addresses infosec in four ways:

  1. It is securely architected in the European Google Cloud.
  2. It cleans the base data to only analyse non-sensitive meta-data (e.g. it removes labels from Jira tickets).
  3. It encrypts data before any data transfer.
  4. It offers an on-premises data gatherer solution so that all sensitive data is held on-premises and only summary data presentation is undertaken in the cloud.
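The metadata-cleaning idea in point 2 can be illustrated with a simple field whitelist. This is a sketch under assumed field names, not Plandek’s actual implementation: only the non-sensitive fields needed for metrics survive, while free text and labels are dropped before any transfer.

```python
# Whitelist of non-sensitive metadata fields needed for metrics.
# Field names here are hypothetical.
ALLOWED_FIELDS = {"key", "status", "created", "resolved", "story_points"}

def scrub(ticket: dict) -> dict:
    """Return a copy of the ticket containing only whitelisted metadata."""
    return {k: v for k, v in ticket.items() if k in ALLOWED_FIELDS}

raw = {
    "key": "APP-42",
    "status": "Done",
    "summary": "Customer X cannot export invoices",  # sensitive free text
    "labels": ["client-x", "urgent"],                # dropped before transfer
    "created": "2024-05-01T08:00:00",
    "resolved": "2024-05-03T15:00:00",
}

print(scrub(raw))
```

A whitelist (rather than a blacklist) is the safer design choice here: newly added fields are excluded by default until someone explicitly decides they are safe to analyse.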

We recommend taking all four of these considerations into account when selecting a software delivery analytics solution.


Metrics and analytics configurability

The ‘devil really is in the detail’ with software delivery analytics for two key reasons:

  • First, the software delivery process is extremely complex (especially at scale) and may involve separate system stacks, multiple systems and system instances, many different workflows and related operational complexity. As such, metrics are meaningless unless they can be very carefully configured to take into account the context in which they are applied. Very often, software delivery analytics initiatives fail because the metrics look plausible but, when scrutinised by the teams involved, are discarded as they do not accurately reflect the situation ‘on the ground’.
  • Second (and related to the first point), ‘ownership and trust’ are critical in any metrics roll-out. There may be scepticism among some members of the engineering team as to the suitability of metrics. As such, ownership and trust in metrics will not be achieved across all teams unless the metrics very accurately reflect the idiosyncrasies of each team’s particular situation/workflow. Indeed, if users start to distrust the metrics, any hope of adoption is doomed to failure.
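One common form this configurability takes is letting each team map its own workflow statuses onto a shared metric definition, so the same metric can be computed consistently while respecting each team’s real process. A sketch, with hypothetical team names and statuses:

```python
# Hypothetical per-team configuration: each team maps its own workflow
# statuses onto a shared definition of "work started" / "work finished".
TEAM_CONFIG = {
    "payments": {"start": "In Development", "end": "Released"},
    "mobile":   {"start": "Doing",          "end": "Done"},
}

def cycle_time_bounds(team, status_history):
    """Pick the timestamps that bound cycle time for this team's workflow.

    status_history is a list of (status, iso_timestamp) transitions.
    """
    cfg = TEAM_CONFIG[team]
    start = next(ts for status, ts in status_history if status == cfg["start"])
    end = next(ts for status, ts in status_history if status == cfg["end"])
    return start, end

history = [
    ("Doing", "2024-06-01T09:00:00"),
    ("Review", "2024-06-02T14:00:00"),
    ("Done", "2024-06-03T11:00:00"),
]
print(cycle_time_bounds("mobile", history))
```

The point is that the mapping lives in configuration the team owns, rather than being hard-coded by management; that ownership is what earns trust in the resulting numbers.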

It is very important, therefore, to check the integrity of the metrics surfaced before attempting roll-out and adoption.

The BI tool needs to be flexible enough to ensure that metrics can be configured to accurately represent the truth and gain the trust of users at the team level. This can be done during a technical proof of concept, pre-roll-out.