Delivering software is hard. I know as well as anyone just how difficult it can be. I run Engineering at a company called Plandek; prior to this, I led software teams delivering for companies ranging from startups to FTSE 100 firms. Like most leaders of engineering teams, I’ve struggled to balance my desire to give teams a great environment to work in – including a high degree of autonomy and responsibility – with being accountable to stakeholders, both internal and external.
Stakeholder expectations often exceed the capacity available – but it is hard to know by how much, and to reset those expectations at a realistic level. Most engineers have also worked on projects where managing tech debt was sacrificed in the name of delivering software faster, then experienced the frustration of trying to explain why new development needs to slow down to stabilise the codebase. We’ve all felt the difficulty of explaining to colleagues outside engineering why a project hasn’t been completed on schedule and when it might be released.
For most managers, addressing these challenges results in either deep frustration with engineering from your stakeholders or a reversion to command-and-control management – which invariably alienates your top-performing engineers. However, there is a better way: a transparent, outcome-based measurement system.
Let’s first address the elephant in the room: “Is it possible to design a perfect, objective measurement system?” Clearly, the answer is no. However, Gilb’s Law states that “Anything you need to quantify can be measured in some way that is superior to not measuring it at all”. Your instincts and good judgement will have helped you rise to a leadership role, and there is no replacement for these talents. However, a well-thought-out set of metrics will help you direct your attention and create a culture of greater responsibility in your team.
So, how do you design a measurement system? Imagine that you are an Engineering Manager at Phoenix Bank, leading a team which is tasked with improving the stability and quality of a long-neglected legacy component in the bank’s architecture.
First, you sit down and identify the metrics that best represent the ultimate outcome. These will likely be clear from the demands your business stakeholders place on you. In your one-on-ones with your manager, you have often heard that “important functionality of this component is often broken” and “it takes a long time for bug fixes to go from report to production”. This focuses your mind on tracking the number of Unresolved Bugs for your component, and on tracking the Lead Time from a bug being reported to its fix being deployed to production.
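These two outcome metrics are simple to compute once you have timestamps for when a bug was reported and when its fix reached production. A minimal sketch, assuming hypothetical bug records (the field names are illustrative, not from any particular issue tracker):

```python
from datetime import datetime

# Illustrative bug records: each has a "reported" timestamp and, once the
# fix reaches production, a "deployed" timestamp (None while still open).
bugs = [
    {"id": 1, "reported": datetime(2023, 1, 2), "deployed": datetime(2023, 1, 9)},
    {"id": 2, "reported": datetime(2023, 1, 3), "deployed": None},  # still open
    {"id": 3, "reported": datetime(2023, 1, 5), "deployed": datetime(2023, 1, 8)},
]

# Unresolved Bugs: anything reported but not yet deployed to production.
unresolved = sum(1 for b in bugs if b["deployed"] is None)

# Lead Time: report-to-production duration, averaged over resolved bugs.
lead_times = [(b["deployed"] - b["reported"]).days for b in bugs if b["deployed"]]
avg_lead_time = sum(lead_times) / len(lead_times)

print(unresolved)     # 1
print(avg_lead_time)  # (7 + 3) / 2 = 5.0
```

In practice you would pull these records from your issue tracker and deployment pipeline; the point is that both metrics reduce to a count and a duration.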
But it is hard to lead your team towards an outcome with only that end-state measurement defined – it is like trying to sail across an ocean by asking yourself if you’re on the other side yet. To solve this problem, you first need to know where in the ocean you are. That is, you need to measure at intermediate points in your process, identifying metrics you can track that indicate progress towards your ultimate goal. These are known as leading indicators.
To find these leading indicators, you gather your team to explain why and how you’re going to measure progress. This lets you share the business context and the measurements you have selected to show progress against your business goals, and lets your team identify the factors contributing to the business problem, which can then be measured and targeted for improvement. This transparency and openness also helps prevent your team from feeling like they are being monitored and checked up on.
In this meeting, your team tells you that technical debt in the component is causing a lot of regressions. You therefore identify cognitive complexity as your first leading indicator, and you can prove the business benefit of resolving technical debt by tracking changes in the number of bugs created. These two metrics form your first two leading indicators.
A member of your team points out a clear relationship between the number of bugs being fixed and the number of unresolved bugs, so you add bugs fixed per week to your leading indicators for Unresolved Bugs. You agree to set a target for this number and review it regularly to find ways to increase it.
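The relationship between this leading indicator and the outcome metric is simple arithmetic: the unresolved backlog grows whenever bugs are created faster than they are fixed. A small sketch with made-up weekly counts:

```python
# Illustrative weekly counts: bugs created vs. bugs fixed. When creation
# outpaces fixing, the unresolved backlog grows week on week.
created_per_week = [8, 7, 9, 6]
fixed_per_week = [5, 6, 6, 8]

backlog = 40  # assumed starting count of unresolved bugs
trend = []
for created, fixed in zip(created_per_week, fixed_per_week):
    backlog += created - fixed  # net change for the week
    trend.append(backlog)

print(trend)  # [43, 44, 47, 45] - backlog only shrinks in the final week
```

Plotting this trend alongside the fix rate is what makes the leading indicator actionable: a rising backlog tells you the weekly fix target is not yet high enough.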
One team member then complains about a high degree of context switching caused by new high-priority bugs arriving from business stakeholders. This demand is expected given the stability issues with your business-critical component, but it produces long cycle times because work is frequently started and then set aside as focus shifts to a different issue. You therefore add a measurement of Work In Progress to your leading indicators. You also agree to meet your business stakeholders and use the new measurements to show the impact of these context switches, aiming to reduce them in future.
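Work In Progress is the cheapest of these metrics to compute: it is just a count of items in an in-progress state, compared against an agreed limit. A minimal sketch, assuming a hypothetical board snapshot (the ticket IDs, status names, and limit are all illustrative):

```python
from collections import Counter

# Hypothetical board snapshot: ticket id -> current status.
tickets = {
    "BUG-101": "in_progress",
    "BUG-102": "in_progress",
    "BUG-103": "todo",
    "BUG-104": "in_progress",
    "BUG-105": "done",
}

WIP_LIMIT = 2  # agreed team limit, chosen here for illustration

# Count tickets currently in progress.
wip = Counter(tickets.values())["in_progress"]

print(wip)              # 3
print(wip > WIP_LIMIT)  # True -> finish something before starting new work
```

A breach of the limit is the trigger for a conversation, not an automated block: it prompts the team to finish or consciously park an item before picking up the next high-priority request.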
Back at your desk, you look at your dashboards of Lead Time and Unresolved Bugs and can clearly see technical debt and context switching creating a situation where more bugs are being created than fixed. So you budget time to resolve tech debt and change the process to introduce WIP limits. Over the next few weeks, you see gradual improvement in the stability of your component. Your one-on-ones with your manager become easier as the business impact reduces and it becomes clear how much you have improved the state of your component.
The morale of your team also improves as they leave firefighting mode and create a codebase that they’re proud to work on. They begin proactively identifying areas of concern in the codebase and you give the team the responsibility to identify and resolve these issues without your intervention.
This process leaves you with a first iteration of a simple measurement system which your team feels invested in, and which helps you focus on achieving positive business outcomes. It’s very unlikely that you will get all of the measurements right the first time, so it is important to revisit the chosen metrics regularly to ensure that they are still driving the right changes and that your theories about the leading indicators remain solid.
This has shown how you can use data to inform decisions for one use-case, and given you some principles that you can apply to your own set of problems. Using data to inform leadership decisions and align engineering teams around objectives is in its infancy, but I believe it poses several interesting questions. Without data, how can you ever be sure that management decisions have had a positive impact? How do you align people around goals? What does a high-performing team look like? How can you understand what is holding your teams back?
About the Author
Reuben Sutton is Head of Engineering at Plandek. He is passionate about helping companies use data to improve their decision making. If you want to discuss how you can use data to better deliver software, you can contact Reuben via email or message him on the Data Driven Delivery Slack channel.