For April 2023’s instalment of our Client Webinar Series, two of Plandek’s team members sat down to discuss how one of Plandek’s Scrum teams – StreamWeaverz – has utilised the Plandek platform to achieve their own goals.
This was a great session that explored StreamWeaverz’s experience of using a platform to maintain – and develop – that same platform.
This webinar was hosted by Agile Success Manager Rachel Reyes, and she was in discussion with senior software engineer, team lead and manager Colin Eatherton. You can watch the full webinar on-demand now by starting the video below.
This Scrum team, having already completed 40 Sprints over the past 2 years, is made up of:
In this setup, the QA also acts as a proxy Scrum Master. This means she runs the daily standups and the retrospectives, both of which provide key insights into the progress and continuous improvement of the whole Scrum team.
Regarding the team’s evolution and continuous growth, Colin commented:
“I would say that we’re definitely showing signs of [Agile DevOps] maturity. The cool thing is, although some metrics might not be going our way all the time, we can demonstrate that this team is making a journey of continuous improvement.”
According to Colin, the biggest enabler for the team has been space for trial and error along the way:
“First and foremost, we’ve had a lot of space from the organisation. Managers [gave] us a lot of autonomy to start this journey, and take charge of this journey the way we want to. So we felt trusted.”
Additionally, Colin explained the impact of relevant metrics on the performance of the team. Namely, when the team was empowered to choose metrics that would work for that team specifically, they could work together to find a common definition of success.
Various core team metrics make up StreamWeaverz’s broader definition of success. One of the most important of these is Sprint Target Completion.
Throughout the webinar, Colin returned repeatedly to Sprint completion, the internal and external expectations placed on the team, and the direct link the team has to the customer through their product deployment schedule.
During question time, one attendee asked Colin an interesting question regarding reliability: how do you calculate it?
“Whatever we did, we wanted to do what we said we were going to do. So we wanted to make sure we matched the expectations of the team.
The way we calculate it is: of the Tickets or Story Points we committed to at the start of the Sprint, how many did we complete? Say we completed 77% of the Story Points this Sprint, that’s our reliability.
Then we just have to downgrade our version of success to 80% as opposed to 100%, so then we’re a bit more likely to succeed [with the specific Sprint].
We also found that a lower completion rate helps us to ready an item or 2 for the following Sprint, reducing QA bottlenecks by having something ready for QA at the start of the Sprint.”
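For readers who want to reproduce the arithmetic, here is a minimal sketch in Python. This is an illustration only, not Plandek’s implementation: the function name sprint_reliability and the committed total of 100 Story Points are hypothetical assumptions, while the 77% completion figure and the 80% success target come from Colin’s quote.

# A minimal sketch of the reliability calculation described above.
# Assumption: commitments and completions are tracked as Story Point totals.

def sprint_reliability(committed_points: int, completed_points: int) -> float:
    """Share of committed Story Points completed within the Sprint."""
    if committed_points == 0:
        return 0.0  # avoid dividing by zero on an empty Sprint
    return completed_points / committed_points

# Hypothetical Sprint: 77 of 100 committed Story Points completed,
# measured against the team's downgraded 80% definition of success.
SUCCESS_TARGET = 0.80

reliability = sprint_reliability(committed_points=100, completed_points=77)
print(f"Reliability: {reliability:.0%}")                       # Reliability: 77%
print(f"Success target met: {reliability >= SUCCESS_TARGET}")  # False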
Staying realistic, Colin explained that the initial change was something to get used to:
“We’ve all got different habits, and we find certain things easier than others, so I think we’re more inclined to stick with what we know.”
Even with their depth of understanding of Plandek’s intelligent insights and how the platform has helped other teams achieve their goals, StreamWeaverz shared many of the same concerns as any other software development team: the dangers of chasing numbers, potential gamification, and losing sight of genuine issues by focusing too heavily on the metrics alone.
After much discussion, the StreamWeaverz team agreed that their main goal was to become – and remain – a reliable team, so they added Plandek to their everyday toolset.
As Colin explained, the results speak for themselves:
“We got through those areas of resistance, and [the Plandek platform] has directly led to useful outcomes. There might be a lot of metrics, but you just focus on a couple of metrics or behaviours, and that drives change throughout.”
Ironically, attempts to gamify the metrics that the team are tracking through Plandek resulted in what Colin described as ‘fantastic knock-on effects’.
“The team had to close 50 Tickets a month, and at the time we were producing around 20. So we wanted to increase Ticket throughput, and in terms of gamifying that, you could just make tons and tons of small Tickets with single Story Points.
But it wasn’t a bad thing. It was embraced. We found that our core metric, reliability, went up by producing smaller Tickets because they encapsulated less work and risk. Risk and average return rate went down, and rework went down, too.”
In addition to this positive development, the team is steadily increasing its Agile DevOps maturity and trending towards greater predictability in its collective deliverables.
Namely, the team can keep an eye on the bigger picture using the organisation’s North Star metrics while fine-tuning their behaviours and habits using team-based metrics. This, in turn, results in the sustained and reliable acceleration of StreamWeaverz’s value delivery.