The following article was posted by InfoQ: https://www.infoq.com/articles/metrics-agile-teams/

The importance of metrics to Agile teams

We are fortunate to have the opportunity to work with a great variety of engineering teams – from those in start-ups to very large, distributed enterprises.

Although definitions of “engineering excellence” vary in these different contexts, all teams aspire to it. They also share the broad challenge of needing to balance the “day job” of delivering high quality, high value outcomes against the drive to continually improve.

Continuous Improvement (CI) inherently requires metrics against which to measure progress. These need to be balanced and meaningful (i.e. deterministic of improved outcomes). This creates two immediate issues:

We view CI as vital in healthy and maturing Agile environments. Hence metrics to underpin this process are also vital. However, CI should be owned and driven by the teams themselves so that teams become self-improving. Ergo, CI programmes become SI (Self-Improvement) programmes.

This article focuses on how teams can implement a demonstrably effective SI programme even in the fastest-moving and most resource-constrained Agile environments, so that they remain self-managing, deliver value quickly, and continue to improve at the same time.

The Size of the Prize

The concept of CI has been around for a long time. It was applied perhaps most famously in a business context in Japan and became popularised with Masaaki Imai’s 1986 book “Kaizen: the Key to Japan’s Competitive Success.”

The CI principle is highly complementary to core Agile principles. Indeed, the Agile Manifesto states:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

There are two key themes here – firstly, CI and secondly, that CI is driven by the teams themselves (SI). This raises the question of what role leadership should take in this improvement process.

Our evidence shows that the size of the prize is very significant. Well implemented SI programmes can deliver significant and sustained improvement in metrics that underpin your time to value (TTV) – for example:

However, achieving these goals is hard and requires sustained effort. Technology leadership needs to give teams the tools (and encouragement) to own and drive the self-improvement process. Without constant support, many teams will not have the time or inclination to drive their own self-improvement as they strive to meet their short-term delivery objectives.

The tools needed for effective Agile team Self-Improvement

The principle of team Self-Improvement (SI) is simple and powerful, but very hard to deliver effectively. It requires four important things:

  1. A serious long-term commitment and sponsorship from both the leadership team and the teams/squads themselves – and requires effort and resources over a prolonged period of time to realise iterative improvement
  2. An agreed, objective set of metrics to track progress – making sure that these metrics are actually the right ones, i.e. deterministic of the desired outcome
  3. A means for teams to easily track these metrics and set targets (with targets calibrated against internal and external benchmarks)
  4. An embedded process within teams to make the necessary changes, celebrate success, and move on.

Agile teams are almost always busy and resource-constrained. As a result, the intention of always improving (in a structured and demonstrable way) often loses out to the pressures of the day job – delivering to the evolving demands of the business.

In our experience, successful SI requires coordination and stewardship by the technology leadership team, whilst empowering teams to own and drive the activities that result in incremental improvement. It therefore needs to take the form of a structured, long-term and well-implemented SI programme.

Implementing an effective team Self-Improvement programme

Self-Improvement needs a serious commitment from the leadership team within engineering to provide teams with the tools they need to self-improve.

This will not be possible if the organisation lacks the BI tools to provide the necessary metrics and reporting over the full delivery lifecycle. Firstly, the reporting found within common workflow management tools like Jira is not optimised to provide the level of reporting that many teams require for an effective SI programme. Secondly, teams use a number of tools across the delivery cycle, which often results in data existing in silos, not integrated to reflect a full view of end-to-end delivery.
Teams should seek out BI tools that address these challenges. The right tools will give product and engineering teams meaningful metrics and reporting around which to build robust SI programmes.

Metrics for SI

As mentioned in the intro, selecting and agreeing metrics is often the most contentious issue. Many programmes fail simply because teams could not agree or gain buy-in on meaningful sets of metrics or objectives.

By its very nature, Agile encourages a myriad of different methodologies and workflows which vary by team and company. However, this does not mean that it’s impossible to achieve consensus on metrics for SI.

We believe the trick is to keep metrics simple and deterministic. Complex metrics will not be commonly understood and can be hard to measure consistently, which can lead to distrust. And deterministic metrics are key as improving them will actually deliver a better outcome.

As an example – you may measure Lead Time as an overall proxy for Time to Value, but Lead Time is a measure of the outcome. It’s also important to measure the things that drive/determine Lead Times – levers that teams can actively manage in order to drive improvements in the overarching metric (e.g. determinant metrics like Flow Efficiency).
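To make the outcome/determinant distinction concrete, here is a minimal sketch in Python of how Lead Time and Flow Efficiency might be computed from a ticket’s status history. The ticket model and status names are illustrative assumptions, not any particular tool’s API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative ticket model: an ordered list of (status, entered_at)
# transitions. Which statuses count as "active work" is an assumption -
# map your own workflow states here.
ACTIVE_STATUSES = {"In Progress", "In Review"}

@dataclass
class Transition:
    status: str
    entered_at: datetime

def lead_time(transitions: list[Transition], done_at: datetime) -> timedelta:
    """Outcome metric: elapsed time from creation to done."""
    return done_at - transitions[0].entered_at

def flow_efficiency(transitions: list[Transition], done_at: datetime) -> float:
    """Determinant metric: share of lead time spent in active-work states."""
    total = lead_time(transitions, done_at).total_seconds()
    if total <= 0:
        return 0.0
    active = sum(
        (nxt.entered_at - cur.entered_at).total_seconds()
        for cur, nxt in zip(transitions, transitions[1:] + [Transition("Done", done_at)])
        if cur.status in ACTIVE_STATUSES
    )
    return active / total
```

Improving Flow Efficiency (cutting the waiting states) is a lever the team controls directly; the improvement then shows up in the Lead Time outcome.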

The deterministic metrics we advocate are designed to underpin team SI, in order to steadily improve Agile engineering effectiveness.

The (determinant) metrics are grouped into six key areas. These are:

  1. The key enabler – best practice and tool use. A common push-back is that tool usage (e.g. of Jira) is so inconsistent that the data collected from within it is not meaningful (the old adage of “garbage in, garbage out”). However, there are some simple disciplines, which can themselves be measured, that greatly improve data quality (a minimal sketch follows this list). In addition to these best-practice “hygiene” metrics, teams can build their self-improvement initiatives around five further determinant metric sets:
  2. Sprint disciplines and consistent delivery of sprint goals (Scrum Agile)
  3. Proportion of time spent/velocity/efficiency of writing new features (productive coding)
  4. Quality and failure rates and therefore…
  5. Proportion of time spent/efficiency of bug fixing and re-work
  6. Teamwork, team wellness and the ability to collaborate effectively.
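As a minimal sketch of the first area, the “hygiene” disciplines that make workflow data trustworthy can themselves be measured. The ticket fields below are illustrative placeholders, not a real Jira schema:

```python
# "Hygiene" metrics: the measurable disciplines that improve data quality.
# Field names are illustrative placeholders, not a real Jira schema.
def hygiene_metrics(tickets: list[dict]) -> dict[str, float]:
    total = len(tickets)
    if total == 0:
        return {}

    def pct(predicate) -> float:
        return 100.0 * sum(1 for t in tickets if predicate(t)) / total

    return {
        # Share of tickets carrying an estimate before work starts.
        "estimated_pct": pct(lambda t: t.get("estimate") is not None),
        # Share of tickets with a non-empty description.
        "described_pct": pct(lambda t: bool(t.get("description"))),
        # Share of tickets linked to at least one commit or branch.
        "linked_to_code_pct": pct(lambda t: bool(t.get("commits"))),
    }
```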

From these six areas, we believe these are some of the most common and meaningful metrics around which a team can build an effective self-improvement programme:

In our experience, a highly effective Agile SI programme can be built around these metric sets. We’ve also found that an integrated view of the full delivery cycle across the right tools, underpinned by these core metrics, reveals key areas that can be optimised – i.e. low-hanging fruit that can materially improve Time to Value.

Metrics should be available in near real-time to the teams, with minimal effort. If teams have to collect data manually, the overall initiative is likely to fail.
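For instance, a lightweight collector might poll Jira’s REST search API on a schedule rather than asking anyone to export data by hand. This is a hedged sketch only: the base URL, credentials, project key and JQL are placeholders, and the endpoint should be confirmed against your Jira version’s documentation:

```python
import requests

# Automated (rather than manual) metric collection: poll Jira's REST
# search API. Base URL, credentials and project key are placeholders.
JIRA = "https://your-company.atlassian.net"
AUTH = ("metrics-bot@your-company.com", "api-token")  # assumption: token auth

def fetch_recently_done_issues(project: str = "TEAM") -> list[dict]:
    jql = f"project = {project} AND status = Done AND resolved >= -14d"
    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": jql, "fields": "created,resolutiondate", "maxResults": 100},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issues"]
```

A job like this, run every few minutes, keeps the dashboard near real-time with zero effort from the team.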

A sample SI Dashboard

When all team members have a near real-time view of the metrics that they’ve signed up to, these become a core part of daily stand-ups and sprint retrospective reviews.

The aim is not to compare these metrics across teams – instead the key aim is to track improvement over time within the team itself. Leadership teams need to remain outcome focused, whilst enabling and empowering teams to identify and make incremental improvements that will improve those outcomes.

Running the SI programme

Team SI is unlikely to take place consistently and sustainably across teams, without committed leadership. The SI programme needs to be formally established on a monthly cycle of team target-setting, implementation, review, and celebration of success (see below).

Team Leaders and Scrum Masters need to strike the right balance between sponsoring, framing and guiding the programme and giving teams the time and space they need to realise improvements.

SI is designed to be a positive and motivating process – and it is vital that it is perceived as such. A key element of this is remembering to celebrate success. It’s easy to “gamify” SI and find opportunities to recognise and reward the most-improved teams, competence leaders, centres of excellence, and so on.

Target setting

Questions often arise around target setting and agreeing what success looks like. Some organisations opt only to track individual teams’ improvement over time (and deliberately not make comparisons between teams). Others find benchmarks useful and divide them into three categories:

  1. Internal benchmarks (e.g. measures taken from the most mature Agile teams and centres of excellence within the organisation)
  2. External competitor/comparator benchmarks – some tools provide anonymised benchmarks across all metrics from similar organisations
  3. Agile best-practice benchmarks – these are often hard to achieve but are obvious targets as the SI programme develops.

The SI programme leader/sponsor can view progress against these benchmarks and look back over the duration of the programme to view the rate of improvement.
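As an illustration of that review, a simple progress report might compare a team’s metric history against the three benchmark tiers. All figures below are made-up placeholders (lead time in days, lower is better):

```python
# Illustrative only: all benchmark figures are made-up placeholders.
BENCHMARKS = {
    "internal_best": 5.0,   # most mature internal team
    "external_peer": 7.0,   # anonymised peer-organisation median
    "best_practice": 3.0,   # aspirational Agile best-practice target
}

def progress_report(history: list[float]) -> str:
    """Summarise improvement since programme start and benchmarks met."""
    start, current = history[0], history[-1]
    improvement = 100.0 * (start - current) / start
    met = [name for name, target in BENCHMARKS.items() if current <= target]
    return (f"Lead time {current:.1f}d ({improvement:+.0f}% improvement since start); "
            f"benchmarks met: {', '.join(met) or 'none yet'}")

print(progress_report([9.0, 8.2, 7.5, 6.4]))
# Lead time 6.4d (+29% improvement since start); benchmarks met: external_peer
```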

In summary:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

Achieving this in practice requires:

  1. formal sponsorship by technology leadership, in the form of recognition and a suitable framework to manage the long-term process;
  2. a set of meaningful and agreed Agile metrics that underpin the process of SI and track performance improvement over time; and crucially
  3. a means to surface these metrics in near real time, with minimal effort required from the teams themselves.
The following article has been published in JaxEnter

https://jaxenter.com/second-age-agile-159373.html

As all LOTR fans will know, the Second Age in Middle Earth lasted 3,441 years – from the defeat of Morgoth to the first defeat of Sauron. Lots happened in Middle Earth during the period and many chickens came home to roost (metaphorically speaking).

In many ways, Agile is entering a similar age. It’s been more than 15 years since the Agile Manifesto was conceived and adoption has been very rapid. It is estimated that 88 percent of all US businesses are involved in some form of Agile methodology in some part of their technology operations.

As such, Agile finds itself approaching the top of the “S” of the adoption curve (see below). As with all innovations approaching late-adoption maturity, the honeymoon period is over and businesses working in Agile are under increasing pressure to demonstrate that their Agile transformations are successful and adding real business benefits.

The lack of Agile metrics platforms

Technology teams are very familiar with measuring output, performance and quality and are not short of quant data. Surprisingly, however, there are very few BI solutions available that aim to measure the overall effectiveness of Agile software development teams across the full delivery cycle – from ideation to deployment.

The solutions out there today tend to focus on one element within the overall Agile process – e.g. code quality tools (focused on coding itself), and workflow management plug-ins that look at certain aspects of the development process yet often exclude pre-development and post-development stages.

Indeed, the “Agile metrics platforms” or “Agile BI” sector is so embryonic that analysts like Gartner do not yet track it. The closest related sector that Gartner analyses is “Enterprise Agile Planning Tools”, which, although related, is focused on planning rather than the efficiency and quality of the output.

Fortunately, newer solutions are emerging to answer this unmet need. To create a balanced set of Agile metrics that track overall effectiveness, look for systems that ingest and analyse data from the variety of tools that software development teams use in their everyday work.

What should you measure?

It is reasonable to assume that all Agile transformations broadly aim to deliver against the Agile Manifesto’s number one objective: the early and continuous delivery of value. As the Manifesto states:

“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”.

The Manifesto’s subsequent aims support this guiding principle and can be summarised as:

The challenge and key question is how do you measure these objectives and demonstrate that “Agile is working”? This opens up the contentious subject of Agile metrics.

Navigating the politics of Agile metrics

Why are Agile metrics contentious? There are many protagonists within large technology teams. Each has their own distinct views as to:

This makes selecting Agile metrics extremely important. Unless the process involves the key protagonists (from teams to delivery managers to heads of engineering) the metrics may not be accepted or trusted. In those circumstances, there is little point in collecting metrics, as teams will not drive to improve them and show the desired progress.

Meaningful Agile metrics

This is our take on a meaningful set of metrics for Agile development teams to track and demonstrate improvement over time.

As the table shows, some of the metrics are used by the team only and will not be compared across teams. Some can be aggregated across teams in order to give managers an overall view of progress.

Team view Agile metric set

These metrics are by no means definitive and readers will doubtless disagree with some. Since they have been shown to be deterministic of outcomes, however, they provide a very useful starting point for development teams in this ‘Second Age of Agile’.

Plandek is delighted to confirm that it now integrates with CI/CD tools (such as Jenkins, CircleCI, GoCD and TeamCity), providing yet more insight to its growing customer base.

Plandek now offers the most complete end-to-end view of the software delivery cycle to enable Agile software delivery teams to greatly reduce delivery risk and improve delivery efficiency.

Most organisations apply an Agile methodology to some or all of their software development and there is a growing recognition of the need for improved Agile governance. Strong Agile governance ensures that the end-to-end Agile development process is transparent, with meaningful metrics to quantify and mitigate delivery risk and ensure optimal efficiency.

The Plandek BI and analytics platform is unique in providing the largest library of delivery, engineering and Agile metrics suitable for all levels within the technology organisation – from the teams themselves, to the CIO.

Plandek, the rapidly growing SaaS provider of software delivery metrics and analytics, was co-founded in 2017 by Dan Lee (founder of Globrix) and Charlie Ponsonby (founder of Simplifydigital). It has struck a chord with its unique take on end-to-end software delivery metrics and analytics, accessed in near real-time via the Plandek dashboard.

Plandek is growing very rapidly and works with global organisations such as Reed Elsevier, News Corporation, Autotrader – as well as a growing portfolio of European technology-led businesses such as Preqin and Secret Escapes.

Dan Lee, Co-CEO of Plandek commented on the new development: “We are delighted to announce Plandek’s integration with the full suite of CI/CD tools.  It gives our customers a unique and powerful view of the end-to-end software delivery cycle – enabling them to greatly reduce software delivery risk”.

For enquiries please contact Darina Lysenko: dlysenko@plandek.com.

Becoming a Data Driven Engineering Leader

Delivering software is hard. I know as well as anyone just how difficult it can be. I run Engineering at a company called Plandek; prior to this, I led a multitude of software teams delivering software for companies from startups to FTSE 100 companies. Like most leaders of engineering teams, I’ve struggled to balance my desire to give teams a great environment to work in, including a high degree of autonomy and responsibility, with being accountable to stakeholders both internal and external.

Like most leaders of engineering teams, I’ve struggled to balance my desire to give teams a great environment to work in, including a high degree of autonomy and responsibility, with being accountable to stakeholders both internal and external.

The expectations from stakeholders are often beyond the capacity available – but it is hard for us to know by how much and reset expectations at a realistic level. Most engineers have also worked on projects where managing tech debt has been sacrificed in the name of delivering software faster – then experienced the frustration of attempting to explain the need to slow down new development to stabilise the codebase. We’ve experienced the difficulty of explaining to colleagues outside engineering why a project hasn’t been completed on schedule and when it might be released.

For most managers, addressing these challenges results in either deep frustration with engineering from your stakeholders or a reversion to command-and-control style management – which invariably alienates your top-performing engineers. However, there is a better way: using a transparent measurement system that is outcome-based.

Let’s first address the elephant in the room: “Is it possible to design a perfect, objective measurement system?”. Clearly, the answer is no. However, Gilb’s Law states that “Anything you need to quantify can be measured in some way that is superior to not measuring it at all”.  Your instincts and good judgement will have helped you to rise to a leadership role and there is no replacement for these talents. However, a well thought out set of metrics will help you to direct your attention and create a culture of greater responsibility in your team.

Gilb’s Law states that “Anything you need to quantify can be measured in some way that is superior to not measuring it at all”.

So, how do you design a measurement system? Imagine that you are an Engineering Manager at Phoenix Bank, leading a team which is tasked with improving the stability and quality of a long-neglected legacy component in the bank’s architecture.

First, you sit down and begin to identify the metrics which best represent the ultimate outcome. These metrics will likely be clear from the demands on you from your business stakeholders. In your one-on-ones with your manager, you have often heard that “important functionality of this component is often broken” and that “it takes a long time for bug fixes to go from report into production”. This focuses your mind on tracking the number of Unresolved Bugs for your component, and on tracking the Lead Time from a bug being reported to the time it’s deployed into production.
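A minimal sketch of those two outcome metrics, assuming bug records carry reported and deployed timestamps (the field names are illustrative, not a specific tracker’s schema):

```python
from datetime import timedelta

# Outcome metrics for the component: open bug count and the median
# report-to-production lead time. Field names are illustrative.
def unresolved_bug_count(bugs: list[dict]) -> int:
    return sum(1 for b in bugs if b["deployed_at"] is None)

def median_bug_lead_time(bugs: list[dict]) -> timedelta:
    durations = sorted(
        b["deployed_at"] - b["reported_at"]
        for b in bugs
        if b["deployed_at"] is not None
    )
    if not durations:
        raise ValueError("no resolved bugs to measure yet")
    return durations[len(durations) // 2]
```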

But it is hard to lead your team towards accomplishing an outcome with only that end-state measurement defined – it is like trying to sail across an ocean by asking yourself if you’re on the other side yet. To solve this problem, you first need to know where in the ocean you are. That is, you have to measure at the midpoints of your process, identifying metrics that can be tracked to indicate progress towards your ultimate goal. These are known as leading indicators.

It is hard to lead your team towards accomplishing an outcome by only having that end state measurement defined – it is like trying to sail across an ocean by asking yourself if you’re on the other side yet.

To find these leading indicators, you gather your team to explain why and how you’re going to measure progress. This allows you to explain the business context and which measurements you have selected to show progress against your business goals, and allows your team to identify the factors contributing to the business problem, which can then be measured and focused on for improvement. This transparency and openness also helps to prevent your team from feeling like they are being monitored and checked up on.

In this meeting, your team tells you that you have a problem with technical debt in your component, and that this causes a lot of regressions. You therefore identify cognitive complexity as your first leading indicator; you can also prove the business benefit of resolving technical debt by tracking changes in the number of created bugs. These then form two of your leading indicators.
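Cognitive complexity is reported by static-analysis tools; a sketch of pulling it from SonarQube for trend tracking might look like the following. The server URL, token and component key are placeholders, and the endpoint should be checked against your SonarQube version’s API docs:

```python
import requests

# Track the "cognitive complexity" leading indicator from SonarQube's
# measures API. URL, token and component key are placeholders.
SONAR = "https://sonarqube.your-company.com"

def component_cognitive_complexity(component_key: str, token: str) -> int:
    resp = requests.get(
        f"{SONAR}/api/measures/component",
        params={"component": component_key, "metricKeys": "cognitive_complexity"},
        auth=(token, ""),  # SonarQube token is passed as the username
        timeout=30,
    )
    resp.raise_for_status()
    measures = resp.json()["component"]["measures"]
    return int(measures[0]["value"])
```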

A member of your team suggests that there is a clear relationship between the number of bugs being fixed and the number of unresolved bugs, so you add this to your set of leading indicators for unresolved bugs. You agree to set a target on the number of bugs fixed per week and review it regularly to find ways to increase this number.

One of the team members then complains that there is a high degree of context switching due to new high-priority bugs coming from business stakeholders. This demand is expected given the stability issues with your business-critical component; however, it causes long cycle times because work is frequently started and then left as the biggest focus moves to a different issue. So you add a measurement of Work In Progress to your leading indicators. You also agree to meet with your business stakeholders to use the new measurements to show the impact of demands for context switches and to try to reduce them in future.
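Work In Progress is the simplest of these indicators to compute: a snapshot count of started-but-unfinished items, optionally checked against an agreed limit. The status names and the limit below are illustrative, team-specific choices:

```python
# Work In Progress leading indicator. Which statuses count as "in
# progress", and the WIP limit itself, are team-specific assumptions.
IN_PROGRESS_STATUSES = {"In Progress", "In Review", "Blocked"}

def work_in_progress(tickets: list[dict]) -> int:
    """Snapshot WIP: items started but not yet finished."""
    return sum(1 for t in tickets if t["status"] in IN_PROGRESS_STATUSES)

def breaches_wip_limit(tickets: list[dict], limit: int = 4) -> bool:
    """Flag when the team exceeds its agreed WIP limit."""
    return work_in_progress(tickets) > limit
```

Tracked daily, a rising WIP count makes the cost of those context switches visible in the stakeholder conversation.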

Once you return to your desk, you begin looking at your dashboards of Lead Time and Unresolved Bugs and can clearly see how technical debt and context switching are causing more bugs to be created than fixed. So you budget time to resolve tech debt and change the process to introduce WIP limits. Over the next few weeks, you begin seeing gradual improvement in the stability of your component. Your one-on-ones with your manager become easier as the business impact reduces and it becomes clear how much you have improved the state of your component.

The morale of your team also improves as they leave firefighting mode and create a codebase that they’re proud to work on. They begin proactively identifying areas of concern in the codebase and you give the team the responsibility to identify and resolve these issues without your intervention.

This process will finish by giving you a first iteration of a simple measurement system which your team feels invested in, and which helps you to focus on achieving positive business outcomes. It’s very unlikely that you will get all of the measurements right first time, so it is important to revisit the chosen metrics regularly to ensure that they are still driving the right changes and that the theories about leading indicators remain solid.

It’s very unlikely that you will get all of the measurements right first time, so it is important to revisit the chosen metrics regularly to ensure that they are still driving the right changes.

This has shown you how you can use data to inform decisions for one use-case, and given you some principles that you can apply to your own set of problems. Using data to inform leadership decisions and align around objectives in engineering teams is in its infancy, but I believe it poses several interesting questions. How are you ever sure if management decisions have made a positive impact without data? How do you align people around goals? What does a high performing team look like? How are you able to understand what is holding your teams back?

About the Author

Reuben Sutton is Head of Engineering at Plandek. He is passionate about helping companies use data to improve their decision making. If you want to discuss how you can use data to better deliver software, you can contact Reuben via email or message him on the Data Driven Delivery Slack channel.

Getting the Agile balance right

The author of this guest blog was Director of Quality Engineering at a global information analytics business specialising in science and health. The author was responsible for overall Quality Engineering strategy across one of the business units before recently taking a position closer to home. 


Whether you’re talking about governments or Agile development, the decision to centralise power and aim for consistency or to let individual groups self-govern can be polarising. As ever, there are no black and white answers.

We have learned through experience that different groups move at different speeds and work in different ways for very legitimate reasons. We accept this as a good thing and it reflects the spirit of Agile. Equally, there needs to be some degree of consistency for other legitimate reasons. Getting that balance right is both a challenge and an opportunity.

My team covers quality engineers across various locations and various stages of Agile maturity. Most teams hold the usual Scrum ceremonies and do two-week sprints. Other Agile practices vary and teams are empowered to find ways of working that work best for them. Another thing that varies is our use of tools. We have consistency across the board for some, such as the use of Jira, but the tools in use for CI/CD, version control, static code analysis etc. can vary.

“Plandek was the only tool we found that let us integrate data from multiple tools and Jira instances into a single dashboard.”

This brings me to why we decided to work with Plandek for Agile metrics. Other alternatives we looked at required teams to work in a consistent way, using the same tools and even a single Jira instance – clearly not practical for us. Plandek was the only tool we found that let us integrate data from multiple tools and Jira instances into a single dashboard.

Since we’re running multiple Jira instances and a couple of hundred different ‘projects’, or workstreams, just having a tool like Plandek that can integrate and present Jira data is proving extremely valuable. We started implementing it in our group late last year and have completed the initial rollout phase of over 100 projects.

In terms of metrics, the individual teams are experimenting to determine which metrics are best suited to the way they work. Quite a few squads, for example, work very true to the Kanban style. Anything to do with velocity, or that requires estimation, is not relevant to them because they don’t estimate story points. Teams are free to use metrics that are useful to them. We also have a handful of rolled-up metrics that we report on monthly, and Plandek has helped reduce what was previously a lot of time-consuming manual work to gather the data across the group.

“Just like Agile itself, people and teams embrace [Plandek] at different speeds. Some immediately see great value in being able to measure certain things and identify the bottlenecks.”

Before Plandek we had no way of gathering metrics in a rolled-up fashion at all, so we’re definitely seeing value. However, introducing Agile metrics is not without its challenges. Just like Agile itself, people and teams embrace it at different speeds. Some immediately see great value in being able to measure certain things and identify the bottlenecks. Others are concerned about being overly monitored, so we have to reassure those people that we aren’t using metrics as a stick or viewing data out of context. To do this, we agreed as a management team to turn off the ability to drill down into individuals’ data and then let individual teams decide if they want to turn it back on again.

What’s helped drive metrics adoption most successfully are the engagement sessions we’ve held with Plandek. In these, Plandek works closely with a particular team that volunteers to showcase their actual (not demo) data to others in the company. Those teams really gain a lot of value from being able to learn from the Plandek consultants, and they also get visibility for their work across the company.

Going back to my original point, there’s nothing black and white about Agile. Trying to achieve total consistency goes against the whole Agile ethos. At the same time, some level of consistency, especially when you’re scaling, is desirable because it provides common ground for sharing best practices and continuous improvement. It’s much easier for people to adjust to a gain than a loss, so I would advise other companies in our situation to hang onto some reins of consistency while empowering teams with the flexibility to adapt Agile – and metrics – to their unique requirements.



The following article has been published in ComputerWeekly and TechTarget

https://www.computerweekly.com/blog/CW-Developer-Network/Plandek-co-CEO-5-areas-for-Agile-team-self-improvement

https://itknowledgeexchange.techtarget.com/cwdn/plandek-co-ceo-5-areas-for-agile-team-self-improvement/

In his role as co-CEO of big data analytics company Plandek, Charlie Ponsonby guest writes for the Computer Weekly Developer Network to examine how teams can get more out of Agile development projects.

Plandek uses proprietary algorithms to synthesise complex fuzzy data-sets to provide actionable insights designed to improve productivity today and early-warning signs to mitigate against the problem projects of tomorrow.

Ponsonby writes as follows…

Now that Agile is officially mainstream, development teams must not allow old, ingrained habits to resurface and dilute its potential. This is a very real risk.

Agile is, after all, a relative term and fairly meaningless unless qualified. So do you know how agile your development is? One way to embed the culture change required to answer that key question is through self-improvement (SI) processes underpinned by the right agility metrics.

Agile is already closely linked to SI — let’s remember that the Agile Manifesto states: “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.”

In other words, Agile is about continuous, team-driven SI. The fact that retrospectives are among the top five Agile techniques underscores SI’s importance (source: State of Agile report).

Nevertheless, SI efforts regularly fail due to inadequate leadership and follow-through. Teams either don’t have the right tools to collect the data, or they set the wrong metrics. The latter can be especially problematic when Agile development projects are scaling.

At Plandek and in former positions, our leadership team has had the opportunity to work hands-on with a wide variety of Agile engineering teams – from start-ups to very large, distributed enterprises. Based on these experiences, we have been able to identify five critical areas for effective Agile team SI.

#1 Commitment & sponsorship

Agile teams are almost always busy and resource-constrained. As a result, the intention to always improve (in a structured and demonstrable way) often loses out to the pressures of the day job – delivering to the ceaseless demands of the business.

SI calls for a serious commitment from engineering leadership in the form of a structured, long-term and well-implemented SI programme. This includes establishing a monthly cycle of team target setting, implementation and review. These must be supported with robust tools that provide the necessary metrics and reporting.

#2 Agreed metrics

Selecting and agreeing metrics is often the most contentious issue. Failing to reach consensus and buy-in on metrics is the reason why so many Agile programmes fail.

As the Agile author Scott M. Graffius put it: “If you don’t collect any metrics, you’re flying blind. If you collect and focus on too many, they may be obstructing your field of view.”

By nature, Agile encourages a myriad of different methodologies and workflows that vary from team to team. However, this does not mean that it is impossible to agree on a set of meaningful Agile metrics around which to build a self-improvement programme. Our initial research across more than 100 projects in 12 months has revealed the following Agile metrics to be deterministic of better outcomes; they can be tracked and improved in all teams:

  1. The key enabler metrics – best practice and tool use. Some team members argue that tool usage (e.g. of Jira) is so inconsistent, that data collected from within it is not meaningful (garbage in, garbage out). However, it’s possible and useful to measure the extent to which team members are adhering to simple processes that are part of your software development lifecycle.
  2. Sprint disciplines and consistent delivery of sprint goals (Scrum Agile).
  3. The proportion of time spent/velocity/efficiency of writing new features (productive coding).
  4. Quality and failure rates.
  5. The proportion of time spent/efficiency of bug fixing and re-work.
  6. Teamwork, team wellness and the ability to collaborate effectively.

#3 Data context

A metric on its own is only meaningful when viewed in the right context. This means you need to harvest and combine the right data sources. To gather meaningful insights into team processes, data should come from the systems that developers use, including workflow management software like Jira; code repositories like GitHub, Bitbucket, TFS or GitLab; code quality tools like SonarQube; and time tracking systems like Harvest or Tempo.
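A hedged sketch of that combination step: records from each system can be joined on a shared ticket key (a common convention is referencing the key, e.g. "TEAM-123", in commit messages). All record shapes here are illustrative assumptions:

```python
from collections import defaultdict

# Join data from several tools on a shared ticket key so that each
# metric can be read in context. Record shapes are illustrative.
def join_delivery_data(jira_issues, commits, quality_scans) -> dict:
    view = defaultdict(dict)
    for issue in jira_issues:
        view[issue["key"]]["issue"] = issue
    for commit in commits:
        # Assumes the team's convention of prefixing commit messages
        # with the ticket key, e.g. "TEAM-123 fix rounding bug".
        key = commit["message"].split()[0]
        view[key].setdefault("commits", []).append(commit)
    for scan in quality_scans:
        view[scan["ticket_key"]]["quality"] = scan
    return dict(view)
```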

#4 Tracking metrics in near real-time

Once metrics are agreed, it’s essential to make them available to teams in as near to real-time as possible without creating added work. This means not expecting people to collect data manually. Besides creating extra work, manually collected data is retrospective and therefore less useful. Look for tools that give teams the metrics they need in near real-time, without requiring manual input.

#5 Celebrate success

Once you’ve agreed what success looks like, it’s important to pause and celebrate when teams reach key milestones. You want people to regard SI as a positive and motivating process. SI lends itself well to gamification and communicating recognition to the most-improved teams, competence leaders, centres of excellence and so forth. Celebrating success shouldn’t be seen as a ‘fluffy’ optional, but essential to reinforcing the right behaviours and raising morale. Also: teams that measure SI gather the data necessary to win agile development awards.

Well-implemented SI programmes can deliver profound and sustained improvement in metrics that are indicative of productivity and timing accuracy. Typical results that we see are:

Without question, SI calls for major cultural change and sustained effort. Technology leadership needs to give teams the tools (and encouragement) to drive the process. Without ongoing support, many teams will not have the time or inclination to drive their own SI as they strive to meet their short-term delivery objectives.

Fortunately, new tools are becoming available that help teams get SI programmes right and genuinely monitor and improve agility. The results we have seen in practice show that it is well worth the effort.

Plandek, the rapidly growing SaaS provider of Agile BI, has recently closed a $3.3m funding round led by Perscitus LLP and a group of experienced private investors and family offices. This comes a year after an initial fundraise of $2.7m in January 2018.

Plandek was co-founded in 2017 by Dan Lee (founder of Globrix) and Charlie Ponsonby (founder of Simplifydigital). It has struck a chord with its unique take on Agility metrics for software development teams, accessed in near real-time via the Plandek dashboard.

Most large enterprises now apply an Agile methodology to some or all of their software development.  However, measuring the efficiency of Agile software development teams remains a contentious subject.  Plandek is unique in providing a library of meaningful Agility metrics suitable for all levels within the technology organisation – from the teams themselves, to the CIO.

Plandek is growing very rapidly and is already working with global organisations such as Reed Elsevier, Worldpay and News Corporation – as well as a growing portfolio of technology-led growth businesses such as Arcus Global and Secret Escapes in the UK.

Charlie Ponsonby, Co-CEO of Plandek welcomed another vote of confidence from investors saying: “Having refined our go-to-market strategy in 2018, we are delighted to have the investment required to accelerate our growth, focused on larger enterprise clients across the UK, Europe and North America”.

———————————————————————————————–

For enquiries please contact Charlie Ponsonby: cponsonby@plandek.com

February 2019 – Plandek debuts new UK advertising campaign

Plandek, the rapidly growing SaaS provider of Agile BI, analytics and reporting, debuted its first UK advertising campaign in Canary Wharf, London in February.

The “FrAgile” campaign dramatizes the challenge of implementing large-scale Agile software development methodologies in large organisations. It alludes to the need for meaningful metrics to measure progress in the journey towards Agile “engineering excellence”. Without meaningful metrics down to the team level, teams are unable to self-diagnose and self-improve their processes – with the result that Agile can become FrAgile.

The digital outdoor campaign is focused on the Greater London area and supports a direct marketing campaign targeted at large enterprise CTOs and CIOs.

Charlie Ponsonby, Co-CEO of Plandek commented:

“We are delighted to be launching our first commercials in the UK market.  Plandek is really striking a chord with CTOs under pressure to ensure that their Agile software delivery teams deliver great results. And the campaign dramatizes how Plandek’s innovative Agile BI platform can help meet that challenge.”

Plandek (www.plandek.com) is an Agile BI, analytics and reporting platform that helps technology teams deliver Agile software development projects more productively and predictably.

Plandek’s big data platform mines the data history from dev teams’ tool sets (e.g. Jira, Git) to reveal and track levers – right down to the team and individual level – that are highly predictive of project productivity and predictability, in order to significantly improve Agile project outcomes.

Plandek is based in London and global clients include Reed Elsevier, News Corporation and Worldpay.

Plandek is delighted to announce further additions to its London based software engineering team. Mattia Licciardi joins as a Front End Developer having led UI initiatives at Digital Rockers and TeamSystem in Italy. And Colin Eatherton joins as a UI Developer from Energydeck in London.

Eduardo Turino, Plandek’s Head of Application Engineering, says: “We are really delighted to have attracted such exceptional candidates as Mattia and Colin, which shows the strength of the Plandek growth story and the continued attractiveness of London as a vibrant technology centre”.

The increased UI resource is needed to support Plandek’s ongoing build-out of its unique software development analytics platform.

Plandek, the rapidly growing SaaS provider of software delivery BI, has put its recent $2.7m fundraise to work with its first product update of 2018.

The Plandek dashboard/forecasting platform is designed to help delivery teams more effectively manage the software development process through actionable insight. It mines data sitting within commonly used delivery tools (e.g. Jira, GitHub, etc.) to identify bottlenecks in the process, the impact of scope change, and trends in team velocity – in order to improve teams’ productivity and forecast delivery/spend more effectively.

The latest version of the Plandek dashboard enables more detailed analysis within Sprints and the creation of bespoke reports for use in Sprint retrospectives. It also offers insights into deltas between teams and, if desired, individual velocity.

Plandek, the rapidly growing SaaS provider of software delivery DI (development intelligence), has recently closed a $2.7m funding round led by Perscitus LLP and a group of experienced private investors.

Plandek was co-founded in 2017 by Dan Lee (founder of Globrix) and Charlie Ponsonby (founder of Simplifydigital). It has developed a dashboard/forecasting platform, which is designed to help delivery teams more effectively manage the software development process through actionable insight.

It mines data sitting within commonly used delivery tools (e.g. Jira, GitHub, etc.) to identify hidden bottlenecks/dead-time in the process, the impact of scope change, and trends in team velocity – in order to improve teams’ productivity and forecast delivery/spend more effectively.

Plandek seems to have struck a chord among clients of all sizes and is already working with companies small and large including News UK, Dixons Carphone, Sky Bet, TalkTalk and Worldpay to name a few.

Charlie Ponsonby, Co-CEO of Plandek welcomed the vote of confidence from investors saying: “Having already built version 1 of the Plandek dashboard, we are delighted to have the investment required to develop a more complete analytics and forecasting platform, in order to deliver our vision of becoming the Google Analytics of the software development process”.