March 2026
Artificial intelligence is transforming software engineering at extraordinary speed. AI coding assistants and developer tools are now used in roughly 90% of engineering organisations and are embedded across engineering workflows, promising dramatic productivity improvements.
But with this transformation comes a new challenge for technology leaders: how to unlock the productivity benefits of AI while managing the associated risks.
AI is top of mind for every Board, and so too is AI risk management: the ‘first do no harm’ principle is central to Boards’ fiduciary responsibilities.
Developer Productivity Insight (DPI) platforms such as Plandek are essential tools for making the transition to AI-augmented engineering both rapid and safe.
The expectations placed on CTOs have never been greater.
In boardrooms around the world, two seemingly contradictory mandates dominate the agenda:
On the one hand, organisations expect AI to deliver dramatic productivity improvements—some estimates suggest 5-10x increases in engineering productivity. On the other, CEOs and boards are increasingly focused on the risks introduced by uncontrolled AI adoption.
Many boards see AI as either a transformational opportunity or a potential existential threat.
The stakes are therefore high. Managing existential risk is the primary fiduciary responsibility of Board members. CTOs therefore face the challenge of balancing rapid technological adoption with robust governance.
The adoption of AI tools in software engineering has been extraordinarily rapid, with AI coding assistants moving from early experiments to mainstream use in less than two years.
However, adoption patterns vary widely.
Some organisations have implemented structured rollout programmes with governance and oversight, while others have allowed ad-hoc experimentation without clear guardrails.
Common challenges include ad-hoc experimentation, unclear guardrails and limited visibility of where and how AI tools are being used.
Organisations currently fall across a spectrum of AI adoption maturity.
At one end are companies with ad-hoc AI adoption: individual experimentation, few guardrails and little central oversight.
In these environments, organisations may see limited productivity improvements and high risk exposure.
At the other end are organisations that treat AI adoption as a planned change programme, implementing structured rollouts with clear governance, oversight and measurement.
In these environments, organisations can achieve high productivity gains with lower risk, unlocking the potential of AI-augmented engineering.
AI risk management in software engineering is not simply a best practice. Increasingly, it is becoming a formal requirement under corporate governance, regulatory frameworks and emerging AI legislation.
Three major oversight umbrellas apply.
Under the UK Corporate Governance Code, Boards of listed companies must oversee and manage material technology risks.
Even in non-listed companies, Directors retain fiduciary responsibilities to shareholders to ensure that significant operational risks—including AI risks—are properly managed.
As AI becomes embedded in software development, it clearly falls within the category of material technology risk that Boards must oversee.
Industry regulators expect organisations to manage AI-related risks under existing governance frameworks.
Examples include:
FCA and SEC accountability frameworks
The FCA’s Senior Managers and Certification Regime (SM&CR), for example, requires senior executives to demonstrate accountability for technology risk.
The regime requires the holder of SMF24 (Chief Operations Function, which covers IT) to take “all reasonable steps” to manage AI risks, and to demonstrate continuous monitoring and governance of those risks.
Similar expectations to manage AI risk in software engineering apply in healthcare, aviation, critical infrastructure and other regulated sectors.
And ISO/IEC 42001, the new AI management system standard, focuses specifically on the challenge of implementing AI tools at pace whilst managing AI risk, safety and ethics.
The EU AI Act represents the most significant regulatory framework governing AI deployment. Article 26 — Obligations of Deployers comes into force on 2 August 2026.
This regulation applies to organisations deploying AI systems in the EU, with Article 26’s duties attaching to deployers of high-risk systems.
Key obligations for deployers include assigning competent human oversight, monitoring system operation, and retaining the logs the system generates.
Software engineering teams deploying AI tools must therefore ensure their processes meet these requirements.
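To make the logging obligation concrete, the sketch below shows one way a team might record AI-assistant usage for audit purposes. It is a minimal illustration, not a Plandek feature or a compliance guarantee; the record fields and the file-based log are assumptions.

```python
import json
import time
import uuid
from pathlib import Path

AUDIT_LOG = Path("ai_audit.log")  # illustrative append-only audit trail

def log_ai_interaction(user: str, tool: str, model: str, prompt_chars: int) -> str:
    """Append one audit record per AI-assistant call (metadata only, no prompt text)."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "tool": tool,
        "model": model,
        "prompt_chars": prompt_chars,  # size only, to avoid storing sensitive content
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Example: record a completion request before it is sent to the assistant.
log_ai_interaction(user="jdoe", tool="code-assistant", model="example-model", prompt_chars=812)
```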
Despite rapid adoption, the risks introduced by AI tools in software development are often not fully understood. Several categories of risk are emerging, some less obvious than others.
Intellectual property ambiguity
Many generative AI tools do not guarantee exclusive rights to their outputs.
This creates potential issues including unclear ownership of AI-generated code and exposure to third-party infringement claims.
Some tools may also train on user inputs, raising additional concerns around intellectual property and proprietary data.
AI-generated code may introduce security weaknesses.
These issues can lead to exploitable vulnerabilities in production, compliance failures and costly remediation.
Regulators increasingly expect organisations to implement secure development controls such as mandatory code review, automated security scanning and auditable change records.
Uncontrolled developer usage of AI tools may violate these expectations.
Developers may unintentionally expose sensitive information when interacting with public AI tools.
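A common first line of defence is a pre-send filter that screens prompts for obvious secrets before they reach a public AI tool. The sketch below is illustrative only: the regex patterns are examples, and a production system would rely on a dedicated secret scanner.

```python
import re

# Illustrative patterns only; real deployments use dedicated secret scanners.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                           # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),   # PEM private key
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),  # generic credential
]

def check_prompt(prompt: str) -> list[str]:
    """Return the patterns a prompt matches; empty means it looks safe to send."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(prompt)]

prompt = "Fix this call: db.connect(password=hunter2)"
hits = check_prompt(prompt)
if hits:
    print(f"Prompt blocked; possible secrets matched: {hits}")
```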
To unlock AI’s productivity potential safely, organisations need visibility, measurement and governance.
Plandek is a leading Developer Productivity Insight (DPI) platform. It sits across DevOps toolsets to extract the data footprint of engineering teams (including their use of AI tools) and provides an intelligence layer that gives organisations the visibility, measurement and governance described above.
Plandek provides a structured approach to managing AI adoption through its RACER methodology.
This framework enables organisations to manage AI adoption as a structured, measurable change programme.
By combining structured governance with real-time engineering data provided by the Plandek platform, the RACER framework helps organisations transition safely toward AI-augmented engineering.
Effective AI governance and risk management requires measurable indicators of AI risk.
Plandek helps organisations monitor key AI risk and compliance metrics across several dimensions.
Example AI risk management metrics are shown below.
Before AI risk can be managed, organisations must first identify where AI-generated code exists.
Key metrics include the share of commits and pull requests that are AI-assisted, tracked per team and per tool; a simple heuristic for deriving such a label is sketched below.
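One lightweight way to derive that label is to mine the git history for commits carrying AI-assistant markers, such as the Co-authored-by trailers some tools add. The sketch below is a heuristic under that assumption: it undercounts anything developers do not annotate, and the marker strings would need tailoring to your toolchain.

```python
import subprocess

# Markers some AI assistants leave in commit messages; extend for your tooling.
AI_MARKERS = ("co-authored-by: github copilot", "co-authored-by: claude", "[ai-assisted]")

def ai_assisted_commit_share(repo: str = ".") -> float:
    """Share of commits whose message carries a known AI-assistant marker."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--pretty=%B%x00"],  # NUL-separated raw messages
        capture_output=True, text=True, check=True,
    ).stdout
    messages = [m for m in out.split("\x00") if m.strip()]
    flagged = sum(any(mark in m.lower() for mark in AI_MARKERS) for m in messages)
    return flagged / len(messages) if messages else 0.0

print(f"AI-assisted commits: {ai_assisted_commit_share():.1%}")
```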
Organisations can also compare quality outcomes between AI-generated and human-written code, for example by tracking defect rates for each cohort (a minimal comparison is sketched below).
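Once changes are labelled, the comparison reduces to computing the same outcome metric per cohort. A minimal sketch, assuming each change record carries the AI label from the previous step and a flag linking it to a later defect fix (both hypothetical fields):

```python
from dataclasses import dataclass

@dataclass
class Change:
    ai_assisted: bool    # label from the detection heuristic above
    caused_defect: bool  # e.g. linked to a later bug fix or incident

def defect_rate(changes: list[Change], ai: bool) -> float:
    """Fraction of changes in one cohort that were later linked to a defect."""
    cohort = [c for c in changes if c.ai_assisted == ai]
    return sum(c.caused_defect for c in cohort) / len(cohort) if cohort else 0.0

changes = [Change(True, False), Change(True, True), Change(False, False), Change(False, False)]
print(f"AI cohort defect rate:    {defect_rate(changes, ai=True):.0%}")
print(f"Human cohort defect rate: {defect_rate(changes, ai=False):.0%}")
```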
AI-generated code can introduce security issues such as insecure coding patterns, injection flaws and vulnerable or outdated dependencies.
AI tools may bypass internal design standards or introduce architectural drift.
Key indicators include rising code complexity, duplication and divergence from agreed architectural patterns.
AI outputs may include verbatim or near-verbatim fragments of licensed open-source code.
Monitoring metrics include the licence compliance of dependencies and the provenance of AI-generated code; a minimal dependency-licence check is sketched below.
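As one concrete slice of this, the sketch below checks the declared licence of every installed Python package against an allow-list using the standard library’s importlib.metadata. The allow-list is a placeholder; a real policy would come from legal review, and many packages declare licences in ways this simple string match misses.

```python
from importlib.metadata import distributions

# Placeholder allow-list; a real policy comes from legal review.
ALLOWED = {"MIT", "BSD", "Apache", "PSF", "ISC"}

def flag_unrecognised_licences() -> list[tuple[str, str]]:
    """Return (package, licence) pairs whose declared licence matches no allowed keyword."""
    flagged = []
    for dist in distributions():
        licence = (dist.metadata.get("License") or "UNKNOWN").strip()
        if not any(key.lower() in licence.lower() for key in ALLOWED):
            flagged.append((dist.metadata.get("Name", "unknown"), licence))
    return flagged

for name, licence in flag_unrecognised_licences():
    print(f"review needed: {name} ({licence})")
```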
AI models may fabricate package names, APIs or references that do not exist.
Metrics to track include the rate of declared dependencies that fail to resolve on the package registry; a minimal check is sketched below.
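Fabricated package names are cheap to screen for, because every declared dependency should resolve on the registry you actually install from. A minimal sketch against PyPI’s public JSON API (assuming the third-party requests library is available):

```python
import requests  # third-party: pip install requests

def unknown_on_pypi(names: list[str]) -> list[str]:
    """Return the names that do not resolve on PyPI (candidate hallucinations)."""
    missing = []
    for name in names:
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        if resp.status_code == 404:
            missing.append(name)
    return missing

# The second name is deliberately fake, standing in for a hallucinated dependency.
print(unknown_on_pypi(["requests", "definitely-not-a-real-package-xyz"]))
```

Running such a check in CI before installation also guards against ‘slopsquatting’, where attackers register packages under commonly hallucinated names.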
The biggest risk is not that AI makes mistakes—it is whether humans detect them.
Metrics include review coverage: the share of AI-assisted changes that receive meaningful human review before merge. A simple computation is sketched below.
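Review coverage is a simple, auditable proxy for human oversight. The sketch below computes it from pull-request records; the field names are assumptions about whatever shape your DevOps tooling exports.

```python
def review_coverage(prs: list[dict]) -> float:
    """Share of merged pull requests that received at least one human approval."""
    merged = [pr for pr in prs if pr["merged"]]
    if not merged:
        return 0.0
    return sum(pr["human_approvals"] >= 1 for pr in merged) / len(merged)

prs = [
    {"merged": True, "human_approvals": 2},
    {"merged": True, "human_approvals": 0},   # merged with no human review
    {"merged": False, "human_approvals": 0},  # not merged; excluded
]
print(f"Review coverage: {review_coverage(prs):.0%}")  # 50%
```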
Ultimately, AI risk manifests in production.
Important indicators include change failure rate and mean time to restore (MTTR) for releases containing AI-assisted changes. A minimal MTTR computation is sketched below.
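Of these, mean time to restore is the most direct to compute from incident records. A minimal sketch, assuming each incident carries start and resolution timestamps:

```python
from datetime import datetime, timedelta

def mttr(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Mean time to restore across (started, resolved) incident pairs."""
    durations = [resolved - started for started, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

incidents = [
    (datetime(2026, 3, 1, 9, 0), datetime(2026, 3, 1, 10, 30)),   # 90 minutes
    (datetime(2026, 3, 4, 14, 0), datetime(2026, 3, 4, 14, 45)),  # 45 minutes
]
print(f"MTTR: {mttr(incidents)}")  # 1:07:30
```

Computed separately for releases with and without AI-assisted changes, this shows whether AI adoption is shifting production risk.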
AI will fundamentally reshape how software is built.
Organisations that successfully manage the transition will gain major advantages in delivery speed, quality and cost.
But achieving these benefits requires governance, measurement and visibility.
By combining engineering data with AI risk frameworks, organisations can safely move toward hyper-productive AI-augmented engineering.
Platforms such as Plandek provide the insights and structure required to manage this transition—helping technology leaders deliver faster while maintaining the levels of safety, compliance and oversight demanded by Boards and regulators.
For more information visit plandek.com or contact:
Charlie Ponsonby
cponsonby@plandek.com
Will Lytle
wlytle@plandek.com
Free managed POC available.