DORA 2025: AI is an amplifier — the capabilities that decide whether it accelerates outcomes or chaos

Nearly every engineering organization is experimenting with AI.

The 2025 DORA report on the AI Capabilities Model argues that this is no longer the interesting question.

The hard question is: does AI make your delivery system measurably better, or does it simply make you move faster in the wrong direction?

DORA’s answer is blunt: AI is an amplifier. It magnifies the strengths of well‑run engineering systems — and it magnifies the dysfunctions of weak ones. We found something very similar in our study on how DevEx aligns with AI-assisted development.

This article summarizes DORA’s core idea (the seven AI capabilities) and connects it to a practical operating rhythm from our DevEx guidebook: measure → act → learn, using both user and developer experience signals.

The DORA headline: success with AI is about foundations, not tools

A lot of AI conversations still sound like procurement:

  • “Which coding assistant should we standardize on?”
  • “Should we build our own agent framework?”
  • “Can we mandate AI usage and get the productivity gains?”

DORA pushes leadership in a different direction.

When nearly everyone has access to similar models, competitive advantage comes from the system around the model: how your organization makes decisions, how work flows, how safely knowledge can be accessed, and how quickly you can validate outcomes.

That system is what DORA calls AI capabilities.

The seven AI capabilities, and what they mean in practice

DORA introduces seven foundational capabilities that determine whether AI adoption turns into sustained performance.

1) Clear and communicated AI stance

Teams move faster when they have clarity on what’s allowed and what’s expected. In practice this means:

  • policies that reduce ambiguity (security, privacy, IP)
  • clear defaults (“drafts ok, human sign‑off required”)
  • shared guidance for where AI is useful vs risky

2) Healthy data ecosystems

AI quality depends on data quality. “Healthy” implies more than storage. It includes:

  • definitions and ownership
  • data reliability
  • lineage and governance

3) AI‑accessible internal data

Many organizations have internal data but can’t use it safely. DORA highlights the need for secure pathways that let people and systems retrieve the right context without creating compliance or security hazards.

4) Strong version control practices

AI increases code throughput, but does not remove the need for:

  • traceability and reproducibility
  • reviewability
  • controlled rollouts

5) Working in small batches

AI makes it cheap to generate change. Small batches are how you keep that cheapness from turning into:

  • long integration cycles
  • review overload
  • “fast wrong” delivery

How DevEx Surveys help (practically): if you want to improve batch size, you need to see where the system breaks — not just the symptom.

A simple starting point is a one-question DevEx pulse:

  • “Our tasks are well-sized for efficient work.”

If that signal is weak, you can run a short deep dive that pinpoints the failure mode across the workflow:

  • Clarity / Readiness: “Is work clear enough when it starts?”
  • Size: “Are tasks a good size to work on?”
  • Independence / Dependencies: “Can work move forward without waiting on others?”
  • Flow: “Can tasks be finished smoothly once started?”
  • Delivery: “Are tasks easy to review and ship?”
  • Intent / Pressure (Cost of slicing): “Why are tasks sized the way they are?”
  • Effort (hidden tax): “Thinking about poorly sized or hard-to-finish tasks (too big, unclear at the start, blocked by dependencies, or hard to review/ship), how much time do you spend in a typical week dealing with this?”

That combination turns “small batches” from a slogan into an actionable improvement plan.

6) User‑centric focus

DORA warns that if teams only see throughput dashboards (velocity, deployment frequency), they can lose sight of whether they are improving user outcomes. AI can increase output. User‑centricity ensures you’re increasing value.

7) Quality internal platforms

DORA emphasizes that internal platforms are a primary way to scale AI benefits from individual productivity into system-wide improvements.

Importantly, the report ties platform effectiveness to the holistic developer experience (DevEx) — not just platform features — and recommends a product mindset (including PM ownership focused on DevEx).

Metrics: keep DORA delivery performance, but pair it with experience signals

DORA calls out its delivery performance metrics (speed and stability). These are essential.

But AI changes what gets optimized.

If leadership pressure pushes teams to maximize output metrics, AI can become an “activity accelerator” rather than an “outcome accelerator.”

A more robust measurement approach pairs:

  • delivery performance (speed + stability)
  • user experience (quality as felt by customers)
  • developer experience (friction, waiting, rework, cognitive load)

This is also exactly how our DevEx blueprint frames effective delivery systems.

Where our DevEx playbooks fit: turning capabilities into a repeatable operating rhythm

DORA gives leaders the what: build the capabilities that make AI net-positive.

Our playbooks focus on the how: operationalize improvement as a loop.

The DevEx blueprint: user → developer → user

In The DevEx evolution: aligning delivery with direction, we frame software delivery as an experience-driven cycle that moves:

  • user experience signals → improve quality
  • developer experience signals → improve productivity

Both are executed through a measure → act → learn cadence.

A key leadership insight from the ebook: focus on a small set of top metrics (e.g., five, not fifty) so teams can actually align and act.

The Poland report: AI multiplies strengths and dysfunctions

Our Poland-focused report, Polish DevEx: data, processes, AI, strongly echoes DORA:

  • process chaos slows teams; operational rhythm speeds them up
  • data + processes are foundations
  • AI is a multiplier: it accelerates what works and accelerates what’s broken

It also frames DevEx as an early-warning system: surfacing issues before they show up in delivery outcomes.

A pragmatic approach: how to start improving AI outcomes this quarter

If you want AI to improve outcomes (not just generate output), you don’t need a big-bang transformation.

You need to pick the right capability bottlenecks and install a learning loop.

Step 1: make your AI stance explicit

Ambiguity is expensive; remove it by answering three questions explicitly:

  • Where can engineers use AI today?
  • What internal data is allowed in which tools?
  • What requires human sign‑off?

Step 2: prioritize “AI‑accessible internal data” and platform friction

Most teams stall because they cannot reliably retrieve the right context.

Invest in:

  • service/system catalogs, ownership, and docs
  • secure retrieval pathways (access controls + auditability)
  • internal platforms that reduce friction and standardize golden paths

Step 3: enforce small-batch work as the default

Small batches are the simplest guardrail that protects you from “fast wrong.”

Look for:

  • PR size
  • review latency
  • WIP and handoffs
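As a sketch of what "look for" can mean in practice, the first two signals above can be computed from pull-request records. The field names (lines_changed, opened_at, first_review_at) and the 400-line limit are assumptions; map them to whatever your Git hosting API actually returns:

```python
# Minimal sketch: batch-size guardrail signals from pull-request records.
# Field names and the size limit are illustrative assumptions.
from datetime import datetime
from statistics import median

def batch_signals(prs, size_limit=400):
    """Summarize PR size and time-to-first-review for a set of PRs."""
    sizes = [p["lines_changed"] for p in prs]
    latencies_h = [
        (p["first_review_at"] - p["opened_at"]).total_seconds() / 3600
        for p in prs if p.get("first_review_at")
    ]
    return {
        "median_pr_size": median(sizes),
        "oversized_share": sum(s > size_limit for s in sizes) / len(sizes),
        "median_review_latency_h": round(median(latencies_h), 1) if latencies_h else None,
    }

prs = [  # example records for one team, one iteration
    {"lines_changed": 120, "opened_at": datetime(2025, 6, 2, 9), "first_review_at": datetime(2025, 6, 2, 13)},
    {"lines_changed": 980, "opened_at": datetime(2025, 6, 3, 10), "first_review_at": datetime(2025, 6, 5, 10)},
    {"lines_changed": 60, "opened_at": datetime(2025, 6, 4, 11), "first_review_at": datetime(2025, 6, 4, 12)},
]
print(batch_signals(prs))
```

Tracking the oversized share over time is usually more actionable than policing individual PRs: it shows whether the small-batch default is actually holding.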

Step 4: run measure → act → learn as a leadership routine

The cadence is the control system.

  • teams review their own signals and pick one improvement
  • leadership removes blockers and prevents local optimizations from turning into system damage

How Network Perspective helps

AI puts pressure on the parts of the system that were previously “invisible”: coordination costs, clarity, bottlenecks, and friction.

We help leadership connect experience signals (what people struggle with) to delivery performance signals (what the system is doing) — including the DORA delivery performance metrics as we calculate them:

  • Lead time for changes
  • Deployment frequency
  • Failed deployment recovery time
  • Change failure rate
  • Deployment rework rate
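As a minimal sketch, the first four of these can be computed from deployment records. The record shape (commit_at, deployed_at, failed, recovered_at) is an illustrative assumption, not our actual calculation method:

```python
# Minimal sketch: four DORA delivery metrics from deployment records.
# The record fields and 28-day window are illustrative assumptions.
from datetime import datetime
from statistics import median

def dora_metrics(deployments, window_days=28):
    """Compute lead time, deploy frequency, CFR, and recovery time."""
    lead_times_h = [
        (d["deployed_at"] - d["commit_at"]).total_seconds() / 3600 for d in deployments
    ]
    failures = [d for d in deployments if d["failed"]]
    recoveries_h = [
        (d["recovered_at"] - d["deployed_at"]).total_seconds() / 3600
        for d in failures if d.get("recovered_at")
    ]
    return {
        "lead_time_h_median": median(lead_times_h),
        "deploys_per_week": round(len(deployments) / (window_days / 7), 1),
        "change_failure_rate": len(failures) / len(deployments),
        "recovery_time_h_median": median(recoveries_h) if recoveries_h else None,
    }

deploys = [  # example records for one service over a 28-day window
    {"commit_at": datetime(2025, 6, 1, 9), "deployed_at": datetime(2025, 6, 1, 15), "failed": False},
    {"commit_at": datetime(2025, 6, 2, 9), "deployed_at": datetime(2025, 6, 3, 9), "failed": True,
     "recovered_at": datetime(2025, 6, 3, 11)},
    {"commit_at": datetime(2025, 6, 5, 9), "deployed_at": datetime(2025, 6, 5, 13), "failed": False},
]
print(dora_metrics(deploys))
```

The point of pairing these with experience signals is that delivery metrics tell you the system is slowing down, while DevEx signals tell you why.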

Two offerings are designed to make those constraints measurable and improvable:

  • DevEx Surveys — a fast baseline of developer experience, revealing friction, blockers, clarity gaps, and constraints behind delivery metrics (and allowing you to re-measure after changes).
  • Work Smart — collaboration observability that helps leaders understand how focus time, interruptions, and collaboration patterns affect flow (especially when AI increases the pace of change).

A practical way to use them with DORA’s model:

  1. run a DevEx baseline to identify the highest-friction constraints
  2. map those constraints to DORA’s seven capabilities
  3. pick 1–2 capabilities to improve, and re-measure after 4–8 weeks

Closing: AI will not fix delivery — it will reveal it

The value of DORA 2025 is its realism.

AI does not eliminate the need for strong engineering foundations.

It stress-tests them.

Organizations that treat AI as a capability upgrade — data, platforms, small batches, user focus, and a disciplined learning cadence — will turn AI into sustainable advantage.

Organizations that treat AI as a tooling rollout will often get the opposite: more output, more noise, and faster drift.

If you want help establishing a baseline and prioritizing improvements, start with DevEx Surveys and build your first measure–act–learn cycle from real signals.

March 19, 2026