
How Top Engineering Teams Are Quantifying AI Adoption
AI adoption in engineering is accelerating at a pace few predicted. In 2025, global spending on generative AI is projected to reach $644 billion, up 76% from the previous year[1]. Yet, only a fraction of companies have fully integrated these tools into their workflows, and the gap between early adopters and those waiting on the sidelines is widening.
Rethinking Engineering Metrics for the AI Era
Why Traditional Metrics Fall Short
For years, teams have relied on metrics like DORA, developer experience, and pull request counts to track performance. These numbers still matter, but they don’t capture the full picture when AI is part of the workflow. As AI code editors and automated review tools become standard, raw speed or deployment frequency alone can mislead. A team might ship more code, but is that code high quality? Does it actually help users or improve the product?
Key Metrics in the AI Context
AI usage metrics: Track how often and how effectively engineers use AI-powered tools.
Code turnover: Tracks how much of your code is being rewritten, and how much of that rework is AI-generated.
Weave output: Tracks how many ‘expert engineering’ hours of work are actually completed, giving you an objective measure of output (see the sketch below).
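To make these definitions concrete, here is a minimal Python sketch of how such metrics might be recorded per team and per reporting period. The field names are illustrative assumptions, not Weave’s actual schema.

from dataclasses import dataclass

@dataclass
class TeamAIMetrics:
    """Illustrative per-team, per-period snapshot of the three metrics above."""
    team: str
    period: str                    # e.g. "2025-06"
    ai_tool_sessions: int          # AI usage: how often engineers used AI-powered tools
    ai_assisted_loc: int           # lines of code generated or reviewed with AI assistance
    rewritten_loc: int             # code turnover: lines rewritten during the period
    total_loc: int                 # all lines of code merged during the period
    expert_hours_completed: float  # output measured in 'expert engineering' hours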
Measuring AI Usage: What Top Teams Track
AI Usage Metrics in Practice
Quantifying AI adoption starts with tracking how engineers interact with AI tools. This includes:
Frequency of AI code editor usage (e.g., Cursor, Windsurf)
Percentage of code generated or reviewed by AI
These metrics help teams understand not just whether AI is present, but whether it’s making a difference.
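For the second item on that list, here is a rough Python sketch of the calculation, assuming your tooling can tag each commit as AI-assisted; the ai_assisted flag and commit fields are hypothetical.

from typing import Iterable, TypedDict

class Commit(TypedDict):
    lines_added: int
    ai_assisted: bool  # hypothetical flag supplied by your AI-tool telemetry

def ai_generated_share(commits: Iterable[Commit]) -> float:
    """Percentage of added lines that came from AI-assisted commits."""
    total = ai = 0
    for c in commits:
        total += c["lines_added"]
        if c["ai_assisted"]:
            ai += c["lines_added"]
    return 100.0 * ai / total if total else 0.0

# Example: two AI-assisted commits out of three.
commits = [
    {"lines_added": 120, "ai_assisted": True},
    {"lines_added": 80, "ai_assisted": False},
    {"lines_added": 200, "ai_assisted": True},
]
print(f"{ai_generated_share(commits):.0f}% of added lines were AI-assisted")

In practice the tagging could come from editor telemetry or commit metadata; the arithmetic stays the same.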
Balancing Speed, Quality, and Experience
Top teams don’t just count AI interactions. They connect usage data to outcomes like code quality, bug rates, and developer satisfaction. For example, if AI-generated code increases but so do post-release bugs, it’s a signal to adjust how the tools are used.
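One lightweight way to test that relationship is to correlate the share of AI-assisted code with post-release bug counts across recent releases. The sketch below uses Python’s statistics.correlation (available in Python 3.10+) on hypothetical data.

from statistics import correlation  # Python 3.10+

# Hypothetical per-release data: share of AI-assisted code and post-release bugs.
ai_share = [0.12, 0.18, 0.25, 0.31, 0.40]
post_release_bugs = [4, 5, 6, 9, 11]

r = correlation(ai_share, post_release_bugs)  # Pearson correlation coefficient
print(f"AI share vs. post-release bugs: r = {r:.2f}")
# A strongly positive r is a signal to adjust how the tools are used (for example,
# review practices for AI-generated code), not proof of causation.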
Industry Frameworks: DORA, SPACE, and Beyond
Applying Proven Models to AI Adoption
Frameworks like DORA (Deployment Frequency, Lead Time for Changes, Change Failure Rate, Mean Time to Recovery) and SPACE (Satisfaction, Performance, Activity, Communication, Efficiency) remain relevant, but they need adaptation for AI-driven workflows.
DORA metrics still help track delivery performance, but should be paired with AI usage data to understand the full impact[2]; a simple pairing of the two is sketched after this list.
SPACE metrics can highlight shifts in developer satisfaction or collaboration as AI tools are introduced.
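As a minimal illustration of that pairing, the Python sketch below combines hypothetical quarterly DORA snapshots with an AI contribution rate and compares delivery performance in lower- versus higher-AI-usage quarters. The figures and the 25% threshold are made up for the example.

# Hypothetical quarterly snapshots pairing DORA delivery metrics with AI usage.
quarters = [
    {"quarter": "Q1", "ai_contribution_rate": 0.10, "deploys_per_week": 8, "change_failure_rate": 0.12},
    {"quarter": "Q2", "ai_contribution_rate": 0.22, "deploys_per_week": 11, "change_failure_rate": 0.11},
    {"quarter": "Q3", "ai_contribution_rate": 0.35, "deploys_per_week": 14, "change_failure_rate": 0.16},
]

def avg(rows, key):
    return sum(r[key] for r in rows) / len(rows) if rows else float("nan")

# Compare delivery performance in lower- vs. higher-AI-usage quarters (threshold is arbitrary).
high = [q for q in quarters if q["ai_contribution_rate"] >= 0.25]
low = [q for q in quarters if q["ai_contribution_rate"] < 0.25]

print("deploys/week:", avg(low, "deploys_per_week"), "->", avg(high, "deploys_per_week"))
print("change failure rate:", avg(low, "change_failure_rate"), "->", avg(high, "change_failure_rate"))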
Technical Example: Calculating AI Contribution
To quantify AI’s role, teams can use formulas such as:
AI Contribution Rate = (Lines of Code Generated by AI) / (Total Lines of Code)
This metric, when tracked over time, reveals how much of the codebase is influenced by AI and can be correlated with quality or delivery outcomes.
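Here is a minimal sketch of tracking this rate month over month, assuming you can count AI-generated and total merged lines per period; the monthly figures are placeholders.

def ai_contribution_rate(ai_loc: int, total_loc: int) -> float:
    """AI Contribution Rate = lines of code generated by AI / total lines of code."""
    return ai_loc / total_loc if total_loc else 0.0

# Hypothetical monthly counts of AI-generated vs. total merged lines.
monthly = {
    "2025-04": (4_200, 30_000),
    "2025-05": (6_900, 31_500),
    "2025-06": (9_800, 32_000),
}

for month, (ai_loc, total_loc) in monthly.items():
    print(f"{month}: {ai_contribution_rate(ai_loc, total_loc):.1%}")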
How Weave Supports Engineering Teams
Weave’s Approach to AI Usage Metrics
Weave’s analytics platform is designed for the modern engineering team. By combining LLM-powered analysis with domain-specific machine learning, Weave tracks not just output, but the hidden strengths and weaknesses that AI adoption can reveal. Teams can see exactly how AI tools are impacting productivity and quality.
Key Features for AI Measurement
Understand how much your team is building with AI, and how your organization’s output is affected.
Compare AI code usage to your whole codebase
See which individual contributors are using AI most effectively
Understand how that correlates with your team’s true output
Conclusion: The Next Step in Engineering Analytics
Quantifying AI adoption is now a core responsibility for engineering leaders. Traditional metrics still matter, but they need context and adaptation for the AI era. The most effective teams combine usage data and outcome metrics to get a true picture of performance.
Weave offers engineering teams the tools to measure, understand, and optimize AI adoption—helping you turn data into better decisions and stronger results.
Citations
[1] https://www.sequencr.ai/insights/key-generative-ai-statistics-and-trends-for-2025