Enterprise leaders struggle to measure AI return on investment because they are using outdated metrics designed for linear business problems. A new framework from Atlassian's Teamwork Lab shows that AI value doesn't arrive in one big payoff; instead, it compounds as adoption spreads from individual employees to teams to entire organizations. The research identifies four distinct maturity stages, each with its own measurement priorities and success signals.

Why Traditional ROI Math Fails for AI

When boards ask for hard numbers on AI impact, executives often scramble to gather time-saved estimates and productivity anecdotes that capture only a narrow slice of the story. The problem runs deeper than incomplete data: classic ROI math assumes clean, linear cause and effect, and AI value doesn't work that way.

"Our number-one problem right now is metrics. When it comes to AI, our board wants to understand what's happening when the rubber meets the road," said one Chief AI Officer overseeing more than 1,000 engineers.

Atlassian's AI Collaboration Index research found that many organizations still cannot show clear AI ROI using traditional metrics. The disconnect stems from a fundamental misunderstanding of how AI adoption actually unfolds in practice. Organizations that try to measure everything through a single lens of productivity gains miss the compounding value that emerges as AI becomes embedded in organizational workflows and decision-making.

What Are the Four Stages of AI Maturity?

Atlassian's Enterprise AI ROI Value Framework maps how AI value actually shows up in large organizations, from early experiments to net-new employee-facing AI tools.
The framework defines four distinct stages, each with different primary outcomes to measure and different success signals:

- Exploring: Employees and teams experiment with AI and run pilots. Focus measurement on adoption rates and participation.
- Optimizing: AI becomes embedded in everyday workflows. Focus measurement on operational efficiency and time savings.
- Enhancing: AI improves accuracy, consistency, and customer outcomes. Focus measurement on quality improvements and compliance.
- Transforming: AI enables net-new products, services, and business models. Focus measurement on innovation and revenue impact.

The critical insight is that ROI compounds as AI usage moves from the individual to the collective. Lone superusers can generate personal productivity gains, but the real enterprise impact happens when teams and organizations align around AI-first ways of working together. Organizations that skip stages, or try to jump straight to transformation without building the foundation, risk wasting investment and failing to capture the compounding benefits.

How to Measure Success at Each Stage

Each maturity stage requires different metrics and success indicators. Understanding where your organization sits on this ladder is essential for making smarter AI investment decisions and aligning leaders and teams on what "good" actually looks like.

The Exploring Stage: This is where teams experiment with AI tools and pilot new use cases. It is a critical investment phase in which you are seeding skills, building comfort, and spotting where AI can have the most impact. Some leaders dismiss this phase as "AI tourism" or a distraction from core work, but exploration is not waste; it is a prerequisite to unlocking possibility. Organizations that under-invest here cannot move up the ladder to the efficiency, quality, or innovation gains to come.
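For teams that want to track their position against this ladder in reporting tooling, the four stages and their measurement focus can be captured in a small lookup. The sketch below is illustrative only; the structure and names are assumptions, not part of Atlassian's framework:

```python
# Illustrative lookup of the four maturity stages and the measurement
# focus the framework assigns to each. Field names are hypothetical.
MATURITY_STAGES = [
    {"stage": "Exploring",    "measure": "adoption rates and participation"},
    {"stage": "Optimizing",   "measure": "operational efficiency and time savings"},
    {"stage": "Enhancing",    "measure": "quality improvements and compliance"},
    {"stage": "Transforming", "measure": "innovation and revenue impact"},
]

def measurement_focus(stage: str) -> str:
    """Return the measurement focus for a named maturity stage."""
    for entry in MATURITY_STAGES:
        if entry["stage"].lower() == stage.lower():
            return entry["measure"]
    raise ValueError(f"Unknown maturity stage: {stage!r}")

print(measurement_focus("Optimizing"))
```

A table like this gives dashboards and status reports a consistent vocabulary for "what good looks like" at each stage.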
Key metrics to track during exploration include the percentage of employees experimenting with AI in their work, participation in AI events and training, the number and breadth of pilots across teams, and investment in exploration, such as time spent on AI education or dedicated AI champions.

The Optimizing Stage: You know you are in the optimizing phase when teams across your organization have created and refined a collection of high-value AI-first workflows. The key difference from exploring is that AI is now tied to delivering faster outcomes, not just experimentation or play. At this stage, your measurement focus should shift to operational efficiency: workflows should feel faster, with improved visibility and collaboration, and gains should show up across teams and functions.

- Time Saved: Measure time saved per task or workflow against a pre-AI baseline, tracking how many minutes or hours AI shaves off recurring tasks like drafting, summarizing, or troubleshooting.
- Cycle Time: Monitor cycle time per workflow before and after AI integration to show how AI accelerates delivery across coding, writing, customer support, reporting, and other functions.
- Throughput and Automation: Track throughput metrics such as tickets resolved, content shipped, or tests run with and without AI, plus the percentage of steps or tasks handled by AI.
- Cost Reduction: Document cost avoidance or reduction, including less reliance on contractors or vendors and fewer manual hours spent on repetitive, low-value tasks.

These numbers make AI ROI tangible for stakeholders and surface which use cases are worth standardizing and scaling across the organization.

The Enhancing Stage: This is where most organizations think their AI story ends, but optimizing is only halfway up the ladder. If you stop at efficiency gains, you get faster outputs but risk eroding standards as work quality degrades. Once AI is a routine part of how work gets done, a new opportunity opens up.
You are not just working faster; you are systematically improving quality, accuracy, and consistency. This is where AI starts to powerfully assist in review, governance, and decision-making, not just in drafting or ideation.

Signals that you are in the enhancing phase include AI review of documents and code to catch issues before they reach customers, AI-assisted quality assurance that surfaces regressions or edge cases humans might miss, brand and tone checks to keep content on-voice and compliant, and automated reviews against service level agreements and regulatory requirements.

Measure success through error and defect rates before versus after integrating AI, compliance and standards adherence, and customer outcomes clearly linked to AI-assisted workflows, such as improvements in customer satisfaction or net promoter scores.

Why Stopping at Efficiency Leaves Money on the Table

Many organizations plateau at the optimizing stage, focusing exclusively on speed and productivity. This approach misses the full potential of AI investment: the enhancing and transforming stages unlock quality improvements and innovation that drive long-term competitive advantage. Organizations that understand this progression can prioritize the AI bets that will help them climb the ladder fastest, moving from faster work to better work to entirely new business opportunities.

The framework gives executives, AI leaders, and team managers a shared language and ladder for AI impact. Instead of guessing at ROI, organizations can match their metrics to where they really are on the maturity curve. This alignment enables smarter investment decisions and prevents the common mistake of measuring success with the wrong yardstick at the wrong stage. As enterprises continue to invest heavily in AI capabilities, understanding how value actually compounds across these four stages becomes essential.
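The before-versus-after comparisons that the optimizing and enhancing stages call for reduce to simple percentage-change arithmetic against a pre-AI baseline. Here is a minimal sketch; the workflow names, cycle times, and defect rates below are hypothetical, not Atlassian data:

```python
# Illustrative before/after deltas for optimizing-stage (cycle time) and
# enhancing-stage (defect rate) measurements. All numbers are hypothetical.

def pct_change(before: float, after: float) -> float:
    """Percentage change from a pre-AI baseline (negative = reduction)."""
    return (after - before) / before * 100

# Per-workflow measurements as (pre-AI baseline, with AI) pairs.
workflows = {
    "support_ticket": {"cycle_hours": (8.0, 5.0), "defect_rate": (0.040, 0.025)},
    "release_notes":  {"cycle_hours": (3.0, 1.5), "defect_rate": (0.020, 0.020)},
}

for name, m in workflows.items():
    cycle_delta = pct_change(*m["cycle_hours"])
    defect_delta = pct_change(*m["defect_rate"])
    print(f"{name}: cycle time {cycle_delta:+.1f}%, defects {defect_delta:+.1f}%")
```

Reported per workflow and rolled up across teams, deltas like these are what make the stage-appropriate metrics concrete for a board.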
The organizations that master this framework will move faster up the maturity ladder, capture compounding returns on their AI investments, and build sustainable competitive advantages through AI-first ways of working.