The 93-7 Problem: Why Companies Are Failing at AI Adoption Despite Massive Tech Spending

The real barrier to AI success isn't building better algorithms or deploying faster tools; it's preparing people to work alongside them. While corporate America is pouring resources into AI infrastructure, a critical gap is emerging: organizations are spending roughly 93% of their AI adoption budgets on technology and only 7% on the human side of transformation. This imbalance is creating a crisis that leadership experts, consultants, and researchers are now calling out publicly.

Why Is the 93-7 Split Such a Problem?

The consequences of this lopsided investment are already materializing across industries. When one team deploys an AI agent that doubles their throughput overnight, work suddenly flows downstream at twice the speed. But if the receiving team still works in spreadsheets and lacks access to shared data systems, they become the bottleneck. This isn't a technology problem; it's a design problem.

Lara Abrash, chair of Deloitte U.S., described the stakes bluntly: "Ninety-three to seven is not the right level of effort in both places. Companies should be spending as much time on the workforce right now as they are on the technology. And we're seeing most companies focus much more on the technology." The imbalance reflects a deeper organizational bias: technology investments are easy to measure and justify to boards, while workforce transformation is messy, slow, and harder to quantify.

The problem extends beyond efficiency. When humans are removed from decision-making loops without deliberate design for what they should do instead, AI systems operate unchecked. This creates real risks: hallucinations, bad outcomes, brand damage, and regulatory exposure. In high-stakes industries like aerospace, life sciences, and financial regulation, 99% accuracy isn't sufficient; some applications require 99.999% accuracy, which demands active human supervision and feedback loops that most companies haven't built.

What Are the Hidden Adoption Failures?

The gap between AI usage and actual implementation is stark. According to Supermetrics research, only 6% of marketers report that AI is fully implemented in their operations, even though 80% face pressure to adopt it and 37% lack a clear strategy. Teams experiment with AI for content generation, meeting summaries, and brainstorming, but these isolated use cases remain disconnected from core workflows, governance structures, and measurement systems.

This pattern repeats across sectors. In government, agencies are discovering that access to AI tools doesn't guarantee adoption. The real work is redesigning jobs around outcomes. New Jersey's AI sandboxes, which allowed employees to test generative AI safely before deployment, saw measurable gains: self-resolved calls increased by 50%, response times fell by 35%, and more than 80% of users reported that the tools improved their work. But these successes required deliberate change management, not just technology access.

Workforce resistance is a predictable response when employees don't see how AI makes their jobs better. Abrash used a biological metaphor: "Workforces are like antigens in your body. They can fight things they want to fight pretty hard. If they don't see how it makes their jobs better and how they can show up and bring what makes them special, they're going to be that antigen and they're going to fight it." When employees route around, ignore, or undermine AI tools, adoption fails silently.

How to Build Workforce-Centered AI Adoption

  • Design Human-Machine Collaboration Intentionally: The difference between additive (humans plus machines) and multiplicative (humans times machines) collaboration lies in whether AI merely assists or actively amplifies human capability. When humans work closely and iteratively with AI, their performance can improve by up to 29% compared to humans and AI working solo. This requires clarity about division of labor: AI handles pattern recognition, scale, and routine analysis, while humans own judgment, empathy, and navigating ambiguity.
  • Embed AI Into Existing Workflows: Leading organizations integrate AI directly into departmental systems to reduce context switching and minimize disruption. Buckinghamshire Council in the United Kingdom embedded AI into existing systems and saw call wrap time and administrative burden drop within months. Singapore cut administrative time nearly in half by embedding AI tools into everyday government processes.
  • Build Tiered AI Fluency Across Roles: Not everyone needs to be an AI engineer. The U.S. Department of State's StateChat program trained employees in three levels: use fluency (applying AI tools safely in everyday tasks), choose fluency (evaluating tools and trade-offs), and build fluency (designing custom AI applications). This tiered approach allows organizations to scale adoption while maintaining accountability and governance.

SCAN Health Plan, a Medicare Advantage provider, is taking a different structural approach. The organization appointed Aman Bhandari as its first Chief AI Officer, but positioned the role within the People and Transformation organization rather than within IT. This reflects a deliberate belief that successful AI transformation is fundamentally about people, not just technology. Lindsay Crawley Herbert, Chief People and Transformation Officer at SCAN, explained: "The challenge with AI isn't implementing the technology; it's marrying the complexity of the technology with how the workforce adopts and uses it effectively and ethically."

What Leadership Skills Are Actually Required?

The old model of leadership, which called for being decisive, showing confidence, setting a clear destination, and driving toward it, no longer works in an AI-driven environment. Harvard Business School professor Linda Hill and innovation veteran Jason Wild argue that leaders now need "wayfinding" skills instead of "pathfinding." Pathfinders set a destination and drive toward it. Wayfinders navigate fog and uncertainty. As Wild noted, "The world is literally shifting underneath our feet by three or four feet every week."

This shift has emotional and intellectual consequences. When leaders don't know what team they'll need in a year, let alone three, the old playbook fails. Instead, organizations need leaders who can build adaptability into daily work, treat change as continuous, and help employees develop skills through practice and experimentation.

The human capabilities that machines cannot replicate are equally critical. Deloitte's research on high-performing teams identified six consistently critical human strengths, with three standing out: curiosity (the drive to generate novel questions, not just process existing ones), emotional and social intelligence (the ability to feel the actual stakes of a team under pressure), and divergent thinking (the capacity to generate multiple solutions rather than converge on one). Machines are built to drive organizations toward single solutions; humans are not.

The path forward requires a fundamental rebalancing. Organizations that treat AI adoption as primarily a technology problem will continue to see failed pilots, silent resistance, and unchecked risks. Those that invest equally in workforce design, change management, and leadership development will unlock the multiplicative benefits that AI promises. The 93-7 split isn't just a budget allocation problem; it's a strategic choice about whether AI will amplify human capability or replace human judgment without accountability.