The AI Guidance Gap: Why Employees Are Experimenting Without Direction
Organizations are deploying powerful AI tools without giving employees clear directions on how to use them, creating confusion that undermines both worker experience and the technology's potential value. New research from the Thomson Reuters Institute surveyed more than 1,500 legal, tax, accounting, and compliance professionals across 26 countries and found a striking disconnect: while AI usage has nearly doubled over the past year, organizational infrastructure to support adoption lags significantly behind.
What's Creating Confusion for AI Users in Professional Services?
The guidance gap is real and measurable. Approximately 40% of professionals surveyed reported receiving contradictory guidance from clients and leadership about AI tool usage, with directives both encouraging and discouraging their use on projects and in requests for proposals (RFPs). This ambivalence is slowing down decision-making at the front lines, where AI could deliver the most value.
The problem extends beyond internal confusion. Half of professionals indicated that no conversations with clients about AI tool usage have taken place yet. When discussions do occur, concerns about data protection and accuracy dominate the conversation. More than two-thirds of corporate and government clients remain unaware of whether their outside professional service providers are even using generative AI (GenAI), and the majority of clients have provided no direction whatsoever to their outside law firms concerning AI use.
Adding another layer of complexity, publicly available tools like ChatGPT dominate current usage, with more than half of respondents citing their use, while proprietary or industry-specific solutions remain largely in the consideration phase. This suggests employees are often self-provisioning AI tools rather than working within enterprise-supported ecosystems, potentially exposing organizations to security gaps, compliance risks, and inconsistent quality.
Why Are Organizations Failing to Measure AI's Real Impact?
Perhaps the most revealing finding is how organizations are measuring, or failing to measure, whether their AI investments are paying off. Almost half of respondents said their organizations are not measuring return on investment (ROI) at all. Only 18% said their organizations do track ROI, and the metrics that minority uses tell a story about organizational priorities: internal cost savings and employee usage rates lead the list, suggesting a focus on efficiency over innovation or quality improvements.
This measurement vacuum has direct consequences for employee experience. Without clear success metrics, employees lack feedback on whether their AI experimentation is valued, discouraged, or even noticed. The absence of ROI frameworks also makes it hard to justify training investments or dedicated time that allows employees to develop AI fluency. This creates a vicious cycle: employees experiment without knowing if their efforts matter, while organizations struggle to demonstrate value from their AI investments.
Across the broader enterprise landscape, the ROI challenge is widespread. McKinsey research found that 80% of companies have deployed generative AI in some form, yet an equal share report no material contribution to their bottom line from those implementations. Among companies piloting AI, only 70% can articulate the ROI, and just 55% have structured governance in place.
How to Build an AI Culture That Actually Works
- Draft Clear and Consistent Guidance: Create explicit policies that tell employees when AI use is encouraged, required, or prohibited. This includes client communication protocols, data-handling requirements, and escalation procedures for when AI outputs seem questionable.
- Develop Meaningful ROI Metrics: Move beyond usage rates and cost savings as key success measurements. Track data points that capture quality improvements, time redeployed to strategic work, and client feedback on AI-enhanced deliverables. Share these metrics transparently so employees understand organizational priorities.
- Invest in Structured Learning: Curate recommended toolsets, provide hands-on training, and create communities of practice where employees can share effective prompts and use cases with colleagues rather than relying on self-provisioning.
Leading organizations are taking a different approach. At Deluxe, a financial services technology company, leadership targets AI education differently depending on the employee's career stage.
"Entry-level, midcareer, and C-suite, every level has different requirements. A CIO should know how to build a strategic view on where AI creates value. For entry-level, it's about AI fluency and critical human skills. For midcareer, it's AI orchestration and change management beyond their existing domain skills," explained Yogaraj Jayaprakasam, chief technology and digital officer at Deluxe.
At McDermott International, a provider of oil, gas, and renewable energy technologies, the focus is on employee investment.
"Right now, we are investing in employees. The more you train them, the more AI they use, and the more ROI comes in," said Vagesh Dave, global vice president and CIO of McDermott International.
What Are Employees Actually Worried About?
Despite cautious optimism about AI's potential, employee concerns are rising in critical areas. More than half of professionals surveyed said they are either hopeful or excited about the future of generative AI in their industry, recognizing its potential to enhance efficiency, automate routine tasks, and free up time for higher-value work. However, hesitation and concern are mounting.
The most pressing concerns include accuracy, job displacement, and the unknown implications of autonomous AI systems. Notably, concerns about job displacement have doubled over the past year, a trend that demands organizational attention and transparent communication about workforce strategy. When employees lack clear guidance about how AI will impact their roles, fear fills the vacuum.
This fear echoes broader patterns in enterprise AI adoption. Across industries, some employees are reminded of the outsourcing wave that crested a decade or more ago. No one wants to be in a position of having to train their replacement, especially if it's an AI bot.
"When people are distrustful and fearful they will lose their job, it's not good. Really good grassroots adoption will take place where there is a culture of trust. If humans trust each other, they can go through testing and get results faster," noted Melissa Swift, founder and CEO of Anthrome Insight, a human capital management advisory firm.
The research reveals a workforce that is hopeful but hungry for direction. Professional services organizations that implement clear guidance, meaningful ROI metrics, and structured learning are more likely to unlock the strategic value that AI promises while building the trust and competence needed for their organizations and employees to thrive in an automated future.