How Federal Health Agencies Are Building AI Governance From the Inside Out
Federal health agencies are not waiting for national AI rules; they are building their own governance frameworks to manage artificial intelligence deployments across departments like the FDA and NIH. The Department of Health and Human Services (HHS) has established an AI governance board and appointed an acting Chief Artificial Intelligence Officer to oversee how AI is adopted across its sprawling mission, from drug approval to medical research.
Why Is HHS Building Its Own AI Governance Structure?
HHS faces a unique challenge: it oversees agencies with fundamentally different missions and risk profiles. The Food and Drug Administration (FDA) regulates medical devices and pharmaceuticals, while the National Institutes of Health (NIH) funds and conducts cutting-edge research. Both need AI tools, but their governance requirements differ significantly. Rather than waiting for a one-size-fits-all federal AI policy, HHS is implementing the Office of Management and Budget's memo M-25-21, which requires agencies to manage, monitor, and, when necessary, halt high-impact AI projects.
This internal governance approach reflects a practical reality: deploying AI across agencies with such diverse missions demands coordination at the leadership level. Theodore Thompson, an attorney at Stinson LLP who examined HHS's AI strategy, noted that standardized compliance practices and strong leadership coordination are essential for success across such varied organizational landscapes.
What Does HHS's AI Governance Framework Actually Include?
The HHS governance structure centers on several key components designed to balance rapid innovation with risk management:
- AI Governance Board: A coordinating body that oversees AI adoption decisions across HHS agencies and ensures alignment with departmental priorities and federal guidance.
- Acting Chief AI Officer Role: Executive-level leadership responsible for setting AI strategy, managing high-impact projects, and ensuring compliance with federal requirements like OMB memo M-25-21.
- Risk Management Requirements: Standardized processes for identifying, assessing, and mitigating risks in high-impact AI deployments before they scale across agencies.
- Data Handling Standards: Protocols ensuring proper management of sensitive health data used to train or operate AI systems, particularly critical given FDA and NIH's work with protected health information.
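The risk-management component above can be made concrete with a small sketch. Everything here is hypothetical, not HHS's actual schema: the field names, risk tiers, and escalation logic are invented solely to illustrate the kind of standardized record an agency-wide AI use-case inventory might track under a framework like this.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    HIGH_IMPACT = "high-impact"  # the tier that triggers extra oversight under M-25-21-style rules

@dataclass
class AIUseCase:
    """Hypothetical record for an agency-wide AI use-case inventory."""
    name: str
    agency: str                            # e.g., "FDA", "NIH"
    handles_phi: bool                      # does the system touch protected health information?
    risk_tier: RiskTier
    mitigations: list[str] = field(default_factory=list)

    def requires_board_review(self) -> bool:
        # Illustrative escalation rule: high-impact systems, or any system
        # touching PHI, go to the governance board before deployment.
        return self.risk_tier is RiskTier.HIGH_IMPACT or self.handles_phi

use_case = AIUseCase(
    name="adverse-event triage assistant",
    agency="FDA",
    handles_phi=True,
    risk_tier=RiskTier.HIGH_IMPACT,
    mitigations=["human review of all outputs", "quarterly bias audit"],
)
print(use_case.requires_board_review())  # True
```

The point of a structure like this is not the code itself but the standardization: when every agency records the same fields, a central board can compare and prioritize reviews across FDA, NIH, and the rest of the department.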
The framework is not purely internal. HHS is also signaling to federal vendors and AI providers what it expects from technology partners. Vendors seeking to work with HHS need to demonstrate transparency in how their AI systems work, ensure their solutions can integrate across different agency systems, show robust risk management practices, and align with the department's broader "OneHHS" strategy that treats the department as a unified entity rather than isolated silos.
How Can Vendors Align With HHS's AI Strategy?
For companies selling AI solutions to federal health agencies, understanding HHS's governance priorities is becoming essential for winning contracts. Vendors who want to compete for HHS business should focus on several practical areas:
- Transparency and Explainability: Build AI systems that can explain their decisions in ways that healthcare professionals and regulators can understand and audit, rather than black-box systems that produce results without reasoning.
- Interoperability: Design solutions that work across different HHS agencies and integrate with existing federal health IT systems, avoiding vendor lock-in and enabling data sharing where appropriate.
- Risk Management Capabilities: Demonstrate how your AI system identifies potential harms, monitors for bias or errors, and includes safeguards to prevent misuse or unintended consequences.
- American Innovation Focus: Emphasize domestic development, data security practices, and alignment with U.S. regulatory requirements rather than solutions designed primarily for other markets.
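The transparency item above can be illustrated with a minimal sketch: a scoring model that reports a per-feature contribution breakdown alongside every decision, so a reviewer or auditor can see why a result was produced rather than receiving a bare score. The weights, feature names, and threshold are all invented for illustration; real systems would use audited, validated models.

```python
import math

# Hypothetical weights for an interpretable risk-scoring model (invented values).
WEIGHTS = {"abnormal_lab_result": 1.8, "patient_age_over_65": 0.6, "prior_adverse_event": 1.2}
BIAS = -2.0

def score_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return a probability plus the per-feature contributions that produced it."""
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions

prob, why = score_with_explanation(
    {"abnormal_lab_result": 1.0, "patient_age_over_65": 1.0, "prior_adverse_event": 0.0}
)
print(f"flag={prob > 0.5} prob={prob:.2f}")
# Log the reasoning with the decision so it can be audited later.
for name, value in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```

Contrast this with a black-box system that emits only `flag=True`: the contribution log is what lets a healthcare professional or regulator trace, and if necessary challenge, the decision.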
"Vendors who align their offerings with these strategic priorities, demonstrating transparency, interoperability, robust risk management, and American innovation, will be best positioned to serve the federal health mission," noted Theodore Thompson, Of Counsel at Stinson LLP.
This vendor-facing guidance is significant because it shows HHS is not just managing AI internally; it is actively shaping the market for AI solutions in federal health by signaling what it will and will not accept. Companies that build AI tools without these governance features in mind may find themselves unable to compete for federal contracts, even if their technology works well in other sectors.
What Does This Mean for the Broader AI Governance Debate?
HHS's approach offers a practical case study in how large federal agencies can implement AI governance without waiting for comprehensive national legislation. Rather than treating AI as a future problem, HHS is treating it as an operational reality that requires immediate management. The framework emphasizes executive-level coordination, standardized compliance practices, and clear communication with external vendors about expectations.
This model suggests that federal AI governance may emerge not from a single sweeping law, but from agencies building their own frameworks that eventually converge around common standards. As more federal departments adopt similar approaches, those standards could become de facto national requirements simply because vendors and agencies align around them. The HHS example shows that governance can happen at the implementation level, even while policymakers continue debating broader AI regulation.
The stakes are high. HHS oversees agencies that make decisions affecting millions of Americans' health outcomes. AI tools that help FDA reviewers evaluate drug safety or help NIH researchers identify promising treatment approaches could save lives. But AI systems that embed bias, fail silently, or are deployed without proper oversight could cause real harm. By establishing governance structures now, HHS is trying to capture the benefits of AI while managing the risks, setting a template that other federal agencies may follow as they navigate their own AI adoption challenges.