Most organizations deploying artificial intelligence lack a structured approach to vetting vendors and managing AI-related risks, leaving them exposed to data breaches, biased outputs, and compliance violations. A comprehensive governance framework starting with vendor due diligence can significantly reduce these dangers, according to guidance shared at Ward and Smith's annual In-House Counsel seminar.

What Should Your AI Governance Framework Actually Include?

The foundation of effective AI governance begins with understanding what AI systems your organization already uses. Many companies operate "shadow IT" instances where departments deploy AI tools without central oversight. The first critical step is conducting a comprehensive inventory of existing AI systems and use cases across your organization.

Once you know what you're working with, the next phase involves classifying AI tools by risk level. Not all AI applications carry equal weight. Resume screening and loan application tools, for example, directly impact legal and material rights, making them high-risk. Marketing copy generation or internal analytics reports fall into moderate-risk categories. Low-risk applications might include brainstorming sessions or summarizing public articles.

"High-risk AI tools require strict oversight and formal impact assessments. They may even be prohibited in certain cases," explained Mayukh Sircar, a cybersecurity, data privacy, and technology attorney at Ward and Smith, P.A.

The principle of proportionate governance matters here. If two AI tools accomplish the same business goal but one requires significantly less data, the lower-data option is typically the better choice from a risk management perspective.

How to Build a Vendor Evaluation and Selection Process

- Security and Data Handling: Evaluate whether vendors offer zero-data-retention policies, meaning data is explicitly removed after use.
Ask about encryption standards, private instance options to prevent third-party access, and incident response plans. If data never enters the vendor's system, it cannot be stolen in a breach.

- Accuracy and Bias Mitigation: Request information about known error rates, hallucination rates, and documented biases. Ask vendors what steps they have taken to test for and mitigate bias, and request copies of fairness audits. Failing to verify these metrics could expose your organization to negligence claims.

- Compliance and Transparency: Confirm that the vendor can demonstrate compliance with applicable regulations in your jurisdiction. Ask the vendor to explain how the AI reaches its conclusions and how the tool functions. Transparency is essential for understanding potential failure modes.

- Data Provenance and Training: Understand where the vendor's training data originated. Was it lawfully licensed? Does it include proprietary data? These questions directly affect your legal exposure and the vendor's intellectual property claims.

- Pilot Testing Requirements: Mandate small-scale pilot testing using actual organizational data before enterprise deployment. Running a pilot helps identify vulnerabilities and integration issues. Vendors unwilling to provide pilot access represent a significant red flag.

- Audit Rights and Sub-processor Oversight: Ensure your contracts grant audit rights to verify vendor compliance with security and privacy requirements. Confirm that vendors have the right to audit their own service providers and sub-processors, and request summaries of those audits.

Creating a standardized, non-negotiable AI contract addendum addresses many of these concerns systematically. Rather than negotiating governance terms individually with each vendor, a template addendum ensures consistent protection across all AI tool deployments.

"The key here is applying proportionate governance. You should consider how much data a particular tool might need.
If there's a different tool that accomplishes the same goal with less data, that is likely a better way to go," noted Sircar.

What Should Your AI Contract Addendum Actually Say?

A robust AI contract addendum protects your organization by clearly defining responsibilities and allocating risk. The document should include several non-negotiable provisions.

First, include a data use restriction stating that the vendor and all sub-processors are prohibited from using any customer data to train, develop, or improve AI models without express written consent. This prevents your proprietary information from becoming training material for competitors or for the vendor's own product improvements.

Second, clearly define intellectual property ownership. The contract must state that your organization owns all prompts submitted to the tool and all outputs generated, to the fullest extent permitted by law. This prevents disputes over who can use or commercialize AI-generated content.

Third, include broad indemnification requiring the vendor to stand behind its product. The vendor should agree to indemnify against intellectual property claims, data breaches, documented biases, and negligence in correcting hallucinations. This shifts financial responsibility to the party best positioned to prevent problems.

Fourth, add a compliance warranty stating that the system complies with applicable laws, including privacy regulations. Vendors may resist this provision, claiming they cannot delete data that enters the AI system. Push back on this objection; if a vendor cannot guarantee compliance, the tool may not be suitable for your use case.

Finally, negotiate meaningful security and audit rights covering vulnerability management, breach notification timelines, and independent attestation reports. These provisions give you recourse if the vendor fails to maintain promised security standards.
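For teams that track addendum negotiations across many vendors, the non-negotiable provisions above can be encoded as a simple review checklist. This is an illustrative sketch only: the provision labels, the `AddendumReview` class, and its methods are hypothetical names, not part of any real procurement tool.

```python
# Hypothetical sketch: encode the addendum's non-negotiable provisions as a
# checklist so reviews stay consistent across vendors. All names are
# illustrative assumptions, not a standard or a real library.
from dataclasses import dataclass, field

REQUIRED_PROVISIONS = [
    "data_use_restriction",   # no training on customer data without written consent
    "ip_ownership",           # customer owns all prompts and outputs
    "broad_indemnification",  # IP claims, breaches, biases, hallucination negligence
    "compliance_warranty",    # system complies with applicable laws, incl. privacy
    "security_audit_rights",  # vulnerability management, breach notice, attestations
]

@dataclass
class AddendumReview:
    vendor: str
    provisions_present: set[str] = field(default_factory=set)

    def missing(self) -> list[str]:
        """Provisions still absent from the negotiated addendum."""
        return [p for p in REQUIRED_PROVISIONS if p not in self.provisions_present]

    def approved(self) -> bool:
        """Every provision is non-negotiable, so all must be present."""
        return not self.missing()

review = AddendumReview("ExampleVendor", {"data_use_restriction", "ip_ownership"})
print(review.missing())   # the provisions still to negotiate
print(review.approved())  # False until the addendum is complete
```

Because the list is treated as all-or-nothing, `approved()` only returns True once every provision is in place, mirroring the "non-negotiable" framing above.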
"Do not accept vendor papers at face value, and make sure that AI risks are explicitly and contractually allocated," Sircar emphasized.

Why Most Organizations Are Getting This Wrong

Many legal departments treat AI procurement like traditional software purchases, relying on vendor marketing materials and standard software contracts. This approach misses critical AI-specific risks.

The governance gap exists because AI tools introduce novel failure modes: hallucinations (confident false statements), biases in training data, and unpredictable behavior in novel situations. Organizations that skip pilot testing or fail to conduct fairness audits expose themselves to operational failures and potential liability. A resume screening tool with undisclosed bias could trigger discrimination claims. A loan application AI with high error rates could damage customer relationships and regulatory standing.

The solution requires legal teams to move beyond features and assess tools against a comprehensive legal and operational checklist. As Sircar noted, the question becomes whether the business benefit justifies the governance overhead. For high-risk applications, the answer is almost always yes.

By implementing a structured AI governance playbook, organizations can deploy AI tools confidently while protecting themselves from preventable risks. The investment in vendor due diligence, contract negotiation, and pilot testing pays dividends through reduced breach risk, compliance confidence, and operational reliability.
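The risk-tiering step at the heart of the playbook can likewise be made concrete. The tier assignments below mirror the examples given earlier (resume screening and loan applications as high-risk, marketing copy and internal analytics as moderate, brainstorming and article summarization as low); the `classify` function and its structure are an illustrative sketch, not a prescribed taxonomy.

```python
# Illustrative sketch of the risk-tiering step: map each inventoried AI use
# case to a governance tier. Tier contents follow the article's examples;
# the function and labels are assumptions for demonstration.
RISK_TIERS = {
    "high": {"resume_screening", "loan_application_review"},          # legal/material rights
    "moderate": {"marketing_copy_generation", "internal_analytics"},
    "low": {"brainstorming", "public_article_summarization"},
}

def classify(use_case: str) -> str:
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "unclassified"  # unknown tools go back through the inventory step

assert classify("resume_screening") == "high"
assert classify("brainstorming") == "low"
```

An "unclassified" result is deliberately a signal, not a default to low risk: any tool that is not in the inventory is exactly the shadow-IT case the framework is meant to surface.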