Michigan's Department of Insurance and Financial Services has issued a new bulletin requiring all regulated financial service providers to develop and maintain written AI governance programs, marking a significant shift toward state-level accountability for AI systems that make or support consumer-facing decisions. The 10-page guidance, released in March 2026, doesn't ban AI use but instead establishes clear expectations for how financial institutions must test, monitor, and document their AI systems to prevent discrimination, errors, and bias.

What Does Michigan's New AI Bulletin Actually Require?

The bulletin, titled "Use of Artificial Intelligence Systems by Financial Service Providers," reminds financial institutions that using AI does not exempt them from existing anti-discrimination laws. Michigan law already prohibits discrimination based on protected classes including religion, race, color, national origin, age, sex, sexual orientation, gender identity or expression, height, weight, familial status, and marital status. The new guidance makes clear that deploying AI doesn't change these obligations.

The core requirement is straightforward: every financial service provider using AI to make or support decisions must create a written AI Systems Program, or AIS Program. Even providers that don't formally use AI are encouraged, at a minimum, to establish a written employee AI policy specifying acceptable use.

How to Build an AI Governance Program That Meets Regulatory Standards

- Establish Clear Governance Structure: Senior management must oversee the development, implementation, and ongoing monitoring of the AIS Program, with accountability to the board or a designated committee. Transparent policies and accountability structures must cover all stages of the AI system lifecycle, from design and development through use and eventual retirement.
- Implement Robust Risk Management Controls: The program must include oversight of AI system adoption and development, strong data practices and accountability procedures, security measures, validation processes, data and record retention practices, and protection of non-public information. Controls should be scaled to match the degree of potential consumer impact.
- Conduct Rigorous Testing and Monitoring: Financial providers must test AI systems to assess the likelihood of errors, hallucinations, and bias before deployment. Ongoing monitoring and audit activities must verify compliance with the bulletin, including documentation of validation and testing results.
- Manage Third-Party AI Systems Carefully: When acquiring AI systems or predictive models from vendors, providers must perform due diligence, establish contractual safeguards such as audit rights and notification protocols, and ensure clear responsibilities for regulatory compliance. The financial institution remains ultimately accountable for risks and outcomes regardless of third-party involvement.

The bulletin also references the National Institute of Standards and Technology's (NIST) Artificial Intelligence Risk Management Framework as a standard that providers may integrate into their programs where appropriate.

Why This Matters: The Regulator's Enforcement Approach

Michigan's Department of Insurance and Financial Services has signaled that examiners will actively ask about AI systems during routine examinations. Regulators may request copies of written AIS Programs, details of the due diligence performed before acquiring AI systems, evidence of monitoring and audit activities, and information about data practices, accountability procedures, data security, and testing protocols.

This represents a shift from theoretical guidance to practical enforcement. Rather than waiting for problems to emerge, Michigan regulators are building AI accountability into their standard examination process.
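To make the pre-deployment bias testing the bulletin calls for concrete, here is a minimal sketch of one common fair-lending check: comparing approval rates across applicant groups against the "four-fifths rule" threshold. The 0.8 threshold, the function names, and the synthetic decision data are illustrative assumptions, not requirements drawn from the bulletin.

```python
# Minimal sketch of a pre-deployment disparate-impact check using
# the "four-fifths rule" common in fair-lending analysis.
# Threshold and data below are illustrative only.

def approval_rate(decisions):
    """Share of applicants approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's; values below 0.8 typically warrant investigation."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model decisions for two applicant groups.
group_a = [True, True, False, True, True,
           False, True, True, True, False]   # 70% approved
group_b = [True, False, False, True, False,
           False, True, False, False, False]  # 30% approved

ratio = disparate_impact_ratio(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below four-fifths threshold: document and investigate.")
```

A real program would run such checks on representative validation data for every protected class named in Michigan law and retain the results as part of the AIS Program's documentation.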
Financial institutions that fail to develop adequate programs or cannot demonstrate compliance with anti-discrimination laws when using AI should expect regulatory scrutiny.

The bulletin also acknowledges the United States Department of the Treasury's December 2024 report on AI in financial services, which emphasized fair and ethical AI use, accountability, transparency, compliance, and security practices. By referencing federal guidance while establishing state-level requirements, Michigan is charting a middle ground between federal inaction and state overreach.

What This Means for Financial Institutions and Consumers

For banks, credit unions, and other financial service providers, the bulletin creates a clear compliance roadmap but also an administrative burden. Institutions must document their AI governance, demonstrate testing rigor, and maintain detailed records of how their AI systems perform. Smaller institutions may struggle with the documentation requirements, while larger ones likely already have compliance infrastructure that can be adapted.

For consumers, the requirement creates a paper trail. If an AI system denies a loan, flags an account as suspicious, or makes another consequential decision, regulators can now demand evidence that the institution tested for bias and monitored for discrimination. This doesn't guarantee fair outcomes, but it does create accountability mechanisms that didn't exist before.

The Michigan approach also signals how state-level AI regulation may evolve. Rather than banning AI or imposing one-size-fits-all rules, Michigan is requiring governance, documentation, and testing scaled to actual risk. This pragmatic approach acknowledges that AI can benefit consumers while establishing guardrails against the most obvious harms.