In January 2025, the Trump administration issued a new executive order that fundamentally reshaped how federal agencies must govern artificial intelligence, shifting the focus from strict risk management to faster innovation. The change has immediate implications for banks, investment firms, and brokers that use AI tools, because financial regulators are now translating these federal policy shifts into new expectations for how the private sector should handle AI systems.

For the past two years, federal agencies operated under a framework that required rigorous testing, transparency, and bias mitigation for high-impact AI systems. That framework, built on recommendations from the National Institute of Standards and Technology (NIST) and the White House Office of Science and Technology Policy, emphasized explainability and fairness. But Executive Order 14179, signed in January 2025, changed the rules.

What Changed in Federal AI Governance?

The new executive order reduces the regulatory burden on federal agencies in several ways. It narrows the definition of "high-impact AI," meaning fewer AI systems now require enhanced oversight. It allows agencies to develop their own risk management frameworks instead of following NIST's standardized approach. Most controversially, it explicitly prohibits agencies from filtering AI output to adjust for dataset bias reflecting historical inequities.

The order also removes requirements for vendors to submit testing results demonstrating that AI systems are secure and robust, and it eliminates environmental considerations from procurement decisions. These changes prioritize speed and cost-effectiveness over the cautious, test-first approach that dominated federal AI policy under the Biden administration.

How Will This Affect Financial Services Firms?

Financial regulators, including the Securities and Exchange Commission (SEC), are now adapting federal AI policy to their own regulatory missions. In 2025, the SEC appointed a Chief AI Officer (CAIO) and an Artificial Intelligence Task Force, tasked with accelerating AI integration while maintaining appropriate governance.

But what "appropriate governance" means is shifting. The SEC's internal AI effort now includes identifying and removing barriers to AI innovation alongside imposing strong governance to ensure AI is used responsibly. This dual mandate reflects the tension at the heart of the new federal policy: promote AI adoption while still protecting the public.

For firms regulated by the SEC, this creates a complex landscape. On one hand, regulators are moving faster to embrace AI themselves, which may signal that AI tools are acceptable in financial services. On the other hand, SEC examiners are closely scrutinizing how firms use AI, issuing deficiency letters and pursuing enforcement action against firms that fail to implement AI tools in a manner consistent with their regulatory obligations.

Steps to Prepare Your Firm for Evolving AI Expectations

- Accuracy and Substantiation: Ensure that all marketing claims, client communications, and model-driven service descriptions are fully supported by testing and consistently monitored. SEC examiners are now closely examining whether firms can back up their AI claims with evidence.
- Model Documentation and Testing: Document and test your AI systems for data quality, bias, model explainability, drift (when model performance degrades over time), and hallucinations (when AI systems generate false information). These practices are becoming table stakes in financial services; a minimal drift-check sketch appears after this list.
- Vendor Governance: Validate third-party AI capabilities, training data provenance, security controls, and change management processes. If you rely on external AI vendors, you must understand how their systems work and what safeguards they have in place.
- Recordkeeping and Auditability: Maintain sufficient documentation for examinations, particularly as regulators increase scrutiny of automated tools. Be prepared to explain to examiners exactly how your AI systems make decisions and what controls you have in place; a logging sketch also follows below.
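To make the drift-testing expectation concrete, here is a minimal sketch of one widely used technique, the Population Stability Index (PSI), which compares the distribution of model inputs or scores in production against the distribution seen at training time. The function name, thresholds, and sample data are illustrative assumptions, not a prescribed regulatory methodology.

```python
import numpy as np

def population_stability_index(reference, current, n_bins=10):
    """Compare a production distribution against the training-time
    distribution. PSI > 0.25 is a common rule-of-thumb threshold
    for material drift; 0.1-0.25 often warrants a closer look."""
    # Bin edges come from the reference (training-time) distribution.
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values

    ref_counts = np.histogram(reference, bins=edges)[0]
    cur_counts = np.histogram(current, bins=edges)[0]

    # Convert counts to proportions; a small floor avoids log(0).
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    cur_pct = np.clip(cur_counts / cur_counts.sum(), 1e-6, None)

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Hypothetical usage: scores captured at training time vs. last month.
rng = np.random.default_rng(0)
training_scores = rng.normal(0.50, 0.10, 10_000)
recent_scores = rng.normal(0.55, 0.12, 10_000)  # distribution has shifted
psi = population_stability_index(training_scores, recent_scores)
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.25 else "-> stable")
```

Run on a schedule against each monitored feature and model score, a check like this produces the kind of dated, quantitative evidence that documentation and testing programs are built on.

And here is one way the recordkeeping point is often operationalized: an append-only decision log that lets an examiner reconstruct what a model saw and produced. Everything here, including the file name, schema, and the suitability example, is a hypothetical sketch rather than a regulatory requirement.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_decision_log.jsonl"  # append-only file; name is illustrative

def log_decision(model_name, model_version, features, output, rationale):
    """Append one immutable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        # Hash the raw inputs so the record is tamper-evident without
        # storing sensitive client data in the log itself.
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "rationale": rationale,  # e.g., top feature attributions
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for an automated suitability recommendation:
log_decision(
    model_name="suitability_model",
    model_version="2025.03.1",
    features={"risk_tolerance": "moderate", "horizon_years": 12},
    output="recommend_balanced_portfolio",
    rationale="risk_tolerance and horizon_years were the dominant factors",
)
```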
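The design choice worth noting is the input hash: it ties each logged decision to the exact inputs the model received, so a firm can demonstrate consistency between what it told clients and what its systems actually did, without retaining sensitive data in the audit trail itself.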
These are not theoretical expectations. SEC examiners are actively examining AI controls at registered firms and taking enforcement action against those that fall short.

Why the Shift Matters for Accountability and Transparency

The new federal policy represents a philosophical shift away from the "responsible AI" framework that emphasized transparency, explainability, and fairness. By removing requirements for bias testing and allowing agencies to develop their own risk management approaches, the order prioritizes innovation speed over the kind of rigorous oversight that might catch discriminatory outcomes before they harm consumers.

This creates a challenge for financial firms. The SEC still expects firms to manage AI risks responsibly, but the federal government is no longer mandating the specific tools and frameworks that firms might use to do so. Firms must now decide whether to follow the old NIST standards, adopt their own approaches, or wait to see what regulators expect.

The stakes are high. AI systems in financial services can affect lending decisions, investment recommendations, and fraud detection. If these systems are biased or opaque, they can harm consumers and expose firms to regulatory action. The challenge for financial services firms is to balance the pressure to innovate quickly against the need to maintain accountability and transparency in how their AI systems operate.

In the coming months, the SEC and other financial regulators will likely clarify what they expect from firms in this new environment. Firms that have already invested in robust AI governance, testing, and documentation will be better positioned to adapt to whatever new standards emerge.