The real debate over AI governance isn't about whether to regulate—it's about where to draw the line. At Wharton's inaugural Accountable AI Research Conference in February, academics, policymakers, and industry leaders gathered to tackle a fundamental question: as artificial intelligence reshapes every sector of the economy, who is responsible for making sure it's done right, and how should they do it?

Why Might Regulating the Technology Itself Be the Wrong Approach?

One of the sharpest insights from the conference came from a deceptively simple analogy. Neil Chilson, head of AI policy at the Abundance Institute, explained the core tension this way: "We don't require hammer manufacturers to make it impossible to misuse a hammer to bludgeon somebody. If you did that, you would also remove a lot of the utility of the hammer for driving nails." The point? Regulating AI at the model level—trying to make the technology itself incapable of causing harm—might strip away its usefulness for legitimate purposes.

Alex Engler, executive director of the Penn Center on Media, Technology, and Democracy, agreed that application-specific regulation produces higher-quality policy. Each domain, from real estate valuation algorithms to college admissions to hiring systems, presents fundamentally different risks and requires tailored solutions. This approach acknowledges that AI isn't a monolith—it's a general-purpose technology woven into nearly every industry, and one-size-fits-all rules often miss the mark.

How Did We Get Here? The ChatGPT Shock and the Regulatory Whiplash

The explosive public debut of ChatGPT in late 2022 triggered a seismic shift in policymaking. Engler noted that the technology itself wasn't a sudden breakthrough, but its visibility sparked radical changes in how governments approached AI regulation. Most visibly, the European Union's AI Act hastily added provisions governing large foundation models in response to the public attention.

In the United States, where federal AI legislation remains elusive, state legislatures have filled the vacuum. The data tells a striking story: AI-related bills surged from fewer than 200 in 2023 to well over 1,200 in 2025. Both panelists acknowledged this patchwork of state regulation as imperfect but, as Engler put it, better than "functionally no governance."

What's Actually Happening Inside Companies Right Now?

The second panel at the conference shifted focus to how organizations are implementing responsible AI in practice. Sarah Bird, chief product officer of responsible AI at Microsoft; Heather Domin, VP and head of responsible AI and governance at HCLTech; and Radha Iyengar Plumb, an AI leader at IBM, each described building governance functions that feel more like startups than established bureaucracies.

All three panelists agreed the field remains in its early stages. While leading companies have matured beyond abstract principles into concrete policies and frameworks, adoption across the broader business landscape is "very early." Bird emphasized that generative AI forced a leap in scale: Microsoft went from a handful of teams shipping AI products to thousands doing so annually, which demanded entirely new patterns for testing and oversight. "We're increasing maturity in frameworks," Bird said, "but we're still really, really early in the practice."
Steps to Building Responsible AI Governance in Your Organization

- Move Beyond Principles: Translate abstract ethical commitments into concrete policies and frameworks that teams can actually follow and measure against.
- Scale Your Oversight Processes: As AI product development accelerates, ensure testing and release review processes can keep pace with the volume of new systems being built.
- Listen to Customer Demand: Enterprise clients are increasingly requesting detailed briefings on responsible AI deployment before projects begin, signaling that market pressure—not regulation alone—is driving investment in governance.

What's Driving Companies to Invest in AI Responsibility?

Interestingly, customer demand, not regulation alone, is the primary driver of responsible AI investment. Bird described how enterprise clients' attitudes transformed after ChatGPT's launch—boards that previously treated AI ethics as a distant concern were suddenly requesting two-hour briefings on responsible deployment before any project began. Domin reported a similar experience, noting that client requirements have remained strong regardless of shifts in the U.S. regulatory environment. This suggests that market forces may be more effective than legislation in pushing companies toward accountability.

The rise of AI agents—autonomous systems that can take actions on behalf of users—emerged as a shared concern across both panels. Bird highlighted a new governance challenge: agents can be built in minutes by anyone in an organization, not just by software engineering teams. "My business program manager is making agents and throwing them out there," she noted, underscoring how traditional release review processes may not keep pace with the speed of innovation.

What Comes Next for AI Governance?

The conference brought together 24 researchers selected from more than 160 paper submissions to present work spanning AI regulation, ethics, governance, and economic impact. The breadth of the research—covering proposed regulatory approaches, AI governance in practice, privacy, deepfakes, transparency, intellectual property, and risk management—reflects a growing recognition that solving AI accountability requires more than isolated expertise. It demands sustained collaboration across disciplines.

Kevin Werbach, professor of legal studies and business ethics at Wharton and faculty lead for the conference, reiterated the gathering's broader mission: building a community where academic research and real-world practice inform each other. As AI's influence accelerates, the conversations started at this inaugural event—about where governance should sit, how to measure risk, and who bears accountability—are ones the field will be returning to for years to come.