Why a Business Professor's 26-Year AI Journey Is Reshaping How Companies Build Trustworthy Systems
As artificial intelligence becomes more powerful and widespread, one business professor's quarter-century focus on building AI systems that serve people, not just machines, is proving prescient. Xiao Fang, a professor of management information systems at the University of Delaware's Alfred Lerner College of Business and Economics, has spent 26 years studying how AI can be designed to support better decisions while minimizing potential harms. His research is now directly informing how organizations approach transparency, accountability, and fairness as new global AI regulations take shape in 2026.
Fang's work stands apart because it focuses on what the National Science Foundation calls "use-inspired AI," which is motivated by specific real-world problems rather than foundational research developed independent of application. His objective is straightforward: design AI systems that solve meaningful business and societal challenges while carefully considering potential risks.
What Makes Fang's Approach to Responsible AI Different?
Unlike many AI researchers who began their careers when the technology was already mainstream, Fang started studying artificial intelligence around 2000 while pursuing his doctorate in business, when AI was rarely studied in business schools. "As a business Ph.D. student, I took graduate-level computer science courses, including artificial intelligence," Fang explained. "That exposure, along with my work in data mining, really sparked my interest." Despite early challenges getting AI-focused work published in business journals, he remained committed to the field.
Over the past 26 years, Fang has watched AI evolve from symbolic systems built on explicit rules and logic to today's data-driven models powered by machine learning and neural networks. Yet his core focus has remained steady: working on AI that is driven by real applications and real needs. When generative AI tools like ChatGPT emerged in late 2022, Fang immediately recognized both opportunity and risk. "We quickly realized two major issues: the ease of generating misinformation and the potential for social, gender and racial bias," he stated.
Fang and his collaborators conducted research demonstrating that AI-generated content can reflect and amplify existing biases embedded in training data. Such findings carry important implications for organizations relying on generative AI for communication, hiring, marketing, or decision support. As policymakers work to ensure fairness, transparency, and accountability in AI systems, research identifying bias and mitigation strategies becomes increasingly relevant.
How to Build AI Systems That Explain Their Decisions
For Fang, maximizing AI's benefits requires equal attention to minimizing its risks. Responsible design, he argues, must be embedded from the beginning rather than retrofitted after problems emerge. His research spans several practical applications that demonstrate this principle:
- Interpretable Medical Diagnosis: In a study published in Management Science, Fang and his co-authors developed an interpretable AI model to assist in diagnosing depression associated with chronic disease. Rather than functioning as a black box, the model provides reasoning that clinicians can evaluate and question by identifying which learned prototypes most closely match a patient's symptoms.
- Automated Industry Classification: Fang's AI-based system automatically groups firms by analyzing their annual report filings, identifying similarities in business activities and language patterns. This approach is more adaptive, scalable, and objective than traditional manual classification systems used by governments and financial analysts.
- Bias Detection in AI-Generated Content: His research identifies how AI systems can reflect and amplify existing biases from training data, providing organizations with practical strategies to mitigate these risks before deploying generative AI tools in high-stakes applications.
For mission-critical tasks like medical diagnosis, Fang emphasized that "it's not enough for an AI system to be accurate. It also needs to explain why it made a particular prediction." This principle of interpretability aligns closely with emerging regulatory priorities emphasizing explainability and accountability in AI systems.
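The Management Science study's actual model is not reproduced here, but the prototype-matching idea behind interpretable prediction can be sketched in a few lines: instead of producing a bare label, the system explains its output by pointing to the learned prototype (a representative case) most similar to the input. All feature values, prototypes, and labels below are illustrative, not drawn from Fang's work.

```python
import numpy as np

# Illustrative learned prototypes: each row is a "representative case"
# in a hypothetical symptom-feature space (values invented for this sketch).
prototypes = np.array([
    [0.9, 0.8, 0.1],   # prototype 0: pattern associated with depression
    [0.1, 0.2, 0.9],   # prototype 1: pattern associated with no depression
])
prototype_labels = ["depression", "no depression"]

def predict_with_explanation(patient_features):
    """Classify by nearest prototype and report which prototype matched,
    so a clinician can inspect the reasoning instead of trusting a black box."""
    distances = np.linalg.norm(prototypes - patient_features, axis=1)
    best = int(np.argmin(distances))
    return prototype_labels[best], best, float(distances[best])

label, proto_idx, dist = predict_with_explanation(np.array([0.85, 0.7, 0.2]))
print(f"Prediction: {label} (closest to prototype {proto_idx}, distance {dist:.2f})")
```

The explanation is the matched prototype itself: a clinician can compare the patient's features against that representative case and question the match, which is exactly the kind of evaluable reasoning a black-box score cannot offer.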
Can Regulation Actually Drive Better AI Innovation?
As new AI regulations take shape in the United States and abroad, Fang encourages organizations to reconsider how they frame compliance. Many people see regulation as a constraint, but Fang views it differently. "I see it as an objective," he stated. He compares business strategy to an optimization problem. Traditionally, companies seek to maximize profit or minimize cost. Fang argues that social objectives, such as fairness, accountability, and transparency, should be incorporated directly into that optimization framework.
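Fang's framing can be illustrated with a toy optimization: rather than maximizing profit alone, the objective adds a weighted social term, so the optimum shifts toward strategies that score well on both. The candidate strategies, scores, and weight below are hypothetical, chosen only to show the shape of the idea.

```python
# Toy illustration of folding a social objective into business optimization.
# Candidate strategies with hypothetical profit and fairness scores (0-1).
candidates = {
    "aggressive_targeting": {"profit": 1.00, "fairness": 0.40},
    "balanced_rollout":     {"profit": 0.85, "fairness": 0.90},
    "minimal_deployment":   {"profit": 0.50, "fairness": 0.95},
}

def objective(metrics, fairness_weight=0.5):
    """Combined objective: profit plus weighted fairness, not profit alone."""
    return metrics["profit"] + fairness_weight * metrics["fairness"]

# Profit-only optimization picks the aggressive strategy...
profit_only = max(candidates, key=lambda k: candidates[k]["profit"])
# ...but adding the social term shifts the optimum.
combined = max(candidates, key=lambda k: objective(candidates[k]))
print(profit_only)  # aggressive_targeting
print(combined)     # balanced_rollout
```

The point of the sketch is that fairness is not a side constraint bolted on afterward; it sits inside the objective function itself, which is how Fang suggests businesses treat regulatory goals.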
"In the long run, aligning economic and social objectives will benefit businesses. Responsible AI builds trust, and trust is essential for sustainable success," said Xiao Fang, Professor of Management Information Systems at the University of Delaware.
Rather than slowing innovation, well-designed guardrails can encourage more thoughtful and resilient AI deployment. Organizations that proactively embed responsible design principles may be better positioned to adapt as regulatory expectations evolve. Fang's research contributes to the University of Delaware's Top 20 ranking on the Association for Information Systems list of high-quality journals from 2023 to 2025.
Beyond his research contributions, Fang is committed to mentoring doctoral students and preparing future scholars. "We need to train students so they can become our peers," he noted. "I really enjoy watching them grow into independent researchers." Many of his former students have gone on to academic careers of their own, extending the impact of his approach to responsible, application-driven AI research.
As artificial intelligence enters a more regulated and consequential era, Fang's decades-long focus on use-inspired, responsible AI offers a steady and informed voice. His work underscores a central principle: the most innovative AI systems are those designed with real-world consequences in mind from day one, not those that treat responsibility as an afterthought.