University of Michigan Researchers Are Reimagining How AI Helps People With Disabilities See and Navigate the World
Researchers at the University of Michigan are tackling one of computer vision's most important challenges: making visual AI tools genuinely useful for people with disabilities. The university will present 19 papers at CHI 2026, the world's leading conference on human-computer interaction, with a focus on accessibility innovations powered by generative AI and computer vision.
What Are Researchers Building to Help Blind Users Navigate Virtual Worlds?
One of the most promising projects emerging from U-M's research is RAVEN, a system that enables blind and low-vision users to navigate and modify 3D virtual environments using natural language queries. As virtual reality and 3D spaces become more common in workplaces, education, and entertainment, blind and low-vision people face significant barriers to spatial awareness and interaction. RAVEN addresses this by letting users ask questions and request modifications to 3D scenes in real time, adapting the environment to their needs on the fly.
The research team evaluated RAVEN with eight blind and low-vision participants and six Unity developers, generating insights into how conversational programming can support personalized accessibility. The findings highlight both the promise of natural language interaction, which users found intuitive and empowering, and the challenges of ensuring reliability and trust in AI-driven accessibility systems.
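RAVEN's implementation is not published in the article, but the core interaction it enables, answering a spatial question about a 3D scene in conversational terms, can be sketched in a few lines. Everything below (`SceneObject`, `describe_nearby`, the 3-meter radius) is an illustrative assumption, not RAVEN's actual API:

```python
import math
from dataclasses import dataclass


@dataclass
class SceneObject:
    """A hypothetical stand-in for an object in a 3D scene graph."""
    name: str
    x: float
    z: float


def describe_nearby(scene: list[SceneObject], user_x: float, user_z: float,
                    radius: float = 3.0) -> str:
    """Answer a query like 'what is near me?' with a spoken-style description
    of objects within `radius` meters, nearest first."""
    def dist(obj: SceneObject) -> float:
        return math.hypot(obj.x - user_x, obj.z - user_z)

    hits = sorted((obj for obj in scene if dist(obj) <= radius), key=dist)
    if not hits:
        return "No objects nearby."
    return "; ".join(f"{obj.name} at {dist(obj):.1f} meters" for obj in hits)


scene = [SceneObject("desk", 1.0, 1.0), SceneObject("door", 8.0, 0.0)]
print(describe_nearby(scene, 0.0, 0.0))  # the door is outside the radius
```

In the real system, a language model would presumably parse free-form queries and the Unity scene graph would supply the object data; this stub only shows the query-to-description step.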
How Are Researchers Extending Mobile AI Assistive Tools With New Features?
Another innovation, called A11yExtensions, takes a different approach by augmenting existing mobile AI assistive technology with add-on services that can be deployed immediately. The system works through in-situ interventions, meaning features are added directly to the apps people already use daily, rather than requiring them to switch to new tools. Through co-design sessions with two blind accessibility professionals, researchers implemented three exemplar extensions, including features for cross-checking AI results and camera aiming assistance.
The research revealed that A11yExtensions provide flexibility and customization opportunities, though they introduce additional onboarding and communication challenges. This work demonstrates the effectiveness of deploying new features via automation within the technologies people actually rely on.
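One of the exemplar extensions, cross-checking AI results, can be made concrete with a minimal sketch: run a second model on the same photo and warn the user when the two descriptions diverge. The function name, the string-similarity heuristic, and the threshold are all assumptions for illustration; the article does not specify how A11yExtensions compares model outputs:

```python
from difflib import SequenceMatcher


def cross_check(primary_description: str, secondary_description: str,
                threshold: float = 0.5) -> str:
    """Compare two AI-generated descriptions of the same photo and
    flag a caution when they disagree beyond a similarity threshold."""
    similarity = SequenceMatcher(
        None, primary_description.lower(), secondary_description.lower()
    ).ratio()
    if similarity >= threshold:
        return primary_description
    return (f"Caution: a second model disagrees. "
            f"Model A: {primary_description}. Model B: {secondary_description}.")


# Agreeing descriptions pass through; conflicting ones trigger a warning.
print(cross_check("a red coffee mug on a desk", "a red mug on a wooden desk"))
print(cross_check("a red coffee mug", "a small brown dog"))
```

A production extension would attach this check to the host app's existing description flow rather than printing to a console, but the disagreement signal is the essential piece.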
How Are Researchers Advancing Computer Vision for Real-World Accessibility?
- Audio-Based Lifelogging: EchoScriptor transforms raw in-home audio into natural-language descriptions of activities, achieving 94.15% activity recognition accuracy and 89.25% background recognition accuracy, enabling camera-free lifelogging for memory rehabilitation and personal informatics.
- Real-Time Visual Descriptions: TouchScribe augments non-visual hand-object interactions with automated live visual descriptions, helping blind and low-vision users access rich visual features of physical objects through AI-generated narration.
- Personalized Content Transformation: DIY-MOD, which received a Best Paper Honorable Mention, transforms sensitive content elements in real time based on individual user definitions of harm, preserving informational value while increasing user agency and safety.
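To make the audio-lifelogging idea concrete, here is a hedged sketch of the last step of such a pipeline: per-clip activity labels (produced upstream by an audio classifier, stubbed out here) rolled up into natural-language descriptions. The labels and templates are invented for illustration and are not EchoScriptor's:

```python
from collections import Counter

# Hypothetical mapping from classifier labels to spoken-style summaries.
TEMPLATES = {
    "chopping": "You spent time preparing food in the kitchen.",
    "water_running": "You used running water, likely washing up.",
    "speech": "You had a conversation.",
}


def summarize(clip_labels: list[str]) -> list[str]:
    """Turn per-clip audio labels into a deduplicated summary,
    ordered from most to least frequent activity."""
    counts = Counter(clip_labels)
    return [TEMPLATES[label] for label, _ in counts.most_common()
            if label in TEMPLATES]


labels = ["chopping", "chopping", "water_running", "chopping", "speech"]
for line in summarize(labels):
    print(line)
```

The interesting engineering in the real system is in the classifier and the language generation; this stub only shows why audio labels alone can support camera-free lifelogging.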
These projects reflect a broader shift in how researchers approach accessibility. Rather than treating it as an afterthought, U-M's computer science and engineering teams are embedding accessibility considerations into the design of visual AI systems from the ground up.
The university's presence at CHI 2026 extends beyond accessibility work. Across the conference, U-M researchers are sharing insights on AI-mediated education and writing support, responsible algorithm design, clinical decision support, extended reality, and the practical challenges of deploying large language model (LLM)-powered products in real-world settings. An LLM is an AI system trained on vast amounts of text to understand and generate human language.
The breadth of U-M's contributions underscores a key insight emerging from the research community: computer vision and visual AI are only truly valuable when they work for everyone, including people whose needs have historically been overlooked. As these technologies become more prevalent in workplaces, schools, and homes, ensuring equitable access isn't a nice-to-have feature; it's a fundamental requirement for responsible AI development.