African Musicians Are Building AI Tools That Respect Their Own Cultures. Here's Why That Matters.
African musicians and technologists are taking control of how artificial intelligence engages with their musical heritage, building AI tools designed from the ground up to understand indigenous instruments, languages, and cultural contexts rather than relying on Western-trained systems. At a showcase hosted by the University of the Witwatersrand in Johannesburg on April 16, 2026, five teams from South Africa, Ghana, Cameroon, Kenya, and Ethiopia presented AI music projects that prioritize cultural preservation and accessibility over generic music generation.
The initiative highlights a fundamental problem in the AI music space: mainstream platforms like Suno and Udio have been trained on datasets that overwhelmingly represent Western music, leaving African musical traditions underrepresented and misunderstood by AI systems. This gap has real consequences for artists and communities trying to use AI tools to preserve, teach, and innovate within their own musical traditions.
Why Are African-Focused AI Music Tools Necessary?
One of the most pressing issues addressed at the Wits showcase was the absence of African music in the datasets that power global AI systems. Linda Nyabundi and Gebregziabihier Niguise presented Heritage in Code, a project specifically designed to address this gap by building structured datasets enriched with metadata and cultural context. Their goal is to improve how AI systems recognize and generate African musical forms, rather than forcing African music into frameworks built for Western compositions.
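Heritage in Code's actual schema was not published, but a metadata-enriched dataset record of the kind described might look something like the following sketch. All field names here are hypothetical illustrations, not the project's real format; the point is that cultural context travels with the audio rather than being stripped out.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class TrackRecord:
    """One entry in a culturally annotated music dataset (illustrative only)."""
    title: str
    audio_path: str
    tradition: str                # musical form, e.g. "maskandi"
    region: str                   # geographic and cultural origin
    language: str                 # language of any vocals
    instruments: list = field(default_factory=list)
    cultural_notes: str = ""      # context a generic scrape would omit

# A hypothetical example record; details are invented for illustration.
record = TrackRecord(
    title="Example maskandi recording",
    audio_path="audio/example.wav",
    tradition="maskandi",
    region="KwaZulu-Natal, South Africa",
    language="isiZulu",
    instruments=["guitar", "concertina"],
    cultural_notes="Guitar style and picking pattern signal regional identity.",
)

# asdict() flattens the record for export to JSON or a training manifest.
print(asdict(record)["language"])
```

Structured records like this are what let a model (or an archivist) distinguish musical forms by their own categories instead of collapsing everything into Western genre labels.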
This problem extends beyond representation. African languages often feature tonal systems and phonetic qualities that Western-trained AI models struggle to process accurately. Zazi, an AI-powered co-creation tool developed by South African artist Umlilo and Ghanaian engineer Gideon Gyimah, was designed specifically to handle African languages and tonal systems. During a live demonstration, the platform generated a maskandi-inspired track with isiZulu vocal elements in response to multilingual prompts in real time, a capability generic AI music tools simply do not possess.
How Are These Projects Balancing Innovation With Cultural Preservation?
The five projects presented at the showcase reveal a shared philosophy: AI should serve cultural preservation and community benefit, not replace human creativity or extract value from communities. The approaches varied in scope but converged on practical applications that address real needs within African music ecosystems.
- Bebii Engine: An AI-driven music generation system introduced by Joshua Kroon and Emmanuel Apetsi that processes and adapts traditional African compositions in real time, designed specifically to preserve indigenous knowledge through sound.
- Timah.AI: A web-based platform developed by Tora Nyamosi and Lawrence Moruye that allows users to upload recordings of traditional music, which are then transcribed and stored in a searchable digital archive to support access to African musical heritage.
- Bina.AI: A platform created by Ehinome Ogbeide and Ashuza Muhigiri that generates personalized songs and stories for children, designed to be culturally relevant and educational with African contexts embedded in its content.
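Timah.AI's implementation was not described beyond "upload, transcribe, and search," but the searchable-archive part of that pipeline can be sketched with a minimal keyword index over recording metadata. Everything below (class name, record IDs, metadata fields) is an invented illustration, not Timah.AI's code.

```python
from collections import defaultdict

class ArchiveIndex:
    """Minimal keyword search over transcription/metadata text (illustrative)."""

    def __init__(self):
        self._index = defaultdict(set)   # token -> set of recording ids
        self._records = {}               # recording id -> metadata dict

    def add(self, rec_id, metadata):
        """Index every word in every metadata value, lowercased."""
        self._records[rec_id] = metadata
        for value in metadata.values():
            for token in str(value).lower().split():
                self._index[token].add(rec_id)

    def search(self, query):
        """Return records matching all query words (AND semantics)."""
        tokens = query.lower().split()
        if not tokens:
            return []
        hits = set.intersection(*(self._index.get(t, set()) for t in tokens))
        return [self._records[i] for i in sorted(hits)]

# Hypothetical entries for illustration only.
archive = ArchiveIndex()
archive.add("rec-001", {"title": "Wedding song", "tradition": "benga",
                        "language": "Dholuo"})
archive.add("rec-002", {"title": "Harvest chant", "tradition": "maskandi",
                        "language": "isiZulu"})

print([r["title"] for r in archive.search("benga")])  # ['Wedding song']
```

A production archive would index audio features and full transcriptions rather than a handful of fields, but the design question is the same one the showcase raised: the community, not a third-party platform, holds the index and the recordings.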
These tools represent a departure from the typical AI music narrative, which has focused on speed, cost reduction, and ease of use for amateur creators. Instead, the African projects emphasize accessibility for communities, preservation of endangered musical traditions, and respect for cultural context.
The Wits showcase also included a roundtable discussion, "Creative Sovereignty in African Music and AI," featuring industry professionals and researchers. The framing is significant: the conversation centered not on whether AI should be used in music, but on who controls the technology and whose interests it serves.
How Does This Compare to the Broader AI Music Debate?
The African-focused initiatives emerge against a backdrop of intense legal and ethical conflict in the global AI music industry. Suno, a Cambridge-based AI music company, announced it had reached $300 million in annual recurring revenue and two million paying subscribers, even as artists and record labels have challenged how the technology was built and what it might replace. Udio, a rival company, reached deals with Warner and Universal Music Group, while Suno remains in conflict with Universal and Sony over training data and copyright issues.
"We're not anti-AI. We just want to make sure that this is done fairly," said Ron Gubitz, executive director of the Music Artists Coalition, which counts Don Henley and Meghan Trainor among its board members.
The legal disputes center on whether AI music companies trained their systems on copyrighted recordings without permission or compensation. Suno acknowledged that building its system required showing the model "tens of millions of recordings" but argued that such training is protected as fair use. The African projects sidestep some of these conflicts by focusing on community-controlled datasets and cultural preservation rather than mass-market music generation.
Interestingly, the current AI music debate echoes conflicts from over a century ago. When the player piano emerged in the 1880s, it sparked nearly identical concerns about automation, artistry, and fair compensation. Like today's text-to-song systems, the player piano promised polished musical output for people with little or no training. Composer John Philip Sousa warned in 1906 that such technologies would make children "indifferent to practice" and erode amateur musicianship.
However, the player piano did not destroy the music industry as feared. It created new forms of musical labor, served as a practice aid for young musicians including Fats Waller and Duke Ellington, and inspired some composers to write music specifically for piano rolls. The legal system eventually adapted: after the Supreme Court's 1908 decision in White-Smith Music Publishing Co. v. Apollo Co. ruled that piano rolls were "parts of a machine" rather than copies governed by copyright law, Congress changed the law the next year to require royalties for rolls and records.
The parallel suggests that AI music technology may follow a similar trajectory: the technology moves first, the rules follow, and creative adaptation tends to surprise everyone. But the African approach offers a different model, one where communities shape the technology from the beginning rather than fighting over its consequences afterward.
Christopher White, an associate professor of music theory at the University of Massachusetts Amherst and author of a 2025 book on AI music, noted that the next generation of trained musicians remains skeptical of generative AI. "You won't meet a group of people who are more skeptical of generative musical AI than conservatory music students," he said. Yet White suspects AI could strengthen the appeal of live performance, even if recorded music faces genuine disruption in commercial niches such as advertising jingles or podcast themes.
The Wits showcase suggests a third path: AI music tools designed by and for specific communities, prioritizing cultural sovereignty and preservation over mass-market disruption. Whether this model can scale beyond African contexts, or whether it will remain a localized alternative to global platforms, remains an open question. But the projects on display demonstrate that the future of AI music need not be determined solely by the companies with the largest training datasets and the deepest legal resources.