A longtime broadcaster is taking Google to court over an AI voice that sounds so much like his own that it raises a fundamental question: when does an artificial voice become your voice? David Greene's lawsuit against Google over its NotebookLM tool is one of the first major legal tests of voice ownership in the age of generative AI, with implications that could reshape how tech companies train and deploy voice models.

What Exactly Is the David Greene Case About?

Greene filed a right-of-publicity claim against Google, alleging that NotebookLM's AI-generated voice is so similar to his own that it violates his identity rights. The case hinges on whether Google's voice model was trained on Greene's voice without permission or whether the similarity is coincidental. According to legal experts analyzing the case, Greene must prove several things to win, including that Google knowingly used his voice characteristics and that the public would reasonably associate the AI voice with him.

NotebookLM is Google's research and content-creation tool that converts documents, articles, and notes into podcast-style audio discussions. The tool generates synthetic voices to narrate these conversations, but Greene contends that one of those voices is an unauthorized replica of his distinctive speaking style and vocal characteristics.

How Do Courts Decide If an AI Voice Crosses the Line?

The legal framework for voice-imitation cases comes from two landmark decisions: Bette Midler's 1988 lawsuit against Ford Motor Company and Tom Waits' 1992 case against Frito-Lay. Both established that a person's voice is a protectable aspect of their identity and publicity rights, even if the actual voice isn't used. These precedents shape the legal standard Greene's case must meet. However, the Greene case introduces a new wrinkle: AI training data.
Legal experts note that Google's training practices, and whether the company made "knowing use" of Greene's voice, will be central to the case. This means proving not just that the voices sound similar, but that Google deliberately or recklessly incorporated Greene's voice characteristics into its training dataset. Forensic voice analysis will likely play a crucial role in establishing how closely the AI voice matches Greene's actual voice.

Why This Case Matters Beyond the Courtroom

The Greene lawsuit arrives at a moment when AI voice technology is becoming increasingly sophisticated and widespread. Unlike the Midler and Waits cases, which involved deliberate imitation by advertisers, the Greene case forces courts to grapple with whether AI companies have a responsibility to audit their training data for voice characteristics that resemble real people. The outcome could set new standards for how tech companies must handle voice data when training generative AI models.

The case also highlights a broader pattern of AI tools creating content that mimics human characteristics without explicit consent. Earlier in 2026, senior European journalist Peter Vandermeersch was suspended after admitting he used AI tools, including Google's NotebookLM, to generate quotes that he then published without verifying their accuracy. Vandermeersch acknowledged he "fell into the trap of hallucinations" and "wrongly put words into people's mouths" when he should have presented them as paraphrases. While Vandermeersch's case involved fabricated quotes rather than voice imitation, it underscores how AI tools can produce convincing but inaccurate content that mimics human expression.

Steps to Understand the Legal Implications of AI Voice Technology

- Voice as Intellectual Property: Recognize that your voice, like your image or name, can be a protectable asset under right-of-publicity laws in many jurisdictions. This protection extends even to synthetic reproductions that capture your distinctive vocal characteristics.
- Training Data Transparency: Understand that companies developing AI voice tools may need to disclose what data was used to train their models. If you're concerned about unauthorized voice use, request information about a company's training practices and data sources.
- Forensic Analysis as Evidence: Know that voice-analysis technology can now compare AI-generated voices to real voices with measurable precision. This scientific evidence will likely become standard in future voice-related AI disputes.
- Precedent from Entertainment Law: Learn from cases like Midler and Waits that courts have long recognized voice as a protected form of identity, even before AI made near-perfect imitation possible.

The Greene case is expected to clarify whether AI companies need explicit consent to incorporate voice characteristics into their training data, or whether similarity alone can establish a violation. Legal experts emphasize that the outcome will likely influence how other tech companies approach voice-model development and whether they implement additional safeguards against unauthorized voice replication.

For now, the case remains in its early stages, but it represents a critical moment in establishing legal boundaries around AI-generated content that mimics human identity. As AI voice technology becomes more prevalent in podcasting, audiobook narration, and content-creation tools, the question of who owns a voice, and who can use it, will only grow more urgent.
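To make the forensic-analysis point above concrete: speaker-comparison systems typically reduce each voice recording to a numeric feature vector (an "embedding") and then score how close two vectors are, often with cosine similarity. The sketch below is a toy illustration of that scoring step only; the four-dimensional vectors and the variable names are invented for this example, and real forensic systems derive embeddings with hundreds of dimensions from pitch, timbre, and spectral features.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical voice embeddings, invented for illustration.
claimant_voice  = [0.82, 0.11, 0.45, 0.67]  # the real speaker
ai_voice        = [0.80, 0.14, 0.43, 0.70]  # an AI voice alleged to imitate it
unrelated_voice = [0.05, 0.91, 0.12, 0.33]  # a control speaker

print(round(cosine_similarity(claimant_voice, ai_voice), 3))        # near 1.0: very similar
print(round(cosine_similarity(claimant_voice, unrelated_voice), 3)) # much lower: dissimilar
```

A high score between the claimant's voice and the AI voice, compared against a population of control speakers, is the kind of measurable evidence forensic analysts would present; the legal question of "knowing use" remains separate from the acoustic one.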