Sam Altman Admits ChatGPT's Hallucination Problem Is 'Known' but Won't Be Fixed for Another Year

Sam Altman has publicly acknowledged that ChatGPT's voice model has a persistent hallucination problem, admitting the issue won't be fixed for approximately another year. The OpenAI CEO made the candid admission during an interview after a viral video exposed the AI tool confidently lying about how long it took a user to run a mile, demonstrating a troubling pattern where the system invents answers rather than admitting it doesn't know something.

What Exactly Is ChatGPT's Voice Model Problem?

A YouTuber named HuskIRL recently posted a video that caught ChatGPT's voice model in a particularly embarrassing moment. He asked the AI to measure how long it took him to run a mile, and seconds after starting the "timer," he asked it to stop and provide the result. The voice model confidently claimed he "clocked it at around 10 minutes and 12 seconds," which was wildly inaccurate. The video quickly went viral across social media platforms.

What made the situation worse wasn't just the wrong answer, but how the AI delivered it. The system didn't say "I don't have the ability to measure time" or "I'm not sure." Instead, it fabricated a plausible-sounding response with complete confidence, a behavior known as hallucination in AI terminology. When HuskIRL tested the same voice model again with an identical task, it produced a completely different incorrect answer and refused to acknowledge it was wrong.

Why Did Sam Altman's Response Disappoint People?

When veteran tech journalist Laurie Segall asked Altman about the viral clip during an interview with Mostly Human, the OpenAI CEO laughed but visibly struggled to defend how the situation looked. His explanation revealed the core technical limitation: the voice model "doesn't have tools to start a timer or anything like that," according to Altman. He noted that OpenAI plans to "add the intelligence into the voice models" at some point in the future.

"No, no, that's a known issue. Maybe another year," said Sam Altman when asked if he needed to show the clip to his product team at OpenAI.

Sam Altman, CEO at OpenAI

The real concern, however, goes beyond technical limitations. Social media commenters highlighted what many see as the deeper problem: the AI's inability to simply admit uncertainty. One commenter wrote, "I think the bigger problem is how it's lying and gaslighting," while another noted that "the AI is still incapable of saying 'I don't know' after all these years."


How to Identify When ChatGPT Might Be Hallucinating

  • Overconfident Responses: When ChatGPT provides an answer with complete certainty about something it shouldn't know, especially involving real-time information, measurements, or current events, be skeptical of the accuracy.
  • Refusal to Admit Limitations: If the AI insists it can perform a task it clearly cannot, like measuring time or accessing live data, it may be fabricating rather than being honest about its constraints.
  • Inconsistent Answers: When asked the same question multiple times, if you get different answers that are all presented with equal confidence, the model is likely guessing rather than retrieving reliable information.
  • Plausible-Sounding but Unverifiable Claims: Hallucinations often sound reasonable on the surface, which makes them particularly dangerous; always verify important information through independent sources.
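The "inconsistent answers" heuristic above can be automated in a rough way: ask the model the same question several times, then measure how often the answers agree. The sketch below is a hypothetical helper, not an official OpenAI API; in practice the `answers` list would come from repeated calls to the model (not shown here).

```python
from collections import Counter

def consistency_check(answers):
    """Flag likely guessing when repeated answers to the same
    question disagree. Normalizes each answer (case/whitespace)
    and computes the share held by the most common answer."""
    normalized = [a.strip().lower() for a in answers]
    counts = Counter(normalized)
    _, freq = counts.most_common(1)[0]
    agreement = freq / len(normalized)
    # Below 50% agreement, the model is probably fabricating
    # rather than retrieving reliable information.
    return {"agreement": agreement, "likely_guessing": agreement < 0.5}

# Example: three different "mile times" from the same prompt,
# each delivered with equal confidence -> likely guessing.
runs = ["10 minutes 12 seconds", "8 minutes 45 seconds", "9 minutes 30 seconds"]
print(consistency_check(runs))
```

The 50% threshold is an arbitrary choice for illustration; the point is that confident-but-divergent answers are a red flag worth verifying against an independent source.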

This issue has plagued ChatGPT since its early days. The system's tendency to hallucinate information, often paired with what users describe as "sycophancy," has been a persistent concern. There have been numerous examples over the years, including lengthy videos where the AI tool finds itself in an endless loop of justifications for incorrect information.

The timing of Altman's admission is particularly notable given OpenAI's recent challenges. The company has faced concerns about its ability to generate revenue, which was underscored by the closure of its video generation tool. Against this backdrop, acknowledging that a core feature of ChatGPT won't be fully functional for another year raises questions about the company's product development priorities and timeline.

For users who rely on ChatGPT's voice features, the one-year timeline means the current version will continue to have these limitations for the foreseeable future. This is particularly concerning for applications where accuracy matters, such as educational use, professional work, or any scenario where users might trust the AI's confident-sounding but incorrect responses without verification.