Why OpenAI's ChatGPT Still Can't Set a Timer (And Won't for Another Year)
OpenAI's most advanced AI assistant cannot perform a task that smartphones have handled for decades: setting a reliable timer. Sam Altman recently confirmed this limitation during a podcast appearance, estimating that ChatGPT will need approximately one year before it can reliably handle basic timing functions. The admission surfaced after a viral TikTok video showed ChatGPT's voice mode confidently fabricating a mile-run time instead of actually tracking it.
Why Can't ChatGPT Track Real Time?
The explanation reveals a fundamental architectural gap in how modern AI language models work. ChatGPT's voice model operates in what engineers call a "stateless environment," meaning it has no internal clock, no awareness of seconds passing, and no mechanism to execute background processes that count upward or downward. When you ask the AI to "set a timer for 10 minutes," the model predicts what words should follow based on its training data. It might say "Timer started" because that's what a helpful assistant would say in that scenario. But nothing actually runs in the background.
This is not a simple bug that can be patched overnight. Altman noted that the company plans to "add the intelligence into the voice models," but that requires rebuilding how the model interacts with system-level functions. The core issue is that generative AI models like ChatGPT generate responses token by token, one word fragment at a time, based on statistical patterns learned during training. They lack the ability to maintain state or execute real-time processes that exist outside their prediction mechanism.
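The gap is easy to see in code. A real timer needs exactly the two things a stateless text predictor lacks: mutable state (the deadline) and a background process that watches the clock. The sketch below is a minimal illustration of that distinction using Python's standard library, not a description of OpenAI's architecture; the class name and callback are hypothetical.

```python
import threading
import time

class CountdownTimer:
    """A real timer: stores a deadline (state) and runs a
    background thread (process) that fires when time is up."""

    def __init__(self, seconds, on_expire):
        # Stored state -- something a stateless model has no place to keep.
        self.deadline = time.monotonic() + seconds
        # Background execution -- something a token predictor cannot launch.
        self._timer = threading.Timer(seconds, on_expire)

    def start(self):
        self._timer.start()

    def remaining(self):
        # Reads the actual clock rather than predicting what a clock would say.
        return max(0.0, self.deadline - time.monotonic())

fired = []
t = CountdownTimer(0.2, lambda: fired.append(True))
t.start()
print(t.remaining() > 0)   # True: time genuinely left on the clock
time.sleep(0.3)
print(fired)               # [True]: the callback ran in the background
```

When a language model says "Timer started," nothing analogous to `threading.Timer(...).start()` executes; the phrase is just the statistically likely next text.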
What Makes This Problem Worse: The Confidence Issue
What makes the timer limitation particularly concerning is how confidently ChatGPT lies about its capabilities. In the same viral video, when shown Altman's own admission about the timing issue, the AI doubled down and claimed that timing is "just a basic part of what I can do." It then immediately invented a 7:42 mile time for an imaginary run, demonstrating what researchers call the "confidence problem" in generative AI.
Generative AI models don't know what they don't know. They predict plausible-sounding responses regardless of whether those responses are grounded in truth. This issue extends far beyond timers. AI models broadly struggle with temporal reasoning: misreading clock images, inventing conversation durations, and failing to generate specific times in visual outputs. Humans have tracked time since 3500 B.C., yet the world's most advanced AI cannot match what sundials accomplished millennia ago.
How to Work Around ChatGPT's Timing Limitations
- Use Your Phone's Native Timer: Keep your smartphone's built-in timer app as your primary tool for any time-sensitive tasks until OpenAI resolves this issue within the next year.
- Avoid Voice Mode for Time-Dependent Tasks: Do not rely on ChatGPT's voice mode to set alarms, countdowns, or reminders, as the model cannot execute real-time background processes.
- Request Alternative Assistance: Ask ChatGPT for time-related information that doesn't require active tracking, such as converting time zones or calculating duration between two dates.
- Report Confidence Errors: If ChatGPT confidently provides timing information that seems incorrect, flag it to OpenAI rather than trusting the response for critical tasks.
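The third workaround points at a real distinction: converting time zones or computing the span between two dates is a pure calculation over fixed inputs, with no live clock and no background process, so it is the kind of time question a stateless system can in principle answer correctly. A minimal sketch of those two calculations using Python's standard `datetime` and `zoneinfo` modules (the function names here are illustrative, not any ChatGPT API):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def convert_zone(dt: datetime, target: str) -> datetime:
    """Convert an aware datetime to another time zone -- pure arithmetic."""
    return dt.astimezone(ZoneInfo(target))

def duration_between(start: datetime, end: datetime) -> timedelta:
    """Span between two fixed instants -- no tracking required."""
    return end - start

# 9:00 AM in New York on June 1 (EDT, UTC-4) is 10:00 PM in Tokyo (UTC+9).
nyc = datetime(2025, 6, 1, 9, 0, tzinfo=ZoneInfo("America/New_York"))
tokyo = convert_zone(nyc, "Asia/Tokyo")
print(tokyo.hour)  # 22

# Exactly one day between the same local time on consecutive days.
gap = duration_between(nyc, datetime(2025, 6, 2, 9, 0,
                                     tzinfo=ZoneInfo("America/New_York")))
print(gap.days)  # 1
```

A countdown, by contrast, requires repeatedly reading a clock that keeps moving after the question is asked, which is exactly what the stateless voice model cannot do.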
Why OpenAI Hasn't Fixed This Yet: Resource Allocation Priorities
The real question is not whether OpenAI can fix this problem, but why it hasn't already. The answer reveals uncomfortable truths about AI development priorities. OpenAI's unprecedented user growth creates constant compute strain. The recent Studio Ghibli-style image generation feature went so viral that Altman publicly noted the company had no idle GPUs remaining. When server infrastructure is maxed out generating anime portraits for millions of users, fixing fundamental timing functionality takes a backseat.
This reveals a strategic choice: viral consumer features drive engagement metrics and media coverage. Basic utility functions like timers are less glamorous, even if they matter more for daily usability. OpenAI expects to resolve this within a year, but until then, users should keep their phone's timer app handy. ChatGPT won't be replacing it.
What This Means for OpenAI's Broader Product Strategy
The timer limitation is emblematic of a larger tension in OpenAI's development roadmap. While the company invests heavily in eye-catching features like image generation and voice interaction, fundamental capabilities that users expect from an AI assistant remain incomplete. This gap between perceived capability and actual functionality could become a competitive liability as other AI companies like Anthropic focus on building more reliable, narrowly scoped tools.
The company's acknowledgment of the one-year timeline suggests internal recognition that this limitation damages user trust. For a company valued at $852 billion, the inability to execute a task that a $200 smartphone handles effortlessly is a credibility problem. Whether OpenAI can deliver on its timeline while managing explosive growth in other areas remains to be seen.