"I would never claim that it’s 100 percent."
AI in the Clouds
Apple has finally logged into the AI arms race, announcing a set of strikingly familiar machine learning tools during its Worldwide Developers Conference earlier this week.
But even for Apple, a company with a market cap of $3.3 trillion — over 30 times that of OpenAI — the well-documented shortcomings of AI tech will likely persist.
In a new Washington Post interview, Apple CEO Tim Cook admitted outright that he's not entirely sure his tech empire's new "Apple Intelligence" features won't come up with lies and confidently distort the truth, a problematic and likely intrinsic tendency that has plagued pretty much every AI chatbot released to date.
When asked about his "confidence that Apple Intelligence will not hallucinate," an increasingly unpopular term that has quickly become the catch-all for AI-generated fibs, Cook conceded that plenty of unknowns remain.
"It’s not 100 percent," he answered, arguing that he's still "confident it will be very high quality."
"But I’d say in all honesty that’s short of 100 percent," he added. "I would never claim that it’s 100 percent."
Pants on Fire
It's an uncomfortable reality, especially considering just how laser-focused the tech industry and Wall Street have been on developing AI chatbots. Despite the tens of billions of dollars poured into the tech, AI tools are repeatedly caught coming up with obvious falsehoods and, perhaps more worryingly, convincingly told lies.
Besides jumbling facts to the point where they no longer hold together, some of these AI models are trained on dubious data that they're happy to offer up as the truth. Case in point: last month, Google's AI-powered search feature confidently told one user to put glue on their pizza, citing an 11-year-old joke on Reddit.
Cook isn't the first tech executive to admit that these tools may simply continue lying. The news comes after Google CEO Sundar Pichai made strikingly similar statements in an interview with The Verge last month.
"We have definitely made progress when we look at metrics on factuality year on year," he said. "We are all making it better, but it’s not solved."
It remains to be seen how Apple's own implementation — a revised Siri personal assistant, forthcoming ChatGPT integration, and other AI features scattered across its desktop and mobile operating systems — will fare when it comes to hallucinations.
The stakes are high, especially considering the wealth of sensitive consumer data, including photos, emails, and text messages, that Apple has collected from its customers. Nobody wants Siri to make up a calendar invite or tell you a meeting was canceled when it wasn't.