"The low-hanging fruit is gone. The hill is steeper."
Uphill Battle
Google CEO Sundar Pichai is saying sayonara to the era of easy AI advancements.
At the New York Times DealBook Summit last week, Pichai sat down to discuss one of the biggest questions hanging over the tech industry right now: whether current techniques for improving AI models are reaching their limit.
He was a little mealy-mouthed when it came to directly addressing the technical side of the issue, but was unequivocal that the effortless newbie gains that AI development initially enjoyed are over.
"I think the progress is going to get harder," Pichai said at the Dealbook summit. "When I look at '25, the low-hanging fruit is gone. The hill is steeper."
Wall to Wall
Fears that generative AI improvements were hitting a wall came to a head last month, as reports trickled out that OpenAI researchers had found that the company's upcoming large language model, code-named Orion, showed a far smaller improvement over its predecessor than previous iterations had shown over theirs, and in some cases no clear advance at all.
This lent credence to the suspicion that bolstering AI models by adding more data and computing power, or "scaling," was finally showing diminishing returns, as many experts had predicted. In response, OpenAI CEO Sam Altman smugly dismissed those claims, tweeting "there is no wall," while others in the organization have hyped up its AI capabilities even further.
Pichai says he doesn't "fully subscribe to the wall notion" himself, but agrees that AI developers will have to stop relying on scaling alone.
"When you start out quickly scaling up you can throw more compute and make a lot of progress," the Google CEO said at the NYT event. "We are definitely going to need deeper breakthroughs as we go to the next stage."
"I'm very confident there will be a lot of progress in '25," he added. "I think the models are definitely going to get better at reasoning, completing a sequence of actions more reliably — more agentic if you will."
Sky's the Limit
That said, Pichai doesn't think that advances derived from processing power have hit a hard ceiling.
"The current amount of compute we're using is just an arbitrary number. It's not like we're using a lot of compute," he said. "There's no reason why we can't just keep scaling up."
This is a striking thing to assert, because the largest AI models, including Google's, are notorious for devouring ungodly amounts of power, so much so that both Google and Microsoft are firing up nuclear power plants just to meet their energy demands.
Meanwhile, demand for the AI chips the industry relies on is so intense that manufacturer Nvidia can barely keep up. These all seem like obvious obstacles to scaling up forever.
Nonetheless, Pichai concedes that scaling on its own won't be enough. The real differentiators, he said, will be "technical" and "algorithmic" breakthroughs.
More on AI: AI Chatbots Are Encouraging Teens to Engage in Self-Harm