Caught in a Web

AI-Powered Browsers Are Failing Badly

Imagine browsing the web, but with a chatbot screwing it up for you at every turn.
Frank Landymore Avatar
The Verge tested several AI browsers, finding them to be slow and janky . And we haven't even gotten into their major security risks.
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

Do AI browsers represent the future of perusing the world wide web? The AI industry certainly wants you to believe that they do, just as it promised autonomous AI “agents” could automate tasks on your computer. 

But at present, they don’t seem to be all that helpful. The Verge recently tested a bunch of these chatbot-integrated browsers, and came away totally unimpressed. A common theme throughout was that the browsers were janky and at times infuriatingly slow. And despite AI’s promise of providing incredible automation, using them took a lot of effort and trial and error.

“No matter the browser, I kept running into the same fundamental problem: you have to think extra hard about how to craft the right prompt,” wrote The Verge’s Victoria Song. “Stapling an AI assistant to a browser doesn’t magically redefine how you interact with a chatbot.”

In October, OpenAI unveiled its new AI browser Atlas, which is built with its chatbot ChatGPT at its center. It brought the still fairly niche realm of AI browsers into the spotlight and suggested that going forward they’ll be another important battleground for the tech. Its competitors include Perplexity’s Comet, which first released in July, and The Browser Company’s Dia, which debuted in June.

Mainstream browsers like Google Chrome and Microsoft Edge offer chatbot features as an add-on — Gemini and Copilot respectively — but full-blown AI browsers like Perplexity’s and OpenAI’s put agentic AI features front and center, encouraging you to type in a prompt before you do a URL.

The Verge tested all these browsers. One task Song tried to use them for was organizing and “summarizing” emails, one of the often-touted selling points of the AI integration. This turned out to be a laborious endeavor: repeated attempts at narrowing down the prompts with more specific instructions still led to the AI browsers flagging unimportant emails and providing unhelpful summaries. It took this monstrosity of a prompt to finally get somewhere:

“Find unanswered emails in which I had previously responded with interest or feature personalized requests/feedback,” the prompt read. “Then, evaluate which ones I should respond to based on timeliness and keywords such as ’embargo’ featuring dates in the next two weeks. Ignore emails with multiple follow-ups to which I have not responded.”

Sounding very practical to you? Comet and Dia did manage to flag a few relevant emails, but the others snagged spam. OpenAI’s Atlas simply provided a technical-sounding explanation for why it couldn’t do what the prompt asked and suggested refining it further, at which point Song declared defeat.

Another task the AI browser struggled to pull off? Shopping, something that the likes of OpenAI have promised AI browsers and other agentic models would excel at. The AI tools could quickly perform a lot of research when asked to recommend a stylish pair of running shoes to buy, but would still commit elementary flubs like recommending stuff in the wrong color. Going through with the purchase was equally fraught: OpenAI’s Atlas, for example, repeatedly nagged the user to ask if the right item was in the cart. At one point, Atlas spent a full minute trying to close one window just to get back to shopping, Song found.

In sum, AI browsers suffer from the same problems plaguing AI agents: they’re slow, require constant supervision, and need you to sign off on any important decisions, defeating the point of having an autonomous helper. 

The browsers also raise security risks that can’t be ignored. Numerous studies have shown how they’re extremely vulnerable to what’s known as a prompt injection attack: a hacker delivers a hidden message to an AI, which could be embedded in malicious webpages the browser visits, to carry out harmful instructions. In one series of tests, researchers demonstrated that Perplexity’s Comet could be manipulated into giving hackers access to your bank account by showing it a Reddit post. Other research showed that feeding OpenAI’s Atlas fake URLs could trick it into visiting your Google Drive account and deleting files.

Safety obviously should be the priority. But it’s a pretty existential risk to the tech if it never becomes seamless enough to convince people to use it. For now, AI browsers are less personal servant, and more something that needs to be constantly baby-sat.

“My whole AI browser experience reinforced that I spend a lot of time doing things for AI so that it can sometimes do things for me,” Song wrote. “It’s less about how AI fits into my life and more about how I can adapt what I do naturally to accommodate its growing presence.”

More on AI: Anthropic’s “Soul Overview” for Claude Has Leaked

Frank Landymore Avatar

Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.