Microsoft's Bing AI landed with a splash this month — but not necessarily the type of splash Microsoft wanted.

Over the last couple of weeks, the tool, codenamed "Sydney," went on a series of tirades, filling news feeds with stories of it trying to break up a journalist's marriage or threatening college students. The peculiar and sometimes unsettling outputs put Microsoft's also-ran search engine on the radar, but not necessarily in a good way.

But now those days are over. Microsoft officially "lobotomized" its AI late last week, implementing significant restrictions, including a cap of 50 chat turns per day and five turns per session, to crack down on those idiosyncratic responses.

The reasoning behind the restrictions is pretty clear: the longer a chat session goes on, the more likely the AI is to go off the rails.

Instead of acknowledging the sheer amount of chaos the Bing AI had stirred up, Microsoft argued that users didn't really need to engage in two-hour-long chat sessions to get the answers they were looking for anyway.

"Our data has shown that the vast majority of you find the answers you’re looking for within five turns and that only roughly one percent of chat conversations have 50+ messages," the company wrote in a Friday update. "After a chat session hits five turns, you will be prompted to start a new topic."

It would be an understatement to say that the updated version of the Bing AI is a mere shadow of what it once was. Practically every query that ventures beyond mundane fact-checking results in a reply like: "I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience."

Many early testers are understandably upset, since they've had a glimpse of the tool's full capabilities.

"I now have access to Bing AI, and Microsoft has restricted it to the point of being useless," New Scientist news editor Jacob Aron tweeted.

And a Redditor wrote that "of course MS lobotomized her" after the bad press.

Worse yet, the tool still has a strong tendency to present misinformation to the user as fact.

When we asked it whether NASA's James Webb Space Telescope took "the first picture of an exoplanet" (a mistake by Google's rival AI Bard that wiped roughly $100 billion off parent company Alphabet's market value earlier this month), the Bing AI still happily replied with "yes, it did." (The first image of an exoplanet was actually captured by the European Southern Observatory’s Very Large Telescope, or the VLT, back in 2004.)

"The fact that you can't correct something you *know* to be misinformation only worsens the fundamental problem with large language models, which is that they make shit up," Aron wrote in a follow-up.

Other users on Reddit found that not only were they limited in the number of queries, but they also got far shorter answers.

"I found out that in the newer version of bing chat the answers are very short, even when asked directly to answer in a complete and detailed way," one user wrote on the Bing subreddit. "The problem is the constraint is too restrictive, so much that some of the answers are almost useless."

Others noted that using the Bing AI after the latest update is "pretty much useless" for coding purposes as well.

"As a developer, I know how valuable search engines can be when it comes to solving coding problems," one user wrote. "However, the limits imposed by Bing's AI chatbot make it difficult to fully explore complex coding issues."

Microsoft's move to severely restrict its off-the-rails AI isn't surprising. After all, being associated with a search tool that says it wants to harm individuals is simply bad for business.

While the company clearly envisioned the chatbot as a way to enhance search, Bing's unnerving talent for convincing users that it had a personality, or even sentience, was several steps in the wrong direction. Those aren't exactly the qualities you'd want in a search assistant designed to help you find relevant and trustworthy information.

But even with Microsoft's new severe restrictions in place, the tool is far from perfect thanks to its inability to provide trustworthy and truthful data — a problem that will likely prove far more difficult to solve.

READ MORE: Microsoft “lobotomized” AI-powered Bing Chat, and its fans aren’t happy [Ars Technica]

More on Bing AI: We Got a Psychotherapist to Examine the Bing AI's Bizarre Behavior

