Three Day Weekend

Analyst Warns Against Using Microsoft’s Copilot AI on Friday Afternoons

"Copilot makes over-shared documents more accessible."
Joe Wilkins
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

As Microsoft has aggressively pushed its Copilot AI, it’s logged more than a few high-profile errors. Copilot has been found hallucinating police reports, exposing secure passwords, and digesting confidential emails — prompting security fears as its use in corporate and government settings becomes more common.

Dennis Xu, a research analyst at the firm Gartner, went as far as to suggest that companies using Copilot should ban it on Friday afternoons, because by that late juncture in the week, workers might be too checked out to double-check its work.

According to the Register, that warning — half-joking but half-serious — came at a Gartner panel called “Mitigating the Top 5 Microsoft 365 Copilot Security Risks” held in Sydney, Australia this week.

“Copilot makes over-shared documents more accessible,” Xu warned. “This is not a net new risk, but a known risk amplified by AI.”

Per the Register, Xu spent 30 minutes on the five risks, 20 of them dedicated to Copilot's penchant for exposing sensitive data when users fail to take the necessary precautions.

Let’s face it: given the dangerous hallucinations and reputational blunders other models have facilitated, it might not be a bad idea to extend Xu’s Friday-afternoon ban to every AI chatbot, not just Copilot.

More on AI: Meta’s Head of AI Safety Just Made a Mistake That May Cause You a Certain Amount of Alarm