Fiddled With It

Musician Cancelled as AI Falsely Accuses Him of Horrific Crimes

"Google screwed up, and it put me in a dangerous situation."
Canadian musician Ashley MacIsaac was mortified when he discovered Google's AI overview was incorrectly labeling him a sex criminal.

Who needs vicious music columnists when you live in the age of AI?

Apparently not Ashley MacIsaac, a Canadian fiddler, singer, and songwriter who was labeled a sex criminal by Google’s AI overview.

According to the Canadian newspaper The Globe and Mail, event organizers at the Sipekne’katik First Nation, north of Halifax, canceled an upcoming performance featuring MacIsaac after Google incorrectly described him as a sex offender.

The paper reports that the misinformation was the result of one of Google's AI summaries, the brief summations it helpfully plasters above all other search results, which blended the musician's biography with that of another person who bears the same name.

“Google screwed up, and it put me in a dangerous situation,” MacIsaac told the paper.

Though the AI overview has since been corrected, MacIsaac explained that the situation poses a serious problem for him as a touring musician. For one thing, there's no telling how many other event organizers passed on hiring him because of the libelous claim, or how many potential audience members saw the false accusation but never the correction.

“People should be aware that they should check their online presence to see if someone else’s name comes in,” MacIsaac told the Globe.

After the truth came to light, the Sipekne'katik First Nation issued an apology and said the musician would be welcome at future events.

“We deeply regret the harm this error caused to your reputation, your livelihood, and your sense of personal safety,” a First Nation spokesperson wrote in a letter shared with the newspaper. “It is important to us to state clearly that this situation was the result of mistaken identity caused by an AI error, not a reflection of who you are.”

A representative for Google, meanwhile, said that “search, including AI Overviews is dynamic and frequently changing to show the most helpful information. When issues arise — like if our features misinterpret web content or miss some context — we use those examples to improve our systems, and may take action under our policies.”

Yet as MacIsaac rightly points out, a damaged reputation is a difficult thing to repair. There's no telling how far the misinformation may have spread, and when a corporation rolls out lazy software with obvious flaws, who's responsible for the damage?

More on Google: Google Caught Replacing News Headlines With AI-Generated Nonsense


Joe Wilkins

Correspondent

I’m a tech and transit correspondent for Futurism, where my beat includes transportation, infrastructure, and the role of emerging technologies in governance, surveillance, and labor.
