The Messenger, a news site that launched this year with an avowed mission to "champion balanced journalism in an era of bias, subjectivity, and misinformation" through "thorough, objective, non-partisan, and timely news coverage," has partnered with Seekr, an AI-powered tool that claims to use machine learning to judge whether a given piece of news is reliable.

As it turns out, though, Seekr's algorithm says that the Messenger's reporting isn't all that trustworthy. Oops!

The partnership seems to have started off on a happy note, with both parties issuing glowing statements about their aligned values in a joint press release.

"This partnership is built on a shared ethos that fact-based journalism standards are foundational to reliable news, and that's especially important now, as consumers are being inundated by torrents of information — much of it misleading, incomplete, or false," said Rob Clark, Seekr's president and chief technology officer. The Messenger's president Richard Beckman, meanwhile, added that "opinion, bias, and subjectivity are bleeding into news and have caused many readers to lose trust in the media," arguing that "Seekr's responsible AI technology will help hold our newsroom accountable to our core mission."

But if you try out the AI tool for yourself, it doesn't seem to hold the Messenger's work in the highest regard.

Seekr's algorithm claims to quantify the reliability of individual articles with a one-to-100 "score," which it then translates into a rating label, ranging from an unfortunate decree of "Very Low" to the much more favorable "Very High" tag. But as journalist and Nieman Lab director Joshua Benton noted in a Wednesday post to X-formerly-Twitter, when you plug the Messenger's URL into the AI product's search bar, Seekr returns a lot of articles bearing "Low" or "Very Low" reliability ratings.

Indeed, when we tested the search ourselves, we got similar results.
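For what it's worth, the score-to-label step Seekr describes is just bucketing a number into tiers. The company doesn't publish its cutoffs, so the thresholds in this minimal Python sketch are entirely our invention, meant only to illustrate the mechanic:

```python
# Hypothetical reconstruction of Seekr's score-to-label step.
# The cutoffs below are invented for illustration; Seekr does not
# disclose its actual thresholds.

def reliability_label(score: int) -> str:
    """Map a 1-100 reliability score to a rating label (hypothetical cutoffs)."""
    if not 1 <= score <= 100:
        raise ValueError("score must be between 1 and 100")
    if score < 25:
        return "Very Low"
    if score < 50:
        return "Low"
    if score < 75:
        return "High"
    return "Very High"

print(reliability_label(18))  # -> Very Low
print(reliability_label(82))  # -> Very High
```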

That said? We'd probably recommend taking Seekr's scores with a grain of salt.

The venture says that its AI uses "semantic analysis" to understand the "sentiment, emotions, and context" within a given text. This, the company says, means its system can "identify the difference between a fact statement and a judgment" and ultimately glean real meaning from data. But in practice, Seekr's algorithm seems to fall pretty flat. After all, news is a subjective endeavor, as is the concept of reliability itself. Does news have to take the form of context-free bullet points to prove itself reliable, or can fact-informed opinion exist within the malleable, subjective framework of reliability? Isn't there bias in simply deciding which stories to cover?
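To see why lexicon-level "semantic analysis" can be a shaky foundation for judging reliability, consider a crude analogue built on TextBlob, an off-the-shelf Python library whose subjectivity scorer rates text from 0.0 (objective) to 1.0 (subjective). To be clear, this is our illustration of the general technique, not Seekr's actual pipeline:

```python
# A crude analogue of subjectivity scoring, using TextBlob's
# lexicon-based analyzer. This is NOT Seekr's method; it shows how
# word-level scoring works in general.
from textblob import TextBlob

headlines = [
    "Foo Fighters Announce First Major Headline Tour Since the Death of Taylor Hawkins",
    "Jake Tapper asks AOC just one question to expose her absurd defense of Rep. Jamaal Bowman pulling fire alarm",
]

for text in headlines:
    # sentiment.subjectivity: 0.0 = objective, 1.0 = subjective
    subjectivity = TextBlob(text).sentiment.subjectivity
    print(f"{subjectivity:.2f}  {text}")
```

A lexicon-based scorer like this keys on word choice rather than truth, so a flatly factual headline about a death or a denial can still register as "subjective." That limitation would square with the oddities in the scores we saw.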

Take, for example, an article from the Messenger titled "Foo Fighters Announce First Major Headline Tour Since the Death of Taylor Hawkins." Seekr scored the post's reliability as "Low," citing an alleged excess of "subjectivity," but it's difficult to identify the bot's reasoning: from what we can tell, the headline is a statement of fact, and the content of the piece is bolstered by links to the Foo Fighters' website. The information provided, including the headlined claim, can also be confirmed through other sources. An article titled "Chess Grandmaster Hans Niemann Denies Claim He Used Vibrating Anal Beads to Beat World Champ" also received a "Low" score for subjectivity, and again it's hard to understand why, because that's exactly what happened.

Seekr's ratings for other sites raise some eyebrows as well. Take The Blaze, which was founded by the conspiracy theorist Glenn Beck and which greets readers with a pop-up depicting the media company's many personalities with duct tape over their mouths while bolded text declares that "WE WILL NOT BE CENSORED." Among the many Blaze headlines that Seekr rates as "High" in reliability is a post titled "Jake Tapper asks AOC just one question to expose her absurd defense of Rep. Jamaal Bowman pulling fire alarm." Is there any way that could be seen as less subjective than the Messenger's Foo Fighters article? Words like "expose" and "absurd defense" don't exactly strike us as unbiased reporting, but apparently Seekr's AI disagrees.

Another Blaze article, titled "Newsom appoints Maryland-based head of pro-abortion PAC as Feinstein's replacement," received a "Very High" score from Seekr, despite the fact that it mischaracterizes being pro-choice as being "pro-abortion" in the headline and, in the text, refers to Emily's List, a pro-choice group headed by political advisor Laphonza Butler, as a "radical pro-abortion political action committee." Objectivity at its finest, according to the AI!

We've reached out to the Messenger and Seekr for comment, but have yet to receive a reply.

The automated system is clearly flawed, and if it's meant to serve as a replacement for learned, comprehensive media literacy, it's an incomplete alternative. Subjectivity and reliability are complicated, personal, and abstract concepts, especially once context is taken into account; indeed, Seekr's reliability scores appear unreliable themselves.

Anyway, best of luck to the Messenger on its new partnership — it seems like it'll need it, if its goal is to appease Seekr's AI.

More on AI: Bing Chat Will Help with Fraud If You Tug Its Heartstrings about Your Dead Grandma

