The "world's first" entirely AI-generated news site is here. It's called NewsGPT, and it seems like an absolutely horrible idea.
The site, according to a press release, is a reporter-less — and thus, it claims, bias-free — alternative to conventional, human-written news, built with the goal of "[providing] unbiased and fact-based news to readers around the world."
"For too long," Alan Levy, NewsGPT's CEO, said in the release, "news channels have been plagued by bias and subjective reporting. With NewsGPT, we are able to provide viewers with the facts and the truth, without any hidden agendas or biases."
Okay. While we understand that a lot of folks out there are frustrated with the modern news cycle, there are about a million problems with what this guy is doing, not the least of which is the site's glaring lack of transparency — which is pretty incredible, given everything he claims to be railing against.
First and foremost, while its name suggests that it might be using a version of OpenAI's GPT — the large language model (LLM) that powers OpenAI's viral ChatGPT chatbot — Levy never actually discloses which AI program is behind NewsGPT. All the release says is that NewsGPT runs on "state-of-the-art machine learning algorithms and natural language processing technology" that's allegedly "able to scan relevant news sources from around the world in real-time."
"It then uses this data," the press release reads, "to create news stories and reports that are accurate, up-to-date, and unbiased."
Great. Sure. But again: what is it? It matters! AI software doesn't just spring into existence. Models are conceptualized, built, and programmed by humans, and disclosing which humans are making the underlying tech seems like it should be pretty important to Levy's alleged mission.
When Futurism reached out to NewsGPT for comment, all a spokesperson said was that they're using a "combination of AI programs," which doesn't answer the question. (They also bragged that "part of this email is written by AI," without specifying which part.)
Speaking of the underlying tech, we're not just concerned about who's building it. From ChatGPT to Bing Search to CNET's mystery AI journalism machine, language-generating AIs are notorious for their penchant to hallucinate — or, in other words, to just make shit up. They don't know what words mean; they just predict what might come next in a sentence, even inventing phony sources and numbers to support BS claims.
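To make that concrete, here's an illustrative sketch of what "predicting what comes next" actually looks like, using the openly available GPT-2 model via Hugging Face's transformers library (our stand-in for demonstration purposes, since NewsGPT won't say what it actually runs on). The model ranks candidate next tokens by statistical likelihood, with zero regard for whether any of them are true:

```python
# Illustrative only: GPT-2 is a stand-in, since NewsGPT hasn't disclosed its model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "According to a recent government report, the unemployment rate is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model's "answer" is just a probability distribution over next tokens,
# ranked by how statistically likely they are — not by factual accuracy.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>10}  p={prob.item():.3f}")
```

Run that, and you get a list of plausible-sounding continuations; nothing in the math checks a single one of them against reality. That's the machinery being sold here as "the facts and the truth."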
For its part, NewsGPT did admit to us that machine hallucinations "might" happen. But as they seem to frame it, machine hallucination isn't that big of a deal. It's only "fact-based" news, right?
"There are no human fact-checkers. Our news stories are generated 100 percent by AI. We are aware that 'AI hallucinations' might happen and that AI is far from a perfect technology," the company told us over email. "We are committed to learning fast and improving all the time to deliver the best AI news we can."
To that end, when it comes to, dunno, news, sources are extremely important. With the exception of the occasional in-text mention of where a specific figure may have come from, NewsGPT's articles overwhelmingly fail to link back to any of their references, offering alleged facts and figures — which have to come from somewhere, unless, of course, the machine makes them up — without any mention of their origin.
Seems like an issue. But to NewsGPT, that, too, is just a growing pain.
"NewsGPT and AI are in hyper-growth phases," the firm said. "We are currently developing an AI 'best practice system' regarding sources and links."
But to that point, gotta say: if the tech is just scraping, paraphrasing, and regurgitating news from other "relevant news sources" without giving credit, isn't that just... plagiarism? Of the human journalists that Levy says no one can trust? Who write for the companies that Levy says have "hidden agendas and biases"?
"By using the process of generative modeling, NewsGPT generates new and original stories," adding that their still-unspecified "AI model also looks for text that matches existing content too closely and actively tries to rectify this."
Sure. Again, though: we'll believe it when we see it. But considering that AI leaders at OpenAI, Microsoft, and Google haven't quite figured that piece out — or figured out any of these issues, really — we won't hold our breath.
We'd also be remiss not to mention that while human bias exists, machine bias certainly does too. Though Levy effectively markets NewsGPT as a faceless, apolitical ghost reporter, capable of finding and delivering only the facts, LLMs and similar tools are a mirror to humanity — often the worst parts of it — and not the antidote that folks like Levy promise them to be; the AI industry has yet to create a system that isn't riddled with deeply embedded bias.
At the end of the day, when it comes to news and journalism, generative AI programs may one day prove to have some helpful assistive qualities (Wired's approach, released this week, is notably respectable). But as it stands, we've yet to see a miracle system that can safely and reliably deliver accurate and unbiased journalism without human intervention — and even with human involvement, these programs have failed time and again, a result of their own flaws as well as ours.
Anyway. Please don't get your news from NewsGPT.
More on AI journalism: CNET Hits Staff With Layoffs After Disastrous Pivot to AI Journalism