The startup may have overstated the threat.

Here You Go

In February, artificial intelligence research startup OpenAI announced the creation of GPT-2, an algorithm capable of writing impressively coherent paragraphs of text.

But rather than release the AI in its entirety, the team shared only a smaller model out of fear that people would use the more robust tool maliciously — to produce fake news articles or spam, for example.

On Tuesday, however, OpenAI published a blog post announcing its decision to release the algorithm in full, as it has "seen no strong evidence of misuse so far."
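
If you want to experiment with the full release yourself, here's one possible way to do it — a minimal sketch that assumes the open-source Hugging Face transformers library, whose "gpt2-xl" checkpoint corresponds to the full 1.5-billion-parameter model (OpenAI also publishes the weights directly in its gpt-2 GitHub repository):

```python
# Minimal sketch: generate text with the full GPT-2 release.
# Assumes the Hugging Face "transformers" package; "gpt2-xl" is
# its name for the 1.5-billion-parameter model OpenAI released.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

prompt = "In a shocking finding, scientists discovered"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; do_sample=True yields varied, human-like text.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```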

Still Not Perfect

According to OpenAI's post, the company did see some "discussion" regarding the potential use of GPT-2 for spam and phishing, but it never actually saw evidence of anyone misusing the released versions of the algorithm.

One explanation might be that, while GPT-2 is among the best text-generating AIs in existence, if not the best, it still can't produce content that's indistinguishable from text written by a human. And OpenAI warns that it's the future algorithms that can clear that bar we'll have to watch out for.

"We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent," the startup wrote.

READ MORE: OpenAI has published the text-generating AI it said was too dangerous to share [The Verge]

More on OpenAI: Now You Can Experiment With OpenAI’s "Dangerous" Fake News AI

