Wait... isn't that their whole business model?

Botnet Attack

AI company Midjourney has banned all employees of its competitor Stability AI from using its AI image-generating service, after it allegedly caught one employee scraping huge amounts of data, including prompt and image pairs, Ars Technica reports.

This data could help Stability train or fine-tune its image-generating AI models — an unusual and ethically dubious way to get ahead of the competition, although of course, we're talking about tech that only exists in the first place because of material it scraped without the artists' permission.

The "botnet-like activity from paid accounts" triggered a 24-hour outage earlier this month, infuriating Midjourney.

Midjourney has now implemented a new policy, warning that "all employees of the responsible company" would be banned over "aggressive automation or taking down the service."

Pointing Fingers

Onlookers, however, have pointed out the irony of Midjourney, a company that has an extensive track record of scraping images off the internet without asking for permission, complaining about a competitor doing just that.

"It turns out that generative AI companies don’t like it when you steal, sorry, scrape, images from them," The Mary Sue's Siobhan Ball wrote in a blog post.

Both companies have also been targeted by several copyright lawsuits. Late last year, a group of visual artists filed an amended lawsuit against both Stability AI and Midjourney, among other companies, for misusing their work to train AI.

In the meantime, Stability hasn't admitted to any wrongdoing, with CEO Emad Mostaque saying he was confused and that the company's team "hasn't been scraping."

"Anyway I am a big Midjourney [and CEO] David [Holz] fan which is why I backed them at the start with the grant to pay for the beta," he added.

According to Mostaque, if his company was behind the scraping, it was "unintentional."

It's still unclear what exactly happened and who is to blame for the outage. Holz has since told Mostaque that he sent him "some information" to help with the company's investigation.

Mostaque told Ars, "We only scrape stuff that has proper robots.txt and is permissive," referring to the text file webmasters place on a site to tell automated crawlers which parts of it they're allowed to access.
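For the curious, here's a minimal sketch of what honoring that file looks like in practice, using only Python's standard library. The bot name and URLs are hypothetical placeholders, not details either company has confirmed.

```python
# Minimal sketch: checking robots.txt before crawling a page, using
# Python's standard-library robot parser. The user agent and URLs are
# hypothetical placeholders for illustration only.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # fetch and parse the site's robots.txt

page = "https://example.com/gallery/some-image"
if robots.can_fetch("ExampleScraperBot", page):
    print("robots.txt permits crawling this URL")
else:
    print("robots.txt disallows this URL; a polite scraper skips it")
```

A compliant scraper makes this check before every request; robots.txt is a voluntary convention, though, so nothing technically stops a bot from ignoring it.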

It's a messy situation that's still developing. Was it a mistake, or did one company try to steal from the other in a bid to get ahead? And if they did, wasn't that their whole business model in the first place?

More on image generators: Microsoft's Copilot AI Gladly Generates Anti-Semitic Stereotypes
