Presumably, when you dump $14 billion into a company to buy a 49 percent stake — as Meta just did in Scale AI — you're confident that said company a) will help you make a lot of money and b) knows what it's doing.

But a new scoop from Inc Magazine suggests that Scale AI — co-founded by 28-year-old zillionaire Alexandr Wang, whose first name does indeed lack a letter "e" between the "d" and "r" — is a massive clown show behind the scenes.

Back when it worked with Google (the two just broke up following Meta's takeover), Scale AI reportedly became overrun with countless "spammers" who fleeced the company for bogus work by taking advantage of its laughable security and vetting protocols — an episode that encapsulates its struggles to meet the demands of a huge client like Google.

Scale AI is basically a data annotation hub that does essential grunt work for the AI industry. To train an AI model, you need quality data. And for that data to mean anything, an AI model needs to know what it's looking at. Annotators manually go in and add that context.
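In principle, the job is straightforward: a human pairs raw data with the labels a model can't infer on its own. Here's a minimal hypothetical sketch of what that looks like (the examples and label names are invented for illustration, not from Scale AI's actual pipelines):

```python
# Hypothetical illustration of data annotation for AI training.
# (Samples and labels are invented; not from Scale AI's real workflows.)

raw_samples = [
    "The stock closed up 3 percent after earnings.",
    "Mix the flour and sugar before adding the eggs.",
]

# A human annotator supplies the context the model can't infer on its own:
annotations = [
    {"text": raw_samples[0], "topic": "finance"},
    {"text": raw_samples[1], "topic": "cooking"},
]

# Only labeled (input, label) pairs are useful for supervised training.
training_data = [(a["text"], a["topic"]) for a in annotations]
print(training_data[0])
```

Multiply that by millions of examples and you can see why the industry leans on cheap contract labor, and why garbage labels poison everything downstream.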

As is the fashion du jour in corporate America, Scale AI built its business model on an army of egregiously underpaid gig workers, many of them overseas. The conditions have been described as "digital sweatshops," and many workers have accused Scale AI of wage theft.

It turns out this was not an environment for fostering high-quality work.

According to internal documents obtained by Inc, Scale AI's "Bulba Experts" program to train Google's AI systems was supposed to be staffed with authorities across relevant fields. But instead, during a chaotic 11 months between March 2023 and April 2024, its dubious "contributors" inundated the program with "spam," which was described as "writing gibberish, writing incorrect information, GPT-generated thought processes."

In many cases, the spammers, independent contractors working through Scale AI-owned platforms like Remotasks and Outlier, still got paid for submitting complete nonsense, according to former Scale contractors, because it became almost impossible to catch them all. And even when they were caught, some simply came back using a VPN.

"People made so much money," a former contributor told Inc. "They just hired everybody who could breathe." 

The work often called for advanced degrees that many contributors didn't have, the former contributor said. And seemingly, no one was vetting who was coming in.

"There were no background checks whatsoever," a former queue manager for Remotasks, who was in charge of reviewing and approving the contributors' work, told Inc. "For example, the clients would have requirements for people working on projects to have certain degrees. But there were no verification checks... Often it was people that weren't native English speakers." 

Spammers "could get away with just totally submitting garbage and there weren't enough people to track them down," the former queue manager added. They also recalled how Scale AI's Allocations team in charge of assigning contributors once "dumped 800 spammers" into their team who proceeded to spam "all of the tasks."

Attempts at cracking down were crude. Per Inc, various memos and guidelines called for either denying or removing contributors from specific countries, including Egypt, Pakistan, Kenya, and Venezuela.

The program also got a little taste of the technology it was helping to create. Spammers were submitting so much AI-generated junk that supervisors were advised to use a tool called ZeroGPT, intended to detect ChatGPT usage, to vet entries.
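That kind of vetting ultimately boils down to thresholding a detector score. A minimal sketch, assuming a hypothetical `ai_likelihood` scorer (this stand-in heuristic is invented for illustration and is not ZeroGPT's actual API):

```python
# Hypothetical sketch of AI-content vetting. The scorer below is a crude
# stand-in for illustration only; it is NOT ZeroGPT's real API or method.

def ai_likelihood(text: str) -> float:
    """Stand-in detector: returns a fake 0-1 'AI-generated' score
    based on telltale boilerplate phrasing."""
    suspicious_phrases = ["as an ai language model", "in conclusion,"]
    return 1.0 if any(p in text.lower() for p in suspicious_phrases) else 0.1

def vet_submission(text: str, threshold: float = 0.5) -> bool:
    """Accept a contributor's submission only if the detector score
    falls below the rejection threshold."""
    return ai_likelihood(text) < threshold

print(vet_submission("The mitochondria is the powerhouse of the cell."))  # True
print(vet_submission("As an AI language model, I cannot answer that."))   # False
```

Real detectors are far noisier than this sketch implies, which is part of why so much spam reportedly slipped through anyway.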

It makes you wonder just how much gibberish slipped through the cracks and ended up being internalized by Google's AI models. Perhaps that explains a little about Google's infamously shoddy AI Overviews feature.

For its part, a Scale AI spokesperson dismissed the claims.

"This story is filled with so many inaccuracies, it's hard to keep track," the spokesperson said in a statement to Inc. "What these documents show, and what we explained to Inc ahead of publishing, is that we had clear safeguards in place to detect and remove spam before anything goes to customers."

