Few companies in the history of capitalism have amassed as much wealth and influence as Meta.
A global superpower in the information space, Meta — the parent company of Facebook, Instagram, WhatsApp, and Threads — has a market cap of $1.68 trillion at the time of writing, which, for a rough sense of scale, is more than the gross domestic product of Spain.
In spite of its immense influence, none of its internal algorithms can be scrutinized by public watchdogs. Its host country, the United States, has largely turned a blind eye to its dealings in exchange for free use of Meta's vast surveillance capabilities.
That lack of oversight, coupled with Meta's near-omnipresence as a social utility, has had devastating consequences throughout the world, manifesting in crises like the genocide of Rohingya Muslims in Myanmar and the systemic suppression of Palestinian rights organizations.
How do you uncover the harms caused by one of the most powerful companies on earth? In the case of public violence, the evidence isn't hard to trace. However, Meta's unprecedented corporate empire also creates less obvious harms, which scores of scholars, researchers, and journalists are devoting entire careers to uncovering.
One prominent group of these investigators is GLAAD, the Gay & Lesbian Alliance Against Defamation, which recently released its annual report on social media safety, privacy, and expression for LGBTQ people.
The report notes that Meta has undergone a "particularly extreme" ideological shift over the past year, adding harmful exceptions to its content moderation policies while disproportionately suppressing LGBTQ users and their content. The tech giant has also failed to give LGBTQ users sovereignty over their own personal data, which it collects, analyzes, and wields to generate huge profits.
While Meta collects all of our data — from which it draws over 95 percent of its revenue — the practice is particularly harmful to LGBTQ users, who then have to contend with algorithmic biases, non-consensual outing, harassment, and, in some countries, state oppression.
"It's a dangerous time, certainly for trans people, who as a minority have been so ridiculously maligned, but also a dangerous time for gay people, openly bi[sexual] people, people who are different in any way," says Sarah Roberts, a UCLA professor and Director of the Center for Critical Internet Inquiry.
To address these shortcomings and the dangers they introduce, GLAAD made a number of recommendations. One key suggestion was to improve moderation "by providing training for all content moderators focused on LGBTQ safety, privacy and expression." The media advocacy group doesn't mince words, adding that "AI systems should be used to flag for human review, not for automated removals."
However, it doesn't look like Meta got the message.
Weeks after GLAAD issued its findings, internal Meta documents leaked to NPR revealed the company's plan to hand 90 percent of its privacy and integrity reviews over to "artificial intelligence."
This will affect nearly every new feature introduced to its platforms; until now, human moderators have typically evaluated those features for risks to privacy, safety, and the wellbeing of user groups like minors, immigrants, and LGBTQ people.
Meta's internal risk assessment is an already opaque process, and Roberts notes that government attempts at risk oversight, like the EU's Digital Services Act, are likewise a labyrinth of filings which are largely dictated by the social media companies themselves. AI, chock full of biases and prone to errors — as admitted by Meta's own AI chief — is certain to make the situation even worse.
Earlier this week, meanwhile, the Wall Street Journal revealed Meta's plans to fully automate advertising via the company's generative AI software, which will allow advertisers to "fully create and target ads" directly, with no human in the loop.
This includes hyper-personalized ads, writes the WSJ, "so that users see different versions of the same ad in real time, based on factors such as geolocation."
Data hoarders like Meta — which tracks you even when you're not using its platforms — have long been able to profile LGBTQ users based on gender identity and sexual orientation, including those who aren't publicly out.
Removing any human from these already sinister practices serves to streamline operations and distance Meta from its own actions — "we didn't out gay users living under an oppressive government," the company can say, "even if our AI did." It's no coincidence that Meta had already disbanded its "Responsible AI" team as early as 2023.
At the root of these decisions — Meta CEO Mark Zuckerberg's right-wing turn notwithstanding — is the calculated drive to maximize revenue.
"If there's no reason to rigorously moderate harmful content, then why pay so many content moderators? Why engage researchers to look into the circulation of this kind of content?" observes Roberts. "There ends up being a real cost savings there."
"One of the things I've always said is that content moderation of social media is not primarily about protecting people, it's about brand management," she told Futurism. "It's about the platform managing its brand in order to make the most hospitable environment for advertisers."
Sometimes these corporate priorities line up with progressive causes, like LGBTQ user safety or voter registration. But when they don't, Roberts notes, "dollars are dollars."
"We are looking at multibillion-dollar companies, the most capitalized companies in the world, who have operated with impunity for many, many years," she said. "How do you convince them that they should care, when other powerful sectors are telling them the opposite?"
More on Meta: Meta's Platforms Have Become a Cesspool of Hatred Against Queer People