Did it technically work? Sure, maybe. Is it stupid? Yes.
Whack-a-Mole
Leave it to X-formerly-Twitter to find the dumbest possible solution to a serious problem.
Last week, outrage rightfully broke out across the web when horrific AI-generated pornography of the singer and songwriter Taylor Swift went viral on the platform, the latest content moderation scandal to hit the beleaguered social media site since owner Elon Musk gutted X's once-robust trust and safety workforce. The incident has been alarming enough to prompt lawmakers in Washington, and even the White House, to weigh in on the dangers of AI deepfakes and nonconsensual AI porn.
Meanwhile, what has Elon Musk's X, the platform where these nonconsensual images went viral, actually done to combat the problem? It's simply blocked searches for "Taylor Swift" across the board. Search her name, and all you get is a "something went wrong" error message. Which is... well, one way to do it.
Zero Tolerance
In a statement to the BBC, X head of business operations Joe Benarroch insisted that the Swift search blackout was temporary, adding that X has a "zero-tolerance policy" for deepfake porn. He also said that X's teams are working to remove the images and "take action" against anyone who posted them.
Given that X's moderation staff has been in shambles for well over a year now, this may have been X's only feasible means of stalling the spread of the photos. Still, that doesn't mean it was a good solution, if you can even call it that, or a step that any healthy social platform would need to take.
It's as if someone went to the hospital for what should have been a routine procedure to treat an infection on their arm, but because the hospital had fired everyone with the right expertise, it had to amputate the whole limb instead. Did it technically work? Sure, maybe, at least for now. But it's not a move the hospital should have had to make in the first place.
Rising Tide
It's worth noting that the Swift incident comes at a pressing moment for X, as broader concern over its lack of content moderation continues to mount.
Hate speech has risen noticeably on the platform since Musk's 2022 takeover, and later this week, X CEO Linda Yaccarino is scheduled to face questions from lawmakers about the platform's troubling inability to curb the spread of child sexual abuse material. And though X, as Bloomberg reported this weekend, says it's planning a "trust and safety center of excellence" in Austin, it hasn't offered any timeline for the initiative.
In the meantime, we can likely expect more X moderation failures to follow — and as a result, more people, public and not, are likely to get hurt.