The Metaverse is always greener.
Do Svidaniya
It turns out users aren't the only ones fleeing Meta.
In the midst of a crucial AI-intellectual-property case, Meta's top corporate counsel Mark Lemley has announced he "fired Meta as a client" due to "Mark Zuckerberg and Facebook's descent into toxic masculinity and neo-Nazi madness."
In an exclusive interview with Wired, the renowned copyright lawyer unloaded on the company and the cultural moment it's leaning into.
"I am very troubled by the direction in which the country is going," Lemley said. "I am particularly troubled that a number of folks in the tech industry seem to be willing to go along with it, no matter how extreme it gets."
Though he's never talked with Zuckerberg face-to-face, Lemley thinks both Elon Musk and the Meta CEO "have been particularly egregious in their behavior."
As Zuckerberg sheds his old liberal skin to embrace the new world order, critics have argued that he risks alienating Meta users in exchange for his newfound political favor.
Litigating the Future
And as Zuckerberg and Meta lurch right, it's important to remember that their institutional power didn't spring up overnight.
Split with the social giant notwithstanding, Lemley goes on to explain that he still thinks Meta is in the right in its most recent copyright dispute — a debate over whether the $1.59 trillion company should be allowed to scrape copyrighted material for AI training without consent.
"The strongest arguments are the ones where the output of a work ends up being substantially similar to a particular copyrighted input," the lawyer told Wired, referring to copyright disputes with Meta. "Turns out, it's hard to purge all references to Mickey Mouse from your AI dataset, for instance. If people want to try to generate a Mickey Mouse image, it's often possible to do something that looks like Mickey Mouse."
For him, the ethics of AI transparency are not up for debate, but are the domain of coolheaded copyright litigation — which has so far been unable to stop Big Tech from getting everything it wants.
But why does the human-created content used to train AI include Mickey Mouse in the first place? If it's hard to purge content once it's been used to train a model, wouldn't that merit stronger scrutiny before, not after, these for-profit models spit our art, music and writing back out at us?
Given that most researchers and engineers don't know where AI training data comes from, it seems to behoove us to hold these increasingly powerful corporations accountable for the public data they scrape — not despite their toxic turn, but precisely because of it.
More on Meta: Facebook Planning to Flood Platform with AI-Powered Users