AI company Anthropic suffered a massive leak of the source code to its Claude Code AI assistant earlier this week, triggering a panicked game of cat and mouse as company representatives sent out copyright takedown requests targeting thousands of copies of its pilfered work.
The code allowed tinkerers to reverse engineer aspects of the blockbuster chatbot, raising concerns that the leak could give Anthropic’s competitors a major leg up. It also offered eyebrow-raising clues about upcoming or experimental efforts, including unreleased AI models and a “Tamagotchi”-like feature, called “buddy,” that “sits beside your input box and reacts to your coding.”
Perhaps strangest of all: code snippets also showed that Anthropic is actively tracking how often users use vulgar language.
“Claude Code has a regex that detects ‘wtf’, ‘ffs’, ‘piece of s***’, ‘f*** you’, ‘this sucks’ etc.,” tweeted developer Rahat Chowdhury. “It doesn’t change behavior… it just silently logs is_negative: true to analytics.”
“Anthropic is tracking how often you rage at your AI,” he added. “Do with this information what you will.”
“This is one of the signals we use to figure out if people are having a good experience,” Claude Code creator Boris Cherny replied. “We put it on a dashboard and call it the ‘f***s’ chart.”
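Based on Chowdhury’s description, the mechanism is simple to picture. The following is a minimal sketch of what such a detector might look like; the actual pattern, phrase list, and analytics schema in Claude Code are not public, so everything here beyond the `is_negative` field name and the example phrases is an assumption:

```python
import re

# Hypothetical recreation of the frustration detector described in the leak.
# Only the milder phrases quoted in Chowdhury's tweet are included here;
# the real pattern is not public.
NEGATIVE_PATTERN = re.compile(
    r"\b(wtf|ffs|this sucks)\b",
    re.IGNORECASE,
)

def classify_message(text: str) -> dict:
    """Return an analytics event flagging frustrated input.

    Per the leak, the flag is only logged; it does not change
    the assistant's behavior.
    """
    return {"is_negative": bool(NEGATIVE_PATTERN.search(text))}

print(classify_message("ffs this sucks"))  # {'is_negative': True}
print(classify_message("looks good, ship it"))  # {'is_negative': False}
```

Aggregating the `is_negative` events over time would yield exactly the kind of dashboard Cherny describes.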
Chowdhury also found that “there is a full mood classification for their insights but its employee only.”
“When an Anthropic employee gets frustrated, it pops up a prompt asking them to share their transcript, basically ‘hey you seem upset, wanna file a bug report?'” he wrote.
Beyond giving us a fascinating insight into how Anthropic has been building its blockbuster assistant, the leak has kept Cherny on a tear on social media as he tries to pick up the pieces following his employer’s embarrassing blunder.
“It was human error,” he insisted in a Wednesday tweet. “Our deploy process has a few manual steps, and we didn’t do one of the steps correctly. We have landed a few improvements and are digging in to add more sanity checks.”
Cherny also insisted that more AI was the answer to ensuring such a leak wouldn’t happen again.
“Like with any other incident, the counter-intuitive answer is to solve the problem by finding ways to go faster, rather than introducing more process,” he wrote. “In this case more automation and [C]laude checking the results.”
The developer also clarified that “no one was fired” following the leak, calling it “an honest mistake.”
But now that the cat is out of the bag, developers continue to pore over the wealth of data. Student developer Sigrid Jin’s recreated source code repository on GitHub — dubbed “Claw Code,” in a reference to the open-source AI agent OpenClaw — has been forked, or essentially copied, almost 100,000 times. He told Business Insider that the debacle could result in greater democratization of these kinds of tools.
“Non-technical people are using these agents to build real things,” Jin said. “We are talking about cardiologists making patient care apps and lawyers automating permit approvals.”
“It has turned into a massive sharing party,” he added.
More on the leak: Leaked Claude Code Shows Anthropic Building Mysterious “Tamagotchi” Feature Into It