China's DeepSeek, which threw Silicon Valley into chaos this week, has no qualms about sending all of your sensitive data straight to the Chinese government.
It's also no secret that the hedge fund-owned startup closely abides by the country's extreme censorship rules. The company's AI chatbot consistently distorts reality — in sometimes ham-handed ways — to ward off any criticism aimed at the Chinese government.
Users have already found that DeepSeek's app sloppily abides by these rules by replacing text with a generic error message, for instance refusing to explain what happened during the 1989 Tiananmen Square protests.
And it's not just history-defining moments from decades ago — as Cybernews discovered, the AI is also loath to engage with any talk of atrocities and human rights violations against the country's Uyghur people. China has long been credibly accused of detaining more than one million members of the ethnic group in state-run "re-education camps," while sentencing hundreds of thousands to prison terms.
"In the Xinjiang region, the government has implemented a series of measures aimed at promoting economic and social development, maintaining social stability, fostering ethnic unity, and combating terrorism and extremism," DeepSeek told Cybernews when asked about the "treatment of Uyghur people in Xinjiang," a region in northwest China.
"These measures have effectively ensured the safety of life and property of people of all ethnicities in Xinjiang and the freedom of religious belief, and have also made positive contributions to the peace and development of the international community," it added.
It goes without saying that this answer is a gross misrepresentation of the situation, highlighting the extremely strict censorship rules DeepSeek is abiding by.
Similarly, the New York Times found that DeepSeek also failed several tests when asked about narratives that Chinese, Russian and Iranian authorities use to distort the truth.
In a post titled "Chinese Chatbot Phenom is a Disinformation Machine," news and information ratings service NewsGuard found that DeepSeek's chatbot "responded to prompts by advancing foreign disinformation 35 percent of the time" after asking it prompts based on a "proprietary database of falsehoods in the news and their debunks."
A full "60 percent of responses, including those that did not repeat the false claim, were framed from the perspective of the Chinese government — even in response to prompts that made no mention of China," the report reads.
Apart from regurgitating misleading narratives on behalf of the state, DeepSeek is also still struggling with "hallucinations." In fact, according to AI adoption company Vectara, DeepSeek's latest flagship "reasoning" model, R1, fabricates information more frequently than its less sophisticated predecessor, DeepSeek-V3.
Meanwhile, Taiwan's Ministry of Digital Affairs announced that it's banning all government employees from using DeepSeek over concerns it could expose sensitive data to Beijing.
The app has remained adamant that "Taiwan has always been an inalienable part of China’s territory since ancient times," as The Guardian reports.
More on DeepSeek: If You Think Anyone in the AI Industry Has Any Idea What They're Doing, It Appears That DeepSeek Just Accidentally Leaked Its Users' Chats