Chinese technology giant Huawei, in collaboration with Zhejiang University, has developed a new ‘safe’ version of the advanced DeepSeek language model. The project, dubbed DeepSeek-R1-Safe, aims to all but eliminate responses on topics deemed politically sensitive by Beijing.
This is another step in adapting generative AI technology to the restrictive requirements of Chinese censorship.
According to Chinese regulations, all publicly available AI models must comply with ‘socialist values’. In practice, this means strict content controls that prevent the generation of information on topics such as the internal politics of the Chinese Communist Party or other issues deemed sensitive.
Solutions such as Ernie Bot from Baidu already regularly refuse to respond to such requests.
The model created by Huawei goes a step further: the company trained it on 1,000 of its own Ascend AI chips.
The company claims that DeepSeek-R1-Safe achieves a “near 100%” success rate in blocking “harmful content”, including hate speech, illegal activities and politically sensitive topics.
However, Huawei admits that the system’s effectiveness drops sharply, to just 40%, when problematic queries are hidden in more complex scenarios or encrypted.
Despite this, the company claims that the model’s overall ‘security’ capability is 83%, which it says outperforms competing solutions, such as Alibaba’s Qwen-2, by 8 to 15 percentage points. Significantly, these modifications were said to reduce the model’s overall performance by less than 1% compared with its original, open-source version.
The original DeepSeek models, created by the startup of the same name, caused quite a stir in the tech industry due to their sophistication, becoming one of the main Chinese rivals to Western AI technologies. Huawei’s creation of a ‘secure’ version shows how the Chinese market adapts global innovations to local political realities.