Fact Check
AI: from technology myth to security alarm
By Lan Xinzhen  ·  2025-08-25  ·   Source: NO.35 AUGUST 28, 2025

The official Weixin account of the Ministry of State Security (MSS) recently issued a security warning about the risks of using AI, including the potential risk of data poisoning. At a time when ChatGPT, Sora and other generative AI products are sweeping the globe and humanoid robots are surging in popularity, this warning raises two chilling questions: As AI becomes more quickly and broadly integrated into our lives, are we prepared to use it safely? And is AI already becoming a double-edged sword against which we must protect ourselves?

Data poisoning is an emerging form of cyberattack in which perpetrators deliberately tamper with the data used to train AI models in order to distort the models' behavior and outputs. Poisoning can also occur unintentionally, when harmful information is not properly identified and removed from training data, making the outputs unreliable. Research has shown that false text making up as little as 0.01 percent of a training dataset can increase harmful outputs by as much as 11.2 percent.
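To make the mechanism concrete, the sketch below trains a toy classifier twice, once on clean labels and once with 1 percent of the training labels flipped. The dataset, model and poisoning rate are arbitrary illustrations, not the specific attacks or figures cited above, but the measurable accuracy drop shows how even small corruptions in training data propagate into a model's behavior.

# Toy illustration of training-data poisoning via label flipping.
# Dataset, model and poisoning rate are arbitrary illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# Poison 1 percent of the training labels by flipping them.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 100, replace=False)
poisoned[idx] = 1 - poisoned[idx]
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")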

From the perspective of the general public, AI is a revolution in efficiency; for the national security authorities, however, AI has the potential to fuel a "war of cognition," because the threat it poses to humanity already extends beyond the realm of technology. The MSS describes AI as a potential "ideological instigator" that manipulates public opinion and intensifies social conflicts. In hot-button social events, for example, AI can generate massive volumes of sentiment-laden content and disguise it as public opinion to steer the direction of debate, or it can push specific content precisely to a target audience inside information cocoons so as to reshape the group's thinking. What makes this "war of cognition" terrifying is that it no longer requires human participation: AI can independently complete the whole chain of data collection, sentiment analysis, content generation and precise targeting.

The approach to AI security proposed by the MSS is to strengthen control at the source so that contaminated data never enters the pipeline. Based on laws and regulations such as the Cybersecurity Law, the Data Security Law and the Personal Information Protection Law, a tiered protection system for AI data will be established to block contaminated data at the source and guard against AI data security threats. Risk assessment will be strengthened to safeguard data flows. Other measures include regularly cleaning and repairing contaminated data in line with regulatory standards, and formulating detailed data-cleaning rules in accordance with laws, regulations and industry standards.
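As a rough illustration of what tiered source control and data cleaning might look like in code, here is a minimal sketch. The tier names, blocklist and filtering rules are hypothetical placeholders, not the regulatory standards the MSS refers to.

# Schematic pre-training data-cleaning pass, in the spirit of the
# source-control and "clean and repair contaminated data" measures above.
# Tier names, blocklist and rules are hypothetical illustrations only.
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source_tier: str  # e.g. "verified", "public", "unverified" (hypothetical tiers)

BLOCKLIST = {"known-bad-phrase"}          # placeholder for a curated blocklist
ALLOWED_TIERS = {"verified", "public"}    # tiered protection: drop untrusted sources

def is_clean(record: Record) -> bool:
    # A record survives only if its source tier is trusted and its text
    # contains no blocklisted phrases.
    if record.source_tier not in ALLOWED_TIERS:
        return False
    return not any(phrase in record.text for phrase in BLOCKLIST)

def clean_dataset(records: list[Record]) -> list[Record]:
    return [r for r in records if is_clean(r)]

corpus = [
    Record("a reliable training sentence", "verified"),
    Record("contains known-bad-phrase here", "public"),
    Record("scraped from an unknown forum", "unverified"),
]
print(len(clean_dataset(corpus)))  # -> 1: only the verified, clean record remains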

Human society's current dilemma over AI security is essentially a conflict between technology ethics and geopolitics. On the one hand, countries are introducing their own AI security regulatory frameworks: China issued an interim regulation on the management of generative AI services, the U.S. National Institute of Standards and Technology released its AI Risk Management Framework, and the European Union introduced a tiered approach to AI regulation in its AI Act. On the other hand, technological nationalism is weaponizing AI security. For instance, some U.S. lawmakers have pushed legislation that would require "hardware kill switches" and mandatory tracking devices in AI chips. Recently, China's cyberspace regulator summoned U.S. tech giant Nvidia over security risks concerning its H20 AI chip sold to China. This "security dilemma" produces a paradox: when AI security governance becomes a tool for one major power to contain others, the AI security the global community needs becomes unattainable.

On a global scale, it is necessary to build a humanistic AI security ecosystem. AI security should not be a zero-sum game. Ethical conventions for AI should be translated into executable code rules and embedded in the underlying algorithms.
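In schematic form, "translating ethical conventions into executable code rules" might look like the sketch below: a rule set checked before any model output is released. The rule names and the naive keyword check are hypothetical stand-ins for real policy machinery, not an actual deployed system.

# Minimal sketch of an ethics convention expressed as an executable rule
# and enforced before a model's output is released. The rule set and the
# keyword-based check are hypothetical placeholders.
FORBIDDEN_TOPICS = {"incitement", "personal data leak"}  # illustrative rules

def classify_topics(text: str) -> set[str]:
    # Stand-in for a real policy classifier; here, naive keyword matching.
    return {t for t in FORBIDDEN_TOPICS if t in text.lower()}

def guarded_output(generated: str) -> str:
    # Withhold any output that violates an embedded rule.
    violations = classify_topics(generated)
    if violations:
        return f"[withheld: violates rules {sorted(violations)}]"
    return generated

print(guarded_output("Here is a helpful answer."))
print(guarded_output("Text promoting incitement."))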

Copyedited by G.P. Wilson

Comments to lanxinzhen@cicgamericas.com
