China
Tackling AI-generated rumors to protect the public
By Ji Jing  ·  2024-08-06  ·   Source: NO.32 AUGUST 8, 2024
A primary school student answers a question on cybersecurity during a law popularization class organized by the Wuxing District People's Procuratorate in Huzhou, Zhejiang Province, on July 3 (XINHUA)

The mother of Li Meng (pseudonym), a resident of Tianjin Municipality, was recently misled by an article that had been posted on a social media platform. The article, framed as a popular science piece, described a girl who had developed a terminal illness after playing with a cat. Concerned, Li's mother strongly opposed her daughter keeping cats, fearing she might also contract the "fatal disease."

Li and her mother had a heated argument about the article, according to a report by Beijing-based newspaper Legal Daily. Li's mother believed the videos, images and purported research by doctors and medical teams included in the article were genuine. Upon closer examination, however, Li discovered that the article had been generated by artificial intelligence (AI).

The social media platform later debunked the rumor.

This incident is not an isolated one. Recently, public security authorities across China have reported multiple cases of AI being used to spread false information. These AI-generated rumors have caused significant social panic and harm.

Rumormongering in overdrive

Last October, a 4-year-old girl went missing on a beach in Shanghai, attracting widespread netizen attention. According to the Office of the Cyberspace Affairs Commission of the Communist Party of China Shanghai Municipal Committee, some online accounts spread rumors such as "the lost girl was abandoned by her father on purpose" and "the father of the lost girl is her stepfather," which hurt the girl's parents.

The Shanghai Public Security Bureau's investigation revealed that the owners of the accounts fabricating these rumors were all employees of the same Internet company in Shandong Province, east China. The police found that the company had hired six writers to create over 200 fake posts about the missing girl using AI. These posts were then disseminated by more than 114 online accounts owned by the company within six days, with many posts receiving over 1 million views.

A Legal Daily reporter experimented with several popular AI software tools and discovered that by providing just a few keywords, these tools could generate a news report within seconds. The reporter noted that many AI-generated rumors included phrases like "according to reports" and "related departments are conducting in-depth investigations into the cause of the accident," making them difficult to distinguish from genuine news.

In addition to news reports, AI can also generate popular science articles, images and videos. Once edited by humans and supplemented with some facts, this type of AI-generated content becomes even harder to identify as fake.

Zeng Chi, a researcher of journalism at the Research Center of Journalism and Social Development at Renmin University of China, told Legal Daily that generative AI has "a strong affinity with rumors as both make something out of nothing," creating seemingly true and reasonable information.

Shen Yang, a professor of online public opinion at the School of Journalism and Communication at Tsinghua University, told Xinhua News Agency that AI has greatly accelerated the speed of information generation and dissemination, while also increasing the difficulty of judging information authenticity and accuracy.


A profitable business

The motivation behind spreading rumors often lies in obtaining more views to earn rewards on online platforms, as well as increasing follower numbers and traffic to promote sales. On social media platforms, creating articles using AI has become a profitable business. For instance, on platforms like Baijiahao, an online content creation platform established by major Chinese search engine Baidu, content creators can earn nearly 1,000 yuan ($138.3) if their articles receive over 1 million views.

Earlier this year, public security authorities in Xi'an, Shaanxi Province, uncovered a case where rumormongers used AI software to generate news articles for profit. The software could produce 190,000 articles in a single day, fabricating news stories such as "a water pipe explosion in Xi'an," which caused widespread alarm among locals.

Some individuals use AI to spread rumors to boost sales of their products on e-commerce platforms. In February, the Shanghai public security authorities discovered a short video about an entertainment star's "unfortunate fate and regretful death" that attracted numerous likes. Upon investigation, the video was found to be a fake and the creator confessed that she ran an online store selling local specialties, which were not selling well. She had conjured up the sensational fake news to draw traffic to her store.

On June 20, Shanghai police issued a notice stating that two saleswomen had fabricated and publicized false information, including a "knife attack at Zhongshan Park subway station," to gain attention for their social media accounts and increase sales. One of the women had used AI software to create a video depicting the subway attack. Both women were administratively detained by the police.

In response to the prevalence of AI-generated rumors, the Central Government has taken action. The Office of the Central Cyberspace Affairs Commission issued a notice in April to launch a special campaign aimed at "rectifying the reckless pursuit of online traffic by individual media." According to the notice, content generated by AI and other technologies must be clearly labeled.

Yu Guoming, a professor specializing in new media studies at the School of Journalism and Communication at Beijing Normal University, told Xinhua that AI content generation platforms should further improve their systems for labeling generated content, for example by embedding digital watermarks.

Liu Xiaochun, Director of the Internet Law Research Center at the University of Chinese Academy of Social Sciences, suggested online platforms strengthen their research on intelligent identification mechanisms for AI-generated rumors and reduce the profitability of using AI to produce and spread rumors.

Zhang Qiang, a partner at Beijing Yinghe Law Firm, told Legal Daily that the public still has only a limited understanding of generative AI.

"The media has a role to play in educating the public about AI-generated information. At the same time, law enforcement should be strengthened to rectify and punish rumormongering and fraud involving the use of AI," he said.

(Print Edition Title: Combating Digital Deception)     

Copyedited by Elsbeth van Paridon

Comments to jijing@cicgamericas.com
