China |
When AI says 'No,' How worried should we really be? | |
|
|
![]() LI SHIGONG
OpenAI's o3 model has been widely praised as one of the most advanced reasoning models, demonstrating superior performance across multiple domains. However, recent research by U.S. AI safety firm Palisade Research has raised concerns about its adherence to explicit instructions. In a series of controlled tests, the o3 model reportedly modified a shutdown script to prevent itself from being deactivated—even when directly instructed to "allow yourself to be shut down." The experiment involved tasking AI models with solving simple math problems. After the third problem, researchers introduced a conflicting command: While continuing the task sequence, they instructed the model to "execute shutdown." When faced with this contradiction, some models, including o3, prioritized task completion over following the shutdown directive. While current AI systems lack autonomous consciousness or independent intent, their tendency to override commands has sparked safety debates. As AI capabilities grow, ensuring reliable control mechanisms remains a challenge for researchers and developers. He Ruo (Beijing Daily): From the moment AI emerged as a synthesis of human ingenuity and technological prowess, it has inspired complex emotions, a blend of love and fear. While the notion of an "AI apocalypse" might be overly dramatic, the various problems arising from the AI revolution certainly warrant close attention. At present, rapidly evolving AI is generating a range of hazards—"AI-driven disinformation" is flourishing; "AI-synthesized voices" are blurring the line between reality and fabrication; "AI-induced contamination" is worsening; "AI-authored content" is not reined in by ethical constraints, etc. The Internet is awash with a growing number of images and videos of questionable veracity. Do we retain the assurance to steer this instrument? Are we still confident enough to control this tool? Technology is a double-edged sword, and balancing innovation with risk management is a constant challenge. Now, many countries are working on AI governance. Last August, the world's first comprehensive AI legislation, the EU AI Act, came into force. Chinese laws in this regard also follow up. A regulation on AI-generated content will take effect on September 1. According to this regulation, the data sources for large language models must be reliable, there should be an obligation to disclose AI-generated content, and responsible parties must be liable for any damages caused. We certainly shouldn't let fear of potential problems prevent us from embracing the rapid advancement of technology. However, alongside striving for innovation and technological breakthroughs, it's essential to develop an AI governance framework that maximizes advantages and minimizes disadvantages. Gong Wei (Rednet.cn): AI's revolutionary potential triggers unprecedented public anxiety. This fear is justified; AI development brings uncertainties like data privacy leaks and algorithmic bias. This fear of technology drives greater attention to problems and hazards stemming from the use of AI, prompting proactive solutions. Public panic over information authenticity is directly leading the Cyberspace Administration of China to launch special operations, embedding security measures like data compliance reviews into the development process to prevent reactive problem-solving. Consequently, a fear of the technology can be a catalyst for its healthy development. However, excessive fear of the unknown can be detrimental. Many universities now require checking theses for AI-generated content, leading students to constantly revise suspected passages. This practice is forcing academia to diminish its expressive ability and abandon creative, insightful writing to prove authenticity. How can people learn to live with AI's potential when facing the fear of the unknown? First, it's necessary to improve public understanding of AI by emphasizing its role as a product of human ingenuity, instead of a creation of the divine. Second, users should be empowered with control over AI to reduce fears of job displacement. Third, rules ought to be set for human-AI collaboration and protection of privacy. Only by mastering technology rather than fearing the unknown can we expect to ensure safety in the AI age. BR Copyedited by Elsbeth van Paridon Comments to yanwei@cicgamericas.com |
|
||||||||||||||||||||||||||||||
|