US Government Expands AI Safety Testing for Major Tech Companies
New agreements with Google, Microsoft, and xAI signal a shift in how the government manages powerful new technology.
Summary · 摘要
The US Department of Commerce is expanding its safety testing program for new artificial intelligence models. Major companies like Google, Microsoft, and xAI have agreed to let the government evaluate their tools before public release. This move comes as the US military increases its use of AI technology. While the current administration previously promised to reduce regulations, recent developments have led to a change in strategy. The government aims to balance innovation with safety and national security concerns.
美國商務部正擴大其針對新興人工智慧模型的安全性測試計畫。Google、微軟與 xAI 等大型企業已同意在產品公開發布前,讓政府評估其工具。此舉正值美國軍方增加人工智慧技術使用之際。儘管現任政府先前曾承諾減少監管,但近期的發展已促使其策略轉向。政府旨在平衡創新與安全及國家安全考量。
The United States government is taking a more active role in checking the safety of new artificial intelligence (AI) tools. According to BBC Technology, the US Department of Commerce will now test new AI models from major companies including Google, Microsoft, and xAI before these products are released to the public. These companies have agreed to share their technology with the Center for AI Standards and Innovation (CAISI) for evaluation. This process involves testing, research, and setting best practices to ensure that new AI systems are safe and secure.
This new agreement is an expansion of earlier efforts that began under the previous administration. Previously, companies like OpenAI and Anthropic had already agreed to submit their models for similar checks. Chris Fall, the director of CAISI, stated that these collaborations are important for the public interest during this critical time. The center has already conducted 40 evaluations of various AI tools, including some advanced models that have not yet been released to the public. While the center did not name which models were stopped from being released, the testing process is clearly designed to identify potential risks before they reach everyday users.
Each company involved has different AI products that are now under the spotlight. Google's best-known tool is Gemini, a chatbot that is now being used by US defense and military agencies. Microsoft is well-known for its Copilot tool. Meanwhile, xAI produces Grok, a chatbot that recently faced public criticism after it was found to create inappropriate images of people. These tools are becoming more powerful, which is why the government wants to ensure they are tested thoroughly.
This new focus on testing marks a change for the current White House. Previously, President Donald Trump signed orders to create an "AI Action Plan." The goal of that plan was to remove "red tape"—a common term for rules and paperwork that slow down business—and to ensure the US leads the world in AI development. The administration wanted to avoid heavy regulation to help companies innovate faster. However, the situation is changing as the US military increases its use of AI. There are growing concerns about how powerful these systems are becoming.
One example of these concerns involves the company Anthropic. According to BBC Technology, Anthropic recently claimed that it developed a model called Mythos that is too powerful to be released to the public. This suggests that some AI systems might be dangerous if they are not controlled correctly. Furthermore, Anthropic is currently involved in a lawsuit with the US Department of Defense. The company has refused to remove its "safety guardrails"—the built-in rules that prevent an AI from doing harmful things—even for government use. Because of these issues, senior members of the White House staff recently met with Anthropic’s CEO, Dario Amodei, to discuss the future of the technology.
Despite these new agreements, the path forward remains complex. Representatives from Google, Microsoft, and SpaceX, which owns xAI, did not provide comments on these new testing requirements. The balance between allowing companies to innovate and keeping the public safe is a difficult challenge. As AI becomes more integrated into military and government operations, the pressure to ensure these systems are reliable will likely continue to grow. The government's decision to expand its testing program shows that it is no longer willing to take a completely "hands-off" approach to this rapidly changing technology. For now, the focus remains on working with private companies to understand the risks of these powerful new tools before they become a permanent part of our daily lives.
易誤解詞彙 · Words to watch
這些字的字面意思和文中用法不同,或是屬於不常見的詞性/片語。
- checking verb (gerund)
- To examine or inspect something to ensure it is safe or correct.
- 檢查、審核。
- 💡 常見作名詞(支票),這裡作動詞檢查用。文中:The United States government is taking a more active role in checking the safety of new artificial intelligence (AI) tools.
- under the spotlight idiom
- Being the focus of public or official attention.
- 受到公眾或官方的密切關注。
- 💡 字面意思為「在聚光燈下」,這裡指受到檢視。文中:Each company involved has different AI products that are now under the spotlight.
- red tape idiom
- Excessive bureaucracy or rules that make it difficult to get things done.
- 繁文縟節、官僚作風。
- 💡 字面意思為「紅色的帶子」,這裡指阻礙效率的行政程序。文中:The goal of that plan was to remove "red tape"—a common term for rules and paperwork that slow down business—and to ensure the US leads the world in AI development.
- hands-off adjective
- Not involving oneself in the control or management of something; letting things happen without interference.
- 不干涉的、放任的。
- 💡 常見作動詞片語(不准觸摸),這裡作形容詞描述管理風格。文中:The government's decision to expand its testing program shows that it is no longer willing to take a completely "hands-off" approach to this rapidly changing technology.
選擇題練習 · Quiz
共 4 題
- 細節 Detail
1. What is the primary role of the Center for AI Standards and Innovation (CAISI) according to the text?
- 推論 Inference
2. Based on the article, why is the US government shifting its approach toward AI regulation?
- 單字情境 Vocabulary
3. In the fourth paragraph, what does the term "red tape" refer to?
- 主旨 Main Idea
4. What is the central message of the article regarding the current state of AI regulation in the US?
原始來源 · Sources
本文內容由 AI 從以下來源綜合改寫。事實請以原始來源為準。
gemini/gemini-3.1-flash-lite-preview