US Government to Test New AI Models for Security Risks
Leading tech companies Google, Microsoft, and xAI agree to allow early government access to their advanced AI systems.
Summary · 摘要
The US Department of Commerce will now test new artificial intelligence (AI) models from Google, Microsoft, and xAI for security risks before they are released to the public. This agreement expands on earlier deals with OpenAI and Anthropic, showing a growing concern about the power of advanced AI. The Center for AI Standards and Innovation (CAISI) will evaluate these models to identify potential threats like cyberattacks or military misuse. This move marks a shift for the Trump administration, which previously favored less regulation for tech companies. Officials hope this early access will help understand and manage the national security implications of powerful AI systems.
美國商務部將在Google、微軟和xAI的新人工智慧(AI)模型向公眾發布之前,對其進行安全風險測試。這項協議擴展了與OpenAI和Anthropic的早期合作,顯示出對先進人工智慧力量日益增長的擔憂。「人工智慧標準與創新中心」(CAISI)將評估這些模型,以識別潛在的威脅,例如網路攻擊或軍事誤用。此舉標誌著川普政府的轉變,該政府此前傾向於對科技公司採取較少監管。官員們希望這種早期接觸將有助於理解和管理強大人工智慧系統對國家安全的影響。
The United States government is stepping up its efforts to understand and manage the risks of powerful artificial intelligence (AI) systems. Leading tech companies Google, Microsoft, and xAI have now agreed to allow the US Department of Commerce to test their new AI models for security issues before these tools are made available to the public, according to BBC Technology.
This new agreement is an expansion of earlier partnerships with other major AI companies like OpenAI and Anthropic, which were made during the Biden administration. The Center for AI Standards and Innovation (CAISI), part of the Department of Commerce, will be responsible for these evaluations. CAISI's director, Chris Fall, noted that these expanded collaborations are important "at a critical moment" for public safety, BBC Technology reports.
The evaluations will cover several areas, including testing the capabilities of the AI tools, conducting joint research, and developing best practices for using commercial AI systems safely. For example, Google's well-known AI tool, Gemini, is used in many Google products and also by US defense and military agencies. Microsoft's main AI tool is Copilot, while xAI's only AI product is Grok, a chatbot that has faced public criticism for some issues, BBC Technology explains.
Concerns are growing in Washington about the potential national security risks that powerful AI systems could create. By getting early access to these advanced models, US officials aim to identify threats like cyberattacks or misuse in military situations before the tools are widely used, Al Jazeera English reports. The development of advanced AI systems, such as Anthropic's Mythos model, has sparked global discussion, including among US officials and businesses, about their potential to make hackers far more effective.
Microsoft stated that it will work with US government scientists to test AI systems in ways that look for "unexpected behaviors." Together, they will create shared data and methods for testing the company's models. Microsoft has also signed a similar agreement with the United Kingdom’s AI Security Institute, according to Al Jazeera English.
CAISI, which is the government's main center for testing AI models, has already completed more than 40 evaluations. These include tests on some "state-of-the-art models that remain unreleased," BBC Technology noted. The center did not name the specific unreleased models it has tested. Developers often provide versions of their models with safety features removed so the center can look for national security risks, Al Jazeera English added.
This move also represents a change in approach for the Trump administration. Previously, the Trump White House had largely taken a "hands-off" approach to overseeing or regulating AI and technology companies, BBC Technology reports. Last year, President Donald Trump signed executive orders that formed his administration's "AI Action Plan," which he said would "remove red tape and onerous regulation" around AI development. He aimed to ensure the US would "win" through advancements and control of the technology.
However, with the US military increasing its use of AI, and recent claims from Anthropic that it developed a model called Mythos that is too powerful to release to the public, the White House seems to be changing its view. Senior members of Trump's staff met last month with Anthropic CEO Dario Amodei, even though the company is involved in a lawsuit with the US Department of Defense over its refusal to remove safety protections for government use of its models, BBC Technology reported.
The Verge also noted that OpenAI and Anthropic have "renegotiated their existing partnerships with the center to better align with priorities in President Donald Trump’s AI Action Plan." This suggests a more active role for the government in evaluating AI. The White House might even go further in the future, as a report from The New York Times mentioned that Trump is considering an executive order to bring together tech executives and government officials to oversee new AI models, The Verge added.
Chris Fall, CAISI director, emphasized the importance of "independent, rigorous measurement science" to understand advanced AI and its national security implications, according to The Verge. This ongoing collaboration between government and tech companies highlights the growing awareness of both the potential and the dangers of rapidly developing AI technology.
易誤解詞彙 · Words to watch
這些字字面意思和文中用法不同,或是不常見的詞性/片語。
- stepping up phrasal verb
- Increasing the amount or intensity of an activity.
- 增加某項活動的數量或強度;加強。
- 💡 此片語表示「加強、加速」,而非字面上的「踩上去」。文中:The United States government is stepping up its efforts to understand and manage the risks of powerful artificial intelligence (AI) systems.
- hands-off adjective
- Not interfering or getting involved in something; allowing others to manage things themselves.
- 不干預、不插手;讓他人自行管理。
- 💡 此詞形容一種「不干預」的態度或做法。文中:Previously, the Trump White House had largely taken a "hands-off" approach to overseeing or regulating AI and technology companies, BBC Technology reports.
- red tape idiom
- Official rules and procedures that seem unnecessary and cause delays.
- 繁瑣的官方規定和程序,常導致延誤。
- 💡 此為慣用語,指「繁文縟節、官僚作風」,而非字面上的「紅色膠帶」。文中:Last year, President Donald Trump signed executive orders that formed his administration's "AI Action Plan," which he said would "remove red tape and onerous regulation" around AI development.
- align with phrasal verb
- To be in agreement or consistent with something else.
- 與某事物保持一致或符合。
- 💡 此片語表示「與…保持一致」或「符合」。文中:The Verge also noted that OpenAI and Anthropic have "renegotiated their existing partnerships with the center to better align with priorities in President Donald Trump’s AI Action Plan."
選擇題練習 · Quiz
共 4 題
- 細節 Detail
1. Which of the following AI models is specifically mentioned as having faced public criticism for some issues?
- 推論 Inference
2. Based on the article, what can be inferred about the US government's current stance on AI regulation compared to its past approach under the Trump administration?
- 單字情境 Vocabulary
3. In the sentence, "Developers often provide versions of their models with safety features removed so the center can look for national security risks," what does 'removed' most closely mean as used in this context?
- 主旨 Main Idea
4. What is the primary focus of the article?
原始來源 · Sources
本文內容由 AI 從以下來源綜合改寫。事實請以原始來源為準。
- BBC Technology — US to safety test new AI models from Google, Microsoft, xAI (May 6, 2026)
- Al Jazeera English — Microsoft, Google, xAI give US access to AI models for security testing (May 6, 2026)
- The Verge — Google, Microsoft, and xAI will allow the US government to review their new AI models (May 5, 2026)