English News / 英文新聞閱讀
科技 · Technology · 682 words · B1-B2

US Government Expands Safety Testing for Powerful AI Models

Tech giants agree to share new technology with officials to prevent national security risks.

🕒 生成時間: (台北時間)

⚠️ 本文由 AI 綜合多家報導生成,事實請以原始來源為準。

Summary · 摘要

The US government is expanding its safety testing program for advanced artificial intelligence. Major companies including Google, Microsoft, and xAI have agreed to let officials examine their new models before public release. This effort aims to identify security threats like cyberattacks and military misuse. The program is managed by the Center for AI Standards and Innovation. Officials hope this collaboration will help manage the risks of rapidly developing technology.

美國政府正在擴大其針對先進人工智慧的安全測試計畫。包括 Google、微軟與 xAI 在內的大型公司已同意讓官員在模型公開發布前進行審查。此舉旨在識別網路攻擊與軍事濫用等安全威脅。該計畫由人工智慧標準與創新中心負責管理。官員們希望透過此項合作,能有助於控管快速發展的科技所帶來的風險。

閱讀模式

The United States government is taking new steps to monitor the rapid growth of artificial intelligence (AI). According to the BBC, the US Department of Commerce will now test new AI models from major technology companies including Google, Microsoft, and xAI before these tools are released to the public. This agreement is a significant expansion of earlier efforts that began under the previous administration.

These companies have agreed to work with the Center for AI Standards and Innovation (CAISI). This organization acts as the main hub for the government to evaluate AI tools. According to Al Jazeera, the center has already completed more than 40 evaluations of various AI systems, including some advanced models that have not yet been released to the public. Chris Fall, the director of CAISI, explained that independent and careful scientific measurement is essential to understanding the national security implications of these powerful systems.

This new partnership comes at a time of growing concern in Washington about the potential dangers of AI. Officials are worried that advanced models could be used to help hackers or cause other security problems. For example, Al Jazeera reports that concerns have grown following the creation of a new model called Mythos by the company Anthropic. Some experts believe this model is so powerful that it could be dangerous if released without proper checks. By gaining early access to these "frontier models"—the most advanced AI systems currently being developed—government officials hope to find and fix threats before the software is widely available.

Each company involved has different reasons for participating. Microsoft stated that while it already tests its own products, it believes that testing for national security risks must be a shared effort with the government. The company plans to work with government scientists to test its systems for unexpected behaviors. Meanwhile, Google and xAI have been less vocal about the specific details of their involvement. According to the BBC, a representative for Google’s DeepMind declined to comment, and a spokesperson for xAI did not respond to requests for information.

This development marks a change in how the current White House handles technology. President Donald Trump previously signed executive orders that aimed to reduce government rules on AI development. The goal was to help the United States win the global race in technology by removing "red tape," which refers to complex or unnecessary official rules. However, the increasing use of AI by the US military and the emergence of very powerful models have led the administration to shift its approach. The government now seems more focused on balancing innovation with safety.

Beyond national security, the way people interact with AI is also changing. While the government focuses on safety, companies like Apple are working on ways to give users more control. According to The Verge, Apple is planning an update for its operating systems that may allow users to choose their favorite AI model for daily tasks. This would let people use different AI "extensions"—extra software that adds new features—to power tools like Siri. Apple is reportedly testing this with models from companies like Google and Anthropic.

These two trends show that AI is becoming a central part of both public policy and private life. On one hand, the government is working to ensure that new technology does not pose a threat to the nation. On the other hand, tech companies are competing to make these tools more useful and customizable for individual users. As these models become more capable, the debate over how to regulate them will likely continue. For now, the focus remains on collaboration, with the government and private sector working together to understand what these machines can truly do.

易誤解詞彙 · Words to watch

這些字的字面意思和文中用法不同,或是不常見的詞性/片語。

hub noun
The central or most important part of a system or organization.
中心、樞紐。
💡 常見於交通(如機場樞紐),這裡指組織的核心運作中心。文中:This organization acts as the main hub for the government to evaluate AI tools.
red tape idiom
Excessive bureaucracy or adherence to official rules and formalities.
繁文縟節、官僚作風。
💡 字面意思為「紅帶子」,但在商業與政治語境中專指冗長的行政程序。文中:The goal was to help the United States win the global race in technology by removing "red tape," which refers to complex or unnecessary official rules.
vocal adjective
Expressing opinions or feelings freely or loudly.
直言不諱的、表達強烈意見的。
💡 常見與聲音(voice)相關,這裡形容對某事公開發表意見的態度。文中:Meanwhile, Google and xAI have been less vocal about the specific details of their involvement.

選擇題練習 · Quiz


  1. 細節 Detail
     According to the article, how has the government's approach to AI regulation changed under the current administration?

  2. 推論 Inference
     What can be inferred about the relationship between the US government and companies like Google and xAI?

  3. 單字情境 Vocabulary
     In the fifth paragraph, what does the term 'red tape' refer to?

  4. 主旨 Main Idea
     Which of the following best summarizes the main message of the article?


原始來源 · Sources

本文內容由 AI 從以下來源綜合改寫。事實請以原始來源為準。

Generated by: gemini/gemini-3.1-flash-lite-preview