English News / 英文新聞閱讀
科技 · Technology · 865 words · B1-B2

US Expands AI Safety Testing as Tech Giants Face New Scrutiny

Major companies agree to government checks on their models, while legal battles over copyright and military use continue.

🕒 生成時間: (台北時間)

⚠️ 本文由 AI 綜合多家報導生成,事實請以原始來源為準。

Summary · 摘要

The United States government has announced a new agreement requiring tech giants to submit their artificial intelligence models for safety testing. This move expands previous efforts by the Biden administration to regulate powerful AI systems before they are released to the public. At the same time, Meta faces a major lawsuit from book publishers who claim the company stole their copyrighted work to train its AI. Meanwhile, workers at Google DeepMind in the UK are forming a union to protest the company’s growing ties with the military. These developments show that the AI industry is under increasing pressure from both the government and the public.

美國政府宣布一項新協議,要求科技巨頭提交其人工智慧模型進行安全測試。此舉擴展了拜登政府此前在將強大人工智慧系統發布給公眾之前進行規範的努力。與此同時,Meta 正面臨來自出版社的重大訴訟,指控該公司竊取其受版權保護的作品來訓練其人工智慧。與此同時,位於英國的 Google DeepMind 員工正組成工會,抗議該公司日益增長的軍事聯繫。這些發展顯示,人工智慧產業正面臨來自政府與公眾日益增加的壓力。

閱讀模式 · Reading Mode

The United States government has taken a significant step in regulating artificial intelligence by requiring major technology companies to submit their newest AI models for safety testing. According to BBC Technology, the US Department of Commerce announced that Google, Microsoft, and xAI have agreed to voluntarily allow the government to test their tools. This agreement is managed by the Center for AI Standards and Innovation, known as CAISI. The goal is to evaluate the security and capabilities of these powerful systems before they are released to the general public.

This development marks a shift in how the US handles AI technology. Under the previous administration led by President Joe Biden, the focus was on creating voluntary safety standards. Now, under President Donald Trump, the government is moving toward more direct partnerships with tech firms. CAISI Director Chris Fall stated that independent and rigorous testing is essential to understand the national security risks of advanced AI. The government aims to identify potential threats, such as cyberattacks or military misuse, before the tools become widely available.

The agreement comes at a time of growing concern about the power of AI. Recently, the AI company Anthropic revealed a model called Mythos, which some experts believe is too powerful to be released safely. This has worried officials in Washington, who fear that such powerful systems could be used by hackers or enemy nations. To address these fears, the US government is working closely with companies to ensure their AI does not pose a danger. Microsoft stated that it will work with government scientists to test its AI systems in ways that reveal unexpected behaviors. Microsoft and the government will also share testing data and methods to improve safety standards globally.

However, not all tech companies are participating in this new safety pact. The Pentagon recently announced agreements with seven major tech giants, including Google, Microsoft, and Amazon, to use AI in classified military systems. Notably, Anthropic is not part of this group. This is because Anthropic is currently in a legal dispute with the US Department of Defense. The company refuses to remove safety guardrails—built-in restrictions that prevent AI from doing harmful things—from its models when used by the government. This standoff highlights the tension between companies wanting to keep their AI safe and the military wanting unrestricted access.

While the government focuses on national security, the AI industry is also facing serious legal challenges regarding copyright. According to The Verge, five major book publishers and author Scott Turow have filed a lawsuit against Meta. They claim that Meta’s Llama AI models copied their books word-for-word. The publishers argue that Meta used data from pirate websites to train its AI, which is illegal. They are asking the court to stop Meta from using this material and to force the company to list all the copyrighted works it used. Meta has responded by saying that training AI on copyrighted material is considered fair use under the law. The company stated it will fight the lawsuit aggressively.

This legal battle is not unique to Meta. Anthropic recently agreed to pay $1.5 billion to settle a similar lawsuit from authors. These cases show that AI companies are struggling to find a legal balance between using public data to train their models and respecting the rights of creators. As AI becomes more powerful, the question of who owns the data used to build it remains unresolved.

Beyond legal and safety concerns, there is also growing unrest among the workers who build these technologies. According to The Guardian, employees at Google DeepMind in the UK have voted to form a union. One of the main reasons for this decision is the company’s increasing involvement with the military. Workers expressed concern that their technology might be used for authoritarian purposes or in conflicts like the war in Gaza. They feel that by working for Google, they are indirectly supporting these actions. The union aims to give workers a voice in how their technology is used.

This trend of worker activism is spreading. Google recently dropped a pledge it made years ago not to develop militarized AI. This change has upset many employees who believed the company was committed to ethical AI development. The UK workers are the first in a leading AI lab to seek formal union recognition, setting a precedent for other tech workers worldwide.

Looking ahead, the AI industry stands at a crossroads. The US government is pushing for tighter control and security testing, while companies are facing lawsuits over copyright and internal protests over ethics. The agreement between Google, Microsoft, and xAI suggests that the government will play a larger role in shaping AI development. However, the resistance from Anthropic and the lawsuits against Meta show that this path is not easy. As AI technology continues to advance, the debate over safety, legality, and ethics will likely become even more intense. The coming months will be critical in determining how these powerful tools are regulated and who controls their future.

易誤解詞彙 · Words to watch

這些字字面意思和文中用法不同,或是不常見的詞性/片語。

standoff noun
A situation where two sides are in a conflict and neither side is willing to change its position or give up.
僵局;對峙。指雙方僵持不下,誰也不願退讓的狀態。
💡 常見作名詞,指「對峙」或「僵局」。文中用來形容企業堅持保留安全限制,而軍隊要求無限制訪問之間的緊張關係。文中:This standoff highlights the tension between companies wanting to keep their AI safe and the military wanting unrestricted access.
drop verb
To stop supporting, using, or following something.
放棄;停止使用。這裡指公司撤銷或不再遵守之前的承諾。
💡 常見作動詞「掉落」,但在商業或政策語境中,常指「放棄(計畫、承諾、立場等)」。文中指Google撤銷了之前不開發軍事化AI的承諾。文中:Google recently dropped a pledge it made years ago not to develop militarized AI.
pose verb
To present or create a problem, danger, or risk.
構成(威脅、危險等)。
💡 常見作名詞「姿勢」,這裡作動詞,專指「造成」或「構成」某種負面影響或危險。文中指確保AI不會構成危險。文中:To address these fears, the US government is working closely with companies to ensure their AI does not pose a danger.
settle verb
To resolve a legal dispute by agreeing to a payment or terms without going to a final court verdict.
(透過賠償或協議)和解;解決(訴訟)。
💡 常見作動詞「定居」或「安頓」,在法律語境中特指「庭外和解」。文中指Anthropic支付巨額資金以解決作者的訴訟。文中:Anthropic recently agreed to pay $1.5 billion to settle a similar lawsuit from authors.

選擇題練習 · Quiz


  1. 細節 Detail

     According to the article, what is the primary reason Anthropic is not participating in the Pentagon's agreement with major tech giants?

  2. 推論 Inference

     Based on the article, what can be inferred about how the US government's approach to AI regulation under President Trump compares with that of the previous administration?

  3. 單字情境 Vocabulary

     In the context of the article, what does the word 'standoff' most likely mean in the phrase 'This standoff highlights the tension between companies... and the military...'?

  4. 主旨 Main Idea

     Which of the following best summarizes the main idea of the article?


原始來源 · Sources

本文內容由 AI 從以下來源綜合改寫。事實請以原始來源為準。

Generated by: ollama/qwen3.6:35b-a3b