US court rules against Pentagon in killer AI dispute
The Department of War had ordered contractors to drop Anthropic after it refused to allow military use of its technology
A US federal judge has blocked a Pentagon order designating Anthropic a national security risk, saying US officials likely broke the law and retaliated against the AI company over its public comments on how its technology should be used.
Anthropic, a leading developer of large language models, has been locked in a dispute with the Department of War over military use of its Claude system, with defense officials pushing to allow the technology for “all lawful uses.”
The company resisted, citing concerns that Claude could be used for mass domestic surveillance or fully autonomous weapons. The Pentagon ended talks, imposed the designation, and ordered contractors to stop using Claude.
On Thursday, US District Judge Rita Lin also blocked an order to cut all government contracts with Anthropic, calling it a “classic” case of First Amendment retaliation.
https://www.rt.com/business/636431-anthropic-pentagon-lawsuit/