Researchers at Georgetown University have analyzed thousands of procurement requests issued by China’s People’s Liberation Army (PLA). The documents reveal how broadly Beijing is already testing artificial intelligence for military use—from drone swarms and deepfake tools to autonomous decision-making systems.
SpaceX, Elon Musk’s aerospace company, and Musk’s AI firm xAI are taking part in a new classified Pentagon competition to develop autonomous, voice-controlled drone swarms, Bloomberg reports, citing sources.
Hollywood organizations have sharply criticized the AI video generator Seedance 2.0, calling it a tool that has “rapidly become a vehicle for blatant copyright infringement.”
The U.S. Army used Anthropic’s Claude in an operation to capture Venezuelan President Nicolás Maduro, The Wall Street Journal reports, citing sources.
ByteDance’s new video model, Seedance, is so capable that it is allegedly enabling large-scale copyright violations, and Hollywood is reacting quickly.
Anthropic is unwilling to grant the Pentagon unrestricted access to its AI models, triggering a serious dispute between the AI company and the U.S. Department of Defense, Axios reports. The Pentagon is reportedly considering limiting or ending its cooperation with Anthropic altogether.
Anthropic has pledged to offset electricity costs for consumers arising from the construction of new data centers. The company says it will fully cover grid expansion expenses, invest in new power generation capacity, and limit energy consumption at its data centers during peak demand periods. CEO Dario Amodei told NBC News that the costs of AI models should be borne by Anthropic, not by the public.
The Pentagon is pressing leading AI companies — including OpenAI, Anthropic, Google, and xAI — to make their AI tools available on classified military networks without the usual usage restrictions, according to Reuters, citing multiple sources.
OpenAI is deploying a specialized version of ChatGPT to detect internal information leaks, according to The Information, citing a source familiar with the matter. When a news article about internal developments at OpenAI is published, security staff reportedly feed the text into this dedicated ChatGPT system, which has access to internal documents as well as employees’ Slack and email communications.
Former OpenAI researcher Zoë Hitzig has resigned over the introduction of advertising in ChatGPT. In a commentary for The New York Times, she made clear that she no longer trusts her former employer’s direction.