Researchers at Georgetown University have analyzed thousands of procurement requests issued by China’s People’s Liberation Army (PLA). The documents reveal how broadly Beijing is already testing artificial intelligence for military use—from drone swarms and deepfake tools to autonomous decision-making systems.
The U.S. Department of Defense (DoD) wants to deploy artificial intelligence technology without restrictions, while Anthropic is demanding safeguards against the use of its models for autonomous weapons control and domestic surveillance. A $200 million contract is currently at risk.
On Tuesday, January 13, the Spanish government approved a draft bill aimed at combating fake images generated by artificial intelligence and tightening consent rules for the use of images.
Dozens of “nudify” apps are available in Apple’s and Google’s app stores, letting users photograph people and use AI to generate nude images of them, according to the Tech Transparency Project (TTP).
OpenAI CEO Sam Altman warns of a creeping security crisis driven by a “YOLO” mindset toward AI agents, and says the company will slow hiring.
The UK government has selected Anthropic to develop an AI assistant for the GOV.UK website. The Department for Science, Innovation and Technology (DSIT) aims to use the tool to help citizens access public services and receive personalized guidance. Initially, the assistant will focus on supporting job seekers with career advice, improving access to training and reskilling programs, and explaining available benefits and services.
Security experts have warned that the AI assistant Clawdbot can inadvertently expose personal data and API keys.
The European Commission has launched a new investigation against X under the Digital Services Act (DSA).
A new investigation by NewsGuard reveals a troubling reality: today’s leading AI chatbots can almost never identify AI-generated videos as fake. Even more concerning, ChatGPT fails to recognize synthetic videos produced with its own maker’s technology.
Meta is suspending global access to its AI characters for teenagers. Starting in the “coming weeks,” teen users will no longer be able to interact with AI characters in Meta’s apps until a revised version is ready. The restriction applies to all users who have listed a teenage birth date, as well as individuals who present themselves as adults but are identified as minors by Meta’s age-detection technology.