“Soon we will have the world’s leading AI models on every unclassified and classified network across our department,” the official stated.

Hegseth said the chatbot will begin operating within the Department of Defense by the end of January. The department will "provide all necessary data" to it from military IT systems, including information from intelligence units.

In his remarks, the defense secretary stressed the need to streamline and accelerate technological innovation within the armed forces. According to him, the Pentagon possesses “battle-tested operational data gathered over two decades of military and intelligence operations.”

“Artificial intelligence is only as good as the data it receives. We will ensure that data is available,” Hegseth added.

He also emphasized that he wants to see “responsible AI systems” deployed across the Pentagon. Hegseth promised to “cut through the overgrown bureaucratic thicket and clear out the clutter — preferably with a chainsaw.”

“We must ensure the dominance of American military AI so that no adversary can use the same technology to threaten our national security or our citizens,” the Pentagon chief said.

Despite controversy

The announcement comes just days after Grok was embroiled in another scandal, this time over its use to generate sexual content.

Malaysia and Indonesia have blocked access to the chatbot, while regulators in the EU, the UK, Brazil, and India are demanding investigations into Grok’s role in the spread of deepfakes.

The UK-based Internet Watch Foundation reported that its analysts had identified “criminal images” of children aged 11 to 13 that were allegedly created using the chatbot.

Grok has previously faced repeated criticism for spreading false or questionable information.

In December, the chatbot provided inaccurate details about a mass shooting at Bondi Beach in Australia. When asked about a video showing a bystander, Ahmed al-Ahmed, confronting the shooter, the AI responded:

“This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it. As a result, a branch fell onto a damaged car. Searches across multiple sources did not yield confirmed information about the location, date, or injuries. It may be staged, and its authenticity is unverified.”

In July, users also noticed that the chatbot appeared to rely on Elon Musk's opinions when generating responses, reportedly on topics such as the Israel–Palestine conflict, abortion, and immigration law.

These observations have led to speculation that the chatbot was deliberately configured to take Musk’s political views into account when answering controversial questions.

Conclusion

The Pentagon’s decision to integrate Grok despite its recent controversies highlights a growing tension between strategic urgency and governance risk in military AI adoption.