US Senator Questions Pentagon Decision to Allow Grok Into Classified Networks
Elizabeth Warren said the model’s “apparent lack of adequate guardrails” could create “serious risks to the safety of U.S. military personnel.”
[Image source: Diksha Mishra/MITSMR India]
On March 15, US Senator Elizabeth Warren wrote to Defense Secretary Pete Hegseth seeking details about the department’s decision to allow Elon Musk’s chatbot Grok into classified systems.
The concerns follow comments made by Hegseth in the second week of January, when he outlined plans to introduce advanced AI models across both classified and unclassified Pentagon networks. The effort is part of a broader push to integrate large volumes of military and intelligence data into these systems.
“Very soon we will have the world’s leading AI models on every unclassified and classified network throughout our department,” Hegseth said during a speech at SpaceX in South Texas.
He said the department would make internal data available to support these systems. “AI is only as good as the data that it receives, and we’re going to make sure that it’s there,” he said.
Warren’s letter questions whether the current approach includes sufficient safeguards.
“Grok, the controversial AI model developed by xAI, has provided disturbing outputs for users, including giving users advice on ‘how to commit murders and terrorist attacks,’ generating antisemitic content, and creating child sexual abuse material,” she wrote.
She said the model’s “apparent lack of adequate guardrails” could create “serious risks to the safety of U.S. military personnel and to the cybersecurity of classified systems.”
The letter also raises concerns about the evaluation process before the system was approved for use.
“It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok’s security safeguards, data handling practices, or safety controls,” Warren wrote.
According to the letter, internal and external reviews have highlighted potential vulnerabilities. It refers to a classified assessment by the National Security Agency that identified specific risks, and to reports that the system may be more susceptible to data-manipulation attacks.
Warren warned that a failure to address these issues could have serious consequences.
“Were Grok to leak government information, this could reveal sensitive military plans, U.S. intelligence efforts, and potentially put service members in danger,” the letter said.
She asked the Department of Defense to provide details of its agreement with xAI, including measures to prevent data leaks and protect against cyberattacks, as well as information on testing and intended use.
A senior Pentagon official stated that Grok has been onboarded for use in classified environments but is not yet operational. The system is expected to be integrated into GenAI.mil, an internal platform that offers access to AI tools within approved government cloud systems. The platform is currently used for tasks such as research, document drafting, and data analysis, though its role could expand.
The decision comes as the Pentagon adjusts its approach to working with private AI companies.
The department recently designated Anthropic as a “supply-chain risk” after the company declined to provide unrestricted access to its systems. It has since entered into agreements with other firms, including OpenAI, to use their models.
Hegseth has said he wants AI systems in the military to be responsible, but also made clear that the department will prioritize operational flexibility. He said the Pentagon’s systems would operate “without ideological constraints that limit lawful military applications.”