Sam Altman Calls for a Head of Preparedness at OpenAI
The new role reflects the company’s shift from building ever-stronger models to confronting the consequences of deploying them at scale.
OpenAI CEO Sam Altman has announced that the company is hiring a Head of Preparedness, calling it a “critical role at an important time” as frontier AI systems begin to pose challenges alongside their benefits.
In a post on X, Altman said OpenAI has already seen early signs of these challenges as models grow more capable.
In 2025, he noted, the company got a preview of how advanced AI models could affect mental health, and more recently, models have become so proficient in computer security that they are uncovering critical vulnerabilities.
“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman wrote, referring to both cybersecurity and biological capabilities as areas that demand rigorous oversight.
The goal, he said, is to enable cybersecurity defenders with cutting-edge AI capabilities while preventing attackers from exploiting the same tools for harm, ideally making digital systems more secure overall.
Similar caution, he said, is needed around biological capabilities and around the safety of systems that may eventually be able to self-improve.
Altman offered a blunt warning to potential applicants: “This will be a stressful job, and you’ll jump into the deep end pretty much immediately.”
The new role sits within OpenAI’s Safety Systems team, which is responsible for ensuring the company’s most capable models are developed and deployed responsibly.
The Head of Preparedness will lead OpenAI’s strategy for tracking emerging capabilities, modelling new risks of severe harm, and designing mitigations across domains such as cybersecurity and biosecurity. They will also help guide high-stakes launch decisions and refine safety frameworks as new risks emerge.
The position comes with a compensation package of about $555,000 a year plus equity, reflecting the senior leadership responsibility of overseeing these efforts.
OpenAI has already invested heavily in preparedness work across multiple generations of frontier models, building capability evaluations, threat models, and mitigation strategies. As safeguards grow more complex, scaling this work has become a top priority.