OpenAI CEO Sam Altman announced the company is seeking a Head of Preparedness to address growing challenges posed by advanced AI models. The San Francisco-based role offers $555,000 annually plus equity, making it one of the highest-paying AI safety positions in the industry.
“This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman wrote on X.
Mental Health and Cybersecurity Risks
Altman acknowledged that OpenAI got a preview in 2025 of the potential impact of AI models on mental health.
Throughout the year, the company faced numerous allegations about ChatGPT’s effects on users’ mental health, as well as several wrongful-death lawsuits.
The CEO also highlighted concerns about AI models’ increasing ability to identify critical computer security vulnerabilities. These dual challenges form the core focus of the new preparedness role.
High-Stress Position
Altman was direct about the job’s demands. “This will be a stressful job and you’ll jump into the deep end pretty much immediately,” he wrote.
The Head of Preparedness will be responsible for expanding, strengthening, and guiding OpenAI’s existing preparedness program within the safety systems department.
Key duties include tracking frontier capabilities that create new risks of severe harm, building frameworks for risk assessment, and developing strategies to address ethical implications.
Turbulent History for Safety Team
OpenAI’s safety teams have undergone significant changes over the past two years. Former Head of Preparedness Aleksander Madry was reassigned in July 2024.
The role was temporarily taken over by executives Joaquin Quinonero Candela and Lilian Weng.
Weng left the company months later. In July 2025, Quinonero Candela moved away from the preparedness team to lead recruiting at OpenAI, leaving the critical position vacant.
Evolving Risk Landscape
The job listing emphasizes the need to “evolve the preparedness framework as new risks, capabilities, or external expectations emerge.” This includes developing threat models, conducting capability evaluations, and implementing cross-functional mitigations.
The role addresses both immediate concerns and long-term existential threats from advanced AI systems. Candidates must have a background in AI safety, machine learning, or cybersecurity, along with hands-on experience evaluating large-scale risks.
Context of Rapid Growth
The hiring comes as OpenAI pushes aggressive revenue targets. Altman recently suggested the company aims to grow from its current $13 billion annual revenue to $100 billion within two years. New consumer devices and platforms designed to “automate science” are reportedly in development.
OpenAI is also considering a $100 billion fundraising round at a valuation up to $750 billion, potentially preceding one of the largest IPOs in history.