Even Sam Altman Now Thinks OpenAI Needs An AI Safety Role
OpenAI is recruiting for a new senior role, Head of Preparedness, a position designed to oversee threat evaluations and risk mitigation for advanced artificial intelligence systems as they grow in capability and public impact, CEO Sam Altman said in a social media post.
We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we…
— Sam Altman (@sama) December 27, 2025
This development reflects the company’s expanding focus on safety, mental health implications, cybersecurity vulnerabilities, and emerging threats as AI models become more powerful and widespread.
According to the official job description, the Head of Preparedness will lead OpenAI’s preparedness framework by building comprehensive capability evaluations, coordinating threat models, and implementing scalable mitigation strategies across multiple risk domains, including cybersecurity and biological risk.
The role will sit within OpenAI’s Safety Systems team and report directly to senior leadership, with responsibilities spanning end-to-end preparedness execution.
Altman characterized the role as “critical at an important time,” noting the increasing complexity of modern AI systems and their potential to uncover security vulnerabilities and influence mental health outcomes.
The company is offering competitive compensation for the position, with an annual salary reported at approximately $555,000 plus equity. OpenAI’s recruiting materials emphasize the need for deep technical judgment, threat modeling expertise, and operational coordination across safety functions.
The Head of Preparedness role arrives amid broader industry discussions about the responsibilities of leading AI developers to anticipate and manage the unintended consequences of highly capable models. OpenAI has previously established internal safety teams to analyze AI risks, including catastrophic scenarios and long-term existential implications.
Public signals from Altman indicate that the role’s priorities include not only digital security threats but also emerging challenges tied to AI-generated content and its effects on human behavior, as models increasingly participate in everyday online interactions.
The role’s focus on both threat anticipation and mitigation places it at the intersection of technical AI development and real-world safety practice, with implications for how new capabilities are tested and deployed.
The preparedness position is part of OpenAI’s larger safety systems strategy, which aims to balance rapid model advancement with structured risk assessments and safeguards. The job description specifies that the Head of Preparedness will work to build coordinated mitigations that scale with evolving model capabilities.
Recruiting for this role suggests that OpenAI is intensifying its internal emphasis on safety leadership, particularly as frontier models become more autonomous and capable.

Abdul Wasay explores emerging trends across AI, cybersecurity, startups and social media platforms in a way anyone can easily follow.
