OpenAI Launches Safety Fellowship to Build the Next Generation of AI Researchers

OpenAI has announced a new Safety Fellowship program designed to support independent researchers pursuing rigorous work on artificial intelligence safety and alignment. The initiative, running from September 14, 2026 through February 5, 2027, represents a significant effort to develop the next generation of talent focused on ensuring advanced AI systems remain safe and beneficial .

What Research Areas Does the Fellowship Prioritize?

The fellowship targets researchers interested in critical safety questions relevant to both existing and future AI systems. OpenAI has identified several priority research areas that fellows can pursue during the program :

  • Safety Evaluation: Developing methods to rigorously test and assess how AI systems perform under various conditions and potential risks.
  • Ethics and Robustness: Exploring ethical frameworks and building AI systems that remain reliable and predictable even when facing unexpected situations.
  • Scalable Mitigations: Creating solutions that can address safety concerns as AI systems become more powerful and complex.
  • Privacy-Preserving Safety Methods: Designing approaches to improve AI safety without compromising user privacy or data security.
  • Agentic Oversight: Establishing frameworks for monitoring and controlling autonomous AI agents that can take actions in the world.
  • High-Severity Misuse Domains: Studying how to prevent AI systems from being weaponized or used for harmful purposes at scale.

OpenAI emphasizes that it is especially interested in work that is empirically grounded, technically strong, and relevant to the broader research community. The organization prioritizes research ability, technical judgment, and execution over specific credentials, welcoming applicants from diverse backgrounds including computer science, social science, cybersecurity, privacy, and human-computer interaction .

How to Apply and What Support Will Fellows Receive?

The fellowship offers substantial support to help researchers conduct high-impact work. Fellows will receive a monthly stipend, compute resources to run experiments and train models, and ongoing mentorship from OpenAI researchers and engineers. Workspace is available in Berkeley at Constellation, though fellows may also work remotely if preferred .

By the end of the program, fellows are expected to produce significant research output such as a peer-reviewed paper, a new benchmark for evaluating AI systems, or a publicly available dataset that advances the field. This requirement ensures that the fellowship generates tangible contributions to AI safety research rather than serving merely as a training program .

The application timeline is straightforward. Applications are currently open and will close on May 3, 2026. OpenAI will review all submissions and notify successful applicants by July 25, 2026, giving fellows time to prepare before the September start date. Interested researchers can apply through the official application form and should direct questions to openaifellows@constellation.org .

Who Is Eligible and What Should Applicants Know?

OpenAI welcomes applicants from a range of academic and professional backgrounds. The organization does not require specific credentials or degrees; instead, it evaluates candidates based on their research ability, technical judgment, and track record of execution. Letters of reference will be required as part of the application process .

One important clarification for potential applicants: fellows will not have access to OpenAI's internal systems or proprietary models. Instead, they will receive API credits and other computational resources as appropriate for their research. This approach allows fellows to conduct independent research while maintaining the security and integrity of OpenAI's internal infrastructure .

The fellowship represents a broader industry trend of major AI companies investing in external safety research. By supporting independent researchers outside the organization, OpenAI is helping to build a more robust ecosystem of AI safety expertise. This distributed approach to safety research can lead to more diverse perspectives and methodologies than research conducted entirely within a single company .

For researchers concerned about the trajectory of artificial intelligence development, the fellowship offers a concrete opportunity to contribute to critical safety work. The program acknowledges that ensuring AI systems are safe, aligned with human values, and resistant to misuse requires sustained effort from talented researchers across institutions and disciplines.