OpenAI has formed a safety and security committee led by senior executives, the company announced, following the disbandment of its previous oversight board in mid-May.
The new committee will advise OpenAI's board on key safety and security decisions for the company's projects and operations.
The announcement coincides with OpenAI beginning to train its "next frontier model," as disclosed in a recent blog post. The company said it expects the resulting systems to bring "the next level of capabilities" on the path to Artificial General Intelligence (AGI), the point at which AI matches or surpasses human intelligence.
The safety committee comprises members of OpenAI's board of directors, including Bret Taylor, Adam D'Angelo, and Nicole Seligman, alongside CEO Sam Altman.
The move follows the dissolution of OpenAI's previous team dedicated to long-term AI risks, and comes after the departures of key figures including co-founder Ilya Sutskever and researcher Jan Leike.
Responding to the recent departures, Altman acknowledged the challenges and reiterated OpenAI's commitment to AI safety. Over the next 90 days, the safety committee will evaluate OpenAI's processes and safeguards, then present its recommendations to the board.
The broader debate over AI safety has intensified as models like ChatGPT advance rapidly, with stakeholders divided on when AGI might arrive and what risks it poses, lending weight to proactive safety measures across the industry.