Highlights:

  • The company stated that the new safety and security committee’s top priority is to enhance AI risk mitigation workflows.
  • OpenAI relies heavily on graphics cards in Microsoft Corp.’s public cloud for a significant portion of its AI research.

OpenAI recently created a committee dedicated to ensuring the safe conduct of its machine learning research.

The nine-member Safety and Security Committee will be led by OpenAI CEO Sam Altman and three other directors, including board Chair Bret Taylor. The remaining five members are OpenAI engineering executives Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki.

The panel was established shortly after news surfaced that the GPT-4 developer had dissolved a team dedicated to AI safety. Known as the Superalignment team, the group was formed in July 2023 under the leadership of OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike to address potential risks from the company’s upcoming AI technologies.

Earlier this month, Sutskever and Leike departed from OpenAI. In a post on X after stepping down, Leike wrote that “over the past few months my team has been sailing against the wind.” Members of the Superalignment group are said to have either resigned or transferred to other teams within OpenAI. Leike has since joined Anthropic PBC.

Because all of the panel’s members are OpenAI insiders, some observers have pointed out that the arrangement may provide little to no independent oversight.

According to the company’s statement, the primary focus of the newly formed Safety and Security Committee will be to enhance AI risk mitigation processes. Within 90 days, the panel aims to present its recommendations to OpenAI’s entire board. Following this, the company intends to publicly announce which recommendations it will implement.

OpenAI announced the committee’s formation alongside another update: its engineers have begun training the company’s next frontier model. “We anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,” the company stated. The term artificial general intelligence, or AGI, refers to hypothetical future AI models that possess human-like accuracy across a wide range of tasks.

OpenAI Chief Technology Officer Mira Murati recently told Axios about a “major update” to GPT-4 planned for later this year. The update is expected to be more substantial than GPT-4o, a refined version of the large language model that OpenAI announced approximately two weeks ago. It remains unclear whether the upcoming launch will introduce an upgraded version of GPT-4 or an entirely new AI system.

For a large portion of its AI research, OpenAI uses graphics cards housed in Microsoft Corp.’s public cloud. Last week, Microsoft Chief Technology Officer Kevin Scott reportedly compared OpenAI’s next-generation model to a whale and GPT-4 to an orca. The comparison suggests that the next LLM will have more parameters, the configuration settings that determine how a neural network processes input.
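To put the parameter concept in concrete terms, the short Python sketch below is purely illustrative (it is not OpenAI code, and the layer widths are made-up assumptions): it counts the weights and biases of a small fully connected network, the same quantity that scales into the billions for models such as GPT-4.

    # Illustrative only: the layer widths below are arbitrary and do not
    # correspond to any real model.
    def count_mlp_parameters(layer_widths):
        """Count trainable parameters (weights + biases) of a dense network."""
        total = 0
        for fan_in, fan_out in zip(layer_widths, layer_widths[1:]):
            total += fan_in * fan_out  # weight matrix connecting two layers
            total += fan_out           # bias vector of the output layer
        return total

    # A toy network with layers of 784, 256, 64 and 10 units.
    print(count_mlp_parameters([784, 256, 64, 10]))  # prints 218058

Adding layers or widening them increases the count, which is why parameter totals serve as a rough proxy for a model’s capacity.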

Recent signs suggest that OpenAI may have already built a prototype of its next-generation language model. Business Insider reported in March that a limited group of users had gained access to a version of GPT-5. One user described the LLM as “significantly superior” to GPT-4.