OpenAI Forms Safety Committee Amid Advanced AI Development

Published 3 months ago

OpenAI, a front-runner in artificial intelligence innovation, has established a new Safety and Security Committee. The committee is led by CEO Sam Altman and includes notable figures such as Adam D’Angelo, CEO of Quora, and Nicole Seligman, a former Sony general counsel. It is responsible for strengthening safety protocols across OpenAI’s ongoing projects.

Addressing Concerns Over Safety Prioritization

The committee’s creation comes shortly after OpenAI faced criticism from departing key personnel, including co-founder Ilya Sutskever and researcher Jan Leike, both of whom voiced concerns that the company was prioritizing product development over safety. By forming the new committee, OpenAI aims to address these concerns with stronger safety oversight.

Training Commences for Next-Generation AI Model

Alongside the committee’s formation, OpenAI has begun training a new AI model, touted as a significant advancement beyond the current GPT-4. The company expects the upcoming model to lead the industry in both capability and safety.

Committee to Offer Recommendations in 90 Days

The safety committee is tasked with reviewing and further developing OpenAI’s safety processes, and will deliver its recommendations within 90 days. After that period, OpenAI plans to publicly share the recommendations it adopts, in the interest of transparency and accountability.

Commitment to Responsible AI Development

OpenAI says it remains committed to leading discussions on AI safety and security, and the move reflects the organization’s stated dedication to responsible AI development. As AI technology continues to evolve, the company acknowledges that robust safety measures are essential to meeting both present and future challenges.