
OpenAI launches independent safety board to review model releases


OpenAI Establishes Independent Safety Board

OpenAI has announced that its Safety and Security Committee is becoming an independent Board oversight committee with the power to delay AI model releases if safety concerns arise.

The committee, chaired by Zico Kolter, includes Adam D'Angelo, Paul Nakasone, and Nicole Seligman. It will oversee major model launches, receive briefings on safety evaluations, and can hold back a release until safety concerns are addressed.

OpenAI aims to enhance industry collaboration, increase transparency about its safety work, and explore opportunities for independent testing of its systems. This move reflects a growing focus on AI safety and responsible development in the tech industry.

  • OpenAI transforms Safety Committee into independent Board oversight committee with model release authority.
  • New committee can delay AI model launches to address safety concerns.
  • Board members will receive briefings on safety and security matters.
  • OpenAI seeks more industry collaboration and information sharing to enhance AI security.
  • Company aims to increase transparency and independent testing of its systems.