OpenAI Forms Safety & Security Committee
OpenAI has inaugurated a new Safety and Security Committee led by Board Members Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). In addition to the Board members, the committee will bring in internal and external policy experts.
The committee is tasked with evaluating, and proposing changes to, OpenAI’s existing safety and security processes. It will deliver its recommendations to OpenAI’s Board of Directors within 90 days of its formation. Once the Board has reviewed them, OpenAI will publicly share the recommendations it adopts.
Also on the committee is retired US Army General Paul M. Nakasone, appointed to the Board of Directors in mid-June. Nakasone is the longest-serving leader of US Cyber Command and a former director of the National Security Agency (NSA); OpenAI said a “first priority” of his appointment was joining the safety committee. His appointment was harshly criticized by NSA whistleblower Edward Snowden, who urged his social media followers to “not ever trust OpenAI or its products.”
In announcing the new committee, OpenAI said: “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”
OpenAI has faced criticism over whether it invests enough in safety and security. In mid-May, it announced the departure of co-founder Ilya Sutskever, followed rapidly by the resignation of Jan Leike, the other co-lead of the Superalignment team. The team, created to ensure safety while building AI models exhibiting Artificial General Intelligence (AGI), was dissolved following these key departures.
Sutskever has since co-founded another artificial intelligence company, Safe Superintelligence Inc., which aims to “advance capabilities as fast as possible while making sure our safety always remains ahead.”