Former OpenAI board members say the company can’t be trusted to govern itself

Two former OpenAI board members say artificial intelligence companies can’t be trusted to govern themselves and that third-party regulation is necessary to hold them accountable.

The two ex-board members, Helen Toner and Tasha McCauley, wrote in an op-ed for The Economist that they stood by their decision to remove CEO Sam Altman, citing statements from senior leaders who said the cofounder created a “toxic culture of lying” and engaged in “behavior [that] can be characterized as psychological abuse.”

Since Altman returned to the board in March, OpenAI has faced questions about its commitment to safety and criticism for giving GPT-4o an AI voice that sounded eerily similar to the actor Scarlett Johansson.

With Altman back at the helm, Toner and McCauley wrote that OpenAI can’t be trusted to hold itself accountable.

“We also feel that developments since he returned to the company — including his reinstatement to the board and the departure of senior safety-focused talent — bode ill for the OpenAI experiment in self-governance,” they wrote.

Toner and McCauley argued that for OpenAI to succeed in its stated mission to benefit “all of humanity,” governments must intervene and establish “effective regulatory frameworks now.”

The former board members wrote that they once believed that OpenAI could govern itself, but “based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives.”

OpenAI, Toner, and McCauley did not immediately respond to requests for comment from Business Insider.

Policymakers must ‘act independently’ of AI companies

Toner and McCauley qualified their call for government regulation by acknowledging that poorly designed laws can hinder “competition and innovation” by burdening smaller companies.

“It is crucial that policymakers act independently of leading AI companies when developing new rules,” they wrote. “They must be vigilant against loopholes, regulatory ‘moats’ that shield early movers from competition, and the potential for regulatory capture.”

In April, the Department of Homeland Security announced the establishment of the Artificial Intelligence Safety and Security Board, which will provide recommendations for the “safe and secure development and deployment of AI” across US critical infrastructure.

The board’s 22 members include Altman and chief executives of large tech companies, including Nvidia CEO Jensen Huang and Alphabet CEO Sundar Pichai.

Although the safety board also includes representatives from tech nonprofits, leaders of for-profit companies are overrepresented.

AI ethicists who spoke to Ars Technica expressed concern that the outsize influence of profit-motivated companies could result in policies that favor industry over human safety.

“If we can all agree that we care about keeping people ‘safe’ with respect to how AI is used, then I think we can agree it’s important to have people at the table who specialize in centering people over technology,” Margaret Mitchell, an AI ethics expert at Hugging Face, told Ars Technica.

A DHS spokesperson did not respond to a request for comment.
