Another safety researcher is leaving OpenAI

  • Miles Brundage, who advises OpenAI leadership on safety and policy, announced his departure.
  • He said he was leaving the company to have more independence and freedom to publish.
  • The AGI Readiness team he oversaw will be disbanded.

Miles Brundage, a senior policy advisor and head of the AGI Readiness team at OpenAI, is leaving the company. He announced the decision on Wednesday in a post on X, which was accompanied by a Substack article explaining the decision. The AGI Readiness team he oversaw will be disbanded, with its various members distributed among other parts of the company.

Brundage is the latest high-profile safety researcher to leave OpenAI. In May, the company dissolved its Superalignment team, which focused on the risks of artificial superintelligence, after the departure of its two leaders, Jan Leike and Ilya Sutskever. Other executives who have departed in recent months include Mira Murati, its chief technology officer; Bob McGrew, its chief research officer; and Barret Zoph, a vice president of research.

OpenAI did not respond to a request for comment.

For the past six years, Brundage has advised OpenAI's executives and board members on how to prepare for the rise of artificial intelligence that rivals human intelligence — something many experts think could fundamentally transform society.

He has been responsible for some of OpenAI's biggest innovations in safety research, including instituting external red teaming, in which outside experts probe OpenAI products for potential problems.

Brundage said he was leaving the company to have more independence and freedom to publish. He referred to disagreements he had with OpenAI about limitations on research he was allowed to publish and said that “the constraints have become too much.”

He also said that working within OpenAI had biased his research and made it difficult to be impartial about the future of AI policy. In his post on X, Brundage described a prevailing sentiment within OpenAI that “speaking up has big costs and that only some people are able to do so.”
