Business
OpenAI’s mission to develop AI that ‘benefits all of humanity’ is at risk as investors flood the company with cash
Sam Altman co-founded OpenAI in 2015 with a lofty mission: to develop artificial general intelligence that “benefits all of humanity.”
The company was structured as a nonprofit to support that mission.
But as the company gets closer to developing artificial general intelligence, a still largely theoretical form of AI that can reason as well as humans, and money from excited investors pours in, some worry that Altman is losing sight of the “benefits all of humanity” part of the goal.
It’s been a gradual but perhaps inevitable shift.
OpenAI announced in 2019 that it was adding a for-profit arm — to help fund its nonprofit mission — but that true to its original spirit, the company would limit the profits investors could take home.
“We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance,” OpenAI said at the time. “Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a ‘capped-profit’ company.”
It was a deft move that, on its surface, appeared designed to satisfy both the employees and stakeholders concerned about developing the technology safely and those who wanted the company to produce and release products more aggressively.
But as investment poured into the for-profit side and the prominence of both the company and Altman grew, some got nervous.
OpenAI’s board briefly ousted Altman last year over concerns that the company was too aggressively releasing products without prioritizing safety. Employees, and most notably Microsoft (with its multibillion-dollar investment), came to Altman’s rescue. Altman returned to his position after just a few days.
The cultural rift, however, had been exposed.
Two of the company’s top researchers — Jan Leike and Ilya Sutskever — both soon resigned. The duo was in charge of the company’s so-called superalignment team, which was tasked with ensuring the company developed artificial general intelligence safely — the central tenet of OpenAI’s mission.
OpenAI then dissolved the superalignment team in its entirety soon after. After leaving, Leike said on X that the team had been “sailing against the wind.”
“OpenAI must become a safety-first AGI company,” Leike wrote on X, adding that building generative AI is “an inherently dangerous endeavor” but that OpenAI was now more concerned with building “shiny products.”
It now seems that OpenAI has nearly completed its transformation into a Big Tech-style “move fast and break things” behemoth.
Fortune reported that Altman told employees in a meeting last week that the company plans to move away from nonprofit board control, which it has “outgrown,” over the next year.
Reuters reported on Saturday that OpenAI is now on the verge of securing another $6.5 billion in investment, which would value the company at $150 billion. But sources told Reuters that the investment comes with a catch: OpenAI must abandon its profit cap on investors.
That would leave OpenAI ideologically distant from its idealistic early days, when its technology was meant to be open source and developed for the benefit of everyone.
OpenAI told Business Insider in a statement that it remains focused on “building AI that benefits everyone” while continuing to work with its nonprofit board. “The nonprofit is core to our mission and will continue to exist,” OpenAI said.