
New York (CNN) -- OpenAI's board worried that the company was building the technological equivalent of a nuclear bomb, and that its CEO, Sam Altman, was moving so fast that he risked a global catastrophe.

So the board fired him. That may ultimately have been the logical decision.

But the manner in which Altman was fired, abruptly, opaquely, and without notice to some of OpenAI's major shareholders and partners, defied logic. It also risked inflicting more damage than if the board had taken no action at all.

A company's board of directors has an obligation, first and foremost, to its shareholders. OpenAI's largest shareholder is Microsoft, the company that gave Altman & Co. $13 billion to help Bing, Office, Windows and Azure overtake Google and stay ahead of Amazon, IBM and other AI wannabes.


However, Microsoft was not informed of Altman's firing until "just before" the public announcement, according to CNN contributor Kara Swisher, who spoke with sources with knowledge of the board's removal of its CEO. Microsoft's stock sank following Altman's ouster.


Employees were also not told the news in advance. Neither was Greg Brockman, the company's co-founder and former president, who said in a post on X that he learned of Altman's firing moments before it happened. Brockman, a major supporter of Altman and his strategic leadership of the company, resigned Friday. Other Altman loyalists also left the company.

Suddenly, OpenAI was in crisis. News that Altman and other newly departed loyalists were on the verge of starting their own company risked undoing everything OpenAI had worked so hard to achieve in recent years.

So, a day later, the board called a truce and tried to lure Altman back. It was a stunning turn of events and an embarrassing act of self-sabotage by a company that considered itself the most promising producer of the most exciting new technology.


Strange board structure

OpenAI's odd board structure complicates matters.

The company is a non-profit organization. But Altman, Brockman and chief scientist Ilya Sutskever in 2019 formed OpenAI LP, a for-profit entity that exists within the structure of the larger company. That for-profit venture took OpenAI from a valuation of essentially nothing to roughly $90 billion in just a few short years, and Altman is largely credited with being the mastermind of that plan and the key to the company's success.

However, a company with big backers like Microsoft and venture capital firm Thrive Capital has an obligation to grow its business and make money. Investors want to make sure their money pays off, and they don't have a reputation for being patient.


Likely because of this, Altman pushed the for-profit arm to innovate faster and rush its products to market. In Silicon Valley's grand tradition of "moving fast and breaking things," those products don't always work so well at first.

That's fine, perhaps, when it comes to a dating app or social media platform. It's a totally different thing when it's a technology so good at mimicking human speech and behavior that it can trick people into believing its fake conversations and images are real.

And that's what apparently scared the company's board, which was still largely controlled by the company's nonprofit wing. Swisher reported that OpenAI's recent developer conference served as a turning point: Altman announced that OpenAI would make tools available for anyone to build their own version of ChatGPT.

For Sutskever and the rest of the board, that was going too far.


A warning that is not without merit

According to Altman himself, the company was playing with fire.

When Altman created OpenAI LP four years ago, the new company noted in its founding documents that it remained "concerned" about AI's potential to "bring about rapid change" in humanity. That could happen unintentionally, with the technology carrying out harmful tasks because of faulty code, or intentionally, through people subverting AI systems for bad purposes. As a result, the company pledged to prioritize safety, even at the expense of shareholder profits.

Altman also urged regulators to put limits on AI to prevent people like him from inflicting serious harm on society.


"Will [AI] be like the printing press, which spread knowledge, power and learning around the world, which empowered ordinary, everyday individuals, which led to greater flourishing, which led above all to greater freedom?" he said at a Senate subcommittee hearing in May, when pressed for regulation. "Or is it going to be more like the atomic bomb: a technological breakthrough, but the (serious, terrible) consequences continue to haunt us to this day?"

Proponents of AI believe that the technology has the potential to revolutionize every industry and improve humanity in the process. It can improve education, finances, agriculture, and health care.

However, it also has the potential to wipe out 14 million jobs over the next five years, the World Economic Forum warned in April. AI is especially adept at spreading harmful misinformation. And some, like Elon Musk, a former member of OpenAI's board, fear that the technology will surpass humanity in intelligence and end life on the planet.

How Not to Handle a Crisis

With those threats, real or perceived, it's no wonder the board was concerned that Altman was moving at too fast a pace. It may have felt compelled to remove him and replace him with someone who, in its view, would be more careful with potentially dangerous technology.

But OpenAI doesn't operate in a vacuum. It has shareholders, some of whom have invested billions in the company. And the so-called adults in the room were acting, as Swisher put it, like a "clown car that crashed into a gold mine," borrowing Meta CEO Mark Zuckerberg's famous description of Twitter.

Involving Microsoft in the decision, informing employees, working with Altman on a dignified exit plan... any of these would have been approaches more typical of a board at a company of OpenAI's size, and all of them would likely have produced better results.

Microsoft, despite its huge stake, does not have a seat on OpenAI's board, a consequence of the company's strange structure. Now that could change, according to several news reports, including from The Wall Street Journal and The New York Times. Among the demands accompanying Altman's potential return is a seat at the table.


With OpenAI's ChatGPT-like capabilities built into Bing and other products, Microsoft believed it had invested wisely in the promising new technology of the future. That's why learning of Altman's firing along with the rest of the world on Friday night must have come as a shock to Microsoft CEO Satya Nadella and his team.

The board angered a powerful ally and could be forever changed by the way it handled Altman's ouster. It could end up with Altman back at the helm, for-profit interests represented on its nonprofit board, and a major cultural shift at OpenAI.

Another possibility is that it ends up competing against Altman, who could ultimately decide to start a new company and lure talent away from OpenAI.

In any case, OpenAI is now likely in a worse position than it was on Friday, before it fired Altman. And it was a problem the company could have avoided, ironically, by slowing down.
