Explaining The Sam Altman/OpenAI Thing

Hey, remember that whole “Sam Altman fired as CEO/reinstated as CEO of OpenAI” thing a couple of weeks ago? Here’s the archive story.

Sam Altman was reinstated late Tuesday as OpenAI’s chief executive, successfully reversing his ouster by the company’s board last week after a campaign waged by his allies, employees and investors, the company said.

The board would be remade without several members who had opposed Mr. Altman.

“We have reached an agreement in principle for Sam to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo,” OpenAI said in a post to X, formerly known as Twitter. “We are collaborating to figure out the details. Thank you so much for your patience through this.”

The return of Mr. Altman, and the potential remaking of the board, capped a frenetic five days that upended OpenAI, the maker of the ChatGPT chatbot and one of the world’s highest-profile artificial intelligence companies.

“i love openai, and everything i’ve done over the past few days has been in service of keeping this team and its mission together,” Mr. Altman said in a post to X. “with the new board and w satya’s support, i’m looking forward to returning to openai, and building on our strong partnership with msft.”

OpenAI’s board surprised Mr. Altman and the company’s employees on Friday afternoon when it told him he was being pushed out. Greg Brockman, the company’s president who co-founded the company with Mr. Altman and others, resigned in protest.

The ouster kicked off efforts by Mr. Altman, 38, his allies in the tech industry and OpenAI’s employees to force the company’s board to bring him back. On Sunday evening, after a weekend of negotiations, the board said it was going to stick with its decision.

But in a head-spinning development just hours later, Microsoft, OpenAI’s largest investor, said that Mr. Altman, Mr. Brockman and others would be joining the company to start a new advanced artificial intelligence lab.

Nearly all of OpenAI’s more than 700 employees signed a letter telling the board they would walk out and follow Mr. Altman to Microsoft if he wasn’t reinstated, throwing the future of the start-up into jeopardy.

Four board members — Ilya Sutskever, an OpenAI founder; Adam D’Angelo, the chief executive of Quora; Helen Toner, a director of strategy at Georgetown’s Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and computer scientist — had initially decided to push Mr. Altman out.

Well, here’s Patrick Boyle to provide some context:

A few takeaways:

  • There are two OpenAIs: “The non-profit OpenAI, Inc. registered in Delaware, and its for-profit subsidiary OpenAI Global, LLC.”
  • Musk was an early, and big, investor in the non-profit. “The founders pledged over one billion dollars to the venture, but actually only contributed around $130 million – the majority of which came from Elon Musk.”
  • When he felt OpenAI was falling behind in 2018, he wanted to take over OpenAI himself. When the board rejected that, he resigned and took future pledged money with him, which blew a huge hole in their budget. (Whatever you think of Musk, I don’t think not being busy enough is his problem.)
  • Then came the for-profit doppelganger.
  • “The profits being capped at 100 times any investment.”
  • “The company explained this decision saying, ‘We need to invest billions of dollars in the coming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.’ This transition from nonprofit to for-profit required OpenAI to balance its desire to make money with its stated commitment to ethical AI development.”
  • “This unconventional structure meant that Open AI had a board of directors, which in theory controls the entire corporate structure (which includes the charity and the capped profit company) – but which unlike other boards is not accountable to shareholders. The directors are in fact not allowed to own any stock to prevent a conflict of interest, because they are specifically not supposed to be aligned with shareholders.”
  • “The company’s operating agreement – to investors – says – in writing: ‘It would be wise to view any investment in OpenAI in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world.’ Documents like this – that were written by an actual lawyer – highlight the problems we are starting to see from the combined popularity of science fiction in Silicon Valley and widespread microdosing of hallucinogens.”
  • “In the real world, where the role of money is reasonably well defined, Open AI is an unprofitable company and is expected to need to raise a lot more money over time from investors like Microsoft, to keep up with the high costs of building more sophisticated chatbots.”
  • “Despite this lack of profitability, the company is valued by investors at 86 billion dollars, and Bloomberg reported last weekend that ‘some investors were considering writing down the entire value of their OpenAI holdings to zero.'”
  • “Former colleagues would have an open door to follow and join a new AI unit, according to Microsoft chief Satya Nadella. As much of a win as this might have appeared for Microsoft (people were saying that they had managed to buy the hottest AI firm for zero), this might not have been the optimal outcome for them, as they would likely have had to deal with antitrust regulators and lawsuits from other Open AI investors.”
  • “The majority of Open AI’s 700 or so employees signed an open letter to the board demanding that the board resign and that they rehire Altman. The letter stated that the board had told the employee leadership team that allowing the company to be destroyed ‘would be consistent with the mission.’ The employees said that unless their demands were met, they would resign from Open AI and join the new subsidiary of Microsoft being headed up by Altman and Brockman.”
  • “You have to wonder what the employee contracts at Open AI look like that the entire staff could leave to work for a major investor in the company, leaving Open AI as an empty shell.”
  • “Typically, executives like Altman would have contracts that prevent them from hiring away key staff once they are no longer at the firm, and staff would have signed NDAs preventing them from taking any technology with them.”
  • “The OpenAI story is a bit of a crazy one, where Microsoft and a number of other sophisticated investors agreed to put billions of dollars in, and employees got stock grants, all at an $86 billion valuation, without the contractual or fiduciary rights that investors might normally expect.”
  • Rival Anthropic has a similar structure.
  • “Bad corporate governance has been a growing issue particularly in Silicon Valley where companies like Google, Facebook and Snap structured their IPOs such that founders were left with unchallenged power to do almost anything that they want.” Google and Facebook are garbage companies, but there are some scenarios where only founders can keep the company on a long-term vision rather than goosing quarterly profits (Jobs at Apple comes to mind).
  • Warren Buffett has a similar mechanism (A shares of stock only he controls) to keep control of Berkshire Hathaway.
  • “Since you are buying shares of companies in perpetuity, leadership who are not accountable to shareholders can take value destructive paths without answering to anyone. Meta’s Reality Labs division, which houses its efforts to build the metaverse, has lost around $46.5 billion dollars since 2019. Would Mark Zuckerberg have been able to waste this much money if he was accountable to investors?” I have a fairly strong suspicion that division is being used to hide all sorts of shenanigans.
  • Boyle is deeply suspicious of “stakeholder capitalism” as opposed to the old-fashioned, profit-maximizing kind.
  • The thing missing from this summary, and all the coverage of the story I’ve seen, is why Altman was originally let go, and none of the principals involved seem to be talking about it…
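The capped-profit structure described above (returns limited to 100 times any investment) is easy to sketch in code. This is a hedged illustration with made-up numbers, not OpenAI’s actual payout waterfall; the `cap_multiple` figure of 100x is the only detail taken from the post.

```python
def capped_return(investment: float, gross_payout: float, cap_multiple: float = 100.0) -> float:
    """Return what a capped-profit investor actually receives.

    Whatever the investor's share of profits would have been (gross_payout),
    the structure limits it to cap_multiple times the original investment;
    anything above the cap flows back to the non-profit.
    """
    return min(gross_payout, investment * cap_multiple)


# Hypothetical example: a $1M investment whose pro-rata profit share
# would have been $250M is capped at 100x, i.e. $100M.
print(capped_return(1_000_000, 250_000_000))
```

Under this structure the non-profit parent, not the investor, keeps the $150M above the cap, which is why Boyle describes the arrangement as so unusual for sophisticated investors.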


    11 Responses to “Explaining The Sam Altman/OpenAI Thing”

    1. Howard says:

      All I keep thinking is … Elon Musk funded OpenAI as a non-profit in order to rein in AI / confront the alignment problem / keep humans in the loop …

      … something something reduce existential risk …

      And now?

    2. 10x25mm says:

      “Typically, executives like Altman would have contracts that prevent them from hiring away key staff once they are no longer at the firm, and staff would have signed NDAs preventing them from taking any technology with them.”

      In January 2023, the Federal Trade Commission proposed a new rule that would ban employers from imposing noncompetes on their workers. FTC-2023-0007-0001 will take effect early next year. Most companies are already behaving as if noncompetes have been voided.

    3. R C Dean says:

      One side note:

      How could Microsoft hire away all the staff?

      In California, non-competes (I won’t work for a competitor) are completely unenforceable, by statute.

      No-hire/non-solicitation (I won’t hire away my former colleagues/employees to work for someone else) are enforceable only in pretty limited circumstances.

      NDAs (I won’t tell anybody your secrets) are almost impossible to enforce, especially when you are hiring people for what’s in their heads, not what software/designs/etc. they can bring with them on a thumb drive.

    4. […] NOT SURE THERE’S ANY EXPLAINING IT BUT THE STORY SURE IS WILD: Explaining The Sam Altman/OpenAI Thing. “The companies operating agreement – to investors – says – in writing: ‘It would be […]

    5. raph pol says:

      The best rumour I’ve heard about why Altman was told to leave was that he was investing in another company that was a side venture to OpenAI.

    6. Kirk says:

      Meh.

      Current state-of-the-art in AI leaves me more than a little dubious of the proposition that they’re suddenly going to make a breakthrough and create one that will make the Singularity happen. Or, go rogue and kill the lot of us.

      What’s more likely is that there’s going to be a massive, massive failure of all this crap, and about all we’ll get out of it will be a set of very prosaic tools that disappoint everyone’s expectations. And, no doubt, more annoying advertising.

      We don’t even have a good working definition for “intelligence” in humans. The IQ tests we’ve relied on for generations have given us a bunch of effectively autistic dolts running the world in what can only be described as the opposite of meritocracy, because none of what they’re doing is working. Go look at San Francisco or Seattle, and then ask yourself this: If we were to put the average blue-collar foreman or manager who came up from the bottom in charge of all this… Would it still look like this?

      I don’t think it would. There’d be shit for theories and “best practices”, and that guy or girl would likely just roll up their sleeves and do what common sense told them to do, and we’d have cities free of human feces and needles in pretty short order. We’d also see a substantial decline in “homeless”, because I suspect that our working-class managerial types would take one look at that proposition of “free needles and all the drugs you can take” and say “Yeah. No. Get a f*cking job, you lazy f*ck, and if you won’t…? We’ll give you one.”

      Followed, no doubt, by a bunch of stuff that would have the “human rights” types up in arms. But, the streets would be clean…

      Consider that thought-experiment, and then ask yourself: How smart are all these college-credentialed types, anyway? I mean, really? Are they?

      My working class common-sense type friends and family would clean that shit up with a quickness, were they in charge. Why aren’t they? Oh, yeah… No credentials. Other than the ones they made for themselves via hard work, and which were not conferred upon them by the same institutions running our IQ test regimes…

    7. FrancisT says:

      This Ars Technica report and the linked-to New Yorker article may provide more context.

      https://arstechnica.com/ai/2023/12/openai-board-reportedly-felt-manipulated-by-ceo-altman/

      Altman appears to have tried to oust one of the members of the board who disagreed with him by suggesting to various other members that the rest of the board agreed with him that the troublesome board member(s) should be removed. When the board members discovered this they, not unreasonably, lost trust in him.

    8. Kate says:

      The speculation that Altman was hiding from the board an extinction-level advance in Artificial General Intelligence is (by several orders of magnitude) less likely than the suspicion he was hiding the fact that the product he’s developing at present is media hyped shit, makes shit up, and is useless as shit.

    9. […] “The companies operating agreement – to investors – says – in writing: ‘It would be wise t… […]

    10. […] Battleswarm blog: Explaining The Sam Altman/OpenAI Thing. […]

    11. […] that had its glory days, like so many others in Cuba, and A word about Mr. Kissinger BattleSwarm: Explaining The Sam Altman/OpenAI Thing, also, Lt. Gov. Patrick: Dade Phelan Is “Impossible To Work With” Behind The Black: Iran and […]
