There has to be a bigger story to this.
Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world. Then he put the gas pedal to commercialization. It takes a certain type of politicking and deception to make something like that happen.
Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history
Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.
There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of openai ended up making a decision that destroyed billions of dollars worth of brand value and good will. That's all there is to it.
The more likely explanation is that D'Angelo has a massive conflict of interest: he's CEO of Quora, a business rapidly being replaced by ChatGPT, which also has a competing product, "creator monetization with Poe" (catchy name, I know), that just got nuked by OpenAI's GPTs announcement at Dev Day.
>Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world. Then he put the gas pedal to commercialization. It takes a certain type of politicking and deception to make something like that happen.
What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What politicking and deception is involved in creating a for-profit subsidiary which is granted a license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep the non-profit's owners focused on their mission rather than shareholder value - which in this case is attempting to ethically create an AGI.
Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.
>Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
Thus the legal structure I described, although this argument is entirely theoretical and assumes such a thing can actually be guarded that well at all, or that model performance and compute will remain correlated.
I would rather OpenAI have a diverse base of income from commercialization of its products than depend on "donations" from a couple ultrarich individuals or corporations. GPT-4 cost $100 million+ to train. That money needs to come from somewhere.
People keep speculating sensational, justifiable reasons to fire Altman. But if these were actual factors in their decision, why doesn't the board just say so?
Until they say otherwise, I am going to take them at their word that it was because he a) hired two people to do the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to try to think up better-sounding reasons on their behalf.
If you don't think the likes of Sam Altman, Eric Schmidt, Bill Gates and the lot of them want to increase their own power, you need to think again. At best these individuals are just out to enrich themselves, but many of them demonstrate a desire to shape prevailing politics, so I don't see how they are different - just more subtle about it.
Why worry about the Saudis when you've got your own home-grown, power-hungry individuals?
What is interesting is the total absence of three-letter-agency mentions from all of the talk and speculation about this.
This feels like a lot of very one sided PR moves from the side with significantly more money to spend on that kind of thing
It feels like Altman started the whole non-profit thing so he could attract top researchers with altruistic sentiment for sub-FAANG wages. So the whole "Altman wasn't candid" thing seems to track.
> you have the single greatest shitshow in tech history
the second after Musk taking over Twitter
>Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history
Do we have a ranking of shitshows in tech history, though? How does this really compare to Jobs' ouster at Apple, or to Cambridge Analytica and Facebook's "we must do better" greatest hits?
Taking money from the Saudis alone should raise a big red flag.
> the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
> rich and powerful people using the technology to enhance their power over society.
We don't know the end result of this. It might not be in the interests of the powerful. What if everyone is out of a job? That might not be such a great outcome for the powers that be, especially if everyone is destitute.
Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people getting out of line, and retard the progress of AI?
> money from the Saudis on the order of billions of dollars to make AI accelerators
Was this for OpenAI or an independent venture? If OpenAI, then it's a red flag; if an independent venture, it seems like a non-issue. There is demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products, or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.
At some point this is probably about a closed-source "fork" grab. Of course, that's what practically the whole company is planning.
The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily, such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code": it gives a number after a bunch of regression training, and there's no "debugging" the answer.
Of course this is about the money, one way or another.
> Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.
Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist. I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian (edit: nor economist). Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.
If I understood correctly, Altman was CEO of the for-profit OpenAI, not the non-profit. The structure is pretty complicated: https://openai.com/our-structure
To me this is the ultimate Silicon Valley bike-shedding incident.
Nobody can really explain the argument, there are "billions" or "trillions" of dollars involved, and most likely the whole thing will not change the technical path of the world.
> There has to be a bigger story to this.
Rather than assuming the board made a sound decision, it could simply be that the board acted stupidly and egotistically. Unless they can give better reasons, that is the logical inference.
So they actually kicked him out because he transformed a non-profit into a money printing machine?
What does TC style mean?
MBS? Seriously? How badly do you need the money? Good luck not getting hacked to pieces when your AI insults his holiness.
> taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
This is absolutely peak irony!
The US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets.
Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!
I am not defending Saudi Arabia, but the double standards and outright hypocrisy are just laughable.
100% agree. I've seen this type of thing up close (much smaller potatoes, but the same type of thing), and whatever is getting aired publicly is most likely not the real story. Not sure if the reasons you guessed are it or not - we probably won't know for a while - but your guesses are as good as mine.
Neither of these reasons has anything to do with a lofty ideology regarding the safety of AGI or OpenAI's nonprofit status. Rather, it seems they are micromanaging personnel decisions.
Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding that this firing was led by the head research scientist out of concern about AGI. But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since Dev Day, when OpenAI launched highly similar features.
> But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since Dev Day, when OpenAI launched highly similar features.
Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.
I'm confused how the board is still keeping 100% radio silence. Where I'm from, with a shitstorm this big raging and the board doing nothing, they might very easily be held personally responsible through all kinds of utterly nasty legal action.
Is it just different because they're a nonprofit? Or how on earth does the board think it can still get away with this?
It is fascinating, considering that D'Angelo has a history with coups (he did the same thing at Quora, didn't he?).
Do we even have an idea of how the vote went?
Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the three had a majority. Ilya - who is at least on "Team Sam" now - may have voted no. Or he simply went along, thinking he could be next out the door at that point; we just don't know.
It's probably fair to say that not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional, and where Ilya screwed up. It is also the point when Sam should have said hang on - I want Greg here before this proceeds any further.
It could be a more primal explanation. I think OpenAI doesn't want to effectively be an R&D arm of Microsoft. The ChatGPT mobile app is unpolished and unrefined. There's little to no product design there, so I totally see how it's fair criticism to call out premature feature milling (especially when it's clear it's for Microsoft).
I'm imagining Sam being Microsoft's Trojan horse, and that's just not gonna fly.
If anyone tells me Sam is a master politician, I'd agree without knowing much about him. He's a Microsoft plant who has the support of 90% of the OpenAI team. Those two things are in conflict. Masterful.
It's a pretty fair question to ask a CEO: do you still believe in OpenAI's vision, or do you now believe in Microsoft's?
The girl she said not to worry about.
Exactly my point: why would D'Angelo want OpenAI to thrive when his own company's chatbot, Poe, wants to compete in the same space? It's a conflict of interest whichever way you look at it. He should resign from the board of OpenAI in the first place.
The main point is that Greg and Ilya could get to 50% of the vote and then convince Helen Toner to change her decision - then it's all done, 3 to 2 on a board of 5. But only if Greg's board membership is reinstated.
Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.
Well, the appointment of a CEO who believes AGI is a threat to the universe is potentially one point in favor of AI safety philosophical differences.
Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his own reasons?
My feeling is that Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.
That's often how this stuff works out: he wasn't particularly compelled by their reasons, but had his own, which justified the decision in his mind.
> Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told.
You mean to tell me that the 3-member board told Sutskever that Sama was being bad and he was like "ok, I believe you"?
The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI. This can be done in perpetuity. Google explains its AI failures along the same lines.
That's the only thing that makes sense with Ilya & Murati signing that letter.
This is the most likely scenario. Adam wants to destroy OpenAI so that his poop AI has a chance to survive
1) Where is Emmett? He's the CEO now. It's his job to be the public face of the company. The company is in an existential crisis and there have been no public statements after his 1AM tweet.
2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.
Technically, he's an interim CEO of a chaotic company, appointed just in the last 24 hours. I'd probably wait to get my bearings before walking in acting like I've got everything under control on the first day after a major upheaval.
The only thing I've read about Shear is he is pro-slowing AI development and pro-Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.
> I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.
> If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.
> - Emmett Shear Sept 16, 2023
Yes these people should all be doing more to feed internet drama! If they don't act soon, HN will have all sorts of wild opinions about what's going on and we can't have that!
Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!
I know, I know, I shouldn't jest when this could have grave consequences, like changing which URI your API endpoint is pointing to.
I find it absolutely fascinating that Emmett accepted this position. He can game out all the scenarios, and there is no way he comes out ahead in any of them. One would expect an experienced Silicon Valley CEO to make this calculus and realize it's a lost cause. The fact that he accepted shows me he's not a particularly good leader.
If Emmett will run this the same way he ran Twitch, I'm not expecting much action from him.
People kept asking where he was during his years as Twitch CEO; it's not unlike him to be MIA now either.
As much as the next person, I'd love to hear the details of the drama, but they really don't have to say anything publicly. We are all going to continue using the product. They don't have public investors. The only concern about perception they may have is if they intend to raise more money anytime soon.
That's what a board of a for-profit company which has a fiduciary duty towards shareholders should do.
However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)
He has said more than he did during his entire five years at Twitch.
Here he is! Blathering about AI doom 4 months ago, spitting Yudkowsky talking points:
Half the board lacks any technical skill, and the entire board lacks any business procedural skill. Ideally, you'd have a balance of each on a competent board.
Why should he care about updating internet randoms? It's none of our business. The people who need to know what's going on, know what's going on.
He is trying to determine if they have already made an Alien God.
Giving two people the same project? Isn't this the thing to do to get differing approaches and then release an amalgamation of the two? I thought these sorts of things were common.
Giving different opinions on the same person is a reason to fire a CEO?
This board either has no reason to fire him or does not want to give the actual reason. They messed up.
As mentioned by another person in this thread, it is likely that it was Ilya's work being replicated by another "secret" team, and the "different opinions on the same person" were Sam's opinions of Ilya. Perhaps Sam saw him as an unstable element and a single point of failure in the company, and wanted to make sure that OpenAI could continue without Ilya?
I remember a few years ago when there was some research group that was able to take a picture of a black hole. It involved lots of complicated interpretation of data.
As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.
So yes, it’s absolutely a valid strategy.
The CEO's I've worked for have mostly been mini-DonaldT's, almost pathologically allergic to truth, logic, or consistency. Altman seems way over on the normal scale for CEO of a multi-billion dollar company. I'm sure he can knock two eggs together to make an omelette, but these piddling excuses for firing him don't pass the smell test.
I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (for example, by someone spinning this as a safety issue when it's just a good old-fashioned power struggle).
Steve Jobs famously had two iPhone teams working on concepts in parallel. It was click wheel vs multi-touch. Shockingly the click wheel iPhone lost.
Back in the late 80s, Lotus faced a crisis with their spreadsheet, Lotus 1-2-3. Should they:
1. stick with DOS
2. go with OS/2
3. go with Windows
Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.
Consider for a moment: this is what the board of one of the fastest-growing companies in the world worries about - kindergarten-level drama.
Under them: an organization in partnership with Microsoft, filled with exceptional software engineers and scientists - experts in their fields. All managed by kindergarteners.
I wonder if this is what the staff are thinking right now. It must feel awful if they are.
Happens all the time.
Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.
I guess it depends on whether any of them actually got the assignment. One way to interpret it is that nobody was taking the assignment seriously. So depending on what the assignment was, and how important it was to the board, it may in fact be a big deal.
Giving two groups of researchers the same problem guarantees that one team will scoop the other. It's hard to divvy up credit after the fact.
Also, when a project is vital to a company, you cannot just give it to one team. You need to de-risk.
How did they get four board members to fire him because he tried to A/B test a project?
Was that the verbatim reason, or an angry person's characterisation?
> One explanation was that Altman was said to have given two people at OpenAI the same project.
Have these people never worked at any other company before? Probably every company with more than 10 employees does something like this.
>Have these people never worked at any other company before?
Half the board has not had a real job ever. I’m serious.
My dad interviewed someone who was applying for a job. Standard question, why did you leave the last place?
"After six months, they realised our entire floor was duplicating the work of the one upstairs".
To me at least that's an _extremely_ rude thing to do. (Unless one person is asked to do it this way, the other one that way, so people can compare the outcome.)
(Especially if they aren't made aware of each other until the end.)
I think this needs to be viewed through the lens of the gravity of the board's reaction, giving them the benefit of the doubt that they acted appropriately and - at least with the information they had at the time - correctly.
A hypothetical example: would you agree that it was appropriate if the second project was alignment-related, and Sam lied to or misled Ilya about the existence of the second team because he believed Ilya was over-aligning their AIs and reducing their functionality?
It's easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision", which is probable at this point. You could also conclude that they made an initial miscalculated mistake in communication, and are now overtly and extremely careful in everything they say, because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.
Maybe it was not an ordinary project, or not ordinary people.
We're still too much in the dark to judge.
In over 10 years of experience, I have never known this to happen.
Actually, they haven’t. One is some policy analyst and the other is an actor’s wife.
Wait - so can't SA sue for wrongful termination, if everything is as bogus as everyone is saying? Same for MS.
So, none of this sounds like it could be the real reason Altman was fired. This leaves people saying it was a "coup", which still doesn't really answer the question. Why did Altman get fired, really?
Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.
Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.
So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.
I think you may be hallucinating reasonable reasons to explain an inherently indefensible situation, patching up reality so it makes sense again. Sometimes people with puffed up egos are frustrated over trivial slights, and group think takes over, and nuking from orbit momentarily seems like a good idea. See, I’m doing it too, trying to rationalize. Usually when we’re stuck in an unsolvable loop like a SAT solver, we need to release one or more constraints. Maybe there was no good reason. Maybe there’s a bad reason — as in, the reasoning was faulty. They suffered Chernobyl level failure as a board of directors.
This is what I suspect; that their silence is possibly not simply evidence of no underlying reason, but that the underlying reason is so sensitive that it cannot be revealed without doing further damage. Also the hastiness of it makes me suspect that whatever it was happened very recently (e.g. conversations or agreements made at APEC).
Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.
If it was anything all that bad, Ilya and Greg would’ve known about it, because one of them was chairman of the board and the other was a board member. And both of them want Sam rehired. You can’t even spin it that they are complicit in wrongdoing, because the board tried to keep Greg at the company and Ilya is still on the board now and previously supported them.
Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.
I do believe what they said about Altman - that he "was not consistently candid in his communications with the board." Based on my understanding, Altman proved his dishonest behavior through what he did to OpenAI: turning a non-profit into a for-profit, and an open-source model into a closed-source one. Even worse, people seem to have totally accepted this type of personality. The danger is not the AI itself; it's that the AI will be built by Altmans!
The only thing akin to that would be an AI safety concern and the new CEO specifically said that wasn’t the issue.
And if it was something concrete, Ilya would likely still be defending the firing, not regretting it.
It seems like a simple power struggle where the board and employees were misaligned.
Banks have strict cash reserve requirements that are externally audited. OpenAI does not, and more to the point, they're both swimming in money and could easily get more if they wanted. (At least until last week, that is.)
Not specifically related to this latest twist, sorry, but DeepMind’s Geoffrey Irving trusts the board over Altman: https://x.com/geoffreyirving/status/1726754270224023971
"I have no details of OpenAI's Board’s reasons for firing Sam"
Not the strongest opening line I've seen.
Yeah, I can't imagine why DeepMind would possibly want to see OpenAI incinerated.
When you have such a massive conflict of interest and zero facts to go on - just sit down.
also - "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."
Toner clearly has no real moral authority here, but yes, Ilya absolutely did and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.
But as we all know - Ilya did a 180 (surprised the heck out of me).
"Sutskever is said to have offered two explanations he purportedly received from the board"
I'd like some corroboration for that statement, because Sutskever has said very inconsistent things during this whole merry debacle.
Would you go so far as to say he was not consistently candid...?
Also, since he's on the board, and it wouldn't have been Brockman or Altman who gave him this info... there are only three people left:
"non-employees Adam D’Angelo, Tasha McCauley, Helen Toner."
Launched Thursday morning:
Fortunately no conflict of interest there. Ignore the guy behind the curtain.
Both 'reasons' are bullsh*t. But what's interesting is that Sutskever was the key person - it wouldn't have happened without him. And now he says the board told him why he was doing it? He didn't reiterate that he regrets it. So it looks like he was one of the driving forces, if not the main one. Of course he doesn't want the reputation of 'the man who killed OpenAI'. But he definitely took part and could have prevented it.
The NYTimes mentioned that just a month back, someone else was promoted to the same level as Ilya. Sounds like more than a coincidence.
So Sutskever fires Altman, then signs a letter saying they'll quit unless he's reinstated.
There are only four board members, right?
Who wanted him fired? Is this a situation where they all thought the others wanted him fired and were just stupid?
Have they been feeding motions into ChatGPT and asking "should I do this?"
It seems most likely that Sutskever wanted him fired and then realized his mistake. Ultimately, the board was probably quietly seething about the direction the company was headed, got mad enough to retake the reins with that stunt, and then realized what that actually meant.
Now they are trying to unring the bell, but cannot.
> Have they been feeding motions into ChatGPT and asking "should I do this?"
The CEO (at time of writing, I think) seems to think this kind of thing is unironically a good idea: https://nitter.net/eshear/status/1725035977524355411#m
It'd have to be a very stupid version of chatgpt
Can the 3 board members also kick Sutskever off the board?
That headline is bad, not sure if it's deliberate.
The way it's phrased, it sounds like they were given two different explanations - as though, when the first explanation was not good enough, a second, weaker one was then provided.
But the article itself says:
> OpenAI's current independent board has offered two examples of the alleged lack of candor that led them to fire co-founder and CEO Sam Altman, sending the company into chaos.
Changing the two "examples" to "explanations" grossly changes the meaning of that sentence. Two examples are the first steps toward "multiple examples", and that sounds much different from "multiple explanations".
This reads like the Board 4 are not allowed to say, or are under NDA, or do not dare say, or their lawyers told them not to say, the actual reason. Because this is obviously not the actual reason.
Without all the fluff:
One explanation was that Altman was said to have given two people at OpenAI the same project.
The other was that Altman allegedly gave two board members different opinions about a member of personnel.
Ilya himself was a member of the board that voted to fire Altman. I don't know if he's lying through his teeth in these comments, making up an alibi, or genuinely trying to convince people he was acting as a rubber stamp and doesn't know anything.
As this article seems to have the latest information, let's treat it as the next instalment. There's also Inside The Chaos at OpenAI - 38341399, which I've re-upped because it has backstory that doesn't seem to have been reported elsewhere.
Edit: if you want to read about our approach to handling tsunami topics like this, see 38357788.
-- Here are the other recent megathreads: --
Sam Altman is still trying to return as OpenAI CEO - 38352891 (817 comments)
OpenAI staff threaten to quit unless board resigns - 38347868 (1184 comments)
Emmett Shear becomes interim OpenAI CEO as Altman talks break down - 38342643 (904 comments)
OpenAI negotiations to reinstate Altman hit snag over board role - 38337568 (558 comments)
-- Other ongoing/recent threads: --
OpenAI approached Anthropic about merger - 38357629
95% of OpenAI Employees (738/770) Threaten to Follow Sam Altman Out the Door - 38357233
Satya Nadella says OpenAI governance needs to change - 38356791
OpenAI: Facts from a Weekend - 38352028
Who Controls OpenAI? - 38350746
OpenAI's chaos does not add up - 38349653
Microsoft Swallows OpenAI's Core Team – GPU Capacity, Incentives, IP - 38348968
OpenAI's misalignment and Microsoft's gain - 38346869
Emmet Shear statement as Interim CEO of OpenAI - 38345162
>There's also Inside The Chaos at OpenAI ... it has backstory that doesn't seem to have been reported elsewhere
Probably because that piece is based on reporting for upcoming book by Karen Hao:
>Now is probably the time to announce that I've been writing a book about @OpenAI, the AI industry & its impacts. Here is a slice of my book reporting, combined with reporting from the inimitable @cwarzel ...
I see why you recommended that Atlantic article; it's very, very good.
By the time this saga resolves, the number of threads linked here could suffice as chapters of a book
If I were an OpenAI employee, I would have been uber-pissed.
Imagine your once-in-a-blue-moon, WhatsApp-like payout at $10M per employee evaporating over the weekend before Thanksgiving.
I would have joined MSFT out of spite.
Absolutely agree, would be beyond pissed. A once in a lifetime chance at generational wealth blown.
These people joined a non-profit though. Am I right in thinking that you wouldn't join a non-profit expecting a large future payout?
I really can't imagine. I am super pissed and only over something I love that I pay 20 bucks a month for. I can't imagine the feeling of losing this kind of payout over what looks like complete bullshit. Not just the payout but being part of a team doing something so interesting and high profile + the payout.
I just don't know how they put the pieces back together here.
What really gets me down is this: I know our government is a lost cause, but I at least had hope that our companies were inoculated against petty, self-sabotaging bullshit. Beyond that, I had hope the AI space was inoculated, and beyond even that, that OpenAI of all companies would be inoculated from petty, self-sabotaging bullshit.
These idiots worried about software eating us are incapable of seeing the gas they are pouring on the processes that are taking us to a new dark age.
Given the nonsensical reason provided here, I am led to believe that this entire farce is aimed at transforming OpenAI from a non-profit to a for-profit company one way or another, e.g., significantly raising the profit cap, or even changing it completely to a for-profit model. There may not be a single entity scheming or orchestrating it, but the collective forces that could influence this outcome would be very pleased to see it unfold in this way.
But was delivering it into the hands of Microsoft really how they wanted it to happen?
At Amazon a senior manager would probably be fired for not giving a project to multiple teams.
That's not very frugal; please provide a source or citation for your claim.
These simply can't be the real reasons.
And evidently the employees have reacted as you'd expect. The two points given sound like mundane corporate mess-ups that are hardly worth firing the CEO over in such a drastic fashion.
a link to the letter from employees
curious to have clarity where ilya stands. did he really sign the letter asking the board (including himself?) to resign and that he wants to join msft?
to think these are the folks with agi at their fingertips
He is publicly regretful: https://twitter.com/ilyasut/status/1726590052392956028
What will happen to employees' stock options if they all mass quit and move to Microsoft?
The options will be worth $0, right?
From what I understand, Microsoft realizes this and gives them the equivalent of their OAI stock options in MSFT stock options if they join them now. For some employees, this may mean $10MM+
Microsoft would likely match their PPUs at the tender offer valuation.
OpenAI has no stock options.
If the outcome of all of this is that Altman ends up at Microsoft and hiring the vast majority of the team from OpenAI, it's probably wise to assume that this was the intended outcome all along. I don't know how else you get the talent at a company like OpenAI to willingly move to Microsoft, but this approach could end up working.
These are the dumbest reasons possible, certainly not worth destroying a company on the move or people's livelihoods over.
Based on what I've seen so far, one of the following possibilities is the most likely:
1. Altman was actually negotiating an acquisition by Microsoft without being transparent with the board about it. Given how quickly they were hired by Microsoft after the events, this is likely.
2. Altman was trying to raise capital from a source that the board wouldn't be too keen on. Without the board's knowledge. Could be a sovereign fund or some other government backed organisation.
I've not seen these possibilities discussed as most people focus on the safety coup theory. What do you think?
"Before OpenAI ousting, CEO Altman tried to raise billions in the Middle East for chip venture"
If Altman ends up going back to OpenAI, then shouldn't Sutskever be fired/kicked off the board too?
They may retain him, but his time of being on the board or any board is at an end.
The rest of the board. My god. Why were they there?
"Two explanations" isn't accurate, its more like Ilya gave two examples of Sam not being candid with the board. "Two explanations" makes it sound like two competing explanations. What Ilya gave was two examples of the same problem.
I can't help thinking that Sam Altman's universal popularity with OpenAI staff might be because they all get $10 million each if he comes back and resets everything back to how it was last week.
Given these non-reasons, everyone threatening to quit makes a lot of sense.
We've gone beyond insanity at this point. Just clown show.
This has been tech's most entertaining weekend in the past decade.
Sadly, at the expense of the OpenAI employees and dream, who had something great going for them at the company. Rooting for them.
It's incredibly strange to me that this all happened right after Sam's sister publicly accused him of sexual abuse. It's insane that no one is even acknowledging this could have something to do with it...
For what it's worth: Watching her videos, I'm not sure I necessarily believe her claims - but that position goes against every tenet of the current cultural landscape, so the fact it is being completely ignored is ringing alarm bells for me.
If the sister of the CEO of any other massively hyped bleeding-edge tech company claimed publicly and loudly that she was abused as a very young child, we would hear about it - and the board would be doing damage control trying to eliminate the rot. Why is this case different?
Now we have a situation where all of the current employees have signed this weird loyalty pledge to Sam, which I think will wind up making him untouchable in a sense - they have effectively tied the fate of everyone's job to retaining a potential child rapist as head of the company.
You have to wonder at this point how much of this is the current board members trying to somehow save face.
I can’t imagine their careers after this will be easy…
You are far more charitable than I. (I have no idea why I’m worked up. I don’t work at OAI.) They pulled the dumbest virtual corporate hostage crisis, for ostensibly flimsy reasons, and even has mainstream media wondering whether they’re just crazy. People are just begging to know why, and they seemingly have nothing. It’s incredible. Good lord, if there’s a lesson, it’s that these people should never have been nor should ever be in charge of anything of any importance. (Again, no idea why I’m worked up — I don’t actually care about Sam Altman.) Oh, no, sorry, that’s not the lesson. The lesson is picking board members is probably the most important thing you’ll do. Don’t be cavalier. It will bite you.
An "Independent" board is supposed to be a good thing, right?
Doesn't this clown show demonstrate that if a board has no skin in the game -- apart from reputation -- it has no incentive to keep the company alive?
I think it more shows that the blend of profit/nonprofit was a failure.
I think this was a unique situation due to timing. OpenAI had 9 board members at the beginning of the year, but 3 (Reid Hoffman, Shivon Zilis, and Will Hurd) had to leave for various reasons (e.g. conflict of interest, which IMO should have also taken D'Angelo off the board), and this would have never happened if they were still on the board. So you were left with a rare situation where the board was incredibly immature/inexperienced for the importance of OpenAI.
It has been reported that Altman was working on increasing the size of the board again, so it's reasonable to think that some of the board members saw this as their "now or never" moment, for whatever reason.
The issue was getting nobodies on the board who don’t have experience sitting on boards or working with startups. It’s very evident by how this was handled.
They may well have skin in the game, but not this game. That's exactly why you don't want a board member with a potential conflict of interest.
It shows nonprofit boards wield outsize power and need strict governance, e.g., conflicts of interest, empty board seats.
Adam D'Angelo, once one of the more level-headed Facebook alumni and by far the most experienced OpenAI board member, is now nowhere to be found? Is he hiding out with Sam Trabucco somewhere?
His lawyer likely told him to lie very low. In his basement or something.
MSFT buys ownership of OpenAI's for/capped-profit entities, implements a more typical corporate governance structure, re-instates Altman and Brockman.
OpenAI non-profit continues to exist with a few staff and no IP but billions in cash.
This whole situation is being used to drive the price down to reduce the amount the OpenAI non-profit is left with.
SV don't try the "capped-profit owned by a non-profit" model again for quite some time.
Maybe Altman takes some equity in the new entity.
Hearing news about OpenAI approaching Anthropic for merger talks, it is not too far-fetched to assume that OpenAI will sell off the for-profit arm, which MS has a 49% stake in, to MS itself.
It is impossible for OpenAI to work with or for MS while MS holds all the keys: employees, compute resources, etc. I've come to understand that the $10 billion from MS was mostly Azure credits. And for that, OpenAI gave up a 49% stake (in its capped-profit, wholly owned subsidiary) along with all the technology, source code, and model weights that OpenAI will ever make, in perpetuity.
The deal itself is an amazing coup for MS, almost making the OpenAI people (I think Sam made the deal at the time) look like bumbling fools. Give away your lifetime of work for a measly $10 billion, when you're poised to be worth hundreds of billions?
All these problems are the result of their non-profit holding capped-profit structure, and lack of a clear vision and misleading or misplaced end goals.
700 of the 770 employees back Sam Altman. So all the talk about engineers giving higher importance to "values" and "AI Safety" is moot. Everyone in SV is motivated by money.
Why would MSFT buy the for-profit entity when they already have the employees and IP?
Why would the board endorse the sale?
It’s amazing how every action the board takes (or the new CEO chosen by the board) just makes them look worse.
I’d like to offer my consulting services: my new consulting company will come in, and then whatever you want to do we will tell you not to. We provide immense value by stopping companies like OpenAI from shooting off their foot. And then their other foot. And then one of their hands.