Today on Decoder, I’m talking with Ronan Farrow, one of the biggest stars of investigative reporting working today. He broke the Harvey Weinstein story, among many, many others. And just last week, he and co-author Andrew Marantz published an incredible deep-dive feature in The New Yorker about OpenAI CEO Sam Altman, his trustworthiness, and the rise of OpenAI itself.
One note before we go any further here — The New Yorker published that story and Ronan and I had this conversation before we knew the full extent of the attacks on Altman’s home, so you won’t hear us talk about that directly. But just to say it: I think violence of any kind is unacceptable, these attacks on Sam were unacceptable, and the kind of helplessness that people feel, which leads to this kind of violence, is itself unacceptable and worth a lot more scrutiny from both the industry and our political leaders. I hope that’s clear.
All that said, there is a lot swirling around Altman that’s fair game for rigorous reporting — the kind of reporting Ronan and Andrew set out to do. Thanks to the popularity of ChatGPT, Altman has emerged as the most visible figurehead of the AI industry, having turned a onetime nonprofit research lab into an almost trillion-dollar private company in just a few years. But the myth of Altman is deeply conflicted, defined equally by his obvious dealmaking ability and his reported tendency to… well, lie to everyone around him.
The story is over 17,000 words long, and it contains arguably the definitive account of what happened in 2023 when the OpenAI board of directors very suddenly fired Altman over his alleged lying, only for him to be almost instantly rehired. It’s also a deep dive into Altman’s personal life, his investments, his courting of Middle Eastern money, and his own reflections on his past behavior and character traits that led one source to say he was “unconstrained by the truth.” I really suggest you read the entire story; I suspect it will be referenced for many years to come.
Ronan talked to Altman many times over the 18 months he spent reporting this piece, and so one of the main things I was curious about was whether he sensed any change in Altman over that time. After all, a lot has happened in AI, in tech, and in the world over the past year and a half.
You’ll hear Ronan talk about that very directly, as well as his sense that people have become much more willing to talk about Altman’s ability to stretch the truth. People are starting to wonder, out loud and on the record, whether the behavior of people like Altman is concerning, not just for AI or tech but also for society’s collective future.
Okay: Ronan Farrow on Sam Altman, AI, and the truth. Here we go.
This interview has been lightly edited for length and clarity.
Ronan Farrow, you’re an investigative reporter and contributor to The New Yorker. Welcome to Decoder.
Glad to be here. Thanks for having me.
I am very excited to talk to you. You just wrote a big piece for The New Yorker. It’s a profile of Sam Altman and, sort of with it, OpenAI. My read of it is that, as all great features do, it validates, with rigorous reporting, a lot of feelings people have had about Sam Altman for a very long time. You’ve obviously published it, you’ve gotten reactions to it. How are you feeling about it right now?
Well, I’ve been heartened, actually, by the extent to which it’s broken through at a time when the attention economy is so kind of schizophrenic and shallow. This is a story that, in my view, affects all of us. And when I spend a year and a half of my life, and my co-author, Andrew Marantz, also spends that time of his, really trying to do something forensic and meticulous, it’s always because I feel like there are bigger structural issues that affect people beyond the individual and company at the heart of the story.
Sam Altman, against the backdrop of Silicon Valley hype culture and startups that balloon to massive valuations based on promises that may or may not come to pass in the future, and an increasing embrace of a founder culture that thinks telling different groups different, conflicting things is a feature, not a bug… Even against that backdrop, Sam Altman is an extraordinary case, where everyone in Silicon Valley who expects those things can’t stop talking about this question of his trustworthiness and his honesty.
We knew already that he was fired over some version of allegations of dishonesty or serial alleged lying. But extraordinarily, despite the fact that there’s been wonderful reporting (Keach Hagey has done great work on this; Karen Hao has done great work on this), there really wasn’t a definitive understanding of the actual alleged proof points, or of the reasons why those have stayed out of the public eye.
So point number one is that I feel heartened by the fact that some of those gaps in our public knowledge, and even in the knowledge of Silicon Valley insiders, have now been filled in a little bit more, and that some of the reasons those gaps existed have been explained a little bit more.
We report on cases where people inside this company really felt like things were covered up or deliberately not documented. One of the new things in this story is a pivotal law firm investigation by WilmerHale, which is obviously a fancy, credible, big law firm that did the investigations of Enron and WorldCom, investigations that, by the way, were voluminous, with hundreds of pages published. WilmerHale did this investigation that the board members who had fired Altman demanded as a condition of their departure when he came back and got rid of them. And extraordinarily — in the eyes of many legal experts I spoke to, and shockingly in the eyes of many people in this company — they kept it out of writing. All that ever emerged from it was an 800-word press release from OpenAI that described what happened as a breakdown in trust. And we confirmed that this was kept to oral briefings.
There are cases where, for instance, a board member seemingly wants to vote against the conversion from OpenAI’s original nonprofit form into a for-profit entity, and it’s recorded as an abstention. There’s a lawyer in the meeting saying, “Well, that could trigger too much scrutiny.” And the person who wants to vote against gets recorded, to all appearances, as an abstention. There’s a factual dispute; OpenAI claims otherwise, as you might imagine. These are all cases where you have a company that, by its own account, holds our future in its hands.
The safety stakes are so acute, and they have not gone away. This is the reason the company was founded as a nonprofit focused on safety. And yet things were being obscured in a way that credible people around it found less than professional. You couple that with a backdrop where there’s so little political appetite for meaningful regulation, and I think it’s a very combustible situation.
The point for me is not just that Sam Altman deserves these questions so acutely. It’s also that any of these guys in this field, and many of the key figures, exhibit, if not this particular idiosyncratic, alleged lying-all-the-time trait, then certainly some degree of a race-to-the-bottom mentality, in which even the people who were safetyists have watered down those commitments, and an everyone-is-in-a-race posture.
I think, as we look at recent leaks out of Anthropic, there’s a person in this piece who poses the question of who should have their finger on the button. The answer is: if we don’t have meaningful oversight, I think we have to be asking serious questions and trying to surface as much information as we can about all of these guys. So I’ve been heartened by what feels like a meaningful conversation about that, or the beginnings of one.
The reason I asked it that way is that you worked on this for a year and a half. You talked to, I believe, 100 people with your co-author, Andrew. That’s a long time for a story to cook. I think about the last year and a half in AI in particular, and boy, have the attitudes and values of all these characters shifted very quickly.
Maybe none more so than Sam Altman, who started off as the default winner because they had released ChatGPT and everyone thought it would just take over for Google. And then Google responded, and it seemed to surprise them that Google would try to protect its business, maybe one of the best businesses in tech history, if not business history. Anthropic decided that it would focus on the enterprise. It seems to be taking a commanding lead there because enterprise use of AI is so high.
Now, OpenAI is refocusing its product away from “we’re going to take on Google” to Codex, and they’re going to take on the enterprise. I just can’t quite tell whether, during the course of your reporting over the last year and a half, the characters you were talking to changed. Their attitudes and their values, did those change?
Yes. I think, first of all, that the critique that is explored in this piece, coming from many people inside these companies at this point — that this is an industry that, despite the existential stakes, is descending into something of a race to the bottom on safety, where speed is trumping everything else — has grown more acute. And I think those concerns have been more validated as the past year and a half has transpired. Simultaneously, attitudes about Sam Altman specifically have changed. When we started talking to sources for this, people were really, really leery of being quoted about this and going on the record about this.
By the end of our reporting, you have a body of reporting where people are talking about this very openly and explicitly, and you have board members saying things like, “He’s a pathological liar. He’s a sociopath.” There’s a range of perspectives, from “This is dangerous given the safety stakes, and we need leaders of this tech who have elevated integrity” all the way up to “Forget the safety stakes; this is behavior that is untenable for any executive of any major company, and it just creates too much dysfunction.”
So the conversation has become much more explicit in a way that feels maybe belated, but is heartening in one sense. And Sam Altman, to his credit… The piece is very fair, and even generous, I would say, to Sam. This is not the kind of piece where there was a lot of “gotcha” stuff. I spent many, many hours on the phone with him as we were finishing this up and really heard him out.
As you can imagine, in a piece like this, not everything makes it in. Some of those cuts in this one were because I was listening sincerely. And if Sam was actually making an argument that I felt held water (that something, even if it was true, could be sensationalist), I really erred on the side of keeping this forensic and measured. So I think that is being received rightly, and I just hope this factual record that’s accumulated over this period of time can trigger a more bracing conversation about the need for oversight.
That’s actually my next question. I think you talked to Sam a dozen times over the course of reporting this story. Again, that’s a lot of conversations over a long period of time. Did you think Sam changed over that year and a half?
Yeah. I think one of the most interesting subplots in this is that Sam Altman is also talking about this trait more explicitly than he has in the past. The posture of Sam in this piece is not like, “There’s nothing there, this is not true; I don’t know what you’re talking about.” The posture he has is that he says this is attributable to a people-pleasing tendency and a kind of conflict aversion. He’s acknowledging that it caused problems for him, particularly earlier in his career.
He is saying, “Well, I am moving past that, or have to some extent moved past that.” I think what’s really interesting to me is the contingent of people we talked to who were not just safety advocates, not just the underlying technical researchers who very often tend to have these acute safety concerns, but also pragmatic, big-time investors. They are backers of Sam’s who, in some cases, even played a key role in his coming back after the firing. Now, on this question of whether he’s reformed, and to what extent that change is meaningful, they say, “Well, we gave him the benefit of the doubt at the time.”
I’m thinking of one prominent investor in particular who said, “But since then, it seems clear he wasn’t taken out behind the woodshed” (that was the phrase this one used) “to the extent that was necessary.” As a result, it seems like this is now a stable trait. We’re seeing this in an ongoing way. You can look at some of OpenAI’s biggest business relationships and the way they kind of carry the weight of that mistrust in an ongoing way.
Like with Microsoft: you talk to executives over there, and they have really acute and recently catalyzed concerns. There’s this instance where, on the same day OpenAI is reaffirming its exclusivity with Microsoft with respect to underlying stateless AI models, it’s also announcing a new deal with Amazon that’s to do with selling enterprise solutions for building AI agents that are stateful, meaning they have memory.
You talk to Microsoft people, and they’re like, “That’s not possible to do without interacting with the underlying stuff that we have an exclusivity deal on.” So that’s just one of many small examples where this trait has tendrils into ongoing business activity all the time and is a subject of active concern within OpenAI’s board, within its executive suite, and in the wider tech community.
You keep saying that “trait.” There’s a line in the story that to me feels like the thesis, and it’s a description of the trait you’re describing. It’s that “Sam Altman is unconstrained by the truth” and that he has “two traits that are almost never seen in the same person: the first is a strong desire to please people, to be liked in any given interaction, and the second is an almost sociopathic lack of concern for the consequences that may come from deceiving someone.”
I have to tell you, I read that sentence 500 times, and I tried to imagine always saying what people wanted to hear in order to be liked, and then not being upset when they felt lied to. And I could not make my emotional state understand how those things can exist in the same person. You’ve talked to Sam a lot, and you’ve talked to people who have experienced these traits. How does he do it?
Yeah. It’s interesting on a human level because I do approach bodies of reporting like this with a real focus on humanizing whoever’s at the heart of it and seeking deep understanding and empathy. When I kind of tried to approach this from a more human standpoint and say, “Hey, this would be devastating for me if so many people that I’ve worked with said I’m a pathological liar. How do you carry that weight? How do you talk about that in therapy? What is the story you tell yourself about that?”
I got some of what were, in my view, West Coast platitudes about that, like, “Yeah, I like breathwork.” But not a lot of the kind of bracing sense of deep self-confrontation that I think a lot of us would probably have if we were seeing this kind of feedback about our behavior and our treatment of people.
I think that actually goes to the broader answer to the question, too. Sam asserts that this trait has caused problems, but also that it’s part of what has empowered him to accelerate OpenAI’s growth so much: that he is able to unite and please different groups of people. He’s constantly convincing all of these conflicting constituencies that what they care about is what he cares about. And that can be a really useful skill for a founder. I’ve talked to investors who then say, “Well, maybe it’s a less useful skill for actually running a company because it sows so much discord.”
But on the Sam personal side, I think the thing that I pick up on when I try to connect on a human level is the apparent lack of deeper confrontation, reflection, and self-accountability, which also informs that superpower or liability for a company preparing for an IPO.
One former board member named Sue Yoon, who’s on the record in the piece, says he is able to really believe the shifting reality of his sales pitches, to the point of “fecklessness” (that’s the phrase she uses), or is able to convince himself of them. Or at least, if he doesn’t believe them, he is able to bluster through them without meaningful self-doubt.
I think the thing that you’re talking about, where you or I might have a moment of freezing up or checking ourselves as we’re saying the thing and realizing that it conflicts with the other assurance we’ve made, just doesn’t happen with him. And there’s a wider Silicon Valley hype culture and founder culture that kind of embraces that.
It’s funny. The Verge is built on what amounts to a product reviews program. It’s the heart of what we do here. I hold a trillion dollars of Apple R&D once a year and say, “This phone is a seven.” And it sort of legitimizes all of our reporting and our opinions elsewhere. We have an evaluative function, and we spend so much time just looking at the AI products and saying, “Do they work?”
That feels like it’s missing from a lot of the conversation about AI as it is today. There’s endless conversation about what it might be able to do, how dangerous it might be. And then you drill down, and you say, “Does it actually do the thing it’s supposed to do today?” In some cases, the answer is yes. But in many, many cases, the answer is no.
That feels like it connects to the hype culture you’re describing and also to the sense that, well, if you say it’s going to do something and it doesn’t, and someone feels bad, that’s fine because we’re onto the next thing. That’s in the past. And in AI in particular, Sam is so good at making the grand promises.
Just this week, I think the same day as your story was published, OpenAI released a policy document that said we have to rethink the social contract and have AI efficiency stipends from the government. This is a grand promise about how some technology might shape the future of the world and how we live, and all of that relies on the technology working in exactly the way that maybe it’s promised to work or it should work.
Did you ever find Sam doubting AI turning into AGI or superintelligence or getting to the finish line? Because that’s the thing that I wonder about the most. Is there any reflection about whether this core technology can do all of the things that they say it can do?
It’s exactly the right set of questions. There are credible technologists that we spoke to in this body of reporting — and obviously Sam Altman is not one; he’s a business person — who say the way that Sam talks about the timeline for this tech is just way off. There are blog posts going back a few years where Sam is saying, “We’ve already reached the event horizon. AGI is basically here. Superintelligence is around the corner. We’re going to be on other planets. We’re going to be curing all forms of cancer.” Truly, I’m not embellishing.
The cancer one is actually interesting: Sam is hyping up the person who theoretically cured their dog’s cancer with ChatGPT, and that simply did not happen. They talked to ChatGPT, and that helped them guide some researchers who actually did the work, but the one-to-one story, that this tool cured this dog, is not actually what happened.
I’m glad you raised that point because I want to go on to this bigger point about when both the potential and the risk of the technology are really going to vest. But it’s worth mentioning these little asides that constantly happen from Sam Altman, where he seems to embody this trait all over again.
I mean, to use the example of the WilmerHale report: we had this information that had been kept out of writing, and we wanted to know whether the oral briefing along the way was given to anyone other than the two board members Sam helped install to oversee it. And he said, “Yeah, yeah, no, I believe it was given to everyone who joined the board after.” And we have a person with direct knowledge of the situation saying that is simply a lie. And that really does appear to be the case, that it is untrue. If we want to be generous, perhaps he was misinformed.
There are a lot of these casual assurances. And I use that example in part because that’s a great example of dissembling, let’s call it, that can have real consequences legally. I don’t need to tell you, under Delaware corporate law, if this company IPOs, shareholders could, under section 220, complain about this and demand underlying documentation. There are already board members saying things like, “Well, wait a minute, that briefing should have happened.”
So these things that seem to jump out of his mouth all the time, they can have real market-moving effects, real effects for OpenAI. Bringing it back to the kind of utopian hype language that’s resurfaced, I think not coincidentally, on the day this piece came out: it also affects all of us, because the dangers are so acute with respect to the way it’s being deployed in weaponry, the way it’s being used to identify chemical warfare agents, the disinformation potential, and because of the way in which the utopian hype does seem to be prompting a lot of credible economists to say, “This has all the signs of a bubble.”
Even Sam Altman has said, “Someone’s going to lose a lot of money here.” That could really crater a lot of American and global economic growth if there’s a true puncturing of a bubble involving all of these companies doing deals with each other, going all in on AI while borrowing so heavily. So what Sam Altman says matters. And think about the preponderance of people around him (you mentioned we talked to more than a hundred; it was actually well over a hundred). We had a conversation at the finish line that was like, “Would it be too petty to say it’s this much higher number?” And we were like, “Yeah, let’s downplay it. We’ll play it cool.” But there were so many people, and such a significant majority of them were saying, “This is a concern.” And I think that’s all why.
Let me ask you about that number. As you mentioned, people got more and more open with the concerns as time went on. It feels like the pressure around the bubble — the race to win, to pay off all this investment, to emerge as the winner, to IPO — has changed a lot of attitudes. It certainly created more pressure on Sam and OpenAI.
We published a story this week just about the vibes of OpenAI. Your story is part of it, but massive staffing changes in the executive ranks at OpenAI — people are coming and going. The researchers are all headed away, largely to Anthropic, which I think is really interesting. You can just see this company is feeling the pressure, and it is responding to that pressure in some way.
But then I think back to Sam getting fired. This is just memorable for me. It’s memorable for no one else, but I took a source call at the Bronx Zoo at 7PM on a Friday, and it was someone saying they’re going to try to get Sam back. And then we spent the weekend chasing that story down. And I was just like, “I’m at the zoo. What do you want me to do here?” And the answer was, “Stay on the phone.” Well, my daughter was like, “Get off the phone.” And that’s what I did.
It was ride or die to get Sam back. That company was like, “No, we’re not letting the board fire Sam Altman.” The investors are quoted in your piece: “We went to war,” I think, is the Thrive Capital position, “to get Sam back.” Microsoft went to war to get Sam back. Now it’s later, and everyone’s like, “We’re going to IPO. We got to the finish line. We got our guy back, and he’s going to get us to the finish line. We’re concerned he’s a liar.”
Why was it a war to get him back then? Because it doesn’t seem like anything has actually changed. You talk about the memos that Ilya Sutskever and [Anthropic CEO] Dario Amodei kept while they were contemporaries of Sam Altman. Ilya’s number one concern was that Sam is a liar.
None of that has changed. So why was it war to bring him back then? And now that we’re at the finish line, it seems like all the concerns are out in the open.
Well, first of all, sorry to your daughter and my partner and all the other people around the journalists.
[Laughs] It was quite a weekend for everyone.
Yeah, it does take over one’s life, and this story has definitely taken over mine over the last period of time. It actually relates to this theme of journalism and access to information, I think. The investors who went to war for Sam all played roles in making sure he came back, and the board that had been specifically designed to protect a nonprofit’s mission, to put safety over growth and to fire an executive if they couldn’t be trusted with that, went away. That was all because, yes, the market incentives were there, right?
Sam was able to convince people, “Well, the company’s just going to fall apart.” But the reason he had support was a lack of information. Those investors, in many cases, now say, “I look back, and I think I should have had more concerns if I had known fully what the claims were and what the concerns were.”
Not all of them; opinions vary, and we quote a range of opinions, but there are significant ones who were acting on very partial info. The board that fired Sam was, in the words of one person who used to be on the board, “very JV,” and they fumbled the ball hard. And we document the underlying complaints, and people can decide for themselves whether it accumulates into the kind of urgent concern they felt it was, but that argument and that information were not being presented.
They received what some of them now acknowledge as bad legal advice. You’ll remember the quote, and probably a lot of your listeners and viewers will remember it too: a “lack of candor.” That was what it was reduced to, and then they essentially wouldn’t take calls.
They would not take calls. I’m sure you tried. Everyone I know tried, and it got to the point where, as a journalist, you’re not supposed to give your sources advice, but I was like, “This will go away if you don’t start explaining yourself.”
And that’s what happened. Forget journalists. You had Satya Nadella saying, “What the hell happened? I can’t get anyone to explain to me.” And that’s the company’s major financial backer. And then you have Satya calling [LinkedIn co-founder] Reid Hoffman and Reid calling around and saying, “I don’t know what the fuck happened.”
They’re understandably, in that void of information, looking for the traditional non-AI indicators that would justify such an urgent, sudden firing. Like, okay, was it sex crimes? Was it embezzlement? And the entire subtle, but I think meaningful, argument that this tech is different, and that this kind of steady accumulation of smaller betrayals could have meaningful stakes both for this business and maybe for the world, was largely lost. So capitalist incentives won out, but the people who made that happen were also not always operating with complete information.
I want to just ask about the “what everyone thought it was” aspect for one moment, because I certainly saw the news, and I said, “Oh, something bad must have happened.” You’ve done a lot of #MeToo reporting, famously. You broke the Harvey Weinstein story.
You spent a lot of time reporting on these claims that I think you decided were ultimately unfounded: that Altman sexually assaulted minors or hired sex workers, or even murdered an OpenAI whistleblower. I mean, you are the person who can report this stuff the most rigorously. Did you decide that it came to nothing?
Well, look, I’m not in the business of saying something has come to nothing. What I can say is I spent months looking at these claims and did not find corroboration for them. And it was striking to me that these guys, these companies that have so much power over our futures, truly are spending a disproportionate amount of their time and resources in a childish mud fight.
One executive describes it as “Shakespearean.” The amount of private investigator money and the opposition dossiers being compiled is relentless. And the unfortunate thing is that the kind of salacious stuff, which gets parroted by Sam’s competitors, is just assumed fact, right? There’s this allegation that he pursues underage boys, and at many cocktail parties in Silicon Valley, you hear this. On the conference circuit, I’ve heard it just repeated by credible, prominent executives: “Everybody knows this is a fact.”
In the piece, I talk about where this comes from, the various vectors by which it’s transmitted. Elon Musk and his associates are seemingly pushing really hardcore dossiers that kind of amount to nothing. They’re vaporous when you actually start to look at the underlying claims. The sad thing is that it really obscures the more evidence-based critiques here that I think really deserve urgent oversight and consideration.
The other theme that really comes through in the story is almost a sense of fear: that Sam has so many friends — he’s invested in so many companies, from his previous role as president of Y Combinator to his personal investing, some of which are in direct conflict with his role as CEO of OpenAI — and that there’s silence around him.
It struck me as I was reading one line in particular. You describe Ilya Sutskever’s memos, and they’re just out in Silicon Valley. Everyone calls them the Ilya memos. But there’s even silence around that. They’re passed around, but they’re not discussed. Where do you think that comes from? Is it fear? Is it a desire to get angel investment? Where does that come from?
I think it’s a lot of cowardice, I’ll be honest. Having reported on national security stories where the sources are whistleblowers who stand to lose everything and face prosecution, they still do the right thing and talk about things to create accountability. I’ve worked on the sex crimes-related stories that you mentioned, where sources are deeply traumatized and fear a very personal kind of retribution.
In many cases around this beat, you’re dealing with people with their own profile and power. They’re either famous people themselves or they’re surrounded by famous people. They have robust business lives. In my view, it is actually very low exposure for them to talk about this stuff. And thankfully, the needle is moving as we talked about earlier, and people are now talking more.
But for such a long time, people really just shut up about it, because I think the Silicon Valley culture is just so ruthlessly self-interested and ruthlessly business- and growth-oriented. So I think this afflicts even some of the people who were involved in firing Sam, where you saw in the days after that, yes, one factor that led to him coming back and to the firing of the old board members was that he rallied confused investors to his cause.
But another is that so many other people around it who had the concerns and voiced them urgently just folded like napkins and changed their tune the moment they saw the wind was blowing the other way, and they wanted in on the profit train.
It’s pretty dark, honestly, from my standpoint as a reporter.
Some of those people are Mira Murati, who, I believe, for 20 minutes was the new CEO of OpenAI. She was then replaced. It was a very complicated dynamic, and obviously, Sam came back. The other person is Ilya Sutskever, who was one of the votes to remove Sam, and then he changed his mind, or at least said he changed his mind, and then he left to start his own company. Do you know what made him change his mind? Was it just money?
Well, and to be clear, I’m not singling those two out. There are other board members who were involved in the firing who also fell very silent after. I think it’s like a wider collective problem. These are, in some cases, people who had the moral fiber to sound alarms and take radical action, and that is to be commended. And that’s how you assure accountability. That could have helped a lot of people who are affected by this technology. It could have helped an industry to remain more meaningfully safety-focused.
But dealing with whistleblowers and people who try to prompt that accountability a lot, you also see that it takes the fiber of sticking it out and standing by your convictions. And this industry is truly full of people who just do not stand by their convictions.
Even though they think that they’re building a digital God that will somehow either eliminate all labor or create more labor, or something will happen.
Well, that’s the thing. So the culture of not standing by your convictions and all ethical concerns falling by the wayside the moment there’s any heat or anything that could threaten your own standing in the business is maybe all well and good to some extent for business-as-usual companies that are making whatever kind of widget.
But these are also the same people who are saying, “This could literally kill us all.” And again, you don’t have to go to the Terminator Skynet extreme. There is a set of risks that are already materializing. It is real, and they are right to warn about that. But you’d have to have someone else armchair-psychologize how those two things can live in the same people: they’re sounding the urgent warnings, they’re maybe putting a toe in and trying to do something, and then they’re just folding and falling silent.
That is precisely why you can have these kinds of instances of things being kept out of writing and things being swept under the rug, and no one talking about it this openly for years after the fact.
The natural, responsible party here would not be the CEOs of these companies; it would be governments. In the United States, maybe it’s state governments, maybe it’s the federal government.
Certainly, these companies all want to be global. There are lots of global implications here. I watched OpenAI, Google, and Anthropic all goad the Biden administration into releasing an AI executive order. It was pretty toothless in the end. It just said they had to talk about what their models were capable of and release some safety testing. And then they all backed Trump, and Trump came in and wiped all that out and said, “We have to be competitive. It’s a free-for-all. Go for it.”
At the same time, they’re all trying to raise funds from Middle Eastern countries that have lots of oil money and want to change their economies. Those are politicians, too. I feel like politicians should definitely understand someone is talking out of both sides of their mouth, and they’re not going to be too upset if someone’s disappointed in the end. But the politicians are getting taken for a ride, too. Why do you think that is?
This is really, I think, why the piece matters in my view and why it was worth spending all this time and detail on. We are in an environment where the systems that, as you say, should be providing oversight are just hollowed out. That’s a post-Citizens United America, where the flow of money is so unfettered, and it’s a particular concentration of that problem around AI, where there are these PACs that are proliferating and flooding money into quashing meaningful regulation at both a state and a federal level.
You have [OpenAI co-founder] Greg Brockman, Sam’s second in command, directly contributing in a major way to a couple of those. It leads to a situation where there really is capture of legislators and potential regulators, and that is a hard spiral to get out of. The sad thing is, I think that there are simple policy moves, some of which are being trialed elsewhere in the world, that would help with some of these accountability problems.
You could have more mandatory pre-deployment safety testing, which is something that is already happening in Europe for frontier models. You could have more stringent written public record requirements for the kinds of internal investigations where we saw things being kept out of writing in this case. You could have a more robust set of national security review mechanisms for the kinds of Middle Eastern infrastructure ambitions that Sam Altman was pushing.
As you say, he was doing this bait and switch with the Biden administration, saying, “Regulate us, regulate us,” and helping them craft an executive order, and then the moment Trump gets in, truly in the very first days, just going no holds barred, “Let’s accelerate and let’s build a massive data center campus in Abu Dhabi.” You could have, this is a really simple one, like whistleblower protections. There is no federal statute protecting AI company employees who disclose these kinds of safety concerns that are being aired in this piece.
We have cases like Jan Leike, who was a senior safety guy at OpenAI leading superalignment at the company. He writes to the board, essentially whistleblower material, saying the company is going off the rails on its safety mission. Those are the kinds of people who should actually have an oversight body they can go to, and they should have explicit statutory protections of the kinds we see in other sectors. It would be simple to replicate a Sarbanes-Oxley-style regime here.
I think that despite how acute the problem is of Silicon Valley assuming control of all of the levers of power, and despite how hollowed out some of these institutions that might provide oversight and guardrails are, I still do believe in the basic math of democracy and of self-interested politicians. And there is more and more polling data emerging that a majority of Americans think that the concerns, questions, or risks of AI currently outweigh the benefits.
So the flood of money into politics from AI: it’s within all of our power to make that a question mark for politicians. When Americans go to vote, they should be scrutinizing whether the people they vote for, especially if they are uncritical and anti-regulation given all these concerns, are bankrolled by big tech special interests. So I think if people can read pieces like this, listen to podcasts like this, and care enough to think critically about their decisions as voters, there is a real opportunity to generate a constituency in Washington of representatives who keep an eye on this industry and force oversight.
That might be one of the most optimistic things I’ve ever heard anyone say about the current AI industry. I appreciate it. I’m obsessed with the polling that you’re talking about. There’s a lot of it now. It’s all pretty consistent, and it looks like the more young people, in particular, are exposed to AI, the more distrustful and angry they are about it. That’s the valence of all the polling. And I look at that, and I think, well, yeah, smart politicians would just run against that. They would just say, “We’re going to hold big tech accountable.”
Then I think about the past 20 years, a politician saying they’re going to hold big tech accountable, and I’m struggling to find even one moment of big tech being held accountable. The only thing that makes me think this might be different is, well, you actually have to build the data centers, and you can vote against that, and you can petition against that, and you can protest against that.
I think there’s a politician who just had their house shot at because they voted for a data center. The tension is reaching, I would call it, a fever pitch. You’ve described the insularity of Silicon Valley. This is a closed ecosystem. It feels like they think they can run the world. They’re putting a ton of money into politics, and they’re running up against the reality that people don’t love the products, which doesn’t give them a lot of cover. The more they use the products, the more upset they are, and the politicians are beginning to see there are real consequences to supporting the tech industry over the people they represent.
You’ve talked to so many people. Do you think it is possible for the tech industry to learn the lesson that is right in front of them?
You say it feels like they think they can run the world without accountability. I don’t even think that needs the “feels like” qualifier. I mean, you look at the language Peter Thiel is using, it’s explicit. Of course, that’s an extreme example. And Sam Altman, though he is close with and informed by Thiel’s ideology to some extent, is a very different kind of person who might sound different and more measured up to a point.
But I do think the wider ideology that you get from Thiel is basically: we’re done with democracy, we don’t need it anymore, we have so much that we just want to build our own little bunkers. We’re not dealing with the Carnegies or the Rockefellers anymore, who were bad guys but felt they needed to participate in a social contract and build things for people. There’s a real nihilism that’s set in.
And I do think it’s just been a mutually reinforcing spiral in recent American history of moguls and private companies acquiring super governmental power while democratic institutions that might hold them accountable are hollowed out. I do not feel optimistic about the idea that those guys might just wake up one day and think, “Huh, actually maybe we do need to participate in society and help build things for people.”
I mean, you look at the microcosmic example of The Giving Pledge, where there was a moment when it was seemly to be charitable, and that moment is now past and even ridiculed. That is part of the broader problem of lack of accountability, which I think can only be solved extrinsically. That has to be voters mobilizing and resurrecting the power of government oversight. And you’re exactly right to say that the main vector through which people could maybe achieve that is local. It’s to do with where infrastructure is being built.
You mentioned some of the white-hot tension around this that’s leading to violence and threats, and obviously, nobody should be violent or threatening. And I’m also not here to make specific policy recommendations, other than to just present some of the policy steps that seem basic and are working elsewhere in the world, or that have worked in other sectors. I’m not here to say which of those should be executed and how.
I do think something needs to happen, and it needs to be external, not just trusting these companies. Because right now we have a situation where the companies that are developing the tech, that are best equipped to understand the risks, and that are in fact the ones warning us of the risks are also the ones with nothing but incentive to go fast and ignore those risks. And you just don’t have anything to counterbalance that. So whatever form reforms might take in terms of specifics, something has to run up against that. And I do still return to that optimism that the people still matter.
I generally buy your argument. Let me just make the one tiny counterargument that I think I can articulate. The other thing that could happen outside of the ballot box is that the bubble pops, right? That not all these companies get to the finish line, and that there isn’t product-market fit for consumer AI applications. And again, I don’t quite see it yet, but I’m a consumer tech reviewer, and maybe I just have higher standards than everybody else.
There is product-market fit in the business world, right? Having a bunch of AI agents write a bunch of software seems to be a real market for these tools. And you can read the arguments from these companies saying, “We’ve solved coding, and that means we can solve anything. If we can make software, we can solve any problems.”
I think there are real limits to the things software can do. Coding is great in the business world, but software can’t solve every problem in reality. And they have to get there; they’ve got to finish the job. Maybe not everybody makes it to the finish line, and there is a crash, and this bubble pops, and maybe one of these companies, OpenAI or Anthropic or xAI, fails, and all this investment goes away.
Do you think that would affect this? Actually, let me ask the first question first. OpenAI is right on the cusp of an IPO. There are a lot of doubts about Sam as a leader. Do you think they’re going to make it to the finish line?
I’m not going to prognosticate, but I think you raise an important point, which is that market incentives do matter internally to Silicon Valley, and the precarity of the current bubble dynamics does stand to interrupt what is, again, potentially, according to critics, a race to the bottom on safety.
I would also add to that: if you look at historical precedents where there’s a similarly impenetrable-seeming set of market incentives and potentially deleterious effects for the public, there’s impact litigation. And you see that as an area of concern for them lately. Sam Altman is out there this week endorsing legislation that would shield AI companies from some of the types of liability that OpenAI has been exposed to in wrongful death suits, for instance. Of course, there’s a desire to have that shield from liability.
I think that the courts can still be a meaningful mechanism, and it’ll be really interesting to see how these suits shape up. You already saw, for instance, the class-action suit, of which I and many, many other authors I know are members, against Anthropic for their use of books that were under copyright. If there are smart legal minds and plaintiffs who care, as we’ve historically seen in cases from big tobacco to big energy, you can also get some guardrails and some incentives to slow down, be careful, or protect people that way.
It does feel like the entire cost structure of the AI industry hangs on a very, very charitable interpretation of fair use. That doesn’t come up enough. The cost structure of these companies could spiral out of control if they have to pay you and everyone else whose work they’ve taken, but it’s inconvenient to think about, so we just don’t think about it. Right next to that: all of these products are now running at a loss. Like, today, they’re all running at a loss. They’re burning more money than they can make. At some point, they have to flip the switch.
Sam is a businessman. As you’ve mentioned several times, he’s not a technologist; he’s a business person. Do you think he’s ready to flip the switch and say, “We’re going to make a dollar”? Because when I ask, “Do you think OpenAI is going to make it?” I’m really asking about the moment when they’ve got to make a dollar. And so far, Sam has made all of his dollars by asking other people for their money instead of having his companies make money.
Well, that’s a big lingering question for Silicon Valley, for investors, for the public. You see some statements and moves out of OpenAI that seem to evince a kind of panic about that. Shutting down Sora, shutting down some ancillary projects, trying to zero in on the core product. But then on the other hand, you still see, at the same time, tons of mission creep, right? Even a small example — it’s obviously not core to their business — is the TBPN acquisition.
By the way, right as we were reaching the finish line and fact-checking, the company facing this kind of journalistic scrutiny acquires a platform where they can have more direct control over the conversation. I think that there are a lot of investors who are concerned, based on the conversations I’ve had, that this problem of promising all things to all people also extends to this lack of focus in the core business model. And I mean, you’re closer to the kind of prognosticating and watching the market than I am probably. I’ll leave you and the listeners to be the judge of whether they think OpenAI can flip the switch.
Well, I asked the question because you’ve got a quote in the piece from a senior Microsoft executive, and it is that “Sam’s legacy might end up more similar to Bernie Madoff or Sam Bankman-Fried” than to Steve Jobs. That is quite a comparison. What’d you make of that comparison?
I think that’s a paraphrase. The Steve Jobs part wasn’t part of the quote. But there’s an interesting sort of sobriety to it because it’s phrased as like, “I think there’s a small but real chance that he winds up being an SBF or a Madoff-level scammer.” Meaning, to my mind, not that Sam is being accused of those specific types of fraud or crimes, but that the degree of dissembling and deception from Sam may have a chance of ultimately being remembered at that scale.
Yeah, I think what’s most striking about that quote, honestly, is that you call around at Microsoft and you don’t get a, “That’s crazy. We’ve never heard that.” You get a lot of, “Yep, a lot of people here think that,” which is remarkable. And I think it does go to these nuts-and-bolts business questions.
One investor told me, for instance, in light of the way in which this trait has persisted in the years after the firing (and I also thought this was an interesting, sober thought), that it’s not necessarily that Sam should be at the absolute bottom of the list, like the lowest of the low in terms of the people who absolutely must not build this technology, for what it’s worth. There are several people who said Elon Musk is that person. But this trait puts him maybe toward the bottom of the list of people who should build AGI, beneath several other leading figures in this field.
So I thought that was an interesting appraisal, and that’s the kind of thinking I think that you get from the real pragmatists who maybe aren’t buying into the safety concerns as much. They’re just growth-oriented, and they think that OpenAI now has a problem with Sam Altman.
The Microsoft piece is really interesting. That company thought they were on top of the world. That they had made this investment and they were going to leapfrog everyone, especially and most importantly, Google, and get back into the good graces of consumers. The level to which they feel burned by this adventure — this is a very soberly run company — I don’t think can be overstated.
You mentioned the characters and the personality traits. I want to end here with a question from our listeners. I said on our other show, The Vergecast, that I was going to be talking to you, and I said, “If you have questions for Ronan about this story, let me know.” So we have one here that I think ties in neatly with what you’re describing. I’m just going to read it to you:
“How do the justifications for the bad behavior and cutthroat actions of Altman and other AI leaders differ from the justifications Ronan has heard from other high-profile leaders in politics and media? Don’t they all justify their actions by saying this is how the world gets changed? If I don’t do this, someone else will?”
Yeah, there’s a lot of that going around. I would say what is distinctive to AI is that the existential stakes being so uniquely high means both that the statements of risk are extreme, right? You have Sam Altman saying, “This could be lights out for all of us.” And also, critics might say, that the mania the questioner is referring to is extreme, right?
The thing that Sam accused Elon of, on the record, was that maybe he wants to save humanity, but only if it’s him. There’s the kind of ego component of wanting to win, which is a framing Sam uses all the time, and the sense that this is one for the history books, that this could change everything. So, therefore, even above and beyond the “you’ve got to break a few eggs” mindset of most Silicon Valley enterprises, there is, in the minds of some figures leading AI, I think, a complete rationalization for any and all fallout.
And forget breaking eggs. I think a lot of the underlying safety researchers would say potentially risking breaking the country, breaking the world, and breaking millions of people whose jobs and safety hang in the balance — that’s what’s unique about it. That’s where I close, reflecting on this body of reporting, really believing this is about more than Sam Altman. This is about an industry that is unconstrained and a spiraling problem of America being unable to constrain it.
Yeah. Well, we had some optimism there, but I think that’s a good place to leave it.
[Laughs] End on a downbeat.
Of course. That’s every great story, really. The Musk-Altman trial is upcoming. I think we’re going to learn a lot more here. I suspect I will want to talk to you again. Ronan Farrow, thank you so much for being on Decoder.
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!