Dismantling the Empire of AI with Karen Hao

Years before OpenAI became a household name, Karen Hao was one of the very first journalists to gain access to the company. What she saw when she did unsettled her. Despite a name that signaled transparency, executives were elusive, the culture secretive. Despite publicly heralding a mission to build AGI, or artificial general intelligence, company leadership couldn’t really say what that meant, or how they defined AGI at all. The seeds of a startup soon to be riven by infighting, rushing to be first to market with a commercial technology and a powerful narrative, and led by an apparently unscrupulous leader, were all right there.

OpenAI deemed Hao’s resulting story so negative it refused to speak with her for three years. More people should have read it, probably. Since then, OpenAI has launched DALL-E and ChatGPT, amassed a world-historic war chest of venture capital, and set the standard for generative AI, the definitive technological product of the 2020s. And Hao has been in the trenches, following along, investigating the company every step of the way. The product of all that reportage, her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is now officially out. It’s excellent.

In fact, I would go so far as to say that if you were looking to understand modern Silicon Valley, the AI boom, and the impact of both on the wider world by reading just one book, that book should be Empire of AI.

So, given that it could not be more in the Blood in the Machine wheelhouse, I invited Hao to join me for the first-ever BITM livestream, which we held yesterday afternoon, to discuss its themes and revelations. It went great, imo. I wasn’t sure how many folks would even drop by, as I’d never tried a livestream here before, but by the end there were hundreds of you in the room, leaving thoughtful comments and questions, stoking a great discussion.

I had intended to maybe make this my first BITM podcast, too, but perhaps I got a little overzealous; the audio quality isn’t always great, and I’d have to edit a new intro, I think, as I was kind of all over the place. HOWEVER — the conversation was so good that in addition to reposting the video above, I transcribed it below, too, so you can read it as a Q+A. This turned out to be a lot of work, but I was glad to go over the chat and the many great points Hao raised again. Forgive any remaining typos. (I also did not transcribe the audience Q+A portion.) As always, all of this work is made possible *100%* by paid supporters of this newsletter. If you value work like this—in-depth interviews with folks like Hao, who are fighting the good fight—please consider becoming a paid subscriber. A million human-generated thanks to all those who already do.

Subscribe now

BLOOD IN THE MACHINE: Okay, greetings and welcome to the very first Blood in the Machine Multimedia Spectacular, with Karen Hao. Karen is a tech journalist extraordinaire. She's been a reporter for the MIT Technology Review and the Wall Street Journal. And you currently write for the Atlantic, as well as other places. You lead the Pulitzer Center's AI Spotlight Series, where you train journalists around the world how to cover AI. And after reading this book, I'm so glad it is you doing the training and not certain other journalists in this ecosystem—we won't name names. So congratulations on all of that. Did I get it all? Did I get all the accolades?

Karen Hao: Yes [laughs].

Okay, perfect. But most importantly, for our purposes today, Karen has written this book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. And it's out this week. And let me just say this bluntly. This is not the interview you want to come to for hardballs and gotchas on Karen, because I just absolutely love this book.

I personally hoped somebody would write this book. I think it's just a real feat of reportage and cultural analysis and economic and social critique. It's a panoramic look, not just at the leading generative AI company, but at the last five to ten years of some of the most important technological, economic, and cultural headwinds in all of tech, as well as how they're impacting, reshaping, and injuring communities around the world.

But if you have time to read one book about AI and its global and political implications, then this is it. Honestly, this is it. And we'll dig into why in just a second. I can't recommend it enough. Okay, end of effusive praise. Karen, thank you so much for joining.

Thank you so much for having me, Brian. And it's an honor to be part of this first livestream. I religiously read all of your issues. And the work that you're doing is also so inspirational for me. So thank you.

Well, thank you. And I look forward to diving on in. So let us do so right now. And let's just start with the title. Okay: this book is called Empire of AI, not, say, “OpenAI: The Company That Changed Everything.” That framing is very explicit, and I think it really does put the entire story to come in quite a useful lens. So why is that? Why is it called Empire of AI? Why does this book about OpenAI begin with this empire framing?

Yeah, so the thing that I have come to realize over reporting on OpenAI and AI for the last seven years is that we need to start using new language to really capture the full scope and magnitude of the economic and political power that these companies like OpenAI now have. And what I eventually concluded was the only real word that captures all of that is empire.

These are new forms of empire, AI companies. And the reason is, in the long history of European colonialism, empires of old had several features to them. First, they laid claim to resources that were not their own, and they would create rules that suggested those resources were in fact their own.

They exploited a lot of labor, as in they didn't pay many workers, or paid them very, very little, for the labor that would fortify the empire. They competed with one another in this kind of moralistic way, where the British Empire would say they were better than the French Empire, or the Dutch Empire would say they were better than the British Empire, and all of this competition ultimately accelerated the extraction and the exploitation, because their empire alone had to be the one at the head of the race leading the world towards modernity and progress.

So the last feature of empire is that they all had civilizing missions. Whether it was rhetoric or whether they truly believed it, they would fly this banner of: we are plundering the world because this is the price of bringing everyone to the future. And empires of AI have all of these features. They are also laying claim to resources that are not their own, like the data and the work of artists, writers, creators. And they also design rules to suggest that it is actually their own.

Oh, it's all just on the internet, and it's fair use under copyright law. And they also exploit a lot of labor around the world, in that they do not pay very well the contractors who are literally working for these companies to clean up their models and to do all the labeling and preparation of the data that goes into their models.

And they are ultimately creating labor-automating technology. So they're exploiting labor on the other end of the AI development process as well, in the deployment of these models—where OpenAI literally defines AGI as highly autonomous systems that outperform humans at most economically valuable work. So their technologies are suppressing the ability of workers to mobilize and demand more rights. And they do it in this aggressive race where they're saying: there's a bad guy, we're the good guy, so let us continue to race ahead and be number one.

And one of the things that I mention in the book is that empires of old were extremely violent, and we do not have that kind of overt violence with empires of AI today. But we need to understand that modern-day empires will look different than empires of old, because there have been 150 years of human rights progress; modern-day empires will take that playbook and move it into what would be acceptable today. One of the things that I don't put in the book itself, but started using as an analogy, is this: if you think about the British East India Company, they were originally a company engaging in mutually beneficial economic activity in India.

And at some point a switch flipped, where they gained enough economic and political leverage that they were able to start acting in their self-interest with absolutely no consequence. And that's when they dramatically evolved into an imperial power.

And they did this with the backing of the British crown. They did it with the resources of the British crown, with the license of the British crown. And we are now at a similar moment. I froze this manuscript in early January, and then the Trump administration came into power. And we are literally now seeing the same inflection point happening, where these companies already have such profound power and are more powerful than most governments around the world.

And previously, really the only government they didn't necessarily have complete power over was the U.S. government. And now we've reached the point where the Trump administration is fully backing these companies, allowing them to do whatever they want, completely frictionless. And so we have also gotten to the point where these companies have turned into imperial powers that can do anything in their self-interest with no material consequence.

Yeah. Well said. And what more profound an example of these imperial tendencies you're talking about than for your book to drop the same week that there's literally a bill getting pushed through reconciliation that says, ‘states can't write any more laws about AI. We're going to ban lawmaking around AI. This is too important.’ It really fits into that definition you were talking about quite well, where they'll justify it. They have this litany of justifications—‘well, it's too complex, we want to legislate at a federal level’—which they’re not going to do, of course. But it's just an excuse, again, as you said, for this to transmute more fully into empire.

And then part two is, last week they were in Saudi Arabia with literal empires, right? Making billion-dollar deals at the same time that they're working to gut any ability of states to engage in the democratic process when it comes to AI. What do you make of all of that?

I mean—and was it last week or the week before that OpenAI also announced OpenAI for Countries, and explicitly said in that announcement, we want to build democratic AI rails for other countries to build on?

So: we are the provider of democracy around the world, and if you sign up for our program, we will, once again, bring you into the future, with all of the wonderful social norms and mores of the United States of America. And I mean, it's exactly what you said. These are all illustrations of the ultimate game that they're playing, which is: they are not here to bring democracy, they are not here to bring modernity to everyone.

They are here to accrue power and to continue accumulating economic and political leverage all around the world, not just in the U.S. They're trying to put the pedal to the metal while Trump is in office, trying to reach escape velocity, so that when he's no longer in office—if it switches back to a Democratic government, to the Democratic Party, and there are different people with different ideas in the White House who might not be so permissive and enabling of their visions—it doesn't matter anymore. They want to make everyone feel that there's an inevitability.

The data centers are already built. The infrastructure is already laid. You cannot reverse that. And so you just have to continue on what seems like an inevitable path.

Yeah, it's truly remarkable. And one way that I think about what's happening with AI is as an amplification of things that have been tried before. You know, the model of something like Uber: trying to do state capture at the municipal level to ram through its taxi-law overrides, trying to get exemptions, trying to expand—and not being profitable for a long time, just relying on reams of investment until it could scale. And OpenAI, to me, seems to be saying, we're going to do all of that and more. Like: can we just blow this all the way up?

I think it's such a smart observation and framing mechanism to drive this book, which really is not just analysis. It is a narrative history. And that's what I love so much about this book: it is structured in the way—it's almost kind of sneaky—that a lot of ‘tech books’ are, where it’s like, okay, we're going to read about Sam Altman and OpenAI’s founding. And you're not heavy-handed—you read about their actions and about what's going on, and they speak for themselves. You can make your own judgments in a lot of cases.

But I think we need to zoom back and look at who is at the center of all of this, at the man who fancies himself the emperor. This new, aspiring smol bean emperor, maybe, is a way to describe him. But it's Sam Altman, of course. So you can't write a book about OpenAI without talking about Sam Altman.

And I think your treatment of him is—I know he tweeted sort of suggestively a couple of weeks ago, like, there are two great books coming out about OpenAI that are really fair, implying that there's another one you shouldn't listen to. Which, to me, is the best endorsement you could hope to get, right? Sam Altman saying, stay away from this book, because somebody's dug a little too deeply.

But you were one of the first journalists to be invited onto OpenAI's premises—before ChatGPT became a phenomenon, a global phenomenon, before this ever-expanding and ever-accelerating AI funding boom. You were at MIT Tech Review at the time, and you were doing a company profile. So let's look back at that era: what were your impressions then?

Because they invited you in or they agreed to do a profile—what was that like?

Yeah. So I went to embed in the company in August 2019. And just a little bit of history of what OpenAI was like then, because it really was not at all in the public consciousness: it was founded in late 2015 as a nonprofit, and then, starting in 2018, there were a couple of things that happened.

The first thing was—it was originally co-founded by Elon Musk and Sam Altman—Elon Musk leaves in 2018. And then at the start of 2019, OpenAI, a nonprofit that had said it would give away all of its research and open-source it for the public good, suddenly withholds a piece of research. At that time, it was the model GPT-2, so two generations before ChatGPT.

Then there's a weird conversion where the nonprofit suddenly has a for-profit nested inside it. OpenAI calls it a “capped profit”—an invented term they made up to describe a for-profit where investors only get a certain amount of the returns; there's a ceiling on the returns they can receive. And then Sam Altman becomes the CEO of this new capped-profit entity. There was just a lot of activity happening, and as a reporter covering cutting-edge AI research at MIT Tech Review, I already kind of had my eyes on OpenAI as a nonprofit, and on the fact that they had put a stake in the ground saying: we're going to do bleeding-edge AI research, and we're going to make ourselves a big deal in this space. And when they started doing all this stuff, I was like, huh, that's kind of interesting.

I thought it might be something we should look into, because they were gaining some influence in shaping how AI research was going. They were gaining some influence in the policy world, because they were very, very early in starting to build relationships in DC. And I thought, if this company is now transforming into something different than what it was originally founded on, and it already has these spheres of influence, then I should look a little bit closer and see how that might have ripple effects in those spheres of influence.

And so I ended up pitching the company on this idea: ‘Hey, you seem to have a lot of changes going on. It seems like you're ripe for a profile.’ They agreed. They wanted to reintroduce themselves to the public. And so they told me, come join us for three days within the company. And then, once we had already made the arrangements, they made another announcement that signaled a big change: they got a new backer, Microsoft, that was going to give OpenAI a billion dollars.

So when I got to the company—my first impression—I went in being like, let me just take what they say at face value: that they are trying to carry out this mission of ensuring AGI benefits all of humanity. Let me take that at face value and ask them to articulate a little bit about how they actually plan to do that.

And so one of the first meetings that I had was an interview with Greg Brockman and Ilya Sutskever, the CTO and chief scientist at the time, where I said: why spend billions of dollars on AGI? What is AGI? Why are we not just putting billions of dollars into AI, or into other types of problems that humanity is facing? And what I quickly noticed was they couldn't define what AGI was. They couldn't articulate a vision of what it was, what it would look like, who it would serve.

They couldn't really tell me how AGI would suddenly do all the things they said it would do, like give higher-quality health care to everyone, or solve climate change. I was like, can you just explain to me, step by step, how it would do that? And they were like, well, the whole point is we don't know. This is a scientific endeavor, but we're going to get there. And it was just bizarre. I would keep trying to poke them on details and get no details.

And then I started noticing that a lot of the undertones at the company were in tension with what they publicly said. So they were actually very, very secretive when I was there. And I was like, wait a minute, but I thought you're supposed to be fully transparent and open.

Open AI, yeah.

Yeah, and they were very, very competitive, even though they had publicly espoused that one of their most important values was collaboration. And the executives told me, like, it is imperative that we are first to AGI. Our mission falls apart and our purpose falls apart if we are not first.

And I kind of probed them a lot on this. Well, how can you be first and also be collaborative? And how can you be first and also be transparent? And also, why are you trying to be first? I don't fully understand.

And yeah, it was just, again, a complete lack of articulated answers. And so, after that experience, I started interviewing a lot more people. I probably interviewed more than a dozen employees and executives when I was within the company—sanctioned interviews, arranged by the communications team.

And then I did a bunch more interviews, around two dozen, that were not sanctioned by the communications team, just trying to understand what was actually happening in the organization. And essentially what I wrote in the piece was: there's a mismatch between what OpenAI projects—which has given it a lot of goodwill and allowed it to accumulate a lot of capital—and what's happening behind closed doors.

And ultimately, OpenAI was not happy with that. They were not happy with the piece that came out, yeah.

It sounds to me like what was happening is that they had been getting by in these communities, in Silicon Valley investor communities, waving around these terms. And it was enough. It was enough to unlock some capital here; it was enough to recruit a new batch of talent. But it deflated once it was exposed to any kind of rigor.

And just to remind everyone, this was five or six years ago. This was before all of the turmoil. Because I do think there's also this narrative around OpenAI that, well, it was all these well-meaning guys who, you know, maybe became corrupted by power, or maybe are just trying to move too fast, or whatnot.

But—and I've done a little bit of work on this myself, in a report I did for the AI Now Institute—it was looking like this from the get-go. And that's one thing I'm curious about in your research. We weren't there; we'll never know what the actual scene was in those earliest days of deal-making with investors. But when they're launching OpenAI with Elon Musk and some of those researchers and guys from Amazon, it's not like a ‘we're going to save the world’ nonprofit coalition. It's some of the heaviest hitters in VC and big tech.

So, the degree to which this drive to build AGI—was it a for-profit mission from the start? Or was there just this idea that this is something big? Something like: if I'm Sam Altman, I can get Elon Musk on board with this, maybe, and then we can build something big and figure out the details later. Because that's sure kind of what it seems like to me.

Yeah, so I think Altman is extremely strategic. He plays the long game. And he's also very, very good at understanding people. So, you know, I opened the book with a quote from one of his blog posts from early on, when he was young, where he says that the best way to motivate people is by building a religion, and that most people eventually realize the best way to build a religion is to build a company. That's not the exact quote—I'm paraphrasing—but I think he understood very early on that the way to attract the things you need to do big things—and he's always fashioned himself as someone who does big things—is to have a rallying ambition.

An ambition that rallies talent, capital, and resources, and that wards off regulation. And so he originally created a nonprofit with this rallying ambition of AGI benefiting humanity. You know, I never found a smoking gun for ‘Did he always think that OpenAI would be a for-profit?’

I can't say for certain. But I know for a fact that he was very careful in the way that he crafted the company to try and get this thing going. And there were two people in particular that this enabled him to recruit.

The first one was Elon Musk. So I open the book, after the prologue, with the way that Altman really tactically reeled Musk in, by mirroring a lot of the language that Musk had around AI. Musk was really concerned at the time about Google. He was really obsessed with Demis Hassabis; he thought Demis Hassabis was, like, evil, and that if AI were developed by Hassabis, it would go totally wrong.

And he was genuinely fearful of this notion that AI would somehow kill all of humanity. And so Altman cultivated a relationship with Musk by repeating all of these things back. Like: ‘I think it would be really concerning if Google were able to develop this. And I think you're right that we should absolutely be thinking about the doomer aspects of this. And it seems to me that in order to compete with Google and make sure that we don't kill ourselves, we need to build some kind of counter to Google.’

He's just leaving the breadcrumbs. I love how you do that. That was one of my favorite parts of the early section of the book, because you're sharing the emails, and you can see Altman mimicking Musk's language, even. And it's a very early sign of something you return to at the end, when we get to the boardroom drama that's now so famous—you just see, like, oh my god, this guy is willing to be quite shameless in how he deploys these tactics to manipulate people.

Yeah, I'm really glad that you picked up on that, because when I was going through the reporting, I was like, ‘whoa.’ It was so close to the things that Musk was saying at the time. So yeah, I pieced a bunch of them together, side by side, to allow the reader to absorb that without necessarily saying explicitly, ‘this is what's happening.’

But yeah, so Musk ended up being the first person that he reeled in. And then Ilya Sutskever was the second key person that he reeled in with the mission. There was also Greg Brockman, but Greg Brockman was already like, I'm on board, Sam, we're buds. You say what you want me to do and I'll do it.

But Ilya Sutskever, Sam Altman cold-emailed. He was working as a research scientist at Google. He was already a millionaire at that point, already an extremely decorated, prestigious researcher within the AI field. And I think Altman knew that he couldn't outcompete Google on salary. He couldn't outcompete on other types of things, because OpenAI didn't really have that many assets at the time—but he could appeal to Sutskever's sense of purpose and mission.

And so Altman ended up orchestrating this dinner—now very much part of the OpenAI founding mythology—where he brings Elon and a bunch of other people, as well as Ilya Sutskever. And Sutskever only said yes to the dinner because he heard that Musk would be there. And then he ended up saying yes to joining OpenAI because of the mission. So you can see how, from the very get-go, Altman knew all the pieces and the chess moves that he had to play to get everyone in formation.

And then, as certain elements of OpenAI's structure, like the nonprofit, started losing relevance and utility, that's when things changed. It's like: okay, we've already got the talent now. We've already got Elon's branding now. So the nonprofit is not as valuable. Let's move to the next phase. Let's put a for-profit in the nonprofit, because now the thing we need is money. And that's the next era. And so OpenAI has very much evolved again and again and again, based on what Altman thinks will be the right vehicle to get what he needs in that moment.

Yeah. And it is really wild to see, throughout the book, the ways that he will talk about AGI and X-risk depending on what room he's in. And I think that's a really important insight that becomes crystallized throughout this book: that, above all, AGI has utility as a means of, again, as you said, leading the mission, or drawing in investors. The idea, this idea and this promise, came first.

I've also talked a lot about that line in the charter, about this promise that they can automate “most economically valuable work.” That's what it comes down to, and that's why so many companies are salivating over it. And then I think another thing that's very interesting is the way that we see them seize on what is OpenAI's core insight, which is just “more,” right?

And once again, I think it's really interesting that this sits inside the empire narrative, because it’s just ‘more, more, more.’ They realize that they can do what no one else has been able to do if they just hook the models up to more compute, more resources, more data.

I love that part of the book, where they realize that their whole secret, the thing they've unlocked, could be written on a grain of rice. It’s scaling—that's it. That's what we're doing. But in order to scale, you need an imperial formation. You have to be able to perform conquest of new areas, new arenas, to get that data, to get that energy. And that's what's truly amazing about the book: we then take three different zoom-outs. One looks at the work of DAIR and the researchers who were fired from Google—including Timnit Gebru and her co-authors—for, you know, writing an AI paper that Google didn't like.

And then we go to the Global South to see where—well, to do an imperial project, you need those raw materials. And so you spend time in Kenya, in Venezuela, in Chile.

And those are really the two other elements. There's the labor that has to go into making sure this data is usable. And then you need data centers running to process it all—you need a lot of energy. So can you talk about those trips you took, why you took them, and why you included them in this narrative?

Yeah, totally. So one of the things that I hit upon quickly as I started reporting on AI was that to really understand how AI interfaces with society, you have to move far away from Silicon Valley. Because if you stay within Silicon Valley and spend all of your time understanding their worldview and how that worldview is being crafted into the technology, you're not going to see how it falls apart when it goes to the places that are most different from what these people, sitting in their pristine ivory towers, are imagining about how the world works.

And so I felt really strongly that if I were going to write a book, I had to have both elements. I had to have the inside-the-halls-of-the-company view, and also go all the way to the places that are the fundamental black-mirror opposite of what they're talking about. The primary places I went were Colombia and Kenya, for looking at workers' rights, and then Chile and Uruguay, for looking at data centers and environmental impacts, and how that's starting to really escalate around the world and take people's life-sustaining resources.

And the trip to Colombia was actually an older trip. I really wanted to go to Venezuela, as you mentioned, because from 2016 to 2018 there was a phenomenon where the entire AI industry started contracting workers in Venezuela, because the country's economy was completely collapsing. All of these data annotation firms that were selling to AI companies—like, hey, we can get you workers to clean your data for self-driving cars; that was the thing at the time—saw an uptick in Venezuelans coming onto their platforms. And they realized that they could really capitalize on this opportunity by getting people who were super educated, who were really well connected to the internet, and who were facing extreme economic duress and therefore willing to work at any price.

And so in 2021, I believe, I ended up going to Colombia—because I couldn't get into Venezuela—to speak with a Venezuelan refugee who had fallen into this data annotation work. And for the first time, I saw through her eyes how these platforms controlled her life. Because of the way they work: there's a website you log on to, anyone can create an account and post jobs, and those jobs disappear within seconds if you don't immediately claim them, because the platforms pit all of these workers against each other to claim the jobs as quickly as possible.

There was an experience she had where she went on a walk and got a notification on her phone that a job had suddenly appeared, and it was for hundreds of dollars. And she just started sprinting back to her apartment to try and claim it before it went away. By the time she got back to the apartment, it was gone. And that could have literally been her income for the month. So after that, she never went on walks during the weekday. She would only allow herself walks on the weekend.

That was pre-generative AI. Now we're in a generative AI era, where the problems have certainly escalated because the demand for this labor has risen. So what does it look like now? I ended up taking a trip to Kenya to understand what it looks like under the current paradigm. And the thing that had changed was the type of harm. Whereas with the woman in Colombia, the harm was structural—how the platform imposed a kind of fragility, an inability to control her life rhythms—now we were shifting to a world where the actual tasks themselves were also psychologically harmful. Because to develop a chatbot that can chat about anything, you need to put a content moderation filter on that chatbot, and to develop that content moderation, you need workers to do the same exact type of content moderation that social media companies made workers do.

And I interviewed workers who had been contracted by OpenAI specifically to develop these content moderation tools, later used for ChatGPT. And their lives were completely upended by the experience. There was one man whose story I highlight in the book, whose mental health completely frayed. And then the people who depended on him left him: his wife up and left, and took her daughter with her. All to produce a technology that then ultimately also took away his brother's economic opportunities.

His brother was a writer, and when ChatGPT came out, he started losing his contracts. So really, I mean, this is how you can see the true logic of empire. All Silicon Valley is always talking about is ‘when AGI comes, when the AGI economy comes’—that's the new phrase they're now saying—it's going to create all of these new opportunities. But actually, the jobs it creates are psychologically harmful, and it's taking away other jobs that are dignified.
And then the other piece of it was, I went to Chile to really understand the environmental and resource extraction that happens. Because we're not just talking about data extraction; we're talking about literal minerals being excavated from the earth to build the data centers and supercomputers for training these models.

And Chile is one of the largest suppliers of the minerals that go into data centers: lithium and copper. And so the Atacama Desert, in the northern part of Chile, has been completely hollowed out, because Chile's economy is very much an extractive economy.

Over the entire course of their own colonial history, they've ended up in a place where this is how the country creates jobs: serving a higher power that extracts their natural resources.

And as that's happening in the north of Chile, there are also all of these data centers flooding into Santiago, in the middle of Chile, because the government also wants investment, and allows these companies to take up their energy and their water to build these data centers. So there are many different degrees of environmental extraction happening in Chile. And I met with activists—indigenous activists in the Atacama Desert, and activists living in the greater Santiago area—who were all facing the same thing: our land is literally being wrung dry, and we are not getting any benefit from this.

The people who are getting the benefit are the elites in our country and the elites in your country. And, like, how do we have a future in this context, where we can't even control our own resources, or be enriched by our own resources? So yeah, those were two aspects of the empire story that I really, really wanted to put into the book.

And they ring out. You spend time with activists and local community members and workers, and it's woven in in a way that makes clear: none of this would be possible without all of this happening in the Global South—all of this labor.

In a dark corollary, I wrote a book about the iPhone almost 10 years ago now. And I went to some of the same exact places, right? I went to Kenya, where app makers were being exploited to help develop the apps for the early iPhone. And then I also went to Chile, where the lithium for batteries is mined. You're just looking at the footprint, right? And I think AI is certainly a good lens, but it's almost like you can see combined in it all of Silicon Valley's tendencies, accelerated—its pursuit of capital, of expansion, of new markets. And it's just striking to hear how little has changed. How [these companies] keep returning to these same places in the Global South to exploit.

I think that's such an important point, because AI is a continuum. It's not like AI suddenly came out of nowhere—it exists on a continuum of everything that Silicon Valley has done before.

It's the same dudes doing this. And one of the things that I really feel, after doing all the reporting for the book, is that this current manifestation of AI—which is really a manifestation of growth-at-all-costs—could not have existed without the groundwork that was laid in the previous era of Silicon Valley, with social media and search and the iPhone. In terms of the surveillance capitalism that started 10, 20 years ago: they wouldn't have had all of that data lying around to train the models without it.

And imposed the sense that it was okay to take it, right? Like, that it's there for the taking.

Yeah, exactly. And they had to really normalize that culturally. You know, a lot of people, as I was reporting the book, kept coming back to this metaphor of a frog in a boiling pot of water. Silicon Valley has very strategically made people become more and more okay with giving all of our agency away.

And so all of the supply chains that they installed, the cultural norms that they installed, the capitalistic logic that they installed previously is now evolving into its most extreme form and has turned from capitalism to colonialism.

To the point where, like—I don't know where they go if this all blows up. Where do you go from here? From promising the machine that can automate all labor, that can be your therapist, that can be your doctor, that can do everything—it's the real everything app!

Right. Like, how long is this narrative going to last before the new one? And what on earth are they going to say in the new one?

Yeah.

Yeah, it's alarming.

It is. It's wild. I do want to leave some time to get to some questions, but I also want to get to this: there's a beautiful epilogue in the book where you look at an alternative model—what might be if this wasn't an imperial project, if this wasn't a late-capitalist project where the incentives were infused in from the get-go.

You know, we might have really useful, really interesting and beautiful projects, like reviving a dying language. Could you go through some of your prescriptions for how we might dismantle the empire of AI, or do things differently?

Yeah. One of the things that I draw on in the epilogue is from a talk that one of my dear friends, Ria Kalluri, an AI researcher from Stanford, gave in 2019 at Queer in AI, which is an organization of queer AI researchers thinking about how to queer the AI process.

And she said, the question to ask is not “Is AI doing good? Is AI benefiting all of humanity?” The question to ask is: “How are you developing and deploying AI to shift power? Are you ultimately fortifying the empire or dismantling the empire?”

And what I say in the epilogue is, ultimately, I don't have all the solutions and prescriptions. I don't even have a good prediction of what the empire is going to look like—it's evolving and moving so quickly. I don't have all the prescriptions for how we're going to dismantle it. But that is the thing that we have to keep asking ourselves, again and again and again: when we are developing AI, when we are deploying AI, when we are resisting certain visions of AI—how is all of that trying to dismantle the empire and return us back to democracy?

And in terms of the visions of AI that do that, that I really advocate for: increasingly, I believe that we cannot have large-scale, growth-at-all-costs AI models. A one-size-fits-all solution is inherently colonial. But what we can do is go back to a world in which AI models are designed and deployed to solve specific challenges that are well scoped and that are things we need. Things like, as you said, revitalizing indigenous languages that the original colonizers tried to eradicate.

Things like integrating more renewable energy into the grid, by better predicting renewable energy generation and optimizing distribution. Things like enabling doctors to give patients better diagnoses—there have been plenty of studies showing that when doctors are given specific models, such as a model for detecting certain types of cancer, they are able to detect the cancer earlier and more accurately, and patients are able to address it faster.

Things like models that help us with drug discovery. None of these are generative AI models. None of them are large language models or large-scale models. And we really need to start better articulating: what exactly are the challenges we need to address? Where are the places where there's an optimization problem, a computational problem, a maximization problem that lends itself to AI? And how do we then empower and enable humans to use AI as tools to accomplish those goals?

And that's ultimately what we need.

