On AGI, mass automation, and what the Luddites really fought against

A week or two ago, a producer for the New York Times’ very popular tech podcast Hard Fork reached out and asked if I’d submit a short critique of the show. The hosts, Casey Newton, a veteran tech journalist who runs the newsletter Platformer, and Kevin Roose, the Times’ tech columnist, are often criticized for their, let’s say, bullish coverage of AI, and the plan was to do a segment where they addressed some of their critics’ concerns.

I’ve reached out to Kevin and Casey before, and encouraged them to host more voices critical of AI—as you’ll hear in the snippet, Kevin joked that he’d only do it if I let him give me a cattle brand that read FEEL THE AGI. (I agreed on the condition that I get to do the same with one that said THE LUDDITES WERE RIGHT.) Maybe that was part of the impetus for this segment, who knows! Interestingly, the producer opened by noting that she felt increasingly alienated from the optimistic vision of AI aired on the show, and was thus moved to push for a segment that put the hosts in dialogue with more skeptical views.

Regardless, it was of course a bit of a trap: critics were asked to record one-minute segments, while the hosts got as long as they needed to respond. It wouldn’t be live, and there would be no opportunity for rebuttals or clarifications; critics wouldn’t be able to defend themselves or their positions. The critique I recorded and sent in ran closer to two minutes, and it got edited down to maybe half that, or less. Not in a way that totally distorted the substance or anything, but it did leave out what I consider some key points, as well as some of the blunter criticisms. (And even after the editing, mine was presented as the “most forceful” critique!)


I knew that all of the above was going to happen, or was likely to, and I still decided to participate, because I do think it’s important to get more critical perspectives on AI onto big, influential mainstream shows like Hard Fork. Plus, obviously, I like to yell about this stuff. But I wanted to try to challenge the hosts and their audience in a respectful way that invited a substantive response and would, you know, actually make it to air.

So I focused my question on AGI—the idea that OpenAI or one of the other tech companies or labs is close to building an artificial general intelligence that can replace most people’s jobs. It’s a framework that’s not always referenced directly on Hard Fork, but it’s an assumption that sits at the center of the show’s approach to AI coverage.

Here’s how it appeared when the show aired last Friday:

Hello, gentlemen. This is Brian Merchant. I’m a tech journalist and author of the book and newsletter, “Blood in the Machine.” And first of all, I want to say that I still want a whole show about the Luddites and why they were right. And I think it’s only fair, because Kevin recently threatened to stick me with a cattle brand that says “Feel the AGI,” which brings me to my concern.

How are you feeling about feeling the AGI right now? Because I worry that this narrative that presents super powerful corporate AI products as inevitable is doing your listeners a disservice. Using the AGI language and frameworks preferred by the AI companies does seem to suggest that you’re aligning with their vision and risks promoting their product roadmap outright.

So when you say, as my future cattle brand reads, that you “feel the AGI,” do you worry that you’re serving this broader sales pitch, encouraging execs and management to embrace AI, often at the expense of working people? OK, thanks, fellas.

For reference, I’ll throw the full text of my submitted comment in the footnotes here.1 The most important omitted bits included a citation of OpenAI’s specific corporate definition of AGI, and the following:

VCs, SoftBank, and big tech are not investing historic amounts of money in this technology because they think it might solve climate change—investors care about the prospect of automating tasks and labor at unprecedented scale. They want new kinds of surveillance, ways to dodge accountability, and dirt-cheap content creation.

A key part of my critique is that by advancing the AGI framework, and calling it that, you risk not only materially advancing OpenAI’s interests but also providing cover for all of the short-term harms AI companies are packaging into their software automation products right now. But I digress. Perhaps the cuts were made for time, or to keep the question focused and more answerable, or to reduce the fury of the hosts, or who knows why. Regardless, this is the kind of thing that happens when you’re producing radio shows and podcasts, and I don’t hold it against anyone.

I also want to give credit where it’s due—Casey and Kevin gave the question a sincere and considered response, and did not just laugh it off or shit on me while I was not in the room to push back. They gave the response I’d mostly anticipated, which boils down to this: regardless of whether AGI is an industry construct, they both, to varying degrees, believe that such a phenomenon is on the horizon, and that accepting as much is crucial if we are to understand and prepare for the disruptions AGI will engender.

There are a few things that I do want to respond to, so I’ll excerpt from their comments at length here:

Casey Newton

Yeah, I mean, I think Brian’s question is a good one, and I understand what he’s saying when he says, look, AGI is an industry term. If you come on your show every week and talk about it, you wind up sounding like you’re just sort of amplifying the industry voice, maybe at the expense of other voices.

I think this is just a tricky thing to navigate. Because, as you said, Kevin, you look at the rate of progress in these systems, and it is exponential. And it does seem like it is important to extrapolate out to as far as you can go and start asking yourself, what kind of world are we going to be living in then?

I think a reason that both of us do that is that we do see so many obvious harms that will come from that world, starting with labor automation, which I know is a huge concern of Brian’s, and which we talk about all the time on this show as maybe one of the primary near-term risks of AI.

So I want to think a bit more about what we can do to signal to folks that we are not just here to amplify the industry voice. But I think the answer to Brian’s question of, why talk about AGI like it’s likely to happen, is that in one form or another, I think both of us just do think we are likely to get powerful systems that can automate a lot of labor.

Kevin Roose

Yes.

Casey Newton

And we would like to explore the consequences of such a world.

Kevin Roose

Totally. And I think it’s actually beneficial for workers to understand the trajectory that these systems are on. They need to know what’s happening and what the executives at these companies are saying about the labor-replacing potential of this technology. I actually read Brian’s book about the Luddites. I thought it was great.

And I think it’s very instructive that the Luddites were not in denial about the power of the technology that was challenging their jobs. They didn’t look at these automated weaving machines and go, oh, that’ll never get more powerful, that’ll never be able to replace us, look at all the stupid mistakes it’s making. They sensed, correctly, that this technology was going to be very useful and allow factories to produce goods much more efficiently. And they said, we don’t like that. We don’t like where this is headed. They were able to project out into the future that they would struggle to compete in that world and take steps to fight against it. So I like to think that if “Hard Fork” had existed in the 1800s, we would have been sort of encouraging people to wake up to the increasing potential automation caused by these factory machines. And I think that’s what we’re doing today.

To Casey, I’d say thanks for taking the note seriously, and I’m glad to hear you’ll be thinking about whether and how the industry voice is getting over-amplified on the show, perhaps cracking the door open for more non-industry voices (not just critics, ofc) too. To Kevin, thanks for the thoughts, and for bringing in the historical context of the Luddites—and for reading the book! But I do have to push some oversized spectacles up the bridge of my nose and say “well actually,” because the last comment I excerpted above is a bit of a mischaracterization of what happened with the Luddites. And I want to address it, because I think it cuts to the core of much of our debate over AGI, too.

I understand what Kevin is getting at: unlike many critics and AI opponents today, who doubt AGI will ever be as powerful as the AI companies say it will be, the Luddites recognized the widespread threat to their livelihoods and acted accordingly. But there’s one important distinction: the Luddites were not concerned primarily with the power of the technology, but rather with the people—the bosses, the factory owners—using it against them.

What the Luddites feared and protested was entirely of the human-made variety: labor exploitation. The Luddites were fighting bosses who used power looms and wide frames as a justification for driving down wages, building factories and pushing workers into them, forcing them to sacrifice their autonomy, to “stand at their command.” The Luddites were not in denial about the technology’s power; it was quite clear to them where that power derived from: the industrialists who owned the machines. That’s who they organized against, not the technology itself. That’s why Luddites wrote letters to factory owners excoriating them for degrading labor conditions and taking work away from weavers, exhorting them to stop using the “obnoxious” machines, and threatening them with sabotage if they didn’t. It’s why Luddites like Gravener Henson spent their daylight hours petitioning Parliament and agitating for worker protections and labor rights. Not because machines were too powerful, but because they were often being used by unscrupulous men in exploitative ways, like facilitating child labor and driving down industry standards.

In fact, the Luddites did say, quite loudly, look at these stupid machines and all the mistakes they’re making. This was one of their chief complaints: that the mechanized looms and wide frames produced low-quality, “fraudulent” goods that drove down prices and ruined the reputations of whole cloth-producing towns. They also said that those machines could never adequately replace them. And they were right!

To this day, our clothes are made not by autonomous machinery but by workers around the world, often in countries with cheaper labor markets. The automation of the Industrial Revolution did not abolish weavers or knitters; it helped bosses deskill them, break their labor power, and immiserate them. Eventually, it dispersed the supply chain of cloth production further around the globe. It did facilitate a broad and lasting change: there would be more and cheaper cloth, produced by workers who were paid much less, to the benefit of a relative handful of industrialists and capitalists. (There’s a justified fear that a similar pattern will play out today, with creative workers not disappearing but having their working conditions and pay crushed by managers turning to AI for writing, art, and music.)

This is why I’m so critical of the AGI/mass automation story. For one thing, if we’re worried about ill impacts on labor, the first place we need to look is at the people who will control and profit from the technology. Warnings about mass job loss have been issued throughout history, typically by the upper classes, and have not yet materialized. The closest thing the Industrial Revolution had to an AGI story was told (and feared) not by the Luddites, but by the proto-computer inventor Charles Babbage and the business theorist Andrew Ure. These men predicted that fully automated factories were coming, and that they would put an end to human labor altogether—a prediction made on behalf of the factory owners, with whom both men sympathized, at a time when factories were falling into public disfavor over their brutal working conditions.

For another thing, AGI is a story that seems to accept as ordained a subjugated hierarchy of labor. If Hard Fork had existed in the 1800s (or perhaps the 1700s, when mechanized machinery was still in more sporadic use), it would indeed have been great if it had warned people not merely that very powerful new technologies were coming, or to brace for inevitable job loss and automation, but that elites were going to use those technologies to justify a regime of production predicated on low-wage work, child labor, brutal working conditions, and so on. After all, this was not inevitable! Entirely other social formations were (and still are) possible! That’s what the Luddites were fighting for—a future that was not enclosed by the factory. A future where working people shared in the fruits of productive technologies, had a say in their deployment, and did not have to accept as predetermined an arrangement in which factory owners held the power to set the terms.

This is the crux of the issue, I think. Presenting AGI as inevitable or elemental lets AI CEOs, and the C-suite execs buying their enterprise contracts, off the hook for what is actually a series of human decisions—decisions ultimately aimed at increasing profits or attracting more investment, usually at the expense of workers. It cedes control of the narrative to the owners and profiteers of this technology, and it empowers their efforts to sell more automation and surveillance software. As I mention in my unedited critique, I think that accepting AGI on industry terms prevents us from really interrogating who the AGI story serves.

Near the end of my bit, Casey made a comment that I think drives home many of these points. He says that he “would just love to see the leftist labor movement work on AI tools that can replace managers. You know? It’s like, right now, it feels like all of this is coming from the top down, but there could be a sort of AI that would work from the bottom up.”

Indeed, I’m sure there are many people, leftists and conservatives alike, who would like to see their managers replaced with an AI. Or, better yet, eliminated altogether. And if there’s any job an AI can do well right now, it’s one that consists of efficiently allocating resources, surveilling workforces, and summarizing output—in other words, AI could absolutely replace management. But replacing management with AI is, of course, not simply a matter of coding a capable managerial AI. Who would approve and facilitate the automation of management, after all? Certainly not managers themselves! Not without an old-fashioned struggle, anyway.

As with any automation technology, the question of who benefits from AI and who is harmed is ultimately a question of power. Who has the power to impose or refuse automation, and who profits from its use? The AGI story obscures those power relations and allows power to concentrate, with minimal challenge, in the hands of those telling the story—the modern-day factory bosses hoping to use automation to slash labor costs and bring workers to heel.

OK OK OK. Longtime readers of this newsletter know I can go on and on about the Luddites, as I think I have ably demonstrated here today. So! I’ll leave it there for now. While I’d still love to have a more robust back-and-forth with Casey and Kevin about AGI, the state of AI, and the historical context of labor automation, they certainly did not have to do a segment hearing out their critics at all. I do appreciate their airing my criticism and responding in kind, and I look forward to continuing the conversation.


1

Hello gentlemen, this is Brian Merchant, tech journalist and author of the book and newsletter BLOOD IN THE MACHINE. First of all, I still want a whole show about the Luddites and why they were right, and I think it’s only fair since Kevin recently threatened to stick me with a cattle brand that says FEEL THE AGI.

Which brings us to my concern: How are you feeling about feeling the AGI right now? Because I think that this narrative that presents super-powerful corporate AI products as inevitable is doing your listeners a disservice.

I think it’s one thing to discuss how impressive technology is being developed and sold, but using the AGI language and frameworks preferred by the AI companies seems to suggest that you’re aligning with their vision, and risks promoting their product roadmap outright.

AGI, after all, is defined by OpenAI as “highly autonomous systems that outperform humans at most economically valuable work.”

And VCs, SoftBank, and big tech are not investing historic amounts of money in this technology because they think it might solve climate change—investors care about the prospect of automating tasks and labor at unprecedented scale. They want new kinds of surveillance, ways to dodge accountability, and dirt-cheap content creation.

When you say, as my future cattle brand reads, you Feel the AGI, do you worry that you’re serving this broader sales pitch? Encouraging execs and management to embrace AI, often at the expense of working people?

I worry that the AI companies’ story — that AGI is in the imminent future — is helping them concentrate power and perpetrate abuses in the present.

I worry that by predicting this future on big tech’s terms, you’re helping to spread the AI industry’s vision of AGI — when you two are in fact better positioned than nearly anyone to interrogate who that vision really serves.

Thanks fellas, I’ll take my brand now.