AI Risks for Democracy, the Economy, and Civil Rights

Since January’s launch of DeepSeek-R1, the Chinese open-source rival to OpenAI’s o1 reasoning model, the metaphor of the ‘AI race’ has dominated the geopolitical discussion about artificial intelligence (AI) technology. The debate on the real impact of AI on society, and on the best ways for AI to benefit society as a whole, has faded into the background.

However, these topics were at the heart of the conference session “Risks from AI to the Economy and Society” at the recent American Association for the Advancement of Science annual meeting in Boston.

Tina Eliassi-Rad, a computer science professor from Northeastern University, focused on the risks of AI for democracy. She showed that over the last 10 years there has been a worldwide decline of democracy, notably in civil liberties, the functioning of government, the electoral process, and pluralism. “Democratic backsliding is in part the result of instability in the democratic system,” she said. Regarding the influence of AI, she added, “AI technology that amplifies misleading and false information increases instability in democracy.”

In the context of Franklin D. Roosevelt’s quote that “Democracy cannot succeed unless those who express their choice are prepared to choose wisely,” Eliassi-Rad asked, “Can we choose wisely, in the age of Generative AI, when it is no longer clear whether a text, speech, image, or video is real or artificial?”

AI technologies give rise to a complex feedback loop, she explained. “Algorithms that recommend products and news affect economic processes, political regulations, and societal norms. Those, in turn, affect the algorithms, and this feedback loop goes round and round,” she said.

Another important question is how generative AI influences what people believe, because people make choices based on their beliefs. Said Eliassi-Rad, “A large language model (LLM) is not an expert, even though people often treat it as one. It is a probabilistic model of a knowledge base. It returns what is both probable and likable.” That gives rise to risks ranging from convincing falsehoods to the possibility of jailbreaking, or working around an LLM’s guardrails.
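Eliassi-Rad’s point that an LLM “returns what is both probable and likable” can be illustrated with a toy next-token sampler. The vocabulary, scores, and function below are invented for illustration only; real models work over tens of thousands of tokens, but the principle is the same: the model samples from a probability distribution rather than consulting a store of facts.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and hypothetical scores for the next word after
# "The capital of France is" -- the numbers are made up.
vocab = ["Paris", "Lyon", "London", "blue"]
logits = [6.0, 2.0, 1.5, 0.1]

probs = softmax(logits)
# The model does not "know" the answer; it draws a token at random,
# weighted by probability -- so unlikely continuations remain possible.
choice = random.choices(vocab, weights=probs, k=1)[0]
```

Because the output is a weighted draw rather than a lookup, even a low-probability wrong answer can surface, which is one mechanism behind the convincing falsehoods the article describes.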

Furthermore, Eliassi-Rad said, “When people are flooded with relevant information, they are less able to estimate how probable things really are.”

So, what should be the answer? According to Eliassi-Rad, “Regulations, even if they are enacted one day, will not be enough to fend off the risks. . . The real safeguard is education. Learn how generative AI tools work, what they are good for, and what they should never be used for.”

Daron Acemoglu, a professor of economics at the Massachusetts Institute of Technology (MIT) and recipient of the 2024 Nobel Prize in Economics, used a numerical analysis of the effects of automation on the economy to propose a different path for AI development from the current one.

Acemoglu provided data showing that the automation implemented since 1980 thanks to computers, and more recently AI, has led to disappointingly little productivity growth, “nowhere comparable to the rapid growth of what the U.S. and European economies experienced in the 1940s, ’50s, ’60s, and ’70s,” he said.

In analyzing the impact of automation on inflation-adjusted real wages, Acemoglu showed another alarming trend: “About half of the population in the U.S. doesn’t have much of an improvement in their labor market fortune. . . A lot of the inequality that we have experienced over the last 40 years appears to be related to automation. . . This is not an act of nature, but it’s related to the choices tech firms and society make.”

Acemoglu said many AI applications are going in the same direction as previous digitalization: automating away human labor without growth in productivity. In response, he advocated a different AI path from the current one: “machine usefulness,” as opposed to “machine intelligence.”

The current AI path of machine intelligence focuses on machines replacing more and more human cognitive tasks. Acemoglu said, “Machine usefulness, instead, means that machines are going to be at the service of humans in order to amplify expertise, knowledge, and the capabilities of humans. . . It enables workers to perform more sophisticated and new tasks, exactly what you see during periods in which wages and workers’ employment increase.”

We cannot rely on the free market to change the path from “machine intelligence” to “machine usefulness,” said Acemoglu. “We need an institutional response, just like during the 19th century Industrial Revolution.”

After AI’s risks to democracy and the economy, the session turned to current AI revenue models. Cory Doctorow, a science-fiction author, journalist, and digital rights activist, focused on how firms make money with AI and how they derive power from it. “I have been developing a theory that is very serious, with a very funny name,” he said: “enshittification.” Doctorow coined the term several years ago to describe online products and services declining in quality over time.

He explained that, at first, platform companies treat their users well in order to keep their user base growing, but eventually they will exploit those users to benefit their business customers. Finally, they will turn against those business customers to capture all the value for themselves.

However, the impact of this goes far beyond Big Tech. “Once organizations digitize, they can quickly change their business logic,” Doctorow said. “In America today, the majority of hospital nurses are being hired through one of three apps that all bill themselves as ‘Uber for nurses.’ ShiftKey is the market leader, and because America has such a poor privacy environment, ShiftKey is able to, in real time, acquire credit data about nurses before offering them a shift. Nurses who have more credit card debt or longer outstanding debts are offered a lower wage for the same shift as nurses who are more financially independent. This is something that is coming down to nursing, but it’s endemic to other sectors of the gig economy.”

Like Acemoglu, Doctorow warned against having unrealistically high expectations of job automation with AI. “I don’t think AI can do your job. I don’t think AI can do my job. . . Throwing more compute and more data at AI training in the expectation that that will allow it to reason as we do is like breeding horses to get faster and faster with the expectation that one day one of your mares will give birth to a locomotive.

“But just because AI can’t do your job, it does not follow that an AI salesman can’t convince your boss to fire you and replace you with an AI that can’t do your job.”

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.