Nuclear Power for AI; AI for Nuclear Power?

Artificial intelligence (AI) is everywhere these days, and the nuclear power industry is no exception. The benefits are many, including the potential to run power plant operations more efficiently, identify potential anomalies, and proactively predict when equipment might break down. But some question whether deploying AI in nuclear plants will be a match made in heaven or a combustible relationship.

The U.S. Department of Energy’s Argonne National Laboratory is using an AI tool called the Parameter-Free Reasoning Operator for Automated Identification and Diagnosis (PRO-AID), which is designed to leverage generative AI and large language models to handle real-time monitoring and diagnostics. PRO-AID alerts staff to any issues that arise and explains what is happening.

The lab declined to comment on future plans for PRO-AID and whether any safeguards have been implemented to ensure its proper use.

If a U.S. nuclear power plant wants to deploy AI in safety-related applications, it must seek approval from the Nuclear Regulatory Commission (NRC), “or document how it determines the use would have no more than a minimal effect on events covered by the plant’s safety analysis,” according to NRC Public Affairs Officer Christine Saah Nazer.

“While our existing regulations and guidance were developed long before AI systems were available, our regulatory framework ensures any modifications or changes, including those involving AI, do not adversely affect the safety and security of the facilities,” Saah Nazer said. She added that licensees are inspected to ensure they remain in compliance with NRC regulations.

To date, the agency has not received any requests for AI use “in NRC-regulated activities,” Saah Nazer said.

In addition to predictive maintenance, AI has also been used in mining plant performance data “to understand core dynamics for more accurate nuclear fuel reload planning,” Saah Nazer said.

In another use case, utility Pacific Gas and Electric (PG&E) announced in late 2024 a deal with AI startup Atomic Canyon to deploy its “Neutron Enterprise” tool at the Diablo Canyon Power Plant, touting it as the first on-site generative AI deployment at a U.S. nuclear power plant.

Diablo Canyon, which is slated for decommissioning at the end of the decade, will use the tool to improve efficiency in the document search and retrieval process. The goal is to help the plant’s workers navigate millions of pages of NRC reports and regulations that go back decades.

Although PG&E did not respond to a request for comment, it has previously said Neutron Enterprise will be integrated with Diablo Canyon’s systems using the latest optical character recognition (OCR), retrieval-augmented generation (RAG), and AI-powered search technology. The goal is to reduce document search and retrieval times from hours to seconds. By accessing critical information faster and more reliably, teams will be able to focus on more mission-critical tasks, according to the utility.
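Neither PG&E nor Atomic Canyon has published Neutron Enterprise's internals, but the retrieval step of a RAG pipeline like the one described can be illustrated in miniature. The sketch below, using only the Python standard library and an invented toy corpus standing in for OCR-extracted regulatory pages, ranks pages by TF-IDF cosine similarity to a query; in a full RAG system, the top-ranked pages would then be supplied to a large language model as context for generating an answer.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for OCR-extracted pages of regulatory documents.
# (Illustrative only; the page IDs and text are invented for this sketch.)
PAGES = {
    "page_001": "Licensees must document any modification affecting the safety analysis.",
    "page_002": "Emergency diesel generator maintenance schedule and inspection records.",
    "page_003": "Requests for license amendments are reviewed by the NRC before approval.",
}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def tfidf_vectors(pages):
    """Build a TF-IDF weight vector for each page, plus the shared IDF table."""
    docs = {pid: Counter(tokenize(text)) for pid, text in pages.items()}
    n = len(docs)
    df = Counter()  # document frequency of each term
    for counts in docs.values():
        df.update(counts.keys())
    idf = {term: math.log(n / df[term]) + 1.0 for term in df}
    vectors = {pid: {t: c * idf[t] for t, c in counts.items()}
               for pid, counts in docs.items()}
    return vectors, idf

def retrieve(query, pages, top_k=1):
    """Return the top_k page IDs ranked by cosine similarity to the query."""
    vectors, idf = tfidf_vectors(pages)
    qvec = {t: c * idf.get(t, 0.0) for t, c in Counter(tokenize(query)).items()}

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    ranked = sorted(pages, key=lambda pid: cosine(qvec, vectors[pid]), reverse=True)
    return ranked[:top_k]

# The retrieved pages would be passed to an LLM as grounding context.
print(retrieve("how does the NRC review license amendment requests", PAGES))
```

Real deployments replace the TF-IDF scoring with dense vector embeddings and an approximate-nearest-neighbor index, which scale far better over millions of pages, but the retrieve-then-generate structure is the same.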

Expect to see AI play a greater role in nuclear operations in the future, said Simay Akar, a senior member of the IEEE. “With increasing pressure for cleaner, more efficient, and dependable energy, the pressure to adopt AI simply keeps mounting,” she said.

Yet, Akar said of nuclear power, “There is no room for ‘oops’ and vigilance is paramount. We are dealing with things like overreliance on AI models we don’t fully understand, bad results from bad data, or the system behaving in an unexpected way under an extraordinary situation. Even small mistakes in such a sensitive setting have massive consequences.”

Then there is the issue of security. “AI software, especially cloud-connected or drawing from a vast amount of external information, could bring with it new vulnerabilities if we’re not very careful,” she noted.

For AI systems to work safely in a nuclear environment, they need to be transparent and explainable and put through rigorous testing, Akar said. Implementation of AI “must take place in tandem with regulators,” she added. “Regulators must be at the table day one so that these technologies are subject to the same high level of safety and reliability requirements as every other technology within the nuclear sector.”

Tamara Kneese, director of the Climate, Technology, and Justice Program at the non-profit Data & Society Research Institute, agreed, expressing concern about nuclear power plants using AI without regulatory scrutiny.

“What we’re seeing play out in the federal government generally is this idea you can just apply AI to something and trust it’s giving you usable results,” Kneese said, adding that this kind of “trial-and-error methodology” should not be used in nuclear facilities “because of the extreme risk” they can pose.

Kneese said she can understand the desire to integrate AI into safety operations, and the hope it will lead to greater efficiencies and a reduction in errors. However, “Nuclear facilities are under very strict regulations for a reason, and if you’re going to start integrating AI into operations in any way, you really need to make sure you know very well where you need human oversight and accountability to ensure things are made safe; and I don’t trust that to happen without regulations and guidance directing that process,” she said.

Regulation of AI technologies in the U.S. nuclear industry may be coming within the next year. “The NRC is actively researching regulatory gaps within the existing framework, focusing on evaluating distinctive facets of AI technologies,” Saah Nazer said.

With Microsoft hoping to revive Three Mile Island and other tech companies eyeing nuclear power to generate electricity to power datacenters (largely for AI apps), one question Kneese wants addressed is how much humans will be in the loop with increasing use of AI in nuclear plants. She envisions scenarios where an AI system doesn’t recognize a potential problem and there are no highly specialized plant workers around to intervene.

“Adding in new AI systems to automate process[es] and reducing human feedback feels like a recipe for disaster,” Kneese said. “I couldn’t say what exactly could go wrong, but it doesn’t make me feel secure.”

Esther Shein is a freelance technology and business writer based in the Boston area.