Google finds state-sponsored hackers use AI at ‘all stages’ of attack cycle 

A new report from Google found evidence that state-sponsored hacking groups have leveraged its AI tool Gemini at nearly every stage of the cyber attack cycle.

The research underscores how AI tools have matured in their offensive cyber capabilities, even as it doesn’t reveal novel or paradigm-shifting uses of the technology.

John Hultquist, chief analyst at Google’s Threat Intelligence Group, told CyberScoop that many countries still appear to be experimenting with AI tools, determining where they best fit into the attack chain and provide more benefit than friction.

“Nobody’s got everything completely worked out,” Hultquist said. “They’re all trying to figure this out and that goes for attacks on AI, too.”

But the report also reveals that frontier AI models can build speed, scale and sophistication into a myriad of hacking tasks, and state-sponsored hacking groups are taking advantage.

Gemini was a useful, dynamic and convenient tool for many tasks, helping threat actors in a variety of ways. In nearly all cases, Google’s reporting suggests that state-sponsored actors relied on Gemini as one tool among many, using it for specific purposes such as automating routine processes, conducting research or reconnaissance, and experimenting with malware.

One North Korean group used it to synthesize open-source intelligence about job roles and salary information at cybersecurity and defense companies. Another North Korean group consulted it “multiple days a week” for technical support, using it to troubleshoot problems and generate new malware code when they got stuck during an operation. One Iranian APT used Gemini to “significantly augment reconnaissance” techniques against targeted victims. Groups tied to China, Russia, Iran and North Korea also used Gemini to create fake articles, personas and other assets for information operations.

“What’s so interesting about this capability is it’s going to have an effect across the entire intrusion cycle,” Hultquist said.

The report documents no instances of state groups using Gemini to automate large portions of a cyber attack, as in the Chinese government-backed campaign identified by Anthropic last year. This suggests threat actors may still be struggling to implement fully or mostly automated hacks using AI.

Hultquist said that some state groups, particularly those focused on espionage, may not find the speed and scale advantages of agentic AI useful if it results in louder, more detectable operations. In fact, while state actors continue to experiment with AI models, he believes these developments will, on average, help smaller cybercriminal outfits more than state-sponsored hackers.

But that could change in the future. Frontier AI companies like Anthropic and cybersecurity startups like XBOW have already developed models with powerful defensive cybersecurity capabilities in vulnerability scanning, reconnaissance and automation. Foreign governments with similar technology could use those same features for offensive hacking, as Chinese actors did with Claude before being discovered.

In December, the UK AI Security Institute’s inaugural report on frontier AI trends found that AI capabilities are improving rapidly across all tested domains, and particularly in cybersecurity.

And the gap between frontier and free, open-source models is shrinking. According to the institute, open-source AI models now reach similar capabilities within four to eight months of a frontier model’s release.

“The duration of cyber tasks that AI systems can complete without human direction is also rising steeply, from less than 10 minutes in early 2023 to over an hour by mid-2025,” the institute said in the report.
