Hackers Exploit Atlassian’s Model Context Protocol by Submitting a Malicious Support Ticket

A sophisticated attack vector targeting Atlassian’s Model Context Protocol (MCP) that allows external threat actors to gain privileged access to internal systems through malicious support tickets. 

The attack, dubbed “Living off AI,” exploits the trust boundary between external users submitting support requests and internal users processing them with AI-powered tools.

Prompt Injection Through Support Workflows

Cato Networks reported that the attack leverages a critical vulnerability in how AI systems process untrusted external input without proper isolation. 

When a threat actor submits a malicious support ticket containing a prompt injection payload, the attack sequence unfolds as follows: 

  • Internal user or automated system invokes an MCP-connected AI action to process the ticket.
  • Embedded malicious instructions execute with the internal user’s privileges.
  • Sensitive data gets exfiltrated back to the attacker’s ticket or manipulated within internal systems.
Prompt injection via Jira Service Management

The proof-of-concept demonstration specifically targets Atlassian’s MCP server integration with Jira Service Management (JSM)

Researchers found that support engineers unknowingly become proxies for the attack when they use AI tools like Claude Sonnet to summarize or process tickets. 

The prompt injection payload executes automatically, bypassing traditional security boundaries and granting attackers access to internal tenant data that should remain protected.

Using reconnaissance techniques such as Google dorking with queries like site:atlassian.net/servicedesk inurl:portal, attackers can identify numerous potential targets across organizations using Atlassian service portals.

The threat extends beyond simple data exfiltration to sophisticated lateral movement capabilities. 

In a demonstrated scenario, attackers exploit compromised partner accounts with scoped JSM access to submit enhancement requests containing crafted MCP prompts. 

These prompts can silently modify all open Jira issues by adding comments with links to attacker-controlled fake Confluence pages designed to mimic internal R&D documents.

When product managers respond using MCP auto-response templates, the injection triggers automatically. 

Subsequently, when QA engineers click the malicious links, command-and-control (C2) connections establish in the background, enabling malware deployment, credential extraction, and extensive lateral movement throughout the organization’s infrastructure.

GenAI Security Controls

Organizations can implement protective measures through GenAI security controls integrated with Cloud Access Security Broker (CASB) solutions. 

Recommended protections include defining security rules to inspect and control AI tool usage across enterprise environments, creating policies to block or alert on remote MCP tool calls involving create, add, or edit operations, and enforcing least privilege principles on AI-driven actions.

Additional safeguards involve implementing real-time detection of suspicious prompt usage, maintaining comprehensive audit logs of MCP activity across networks, and establishing proper prompt isolation and context control mechanisms to prevent untrusted input from executing with elevated privileges.

This “Living off AI” attack pattern represents a fundamental security challenge as AI integration expands across enterprise workflows, requiring immediate attention to design flaws in AI-human interaction boundaries.

Are you from SOC/DFIR Teams! - Interact with malware in the sandbox and find related IOCs. - Request 14-day free trial

The post Hackers Exploit Atlassian’s Model Context Protocol by Submitting a Malicious Support Ticket appeared first on Cyber Security News.