Microsoft Copilot Agent Policy Lets Any User Access AI Agents

Shortly after the May 2025 rollout of 107 Copilot Agents to Microsoft 365 tenants, security specialists discovered that the “Data Access” restriction meant to block agent availability was being ignored.

Key Takeaways
1. The “NoUsersCanAccessAgent” policy is bypassed, leaving some Copilot Agents installable.
2. Manual per-agent PowerShell revocations add overhead and risk.
3. Mitigate by auditing inventories, enforcing Conditional Access, and monitoring.

Despite administrators configuring the Copilot Agent Access Policy to disable user access, certain Microsoft-published and third-party agents remain readily installable, potentially exposing sensitive corporate data and workflows to unauthorized use.

When administrators set the global Copilot Agent Access Policy to “NoUsersCanAccessAgent”:

[Screenshot: Microsoft Copilot Agent Policy Flaw]
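In PowerShell terms the intent is roughly the following; the cmdlet and parameter names here are hypothetical placeholders for whatever interface your tenant’s admin tooling exposes, not a confirmed Microsoft cmdlet:

```powershell
# Hypothetical sketch only: apply the tenant-wide Copilot Agent access restriction.
# "Set-CopilotAgentAccessPolicy" and its parameters are illustrative placeholders,
# not confirmed Microsoft cmdlets - substitute the equivalent your admin tooling provides.
Set-CopilotAgentAccessPolicy -Identity Global -AccessLevel "NoUsersCanAccessAgent"
```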

The expectation is that all Copilot Agents are hidden from end-user installation across Teams, Outlook, and other Microsoft 365 services. 

However, testing by cybersecurity researcher Steven Lim shows that agents such as “ExpenseTrackerBot” and “HRQueryAgent” continue to appear in the Copilot panel despite the global policy restriction.

In many organizations, manual intervention is now required:

[Screenshot: Microsoft Copilot Agent Policy Flaw]
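A sketch of the kind of per-agent revocation involved, using the MicrosoftTeams module’s app-centric management cmdlets; verify the exact parameter names and accepted values against the module version installed in your environment, and note that the agent referenced is simply the example cited above:

```powershell
# Sketch: block a single Copilot agent tenant-wide until the access-policy bug is fixed.
# Cmdlets come from the MicrosoftTeams module's app-centric management; confirm the
# parameter names and accepted values against your installed module version.
Connect-MicrosoftTeams

$agentAppId = "<app-id-of-ExpenseTrackerBot>"   # placeholder - look up the real ID in the Teams admin center

# Block the app and clear its availability so no users can install or invoke it.
Update-M365TeamsApp -Id $agentAppId -IsBlocked "Blocked"
Update-M365TeamsApp -Id $agentAppId -AppAssignmentType "NoOne"
```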

This workaround must be run per agent and per tenant, introducing operational overhead and a risk of oversight in large deployments. For external publisher agents, similar manual revocation is necessary, further complicating lifecycle management.

Unauthorized access to AI-driven agents can lead to:

  • Data exfiltration via agents such as “ExportDataAgent” or “SearchFileAgent”, which can query SharePoint or OneDrive content beyond their intended scope.
  • Execution of custom RPA workflows through agents like “AutoInvoiceProcessor” without formal change control or audit logging.
  • Compliance violations if unapproved AI models process sensitive PII or regulated data.
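Much of this activity does leave a trail in the Microsoft 365 unified audit log, so it can at least be detected after the fact. A minimal check using Exchange Online PowerShell, assuming the tenant records Copilot activity under the CopilotInteraction record type:

```powershell
# Pull recent Copilot interaction records to spot unexpected agent usage.
# Assumes Exchange Online PowerShell and that the tenant logs Copilot activity
# under the "CopilotInteraction" record type (verify in Microsoft Purview Audit).
Connect-ExchangeOnline

$records = Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) `
                                  -EndDate (Get-Date) `
                                  -RecordType CopilotInteraction `
                                  -ResultSize 5000

# Group by user to highlight accounts invoking agents unexpectedly often.
$records | Group-Object UserIds | Sort-Object Count -Descending |
    Select-Object Count, Name | Format-Table -AutoSize
```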

Mitigations

To mitigate these risks, M365 administrators should:

Run a weekly discovery script to detect any agents bypassing the global policy:

[Screenshot: Microsoft Copilot Agent Policy Flaw]
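A minimal sketch of such a discovery script, again using the MicrosoftTeams app-centric management cmdlets; the property names used for filtering (IsBlocked, AvailableTo, DisplayName) are assumptions, so adapt them to the objects your module version actually returns:

```powershell
# Weekly discovery sketch: flag agents that remain available to users even though the
# global access policy should have hidden them. Property names (IsBlocked, AvailableTo,
# DisplayName) are assumptions - adjust them to what Get-AllM365TeamsApps returns.
Connect-MicrosoftTeams

$exposed = Get-AllM365TeamsApps |
    Where-Object { $_.IsBlocked -ne "Blocked" -and $_.AvailableTo -ne "NoOne" }

if ($exposed) {
    $exposed | Select-Object DisplayName, Id |
        Export-Csv -Path ".\ExposedCopilotAgents-$(Get-Date -Format yyyyMMdd).csv" -NoTypeInformation
    Write-Warning "$($exposed.Count) agent(s) are still installable despite the global policy."
}
```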

Integrate Azure AD Conditional Access to require MFA or device compliance before any Copilot Agent can be installed, and feed agent invocation logs into existing monitoring and alerting pipelines.
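One way to layer that on, sketched with the Microsoft Graph PowerShell SDK; the application ID is a placeholder, since the enterprise application that represents Copilot agent access varies by tenant, and starting in report-only mode avoids locking users out while the policy is validated:

```powershell
# Sketch: require MFA and a compliant device for the app surface used to install or
# run Copilot agents. The application ID below is a placeholder - replace it with the
# enterprise application that represents Copilot/agent access in your tenant.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName   = "Require MFA + compliant device for Copilot agents"
    state         = "enabledForReportingButNotEnforced"   # report-only while validating
    conditions    = @{
        clientAppTypes = @("all")
        users          = @{ includeUsers = @("All") }
        applications   = @{ includeApplications = @("<copilot-agent-app-id>") }
    }
    grantControls = @{
        operator        = "AND"
        builtInControls = @("mfa", "compliantDevice")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```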

Finally, report policy enforcement failures to Microsoft and track the resolution of identified bugs via the Service Health Dashboard.

As AI agents become integral to productivity, it is critical that access policies designed to govern them actually function as intended.

Administrators must proactively audit, monitor, and enforce controls to prevent inadvertent exposure of enterprise data and preserve compliance.
