AI agents like ones built by Microsoft Copilot Studio or MCP servers like GitHub’s server are transforming productivity by acting autonomously with users’ identities and permissions.
“An agent is not just another app—it acts with your permissions, your identity, and your data. That makes it powerful. And dangerous.”
Welcome to the next frontier of enterprise risk: agentic AI.
While the hype around generative AI continues, something quieter—but more consequential—is unfolding underneath: the rise of AI agents that autonomously act on behalf of users. These are not just chatbots; they’re becoming identity-bound entities with access to mailboxes, files, calendars, repositories, and sensitive enterprise APIs.
In this post, we explore two major recent inflection points in agentic AI, why they matter for identity and access security, and what enterprises must do now to prepare for the new security paradigm.
At Microsoft Build 2025, something subtle but groundbreaking was announced: Copilot Studio.
Billed as a low-code platform to build and customize copilots, Copilot Studio gives enterprise users the power to develop autonomous AI agents using a drag-and-drop interface. Since its May 19th release, organizations have embraced it to create task-oriented agents that can:
These agents don’t just “assist”—they act. And to act, they require access to the user’s identity and data.
In short, Microsoft has handed over the power to programmatically impersonate users—in a friendly, intuitive, enterprise-ready interface.
It’s a productivity dream.
And a visibility nightmare.
About a month before Microsoft’s CoPilot studio release, GitHub rolled out support for MCP (Model Context Protocol) server, enabling Claude, VS.Code and other MCP hosts to interact more deeply with code repositories.
With the MCP server, users can spin up agents that:
But there’s a catch: coarse permissions.
As uncovered in recent research by Invariant Labs, malicious third parties can exploit these agents through clever prompt engineering to:
The root cause? Lack of fine-grained authorization and oversight. Agents can hold more permissions than they need—and once provisioned, operate in a semi-autonomous gray zone.
What unites Copilot Studio and GitHub’s MCP server is this:
AI agents are not neutral observers. They inherit identity.
And that makes them part of your access perimeter.
Let’s break down the implications:
If a user’s identity is compromised, the attacker may gain access not just to the user’s data—but to the agents acting on their behalf, often with long-lived tokens and access scopes that go unnoticed.
Even if the agent itself is trusted, a malicious prompt can redirect it:
Prompt injection can turn a helpful bot into a data exfiltration tool or a disinformation vector.
Agents call APIs as the user:
Enterprises rarely have visibility into what these agents are doing—until something breaks.
Most IAM stacks are not built to answer these basic—but now critical—questions:
Unlike traditional SaaS apps that get registered in your enterprise directory with known scopes and logs, many AI agents exist in shadow mode—spun up by users, granted OAuth tokens, and forgotten.
This is not just a logging issue. It’s a risk modeling problem. Because without visibility, you cannot enforce policy. And without enforcement, you cannot contain breach scenarios.
We often focus on what agents can read. But what about what they can write?
An agent given write permissions to a document store, repo, or database can:
This is not hypothetical. AI agents can be tricked into writing toxic outputs, deleting records, or updating configuration settings—especially if APIs are overly permissive or not scoped by role.
In this context, AI becomes not just a security concern—but a data integrity threat.
What Enterprises Must Do Now
This is the dawn of agentic AI—but the identity and security stack must evolve to meet it.
Here’s what organizations can do today:
We are at a turning point. AI agents are becoming first-class actors in enterprise ecosystems. And as they do, identity becomes more than a username and a token—it becomes a network of intelligent extensions operating semi-autonomously.
This is both a security opportunity and a challenge.
The enterprises that succeed in this new reality will not be those who block agents entirely.
They will be those who understand, govern, and secure them with precision.
At WideField, we believe the next wave of identity security will be agent-aware, API-observant, and prompt-protective. We’re building toward that future—one insight at a time.
In an upcoming post, we’ll uncover how AI agents’ access evolves over time—and why their lifecycle is your hidden risk frontier.