AI agents are becoming staff. Are you ready?



From pilots to production


AI agents are moving from experiments to infrastructure. They read, decide, and act. Sometimes in response to a human. Sometimes on their own.


The speed is the feature.

The speed is also the risk.


What changes when agents become “infrastructure” is that they stop being a single team’s tool and start becoming a shared dependency. They touch identity, data, collaboration surfaces, and operational workflows. The moment an agent can trigger actions across systems, the question is no longer “does it work,” but “can we control it at scale.” Microsoft’s AI agent adoption guidance is explicit about the sequence: plan for agents, govern and secure agents, build agents, then operate agents. If you flip that order, you typically get adoption before control.




Why agents are harder than apps

Microsoft’s Cloud Adoption Framework describes agents as software that can reason over inputs, use tools, and take actions, backed by instructions, retrieval, actions, and memory. That combination is what makes agents useful, and what makes them harder to govern than traditional apps. You are not just deploying code. You are deploying decision loops with reach into data and systems.

In practice, “tools” means API access, workflow triggers, file operations, identity lookups, ticket automation, and admin actions. “Retrieval” means the agent can pull information from internal sources and synthesize it into outputs and decisions. “Memory” means it can carry context across interactions, which is powerful, but also increases the risk of data being retained, reused, or surfaced in the wrong place if controls are weak. This is why the Cloud Adoption Framework ties agent governance to more than IAM: data governance, observability, security controls, and development practices have to work together.


What breaks first

The first thing that breaks in most organizations is not the model. It is ownership.

Ownership fails because agent identities look deceptively familiar. They get created like “another app identity,” then quickly become operational actors. When an agent can run continuously, it outlives the sprint that created it. When it outlives the sprint, it needs a lifecycle. When it needs a lifecycle, it needs a real accountable party with renewal authority, not just someone who once wrote the code.


Agent sprawl is how you lose control

Microsoft calls it agent sprawl: agents appear in pockets of the business, multiply fast, and outlive the project that created them. Permissions are usually too broad because “it was easier,” and then nobody comes back to reduce them. When something looks suspicious in the logs, there is no accountable person to ask what the agent is supposed to do. That is how small automations become security incidents.

Sprawl is not only “too many agents.” It is too many agents without a shared standard for naming, metadata, risk tiering, and review. It is agents created in different products, by different teams, with different assumptions about data access, operational boundaries, and monitoring. You end up with identities that can authenticate, call APIs, and take actions, while the organization struggles to answer basic questions like what the agent is for, what data it can touch, and whether anyone reviewed its permissions in the last quarter. Microsoft’s guidance explicitly points to the need for organization-wide governance and security practices for agents because team-by-team governance does not hold up when agents proliferate.


Why this is now an executive topic

This is why agent identity is now an executive topic. If you treat agent identities like regular app identities, you end up with the same failure mode as the last decade of service principals, just with more autonomy and more surface area.

The risk curve changes because agents reduce friction. Reduced friction is the point. It also means more actions happen automatically, and more systems can be chained together. A single over-permissioned identity can become a shortcut through otherwise well-designed controls. When something goes wrong, it happens faster and spreads wider, because the agent is built to move work forward without waiting for approvals at each step.


Microsoft’s direction: governance first, then scale

Microsoft’s answer inside the Entra ecosystem is to make agents first-class identities through Entra Agent ID, and to place agent governance into an adoption lifecycle that starts with planning and governance, not tooling. You plan, you govern and secure, you build, you operate. If you invert that sequence, you get scale before control.

The practical shift here is standardization. Instead of every team inventing its own identity pattern, you move toward repeatable identity and governance building blocks. In Entra Agent ID, discovery is also treated as a first-class problem through the Agent Registry, which is described as a centralized metadata repository and discovery mechanism for deployed agents across the organization. That matters because governance starts with knowing what exists, not guessing.


What should an executive mandate look like?

It should be short, enforceable, and built for reality. Reality includes reorgs, vendor changes, busy teams, and projects that end. The mandate should define how agents are allowed to exist inside your tenant, how they are owned, and how they are shut down.


Accountability that survives org changes

Start with accountability that survives org changes. Entra Agent ID introduces administrative relationships that separate technical control from business responsibility, including sponsors. A sponsor is required when creating an agent identity or an agent blueprint. That is a built-in forcing function: every agent must have a human or a group that can explain why it exists and decide when it should be renewed or removed.

This separation matters because technical administrators should not need to be the ones who justify business purpose, and business owners should not need admin permissions to be accountable. Sponsors make lifecycle decisions and keep the “why” alive. Owners handle technical administration. This avoids the classic trap where accountability lives in a Jira ticket or a departed employee’s inbox.
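The sponsor requirement can be made concrete as a small policy check at creation time. This is an illustrative sketch, not Entra Agent ID's actual API: the class, field names, and the rule that sponsor and owner must be distinct are assumptions chosen to show the forcing function, not product behavior.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentIdentity:
    """Minimal agent identity record with split accountability."""
    name: str
    owner: str    # technical administrator: deploys, configures, patches
    sponsor: str  # business-accountable party: justifies, renews, retires

    def __post_init__(self):
        # The forcing function: no sponsor, no identity.
        if not self.sponsor:
            raise ValueError(f"agent '{self.name}' requires a sponsor")
        # Illustrative policy choice: keep the roles in separate hands so
        # accountability does not collapse into whoever holds admin rights.
        if self.sponsor == self.owner:
            raise ValueError(
                f"agent '{self.name}': sponsor and owner must be distinct"
            )
```

Because the check runs in the constructor, an agent identity without an accountable party simply cannot exist, which is the property the mandate is after.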


Lifecycle by default, not by exception

Then require lifecycle by default. Temporary agents should not become permanent residents. Your baseline should include expiry, periodic review, and a registry that lets you discover what exists and why. Microsoft explicitly describes a centralized agent registry as a discovery mechanism with metadata about registered agents. That is how you prevent invisible agents from accumulating invisible privilege.

A registry is not bureaucracy. It is basic operational control. It should contain enough metadata to make decisions quickly, such as purpose, sponsor, environment, data scope, and when it was last reviewed. Without it, “access review” becomes guesswork. With it, renewal becomes a decision instead of a ritual. Microsoft’s Agent Registry is positioned exactly for this discoverability and organization at scale, including guidance on agent metadata and discoverability patterns.
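To show how that metadata turns renewal into a decision, here is a minimal sketch of a registry entry and a quarterly-review query. The field names and the 90-day window are assumptions for illustration; Microsoft's Agent Registry defines its own schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class RegistryEntry:
    """Just enough metadata to make a lifecycle decision quickly."""
    name: str
    purpose: str
    sponsor: str
    environment: str
    data_scope: str
    last_reviewed: date


def reviews_due(entries, today, max_age_days=90):
    """Return entries whose last review is older than the allowed window."""
    cutoff = today - timedelta(days=max_age_days)
    return [e for e in entries if e.last_reviewed < cutoff]
```

With a registry like this, "which agents need review this quarter" is one query against recorded facts instead of a hunt through tickets and tribal memory.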


Boundaries that limit the blast radius

Finally, put boundaries around where agents can run and what they can touch. The Cloud Adoption Framework governance view ties together data governance and compliance, observability, security controls, and development practices. This is not just an IAM problem. It is Purview and data controls, it is monitoring and cost visibility, it is threat protection, and it is safe build patterns.

This is where many organizations underinvest. They focus on authentication and forget that the agent’s value comes from access to data and the ability to act. If you do not pair identity controls with data controls and monitoring, you will not see the difference between “busy agent” and “compromised agent” until it is already expensive. The Cloud Adoption Framework’s governance and security guidance is clear that these controls must be consistent across the organization, not reinvented per team.


Fortytwo view

AI agents should be treated like digital coworkers with a stricter contract than humans. Humans have context and judgment. Agents have speed. Your job is to ensure speed does not outrun intent.

A “stricter contract” means the agent is never allowed to be vague. It has a purpose that fits in one sentence, a sponsor who is accountable, boundaries that are enforced, and a lifecycle that ends unless renewed. That is how you keep autonomy useful instead of risky.


The outcome standard that keeps you honest

If you want one practical executive outcome, aim for this: every agent is identifiable, attributable, reviewable, and containable. If you cannot do those four things, you do not have governance. You have hope.


Identifiable means you can find it and understand what it is for without archaeology.


Attributable means a sponsor exists who can renew or retire it.


Reviewable means permissions and scope are periodically re-approved based on current reality, not last year’s assumptions.


Containable means policies and boundaries limit damage when something goes wrong, so incidents do not become tenant-wide problems.
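The four properties can be expressed as an audit check. This is a sketch against a plain dictionary whose keys are illustrative, not any product schema; real checks would query your registry and policy engine.

```python
def governance_gaps(agent):
    """Check the four outcome properties; return the ones that fail.

    `agent` is a plain dict; the keys below are illustrative assumptions.
    """
    checks = {
        # Findable and explainable without archaeology.
        "identifiable": bool(agent.get("name")) and bool(agent.get("purpose")),
        # A sponsor exists who can renew or retire it.
        "attributable": bool(agent.get("sponsor")),
        # Permissions have been re-approved against current reality.
        "reviewable": agent.get("last_reviewed") is not None,
        # Enforced boundaries limit the blast radius.
        "containable": bool(agent.get("boundaries")),
    }
    return [prop for prop, ok in checks.items() if not ok]
```

An empty result means the agent meets the outcome standard; anything else names exactly where governance is still hope.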

