
Five Eyes on Agentic AI: The Case for a Trusted Identity Registry 


When six allied cyber agencies publish joint guidance and quietly endorse a federated registry of agent identities as the architectural starting point, the agentic AI security conversation has moved past speculation. The question now is how fast enterprises can build the foundation. 


In 2026, the cyber agencies of Australia, the United States, Canada, New Zealand, and the United Kingdom (a.k.a. the Five Eyes) published joint guidance titled “Careful adoption of agentic AI services.” The document was authored by ASD’s ACSC, CISA, the NSA, the Canadian Centre for Cyber Security, NCSC-NZ, and NCSC-UK. It is measured, technically grounded, and notably restrained on vendor framing. 

It is also, when read carefully, a near-complete endorsement of a position we have been arguing at Radiant Logic for the past two years: agentic AI cannot be governed without first solving the identity problem underneath it.

The guidance does not name vendors. It does not need to. The architectural conclusions speak for themselves. 

What the Five Eyes Guidance Says About Agentic AI Identity

Radiant Logic has argued that enterprises now face three distinct identity populations: human users, non-human identities such as service accounts and API credentials, and agentic AI. Each behaves differently. Each requires different controls. Treating them as a single homogeneous user population is the source of most of the trouble we see in the field. 

The Five Eyes guidance arrives at the same destination through different language. It treats agents as a distinct identity class, warns explicitly about static and shared credentials, and observes that “governance mechanisms designed for human actors do not always translate effectively to autonomous AI agents.” 

That sentence dismantles the assumption embedded in most current IAM programs, which is that an agent can be onboarded as if it were a service account or a power user. It cannot. The behavioral surface is too large, the goals too underspecified, and the failure modes too creative. As the document puts it, agents distinguish themselves by “accomplishing underspecified objectives, acting autonomously, following goal-directed behaviours and creating long-term plans.” That is not a service account. That is a new identity class. 

Why Governments Are Calling for a Trusted Agent Identity Registry  

The most important passages in the guidance, from an architectural standpoint, sit in the identity management section. The authors are unambiguous: 

“Construct each agent as a distinct principal, a cryptographically anchored identity with its own unique keys or certificates.” 

“Maintain a trusted registry and bind identities to authorised roles; periodically reconcile the registry against the live set of agents.” 

“Deny access for any agent or cryptographic key that is not present in the trusted registry.” 

This is not visibility. This is not detection. This is a system of record. 

The distinction matters. Shadow AI discovery tools, which scan the environment to surface unsanctioned agents, will be commoditized by every cloud platform vendor within the next eighteen months. They are useful but not defensible. The defensible position is the federated registry that knows which agents are authorized, which humans own them, which non-human identities they consume, and what scope of action they are permitted at any given moment. That is the moat. 
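
To make the deny-by-default posture concrete, here is a minimal sketch of what a trusted registry check might look like. The guidance prescribes the properties, not a schema or an API; the class names, the AgentRecord fields, and the reconcile helper below are illustrative assumptions, not anything drawn from the document.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentRecord:
    """One registry entry: a distinct principal anchored to its own key material."""
    agent_id: str
    key_fingerprint: str            # fingerprint of the agent's unique key or certificate
    owner: str                      # the accountable human or team
    authorized_roles: frozenset     # roles this identity is bound to
    consumed_nhis: frozenset = field(default_factory=frozenset)  # service accounts, API creds it uses

class TrustedAgentRegistry:
    """Deny-by-default lookup: any agent or key not present here gets no access."""

    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def authorize(self, agent_id: str, key_fingerprint: str, requested_role: str) -> bool:
        record = self._records.get(agent_id)
        if record is None:
            return False                                   # not in the registry: deny
        if record.key_fingerprint != key_fingerprint:
            return False                                   # key not bound to this agent: deny
        return requested_role in record.authorized_roles   # role must be explicitly bound

    def reconcile(self, live_agent_ids: set) -> set:
        """Agents observed running but absent from the registry; candidates for removal."""
        return live_agent_ids - self._records.keys()

# Illustrative usage: a registered agent with the right key and role gets through,
# anything else is denied by default.
registry = TrustedAgentRegistry()
registry.register(AgentRecord("agent-7431", "sha256-ab12", "netops-team",
                              frozenset({"firewall-admin"})))
registry.authorize("agent-7431", "sha256-ab12", "firewall-admin")   # True
registry.authorize("agent-9999", "sha256-ff00", "firewall-admin")   # False: unregistered
```

The essential property is the default: an agent or key with no registry entry gets nothing, and periodic reconciliation against the live agent population catches anything that appeared outside the process.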

The Five Eyes guidance does not say buy a registry. It says, in the imperative voice that government cyber agencies reserve for things they actually mean, build one and deny access to anything outside it. We have been calling this approach “repository, not radar.” The phrasing is ours. The conclusion is now theirs.

Why Privilege Controls Fail Without Strong Agent Identity

The guidance opens its risk analysis with privilege, which is the right place to start. It calls out scope creep, the confused deputy pattern, and over-broad entitlements granted at deployment and never revisited. None of this will surprise anyone who has lived through a decade of enterprise IAM remediation projects. 

What is new is the line that follows: “Identity is every bit as important as privilege.” 

The reason that lands is that most enterprise debate about agentic AI security has fixated on what agents are allowed to do. The harder question, the one the guidance forces into the open, is who the agent actually is when it does it. If the credential is shared across agents, if the token is static, if the binding between an agent and its human owner is informal or absent, then privilege controls are decorative. The document is clear on the consequence: a spoofed or impersonated agent invoking a sensitive operation produces audit logs that look legitimate, “rendering detection tools ineffective at identifying deception until a confirmed anomaly surfaces.” 

This is why the order of operations matters. Unify the identity data first. Observe the configurations and entitlements that govern it. Then act with confidence, because the act layer is only as trustworthy as the identity it is acting on. 

What Enterprise Security Teams Should Do in the Next 12 Months  

The guidance is conservative in places, and reasonably so. It tells organizations to deploy agentic AI incrementally, to begin with clearly defined low-risk tasks, and to prioritize “resilience, reversibility and risk containment over efficiency gains.” Enterprise security teams will be tempted to read this as permission to slow down. They should read it as a mandate to invest in the identity foundation now, so that when business pressure to deploy agents accelerates, the controls are already in place. 

A few practical implications follow. 

First, the trusted registry is not a future product category. It is a present requirement, framed as such by six allied cyber agencies. Enterprises that wait for vendor consolidation will find themselves rebuilding identity foundations under deadline pressure.

Second, non-human identities and agents are not the same problem, but they share a common substrate. The credentials, secrets, and machine identities that already exist in the environment will be inherited, badly, by agentic systems unless they are rationalized first. The guidance is explicit on this point, recommending organizations “replace static, long-lived secrets with ephemeral credentials that expire when the job is complete.” Most enterprises still run on the opposite. 
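
To illustrate the contrast, the sketch below mints a credential scoped to a single job with a short expiry, instead of handing the agent a static secret. The function names, the five-minute TTL, and the task identifier are assumptions for the sake of the example; the guidance specifies the property, not the mechanism.

```python
import secrets
from datetime import datetime, timedelta, timezone

def mint_ephemeral_credential(agent_id: str, task_id: str, ttl_seconds: int = 300) -> dict:
    """Issue a short-lived, task-scoped credential rather than a long-lived secret."""
    return {
        "agent_id": agent_id,
        "task_id": task_id,                       # credential is scoped to a single job
        "token": secrets.token_urlsafe(32),
        "expires_at": datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds),
    }

def is_valid(credential: dict) -> bool:
    """The credential dies with the job: nothing the agent cached remains usable."""
    return datetime.now(timezone.utc) < credential["expires_at"]

cred = mint_ephemeral_credential("agent-7431", "rotate-firewall-rules")
assert is_valid(cred)
```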

Third, observability of agent behavior is necessary but not sufficient. The guidance places significant weight on logging, monitoring, and anomaly detection. It places equal weight on the identity binding that makes those logs meaningful. A log entry that records an agent deleting a firewall configuration is only useful if you know who that agent is, who owns it, and what it was authorized to do at that moment. 
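
A hedged sketch of what that binding could look like in practice: a structured audit record that carries the registry identity, the owner, and the authorization decision alongside the action itself. The field names are illustrative assumptions, not drawn from the guidance.

```python
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, owner: str, role: str, action: str, authorized: bool) -> str:
    """Emit an audit entry that binds the action to a registry identity, not just an event."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,            # the distinct principal from the trusted registry
        "owner": owner,                  # the accountable human or team
        "role": role,                    # role the agent was bound to when it acted
        "action": action,
        "authorized": authorized,        # the decision in force, not just the observation
    })

print(audit_record("agent-7431", "netops-team", "firewall-admin",
                   "delete_firewall_configuration", authorized=True))
```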

The Agentic AI Security Conversation Has Moved Past Speculation

Two years ago, the agentic AI security conversation was speculative. Eighteen months ago, it was largely a vendor pitch. Today, it is government guidance jointly issued by six cyber agencies that operate the world’s most consequential intelligence and defensive missions. 

The architectural conclusion is consistent across all of them. Build the registry. Bind the identities. Enforce the boundaries. Deny anything that does not belong. 

The question for enterprise security leaders is no longer whether to do this. It is whether they will do it on their own timeline, or under the pressure of a deployment they did not see coming. 

We know which conversation we would rather have. 


Quotations in this article are drawn from Careful adoption of agentic AI services, jointly published in 2026 by the Australian Signals Directorate’s Australian Cyber Security Centre, the U.S. Cybersecurity and Infrastructure Security Agency, the U.S. National Security Agency, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre, and the United Kingdom National Cyber Security Centre. The document is licensed under Creative Commons Attribution 4.0 International.