Building Radiant AI: Lessons Learned on Applying Large Language Models in Identity

In this webinar, we explore the integration of LLMs to streamline access governance processes and reduce risk, covering real-world applications, commercial and open-source LLM options, and common integration patterns.

Transcript

Hi folks. Thanks for standing by, and welcome to today’s presentation, “Building Radiant AI: Lessons Learned on Applying Large Language Models in Identity.”

During today’s event, attendees will be in listen‑only mode, but if you have a question, you can submit it anytime using the Q&A button located toward the center of your screen. Questions are private and will only be visible to the event staff. We’ll be addressing questions during the presentation as well as at the end if we have time, and any questions we don’t get to we will compile and send to John for review. Also, feel free to interact with the chat panel. Chats will be visible to all event attendees, and discussion is encouraged. Lastly, CPE credits will be emailed directly following the conclusion of this event.

Now I am very excited to introduce our featured speaker today, Dr. John Pritchard, Chief Product Officer at Radiant Logic. John, please go ahead.

Thank you very much, Heather, and a very warm welcome to everyone attending the Identiverse webinar series. This is quite possibly my favorite event of the year, mostly because it’s a practitioner conference. We get to discuss a lot about the “how,” and today I’m going to be sharing some of the “how” of our own experiences in building and integrating generative AI into identity, and the making of AIDA, our AI data assistant.

I think there’s going to be something for everybody today. If you are an identity professional evaluating technologies, I’ll give you some questions to pose to your providers around what to look for and how they’ve implemented capabilities. If you’re in service delivery, I’m going to talk about where data, specifically for identity, fits into the larger IAM ecosystem. And if you are a technology producer or product company like we are, I’ll share some of our lessons in pulling AI innovations from the market into our own product development, what worked, and what didn’t work so well.

For today’s session, I’m going to break things into four sections. I’ll start with a view of the current threat landscape and some of the problems we’re dealing with in the identity sector. This will suggest opportunities for AI capabilities, especially in areas we’ve struggled with as an industry, and how AI might help. Then we’ll get into a demonstration of our own implementation of AIDA, talk about a specific use case, go through examples, let you see capabilities in action, and showcase the role that AI, identity and access management, and data can play together. I’ll wrap with lessons learned from this roughly year‑long journey, which began before ChatGPT was public and continued as generative AI became mainstream and integration‑ready, forcing us to adapt quickly.

From a market perspective, we all recognize that classic perimeter defense is no longer fit for purpose. Every organization we work with is defending a very broad surface against many attack types. We’ve implemented a lot of technology to protect enterprise assets: access management, SSO with MFA, lifecycle provisioning, privileged access management, and steps toward zero trust. Even with this investment, we still see enormous breach risk, including in large, well‑resourced brands, some even in the identity market itself. This speaks to the challenge of operating at enterprise scale. Bad actors don’t need to defeat every control; they just need one under‑managed or unmanaged account to compromise.

Two key contributing factors drive this. First, systemic “zombie credentials”—accounts that remain active for former employees or for non-human actors that are no longer in service. This is largely a lifecycle deprovisioning problem, especially in “last-mile” legacy systems not fully integrated into automated processes. Second, insider threats, which often stem not from malice but from compromised or negligent accounts: attackers take over legitimate accounts and use their authorizations to move laterally inside the organization. Our goal is to ensure the right person has just what they need, for the right reason, governed and monitored, while reducing entitlement reach across the enterprise.
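
To make the zombie-credential problem concrete, here is a minimal sketch, with invented data shapes rather than any real connector, of the kind of reconciliation that surfaces these accounts: anything still enabled in a last-mile system that has no active match in the authoritative workforce feed.

```python
# Sketch of a "zombie credential" reconciliation: accounts still enabled in a
# legacy system with no matching active record in the authoritative HR feed.
# The data shapes here are illustrative assumptions, not a real integration.
hr_active = {"asmith", "bjones", "carol.w"}          # current workforce feed
legacy_accounts = {
    "asmith":  {"enabled": True},
    "jdoe":    {"enabled": True},                    # left the company
    "svc-etl": {"enabled": True},                    # orphaned service account
    "carol.w": {"enabled": True},
}

zombies = [acct for acct, meta in legacy_accounts.items()
           if meta["enabled"] and acct not in hr_active]

print("Candidate zombie credentials:", zombies)      # ['jdoe', 'svc-etl']
```

In practice a reconciliation like this needs identity matching across naming conventions and a separate inventory for legitimate non-human accounts, but the core check is this simple set difference.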

As an industry, we’re doing many right things: layering technologies for SSO, customer identity, lifecycle management, governance, and privileged access, and moving toward just‑in‑time provisioning and zero trust. However, our cyber and identity stacks are not simplifying. We continue adding specialized tools, which all need to connect to complex underlying data landscapes—workforce directories, training systems, acquired environments, multiple customer databases, and more. Identity sprawl, silos, and legacy systems contribute to technical debt and hygiene problems, feeding risk.

We can ask whether different technological approaches could assist us, and AI is particularly well suited to reducing complexity and surfacing insight. We looked for use cases where AI could help in identity. Industry surveys show overwhelming belief that AI and ML are applicable in identity, especially for handling complexity and large datasets. At the same time, there’s reluctance to delegate full decision making to a black-box AI; we still want humans to make critical decisions, with AI assisting their decision process.

We think of identity analysis use cases across authentication (are you who you say you are?), authorization (can you do what you’re trying to do?), and administration and governance. For each, we can ask what problems remain and what AI techniques are appropriate. “AI” is a broad umbrella: machine learning, specific algorithms, and generative AI. Each has strengths and is not suitable for all use cases. Generative AI is particularly weak for forecasting and prediction, and prone to hallucinations—answers that sound plausible but are factually incorrect—though many techniques exist to minimize this risk. So first we identify the problem, then we select the right tools, often in combination. For example, combining ML anomaly detection with generative AI for sense‑making and natural language explanation can be powerful.
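
As a minimal illustration of that combination, the sketch below (toy features and a stubbed-out LLM call, not any real pipeline) uses scikit-learn’s IsolationForest to flag an anomalous account and then builds a natural-language prompt asking an LLM to explain the flag to a reviewer.

```python
# Sketch: ML anomaly detection feeding generative AI for explanation.
# Features, thresholds, and the LLM stub are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy features per account: [entitlement_count, days_since_last_login]
normal = rng.normal(loc=[12, 5], scale=[3, 2], size=(200, 2))
outlier = np.array([[85, 240]])              # over-privileged, dormant account
X = np.vstack([normal, outlier])

flags = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)
# fit_predict returns -1 for anomalies, 1 for inliers

def explain_with_llm(record):
    """Stub: in practice this prompt would go to any LLM completion API."""
    return (
        "You are an access-review assistant. Explain in plain language why "
        f"an account with {int(record[0])} entitlements and "
        f"{int(record[1])} days since last login was flagged as anomalous, "
        "and suggest what the reviewing manager should check."
    )

for record in X[flags == -1]:
    print(explain_with_llm(record))
```

The division of labor matters: the deterministic model decides *what* is anomalous, and the generative model is only asked to make the finding understandable.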

We wanted a well‑known, long‑standing pain point with a clear “before vs. after AI” comparison, so we chose access review and certification. It’s arguably the least glamorous identity use case, but a critical control: verifying that what people can do is what they should be able to do. It appears in every compliance framework. Yet, most organizations would admit it basically doesn’t work well. When executed right, access review reduces entitlements and permissions, directly addressing over‑privileged accounts that attackers target, but in practice the process is painful and ineffective.

Most access review campaigns are run by frontline decision makers—managers and business owners—who only perform this task periodically, in tools they rarely use, facing large amounts of information they don’t fully understand. The number of enterprise applications keeps growing, and many are poorly inventoried or labeled. Managers see thin metadata and cryptic permission names, then are asked if access should be removed, with a strong fear of breaking something. Radiant Logic’s background in data management lets us see data posture before and after reviews and measure how much actually changes. In most organizations, very little changes; the process becomes a rubber stamp. Managers receive long lists of accounts and permissions and, afraid to disrupt the business, click “approve all,” effectively preserving last year’s state.

This widespread yet low‑impact process made access review a great candidate for AI augmentation. We could observe existing campaigns in many organizations, measure real‑world change (permission reduction, deprovisioning), then introduce an AI assistant and compare outcomes. We specifically wanted a process where managers are given data sets to review and we support them with a capability they don’t need extensive training on.

For us, this capability is AIDA—our AI data assistant. Conceptually, AIDA is like having a consultant sitting with each manager as they perform access reviews. This consultant is trained in cybersecurity best practices, knows the organization’s data that managers are reviewing, and is very familiar with the review tool. If a manager had such a person to explain what they’re seeing, answer questions, and suggest where to start, they’d likely conduct the most effective access reviews they’ve ever done.

The AIDA‑guided process has implied steps based on best practices. First, we look across all the data and start with the least concerning entries: for example, access that hasn’t changed since the last review and is actively used according to behavioral logs. There’s a population that appears “okay.” We approve those first, using one click to clear a large portion of the data. This reduces the remaining review scope and gives managers a sense of progress and familiarity with the questions being asked.
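
A minimal sketch of that first pass might look like the following; the field names (changed_since_last_review, last_used_days) are assumptions about what a governance system and behavioral logs would supply, not our actual data model.

```python
# Sketch of the "approve the low-risk population first" step.
# Field names and the usage window are illustrative assumptions.
review_items = [
    {"user": "asmith", "entitlement": "crm_read",
     "changed_since_last_review": False, "last_used_days": 3},
    {"user": "asmith", "entitlement": "payroll_admin",
     "changed_since_last_review": True, "last_used_days": 190},
    {"user": "bjones", "entitlement": "vpn_access",
     "changed_since_last_review": False, "last_used_days": 1},
]

def is_low_risk(item, usage_window_days=30):
    """Unchanged since the last certified review AND actively used."""
    return (not item["changed_since_last_review"]
            and item["last_used_days"] <= usage_window_days)

auto_approve = [i for i in review_items if is_low_risk(i)]
needs_review = [i for i in review_items if not is_low_risk(i)]

print(f"One-click approve: {len(auto_approve)} of {len(review_items)} items")
for item in needs_review:
    print("Escalate for focused review:", item["user"], item["entitlement"])
```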

Next, AIDA guides managers into areas needing more focus, highlighting the strengths of generative AI. It can examine data sets, understand relationships, and answer “why” and “what if” questions. The assistant groups similar people into cohorts, visualizes clusters of employees and their permissions, and surfaces outliers. This cohort analysis helps managers see which access patterns are normal and which are anomalous, with AIDA explaining what to look at and why. The assistant also knows how to use the underlying product’s features (filters, charts, views) to navigate managers through different perspectives on the data, making decisions informed rather than blind.
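
One simple way to sketch cohort-style outlier detection is to compare each user’s permission set to their peers’ using Jaccard similarity and surface the low scorers first; the sample data and the 0.4 threshold below are purely illustrative.

```python
# Sketch of permission cohort analysis: compare each user's permissions
# to their peers and surface outliers by average Jaccard similarity.
# The users, permissions, and threshold are illustrative assumptions.
user_perms = {
    "alice": {"crm_read", "crm_write", "reports"},
    "bob":   {"crm_read", "crm_write", "reports"},
    "carol": {"crm_read", "reports"},
    "dave":  {"crm_read", "payroll_admin", "db_prod_root"},  # the outlier
}

def jaccard(a, b):
    """Overlap of two permission sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def peer_similarity(user):
    peers = [p for p in user_perms if p != user]
    return sum(jaccard(user_perms[user], user_perms[p]) for p in peers) / len(peers)

for user in user_perms:
    score = peer_similarity(user)
    label = "OUTLIER - review first" if score < 0.4 else "typical"
    print(f"{user}: peer similarity {score:.2f} ({label})")
```

A production version would define peers per cohort (team, role, or the emergent collaboration groups mentioned later) rather than across the whole population, but the intuition is the same: access that nobody like you has deserves a closer look.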

In field testing, reactions have been very positive. Some organizations let managers rely heavily on AIDA’s suggestions; others restrict bulk approval but still benefit from AIDA’s guidance, explanations, and targeted focus on risky areas. In all cases, the idea of a well‑trained assistant that you can question, that questions you back, and that suggests both answers and missing data has been powerful. We’re exploring additional use cases where generative AI can suggest external data sources that would improve decisions—for example, collaboration patterns in Slack or Teams channels as dynamically emerging “peer groups” for access comparison, rather than relying solely on static directory groups or org charts.

Under the hood, the capability you saw is built on three primary data types from customer environments: hierarchical data (directories like Active Directory or LDAP), relationship/graph data (access chains linking people, groups, and permissions), and time‑series data (how these change over time, crucial for anomaly detection). We conceptualize this as an identity “lake” that feeds the assistant. On top of this, we use multiple models, including fine‑tuned large language models configured for our domain and taught cybersecurity best practices via curated knowledge bases. A key lesson is that LLMs only know what they were trained on; domain adaptation is essential.
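
As a rough illustration of those three shapes (not our actual schema), they might be typed like this before feeding an assistant:

```python
# Sketch of the three identity data shapes described above, as simple
# typed records. All names are illustrative, not Radiant Logic's schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DirectoryEntry:            # hierarchical data (AD/LDAP)
    dn: str                      # e.g. "cn=asmith,ou=Sales,dc=example,dc=com"
    attributes: dict

@dataclass
class AccessEdge:                # relationship/graph data
    subject: str                 # person, group, or service account
    relation: str                # "member_of", "granted", ...
    target: str                  # group, role, or permission

@dataclass
class IdentityEvent:             # time-series data for anomaly detection
    timestamp: datetime
    subject: str
    action: str                  # "login", "entitlement_added", ...

# An access chain: asmith -> Sales-Admins -> crm_write
chain = [
    AccessEdge("asmith", "member_of", "Sales-Admins"),
    AccessEdge("Sales-Admins", "granted", "crm_write"),
]
```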

Humans remain in the loop. Business users still own decisions; AIDA uses customer‑specific data and best‑practice knowledge to propose and explain remediation plans, but managers ultimately approve or adjust them. Implementing AI here involves much more than pointing data at a single LLM. The heavy investment is in data staging, cleansing, synthesis, and building test datasets to evaluate models for accuracy. We combine algorithms, machine learning models, and generative AI: generative components handle reasoning and knowledge search; more deterministic algorithms and ML handle prediction and risk scoring to reduce hallucinations and ensure repeatability. We also must consider operational cost and scale, which drives interest in smaller, domain‑specific language models that achieve good accuracy at lower cost.
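
A minimal sketch of that division of labor, with invented factor weights: the risk score is a pure function of the review item, so it is repeatable and auditable, and the generative layer is only asked to explain the result to the manager who decides.

```python
# Sketch: deterministic, repeatable risk scoring kept separate from the
# generative layer, which only explains the score. Weights are assumptions.
RISK_WEIGHTS = {
    "privileged_entitlement": 0.5,
    "dormant_over_90_days": 0.3,
    "changed_since_last_review": 0.2,
}

def risk_score(item):
    """Same inputs always yield the same score -- no LLM in this path."""
    return round(sum(w for factor, w in RISK_WEIGHTS.items() if item.get(factor)), 2)

item = {"user": "dave", "privileged_entitlement": True, "dormant_over_90_days": True}
print(f"Risk score {risk_score(item)}: route to the manager, "
      "with an LLM-generated explanation attached for context")
```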

In summary, identity data and AI are “having their moment.” We have the sensors, standards, and data richness across IAM stacks to support powerful AI‑driven sense‑making, especially in long‑standing pain points like access review. With AIDA, we’re using these advances to turn an often rubber‑stamped compliance exercise into a faster, more insightful, and materially more effective control.