
The Identity Security Paradox: When More Tools Create Bigger Blind Spots

Despite significant investments in identity security tools, organizations continue to face increasing identity-based attacks because the very tools meant to enhance security often create dangerous blind spots. This paradox arises from fragmented identity data, inconsistent visibility, and a heavy reliance on detection while neglecting prevention.

Learn how attackers exploit gaps in identity visibility, why machine identities pose an increasing risk, and how organizations can shift from a detection-heavy approach to a balanced security strategy. By adopting a data-centric identity security model built on Identity Security Posture Management (ISPM) and Identity Threat Detection and Response (ITDR), you can gain true resilience against modern threats.

Transcript

JD: Welcome. I’m JD Miller, and I’m excited to be your host for what I think is going to be a really insightful session with our friends at Radiant Logic. So let’s be honest: most organizations have poured a ton of time, money, and energy into identity security tools. But somehow identity-based attacks keep getting more common, not less. So what is going on?

That’s the paradox we’re here to unpack. It turns out the tools we rely on to protect us actually create gaps—gaps in visibility, gaps in data, and gaps that attackers are more than happy to exploit. We’re going to discuss how it’s not just human users anymore. Machine identities are a growing part of the risk equation too. We’re going to dig into why this is happening and, more importantly, how to fix it.

We’ll look at how a more data-centric approach, using things like identity security posture management and identity threat detection and response, can help you move from a reactive stance to one that’s truly resilient. Joining me today are two experts who live and breathe this stuff: Sebastian and Wade from Radiant Logic. We’re going to cover a lot, so let’s dive in.

We often see organizations investing heavily in identity security, yet breaches continue at an alarming rate. What’s behind this disconnect between investment and outcomes? Sebastian, let’s start with you.

Sebastian: To your point, JD, we’re definitely observing that we seem to be losing the war against cybercrime. This is, in my view, the identity security paradox. For a long time we’ve managed identity mostly through detection: managing the identity lifecycle on one side, and detecting threats, actual identity breaches, as they occur on the other. In doing so, we neglected the integrity of identity data and preventative measures; what preventative measures existed were mostly compliance-driven.

I see this whenever I discuss it with customers or practitioners in the industry. There’s a gap: roughly eighty percent of investments go to detective measures and only twenty percent to preventative measures. It’s like building a castle on a foundation of sand. Moreover, despite those strong investments, we see a sprawl of cybersecurity solutions deployed to tackle all these problems, each with its own identity data silo.

So we end up with siloed views of identities. These siloed views create identity visibility gaps that hackers leverage to attack or breach the identity ecosystem.

Wade: Building on what Sebastian just said, we see real-world examples of this every day. We recently worked with a large customer who had invested heavily in intrusion detection: the ability to understand when someone is inside the network doing the wrong thing and to stop them. The barbarians are inside the gate, and detecting that is a viable approach; you do need to react when someone is already inside the house.

But what they discovered in a separate analysis was that they had dozens and dozens of orphaned accounts: accounts with no owner, no human being attached anymore, because the person had been terminated or had left the organization. All that access was still sitting there, unmonitored and unmanaged. Threat actors were using those accounts to penetrate the environment and do damage, unseen by detection systems, because it wasn’t an obvious active breach from the outside.
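To make the pattern concrete, the core of such an orphaned-account check is simple to express. The sketch below is illustrative only; the field names, data shapes, and the HR join are assumptions, not details of the customer’s environment:

```python
# Minimal sketch: flag enabled accounts whose listed owner no longer appears
# in the HR feed of active employees. Field names ("owner",
# "sam_account_name", "enabled") are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Account:
    sam_account_name: str
    owner: str          # employee ID of the assigned owner, "" if none
    enabled: bool

def find_orphaned_accounts(accounts: list[Account],
                           active_employee_ids: set[str]) -> list[Account]:
    """Return enabled accounts with no owner, or an owner who has left."""
    return [
        a for a in accounts
        if a.enabled and (not a.owner or a.owner not in active_employee_ids)
    ]

if __name__ == "__main__":
    accounts = [
        Account("svc_backup", "", True),    # no owner on record
        Account("jdoe", "E1001", True),     # active employee
        Account("asmith", "E2002", True),   # E2002 left last quarter
    ]
    active = {"E1001"}
    for acct in find_orphaned_accounts(accounts, active):
        print(f"orphaned: {acct.sam_account_name}")
```

In a real environment the same join would run against directory exports and an authoritative HR feed, but the logic is the same: any enabled account without a living owner is a finding.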

It wasn’t something perimeter alarms and filters were seeing. It was someone inside the organization, quietly in the shadows, operating without detection. That’s the challenge: when we secure the environment, we have to account for every kind of threat. A strong perimeter detection method gives you a valuable piece of the puzzle, but it doesn’t alert on all the different possibilities that may come into play.

The reality is that you will be breached. We used to say “if” you are breached, but now it’s a certainty. The windows are open, the doors are open, you have termites; you already have internal, systemic problems that make a breach more likely and potentially more damaging. It’s like spending a lot of money on detecting that the horse has left the stable and then chasing after it, when we really need to invest on the other side: a better gate, removing the incentives, and cleaning up the environment.

We need to invest in improving the foundations so that even if an attacker gets a foothold, there is less they can do. If we take away the fuel attackers use to gain access in a compromised system, we can starve them of what they need to be successful. That’s an area where investment needs to shift.

The other component Sebastian touched on is that we have siloed platforms today. We’ve invested a lot: single sign-on and access management platforms that check credentials and enforce multi-factor authentication; governance platforms; privileged account management systems. But in many situations they’re working in their own little islands.

They may even be using different data to make decisions about the same person, because each sourced and integrated its data independently. There’s no common set of information everyone is working with; if everyone has different answers to the test, no one ends up on the same page. The challenge is that these systems have to work together to be effective, because threat actors only have to be right once. They only have to find one gap. If we don’t bring everything together in a holistic approach, we open ourselves up to that vulnerability.

JD: You mentioned an identity visibility gap in your analysis. Could you explain what this is and why it poses such a significant threat to organizations, Sebastian?

Sebastian: As highlighted by Wade, this identity visibility gap exists because we’re lacking a single, accurate, timely view across the entire identity landscape. That landscape should cover human identities, machine identities, and nonhuman identities in general. It’s a very complex problem to solve.

Identity and access management systems were not built with today’s complex, distributed environments in mind, so they are still very siloed. You have dedicated systems for each silo. The identity visibility gap is a huge problem because threat actors no longer need to “breach” your security in the traditional sense. As Wade highlighted, they can instead exploit inconsistencies in identity itself.

This is the typical “reactivated dormant leaver account” scenario, where you have unmanaged accounts at the edges of your identity landscape that attackers leverage to bypass defenses. That’s the identity visibility gap in action.

Wade: That’s an excellent callout because the challenge really lies there. Our premise is that if you don’t address this visibility gap, you’re working blind. Threat actors today have tools and capabilities to search your own network and find these gaps for you. Unfortunately, they don’t tell you about them so you can fix them; they exploit them.

This is an even bigger problem with nonhuman or machine accounts: service accounts and similar. With human accounts, if my account is compromised, I might notice that it’s acting differently and alert security. I have a limited set of privileges that are audited and reviewed; least privilege is often enforced.

On the service account side, it’s the Wild West. It’s an unmanaged environment. There’s even a conference track this year at Identiverse focused on nonhuman identities as a major threat. These accounts are challenging because people don’t see them daily. They are “ghost accounts” operating in the background, the ghosts in the machine. They operate autonomously, often without visibility.

These service accounts are also the gears that run the system. They often have more access than the average user. For example, a system account might allow a financial application to reach a database and gather all needed data—a massive amount of access in a single account. Because they’re unmanaged, they’re often left behind when systems are sunset, leaving powerful accounts sitting there.

We worked with one organization that had a DevOps test service account. They ended up giving that account to every production application in the organization. That one account had access to everything. The only way forward is to start treating these like human accounts: auditing them, discovering them, understanding what they do, and managing lifecycle.

On top of that, we’re seeing AI-driven bots and nonhuman AI identities. They can operate at higher speed, in the shadows, with intelligence. They may do things you can’t easily understand. So we must bring stronger tools to bear on that problem.

One example: a major breach at a Las Vegas casino in 2023. The attackers operated undetected on the network for thirty days, gathering credentials off the wire. They were not flagged by intrusion detection because they looked like regular service accounts doing regular service-account things. Once they had enough information to collapse the platform, they acted. That’s the risk created by the visibility gap and unmanaged machine identities.

JD: You highlight machine identities as a particular concern. What specific risks do these nonhuman identities pose that many security leaders might be overlooking?

Sebastian: Machine identities are a big concern. They are multiplying three to five times faster than human identities. To Wade’s points, you have legacy service accounts, DevOps practices, and more recently generative AI and other automation. All of these bring more accounts to the table, most of them dynamic.

The industry only recently realized these identities also have to be managed. We took care of human identities and implemented joiner/mover/leaver processes, but left service accounts to the side, often managing just the password and not what the account could actually do. The lifecycle of machine identities was simply left aside.

We still see many nonhuman identities active even though they are no longer used—stale or orphan accounts due to lack of data quality and cleanup. Historically there were also weak or no password policies for machine accounts. Combine that with the fact that these accounts are often over-privileged, have many permissions, and lack access management and visibility, and you get a serious risk.

The biggest threat is that they are often used as tools for lateral movement within a company, often undetected. This is the typical casino example Wade described. They are leveraged as very efficient tools to defeat company defenses.

Wade: There are many examples. As we shine a light here, it’s a bit of a horror story: skeletons coming out of the closet. One organization we worked with had a financial platform they had diligently locked down—minimum privileges, regular access reviews, segregation of duties, strong controls. It was pristine.

An audit of service accounts, however, found there was an admin account on the server hosting the financial services application with full access to all files. A backdoor you could drive a truck through. One compromised service admin account would give you everything, despite all that work on user-facing controls.

That’s the nature of the challenge: the attacker only has to be right once; we have to close every seam. That starts with visibility. Focusing on service accounts is critical, but it comes with challenges. Another organization we audited had no clear naming convention for nonhuman accounts. No prefix, no pattern. We had to work backwards: they had managed human accounts very well with a strict naming convention, so by filtering those out, we assumed the rest were machine accounts.

Once you identify machine accounts, you then categorize: admin service account, AI service account, API account, etc. You add context—what applications they access, what entitlements they have, what endpoints they hit, what logs show. You correlate all that to create context. You need that context to identify these systems, especially when not tied to a user, and then decide: disable, restrict, or closely monitor.
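A minimal sketch of that “work backwards” approach might look like the following; the human naming convention, the category patterns, and the account names are all invented for illustration:

```python
# Sketch of working backwards from a well-enforced human naming convention:
# anything that doesn't match it is presumed to be a machine account, then
# bucketed by pattern. Regexes and categories are assumptions; real
# conventions vary per organization.

import re

HUMAN_PATTERN = re.compile(r"^[a-z]\.[a-z]+$")   # e.g. j.doe (assumed convention)

CATEGORY_PATTERNS = [
    ("admin service account", re.compile(r"(adm|admin)", re.IGNORECASE)),
    ("API account",           re.compile(r"(api|token)", re.IGNORECASE)),
    ("AI service account",    re.compile(r"(bot|ai)", re.IGNORECASE)),
]

def classify(account_name: str) -> str:
    if HUMAN_PATTERN.match(account_name):
        return "human"
    for label, pattern in CATEGORY_PATTERNS:
        if pattern.search(account_name):
            return label
    return "unclassified machine account"  # needs context: entitlements, logs

if __name__ == "__main__":
    for name in ["j.doe", "svc_fin_adm", "payments-api", "chatbot01", "x9zq"]:
        print(f"{name:14} -> {classify(name)}")
```

Names alone only get you so far; as Wade notes, entitlements, endpoints, and logs supply the context needed for the accounts these patterns miss.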

It’s also concerning that many of these highly privileged accounts are not inside PAM tooling. These are often the most privileged and the most vulnerable accounts in the organization. They need to be managed as actively as human accounts.

JD: It’s hard to have any conversation these days without the letters “AI” coming up. Current AI applications in identity security primarily focus on anomaly detection. Sebastian, how do you envision AI evolving to address the deeper identity data-quality issues?

Sebastian: We could speak for hours on this. First, let’s envision AI as more than just an LLM engine. AI should be seen as a practice. So far, AI in identity security has mostly been used for detective measures: advanced pattern matching for anomalies, baselining analysis to identify unusual behaviors like sudden night-time connections to critical systems.

We have a lot more expectations for AI. AI’s promise is all about data and what I call relationship mapping—understanding and building context. That means building connections between identities, permissions, accounts, and critical assets across the identity ecosystem.

Then you can identify things like similar critical roles accessed by different people, privilege accumulation over time, or correlations between identity data and vulnerability scans to identify potential attack paths. To make the most of AI, you must feed it with the right data elements so it can understand this context and relationships.

If you do, I envision AI proactively mapping identity attack surfaces, flagging risky relationships and discrepancies, suggesting remediations, and mitigating risk before attackers leverage it.
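As a toy illustration of that relationship mapping, the sketch below models identities, accounts, groups, and assets as a directed graph and walks the paths from one identity to a critical asset. The entities and edge labels are invented, and a real system would enrich the graph with entitlement and vulnerability data:

```python
# Toy relationship map: identities, accounts, groups, and assets as graph
# nodes; search for paths from an identity to a critical asset. All entities
# here are invented for illustration.

import networkx as nx

g = nx.DiGraph()
g.add_edge("alice", "acct:alice_admin", relation="owns")
g.add_edge("acct:alice_admin", "group:db_admins", relation="member_of")
g.add_edge("group:db_admins", "asset:ledger_db", relation="admin_on")
g.add_edge("bob", "acct:bob", relation="owns")
g.add_edge("acct:bob", "asset:wiki", relation="read")

for path in nx.all_simple_paths(g, "alice", "asset:ledger_db"):
    print(" -> ".join(path))   # one potential attack path to a critical asset
```

Each discovered path is a candidate attack path to review, restrict, or monitor.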

Wade: It’s an open question what AI will ultimately do, but we have high hopes. A lot depends on how we train and feed our AIs. People are familiar with AI hallucination—where an AI confidently returns a wrong answer. That goes back to the data quality you’re feeding it.

AI connects into identity in multiple ways. One example is role mining: determining current access across the organization to predict what access people should have and avoid over-privilege. That’s a great idea if you have good data on roles and entitlements. But if the data is garbage, you’ll get garbage roles. You might, for example, use an over-privileged long-time employee as the “model” and give everyone everything they have.
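A deliberately naive role-mining sketch makes the point. With a strict threshold it proposes a sane baseline role; loosen the threshold and one over-privileged veteran drags dangerous entitlements into the proposed role. The department, entitlements, and threshold are invented:

```python
# Naive role mining: propose a departmental "role" from entitlements held by
# most members. Also shows the garbage-in, garbage-out failure mode: an
# over-privileged veteran poisons the role when the threshold is too loose.

from collections import Counter

def mine_role(member_entitlements: dict[str, set[str]],
              threshold: float = 0.8) -> set[str]:
    """Entitlements held by at least `threshold` of department members."""
    counts = Counter(e for ents in member_entitlements.values() for e in ents)
    n = len(member_entitlements)
    return {e for e, c in counts.items() if c / n >= threshold}

finance = {
    "new_hire": {"gl_read"},
    "analyst":  {"gl_read", "gl_write"},
    "veteran":  {"gl_read", "gl_write", "prod_db_admin"},  # accumulated privilege
}
print(mine_role(finance))                 # {'gl_read'}: a sane baseline
print(mine_role(finance, threshold=0.3))  # pulls in prod_db_admin: garbage role
```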

So we must clean up the data fed into AI. If we add context and train AI to think in multidimensional relationships, it can do amazing things. For example, imagine it’s 2am in the SOC and you get an alert: an IP from the Korean peninsula is spoofing the CEO’s credentials and trying to access the general ledger. Instinct says “block.” But with context, you might see that 2am local is 5pm in Korea; the IP is from your South Korea office; travel records show the CEO is there; it’s the end of a financial reporting period. Blocking might disrupt legitimate critical work. AI that understands this context can suggest a more nuanced response.
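Here is a hedged sketch of that kind of context-weighed triage; the signals and thresholds are invented for illustration:

```python
# Sketch of context-aware triage: combine independent context signals before
# deciding, instead of blocking on the first anomaly. Signals and thresholds
# are invented assumptions.

from dataclasses import dataclass

@dataclass
class AlertContext:
    ip_is_corporate_office: bool   # e.g., the South Korea office egress range
    user_on_recorded_travel: bool  # travel records place the CEO in Korea
    local_business_hours: bool     # 2am SOC time may be late afternoon there
    reporting_period_close: bool   # end of a financial reporting period

def triage(ctx: AlertContext) -> str:
    benign_signals = sum([ctx.ip_is_corporate_office,
                          ctx.user_on_recorded_travel,
                          ctx.local_business_hours,
                          ctx.reporting_period_close])
    if benign_signals >= 3:
        return "allow, but step up authentication and monitor"
    if benign_signals >= 1:
        return "hold for analyst review"
    return "block and alert"

print(triage(AlertContext(True, True, True, True)))
```

A production system would weight signals by reliability rather than simply counting them, but the principle stands: context changes the right response.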

Another example: a user is added to an admin group. On the surface, that could be a normal provisioning event. But if you see that the change wasn’t made by the provisioning platform but manually by another admin, out of band, AI could flag this and even automatically roll it back. The key is context and real-time correlation.
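A minimal sketch of that out-of-band check, with the event fields, group names, and the provisioning service-account name assumed for illustration:

```python
# Sketch of out-of-band change detection: changes to sensitive groups are
# expected to come from the provisioning platform's service account; anything
# else is flagged for rollback. All names here are assumptions.

PROVISIONING_ACTORS = {"svc_provisioning"}
SENSITIVE_GROUPS = {"Domain Admins", "GL Admins"}

def review_group_change(event: dict) -> str:
    if event["group"] not in SENSITIVE_GROUPS:
        return "ignore"
    if event["actor"] in PROVISIONING_ACTORS:
        return "ok: made through the provisioning platform"
    # A manual, out-of-band change to a sensitive group
    return f"flag: roll back {event['member']} from {event['group']}"

print(review_group_change(
    {"group": "Domain Admins", "member": "jdoe", "actor": "some_other_admin"}
))
```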

We do similar correlation in our heads every day when we talk to colleagues—we bring context about who they are, what role they play, our history. AI is at the beginning of achieving this kind of association, but because it operates so quickly, the potential is huge. We can use that power to defend ourselves, clean environments, and make better decisions. Attackers are also using AI to probe networks, so we need AI on the defensive side too.

JD: You both advocate for a combined-arms approach to identity security. Sebastian, what would this look like in practice for an organization trying to rebalance their security investments?

Sebastian: Combined arms is a battlefield term; long story short, it takes a village to win a battle. Applied to identity, especially on the risk side, it means you do not rely only on detective measures. You deploy both preventative and detective measures.

NIST’s cybersecurity framework calls this out. We now have terms like ISPM (Identity Security Posture Management) and ITDR (Identity Threat Detection and Response). ISPM is about prevention; ITDR is about detection and response. A combined-arms approach would be to deploy both.

But I would say something is missing: the data piece. The combined-arms approach is three-fold: real-time identity data (a unified single source of truth for identity and access), ISPM, and ITDR. You need real-time, unified, enriched identity data feeding both ISPM and ITDR, so both are fueled with the same consistent information. Then you can reduce attack surface proactively via ISPM and mitigate threats via ITDR.
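As a toy illustration of the data piece, the sketch below joins per-system records into one unified identity view and runs a simple ISPM-style posture check over it. The join key, field names, and the finding rule are assumptions, not a description of any particular product:

```python
# Sketch of a unified identity record: merge per-system views of the same
# person into one record that both ISPM and ITDR consume. Field names and
# the join key are illustrative assumptions.

def unify(hr: dict, ad: dict, pam: dict) -> dict:
    """Join per-system records on employee_id into one identity view."""
    return {
        "employee_id": hr["employee_id"],
        "status": hr["status"],                    # authoritative from HR
        "accounts": ad.get("accounts", []),
        "privileged_accounts": pam.get("vaulted", []),
    }

record = unify(
    hr={"employee_id": "E1001", "status": "terminated"},
    ad={"employee_id": "E1001", "accounts": ["jdoe", "jdoe_adm"]},
    pam={"employee_id": "E1001", "vaulted": []},
)
# ISPM-style posture check: a terminated identity with live accounts
if record["status"] == "terminated" and record["accounts"]:
    print("finding: terminated identity with active accounts", record["accounts"])
```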

Wade: We’re seeing a repeating pattern. I’ve been at this for decades, and IT goes through cycles. Early adopters get on the bleeding edge, taking bets on new technology to proactively solve problems. Four or five years ago, some organizations came to us saying, “I need a single source of truth, an entitlement catalog, one place to get all my identity data, know it’s accurate, and make decisions based on it.” They recognized early that everything else relied on that.

Now this is becoming mainstream. We’re seeing more conferences and presentations where people realize everything they do relies on identity data. They also realize they’re doing the same integration work multiple times across different tools. So they’re looking to pull it all together.

The key benefit of a single place for identity data is that you have one place to audit, manage, clean, and monitor for identity-based intrusion. Identity data is an attack surface. If intruders compromise it, they can compromise access. So you have to look at data quality and data security.

All of this feeds policy engines, zero-trust decisions, AI models, and more. If the foundation—identity data—is bad, everything built on top is at risk. That’s where Radiant Logic focuses: providing that strong data foundation. Then whatever you’re building on top—ISPM, ITDR, access management, governance—has solid fuel to work with.

JD: Wade and Sebastian, thank you so much for joining us today.