In my last few blogposts (here, here, and here), we’ve been talking about the best—and easiest!—way to upgrade to ABAC (attribute-based access control), using more flexible group definitions to create smarter authorization policies. And I told you that it’s possible to build these richer groups by leveraging more attributes. So that leaves one essential question—how do we round up all those attributes to build better groups that can drive finer-grained policies? (You will not be surprised to learn that it has to do with federating identities from across diverse sources—but read on, and I’ll explain how all this works…).
As we know, in most organizations, our definition of an identity is stored in some sort of directory infrastructure. And sure, that’s fine if all you’re doing is authenticating users. But authorization is where it gets a little trickier. You’ve heard me say it before: there’s a ton of information about each identity that isn’t part of that basic record. Even if the directory contains the core set of identity attributes for each user, many specific aspects of an identity will be handled by more specialized applications—HR, production, marketing, sales automation, and the like. And getting to all the nuggets that are siloed across these specialized applications is a real identity integration challenge (and that’s our favorite kind of challenge here at Radiant…).
Beyond the Meta/Virtual Paradigm: A Federated Identity Service to Gather Attributes from Disparate Sources
We believe that the solution is not (yet another) centralized uber-directory. The answer is a common coordination effort that’s based around a virtualization layer that creates a federated view of identity out of multiple applications and identity silos. This smart layer normalizes all that disparate data for “curated” views of identity that can be customized for each consuming application. Now, this solution goes way beyond what the market calls a “virtual directory,” although it grew out of that technology. And it’s not a traditional metadirectory, although it centralizes and synchronizes the data in a logical way.
Such a service derives its flexibility from metadata consolidation and remapping, making it easier to manage and deploy. It provides a consolidated view of your identity, while also adding the ability to delegate security back to the authoritative stores. Basically, it’s the identity counterpart to the federated access made possible by SAML and, we bet in the near future, OpenID Connect and OAuth 2.0. In short, it is the identity infrastructure that enterprises need to access the cloud through those protocols.
So how is this layer implemented? I’m sure some attentive readers are wondering how this works within the realm of directories, metadirectories, and virtual directories. In fact, there is an interesting conversation on this topic going on right now on LinkedIn with Dave Kearns and others, and Radiant will also be hosting a webinar about this soon with Gartner’s Nick Nikols (although that discussion will center around why you need such a federated identity structure for cloud access and Office 365—be sure to sign up!).
To keep today’s story short (and visual!), we believe that the architecture needed to gather those attributes looks something like the following picture:
Of course, an architecture diagram always looks clean and simple. The devil is always in the implementation details.
Why do you need to federate your identity in order to pull and join your attributes? Well, for the simple reason that even if you have only one identity, most of the time you are known by different identifiers in each application or identity silo. So first, before you can extract those attributes, you need one global, normalized view of your identity. (And that isn’t just a baseline requirement for groups or access policies—this “smart global view” is becoming mandatory for pretty much any security initiative coming down the pike, along with data management, business intelligence, personalized services, and any other buzzword you want to throw onto the list.)
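To make the idea concrete, here’s a minimal Python sketch of that “one identity, many identifiers” problem. All the data, identifiers, and field names below are hypothetical—the point is only that once every local identifier maps to one global identifier, the scattered attributes can be pulled together into a single normalized view:

```python
# Each silo keys the same person under a different local identifier
# (hypothetical example data).
hr_silo  = {"jsmith":     {"department": "Sales", "employee_id": "E100"}}
crm_silo = {"john.smith": {"region": "EMEA", "tier": "Gold"}}

# A correlation map ties every local identifier to one global identifier.
correlation = {"jsmith": "global-001", "john.smith": "global-001"}

def global_view(silos, correlation):
    """Merge per-silo attributes under each global identifier."""
    view = {}
    for silo in silos:
        for local_id, attrs in silo.items():
            gid = correlation[local_id]
            view.setdefault(gid, {}).update(attrs)
    return view

profile = global_view([hr_silo, crm_silo], correlation)
# profile["global-001"] now carries attributes from both silos.
```

Without that correlation step, "jsmith" and "john.smith" look like two different people, and the attributes stay trapped in their silos.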
In my most recent post, we looked at how today’s static groups are an essential building block in most policies used to “authorize” access to application resources. So when we talk about providing finer-grained authorization for an existing application, we are essentially talking about finding a better way to dispatch users to the different existing roles. Now, these roles are mostly associated with an application’s resources, which are pretty stable—except when the application allows a user to create new “resources,” such as report pages, that can be shared and hence could need authorization.
Because roles and application resources are created at application design-time, they’re only updated when there are product upgrades and new releases. So unless you’re facing a complete new release of an application with major new features (and hence new resources), the finer-grained mechanism for authorization—the real degree of freedom that exists for users of an application—resides in your ability to dispatch people to different groups and to switch those groups between different roles as the business demands.
Give Me Granularity or Give Me Death: The Evolution of RBAC
We know that more granularity at the level of authorization translates directly into more flexibility at the level of business. With greater granularity, you’re free to dispatch people into roles and groups according to your business requirements—and ideally, you don’t have to lean on the IT team as much. But here’s where the notion of “static” groups starts to become a hindrance for better business agility within the security realm. In fact, the whole progressive evolution toward Attribute-Based Access Control began with groups based around a static association of “labels” with respective populations, such as “Sales,” “Marketing,” “Production,” “Gold” and “Silver” customers, and the like:
Step 1: RBAC Based on Static Groups
But usage quickly dictated the introduction of a finer-grained way to characterize these populations. Hence, the introduction of subgroups:
Step 2: RBAC with Sub/Nested Groups
Notice that the introduction of one level of sub-grouping is equivalent to adding a second column of labels that now subdivide each group. If we go a few iterations further, we see the introduction of multi-level grouping, for a full hierarchy of groups:
Step 3: RBAC with Sub-Groups of Many Levels
Implicitly, even if those levels are not named, you can think about each of them as a sublevel of the group hierarchy (sublevel 1, sublevel 2, etc…). But as this hierarchy starts to grow, we quickly discover the management challenge this poses. Sure, you get a finer-grained way to dispatch your users, but you have to satisfy two constraints:
- You need to define a vocabulary that’s consistent across the entire organization—and if this set of labels is not enforced across the org chart, you’ll not only have a data quality issue, you won’t be able to give the right person access to the right resource. So if one set of people in your organization creates a group called “Product Marketing” and another “Product Management,” you need to know whether these are two completely separate groups, two groups with some overlap, or two groups with a dependency, where one is a subgroup of the other.
- Once the group labels are created, you still need to assign and administer people across these different groups. And, alas, the information about who belongs to which group exists only in the mind of the administrator, which means these assignments are made manually, not by any data-driven rule. So it’s labor-intensive and a potential bottleneck in an otherwise automated authorization chain.
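A tiny Python sketch (with made-up group and user names) shows what a nested group hierarchy really is under the hood—explicit lists inside explicit lists, with no rule explaining why anyone is in them:

```python
# Nested groups as explicit member lists (hypothetical names).
# Note that nothing here records *why* a user belongs to a group—
# someone typed each name in, and every change is manual.
groups = {
    "Sales":        ["Field Sales", "Inside Sales"],  # subgroups
    "Field Sales":  ["alice", "bob"],
    "Inside Sales": ["carol"],
}

def members(group, groups):
    """Flatten a group hierarchy into its full set of users."""
    result = set()
    for entry in groups.get(group, []):
        if entry in groups:              # entry names a subgroup
            result |= members(entry, groups)
        else:                            # entry names a user
            result.add(entry)
    return result

sales_members = members("Sales", groups)
```

The recursion gives you the finer-grained dispatch, but every list it walks was maintained by hand—which is exactly the management challenge described above.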
Last week, I posted about a syndrome I called “trapped in the future.” We often get stuck in this kind of future dreaming, because even if the target we want to achieve is great, the challenge of getting there seems so high. We tend to overplay the characteristics of the ideal solution, imagining the perfect world down to the excruciating details, only to finally drop out of our dream with a long sigh, telling ourselves: “oh yeah, one day…but not today.” And just like people who believe the grass is always greener in neighboring yards, we ignore the fact that, yes, we also have green shoots—the beginnings of a solution—showing up in our own garden.
From Roles to Attributes Via Groups
So in search of how to deliver ABAC today, let’s first examine what we have in our existing security backyard. The current model that is predominant in modern applications is role-based access control, or “RBAC” (of course, every sizable enterprise also has a handful of legacy apps that cover the gamut of authorization concepts from ACLs to DAC). So let’s take a look at the best practices and principles behind RBAC. The two next pictures summarize the story.
First, roles and permissions are defined at design time:
Then, groups are assigned to roles during deployment:
The idea is to divide the authorization process into two steps: design and deployment. And it makes sense because it matches the division of work between:
- The application designers who are in charge of creating the applications and defining the main use cases and the abstract profiles of the people who will be using the application—the so-called “roles.”
- The end users who will be assigned to those roles at deployment time through the groups mechanism. These users carry out their tasks, without tinkering with the applications in any way.
So, in the case of a web access management package like SiteMinder, this division of work shows up in the following policy architecture:
So let’s reexamine what we have. When it comes to authorization, most “relatively” modern security applications (WAM, NAC, etc…) follow the RBAC model. This approach divides the process into two steps:
- Design, where you define the policies enforced behind each role.
- Deployment, where you assign groups to roles and control access and authorization through group membership.
Since an application evolves relatively slowly—there aren’t new releases of functionality or policy every two days—any changes follow the lifecycle of the application.
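Here’s a minimal Python sketch of that two-step RBAC pattern (role names, groups, and permissions are all hypothetical): roles and their permissions are fixed at design time, and at deployment time groups are mapped to roles, so access is decided through group membership:

```python
# Design time: roles and the permissions behind them.
roles = {
    "report_viewer": {"read_report"},
    "report_admin":  {"read_report", "edit_report"},
}

# Deployment time: groups are assigned to roles...
role_groups = {
    "report_viewer": {"Sales"},
    "report_admin":  {"Sales Ops"},
}

# ...and users reach roles only through their group memberships.
user_groups = {"alice": {"Sales"}, "dana": {"Sales Ops"}}

def permitted(user, permission):
    """Grant access if any of the user's groups maps to a role
    that carries the requested permission."""
    for role, perms in roles.items():
        if permission in perms and \
           role_groups.get(role, set()) & user_groups.get(user, set()):
            return True
    return False
```

Notice that changing who can edit reports never touches the application’s code—you only reshuffle the group-to-role mapping, which is the whole point of the design/deployment split.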
Sometimes, I think we’re all trapped in the future.
In a time when we depend so much on technology for our productivity, comfort, and security, we tend to be obsessed by what’s next. We’re surrounded by seers, futurists, and sometimes crooks who make a great living painting their vision of Tomorrow 2.0. Now, there’s nothing wrong with trying to forecast the trends so we can evolve our present toward a better future. We all need to see where the world is moving, so we can make our business align with our customer’s needs—however they might grow and change. But there’s a danger with projecting ourselves too much into the future, that better, smarter, more organized time.
It’s a little like the crowded gyms in January, where everybody’s trying to make their New Year’s resolutions come true. We all have a wish list, along with the best intentions, but wishing doesn’t make a habit. Dreams can only become reality when you can tie what’s coming to where you’re at today and make a map for how to get to where you want to go. Otherwise, we’re all just a bunch of hype-chasers on the lookout for the next shiny object in our path—and hey, I’m as guilty as the next guy.
We must be careful about going too far in search of the perfect world, without taking our current world into account. There’s no clean slate: tomorrow comes with yesterday’s baggage, whether it’s those last ten pounds you want to lose or that directory infrastructure that’s no longer serving your needs. If we don’t start with what’s real right now as we plot our evolution forward, we’ll become victims of a syndrome I would call “Trapped in the Future,” along with its related strains of “Frozen Status Quo” and “Paralysis by Analysis.”
That’s enough philosophy for now—let’s look at a specific case where we’ve all been victims of this illusion: the idea of basing access on attributes.
George Jetson vs. Fred Flintstone: Who Wins?
I think we can all agree that it’s a pretty cool idea—one that could do a lot for authorization and even authentication. It’s also a timely idea, because in a world of sensors and the nascent “Internet of Things,” with its myriad of collected attributes, we should be able to substitute ye olde name and password with something stronger, smarter and a lot more usable. Something like multi-factor authentication but with even more attributes to drive authentication by context, for instance. But as this paragraph progresses, did you notice that my sentences are starting to drift from reality to the world of the Jetsons, with its flying cars and jetpacks?
The truth is every CSO, architect, and security team has been talking about ABAC for years, and yet how many true attribute-based systems have been deployed? We dream about driving the Jetsons’ flying car, imagining it in gleaming detail, then we wake up and jump into our Flintstone cars to get to work on time. ABAC always seems out of reach, in the future, just like our New Year’s resolutions.
Now, of course, there are the happy few who achieve their goals, who drive the future (in their flying cars). After all, we live in a world where the rate of change in technology is staggeringly high, which is why we’re in awe of people like Edison, Tesla, Ford, or Steve Jobs. So what are their secrets? We know they’re visionaries, but vision isn’t the real driver here—there’s plenty of that going around. There are lots of early adopters who live in a perpetual version 0.01. And the rest of us are trapped in the future, without the will (or willpower) to change our lives. But the real geniuses are the ones who can envision a future, then make it real using what exists today. Think of them as the MacGyvers who can make tools out of toothpicks, who can blaze a trail from what is to what will be.
Last week, I introduced my favorite topic—digital context—and laid out a plan for how to consider the case. Today, we’ll dive in with a real-world example, looking at how freeing context from across application silos helps us make more considered, immediate, and relevant access control decisions. For those of you who have been following along (and thanks for sticking with me in my madness), this is blog 8 in response to Ian Glazer’s provocative video on killing IAM in order to save it. And if you haven’t been with me from the beginning: I’m in favor of skipping the murder and going straight to the resurrection. For those of you coming in late to the game, here’s the recent introduction to context, or you can catch up with the entire story in order here: one, two, three, four, five, six, seven.
It All Starts with Groups: The Simple, Not Especially Sophisticated Solution
Let’s start first with the notion of groups and their implementation. On the surface, nothing could be more straightforward: If I have to manage a sizeable set of users and assign them different rights to applications, I need to categorize those users into groups with the same profile, whether that’s by function, role, need to know, hierarchy, or some other factor. This is the simplest approach to any categorization: creating some “relevant” labels, then assigning the people that fit within those labels to define groups.
So let’s say we’re creating groups based on work functions, such as sales, marketing, production, and administration. All we need to do is list all the people under a particular function, create a label, and then assign this label to those people. Couldn’t be easier, right? The simplicity of the process explains the huge success of groups—and although we implementers tend to make fun of groups as crude categorizations, I would guesstimate that at least 90% of our authorization policies are still implemented through groups. (So much for all that talk about advanced fine-grained authorization! But I’m getting ahead of myself here…)
In fact, we’ve become so dependent on groups that in many cases, especially in sizeable organizations where the business processes are quite refined and well managed, we’re seeing that there are often more groups than users! At first glance, this seems paradoxical—after all, what’s the point of regrouping people if you have more groups than people? But the joke is on us technical people, because we ignored another key reality: the business one. Sure, we have a lot of people, but a well-managed and productive organization has even more activities (or different aspects of a given activity), and each of those can require its own group. So we gave our users a simple mechanism to categorize people into groups, and they used it—talk about being a victim of our own success! 🙂
Basically, we played the sorcerer’s apprentice and our simple formula yielded a multiplication of groups, which quickly became un-manageable. So we went back to the formula and started to tweak it, creating groups inside groups, hierarchies of groups, and nested groups; introducing Boolean operations on groups; aggregating them into roles, and so on. So what we were just saying about groups being simple? Simple for whom? Simple for the group implementers—yes, definitely. Simple for a user in charge of the initial creation of the group—sure. But add any complexity into the mix and the chaos begins.
So Much for the Digital Revolution: Every Change, Managed Manually
From a computer’s point of view, the assignment of a user to a group is totally opaque—just an explicit list entered by the person in charge of creating the group. This explicit list contains no information about why or how a user is dispatched into or associated with a group. In short, the definition of membership rests with the group owner, which is fine on the face of it. But that excludes any automated assignment of a new member to the group without manual intervention of the group owner. That means every change must be entered by hand—imagine the complexity as people constantly change roles and shift responsibilities. And imagine how easy it would be for an overworked manager to miss removing the name of a person she just fired from just one of the groups he was part of. Now imagine the security risk if that ex-employee still has access to sensitive files.
Without explicitly externalizing those rules, those policies, the administration of the system becomes tied to the group owners/creators. The effort of sub-categorizing with nested groups or introducing more flexible ways to combine groups by using Boolean operators just reveals the root of the problem: When you give users better ways to characterize their groups, you are forcing those users to either make explicit the formation rules of their groups—or continue to make every single change manually, even as those changes become more complex and unmanageable.
And that’s how we (re)discovered the value of attribute-based group definitions.
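A short Python sketch (hypothetical users and attributes) shows the difference an attribute-based definition makes: the formation rule is explicit and data-driven, so membership updates itself when a user’s attributes change—no group owner editing lists by hand:

```python
# Users described by attributes (hypothetical example data).
users = {
    "alice": {"department": "Marketing",   "title": "Product Manager"},
    "bob":   {"department": "Engineering", "title": "Developer"},
}

# The group's formation rule, made explicit as a predicate over attributes.
group_rules = {
    "Product Marketing": lambda u: u["department"] == "Marketing",
}

def members(group, users, rules):
    """Compute membership from the rule instead of a hand-edited list."""
    return {name for name, attrs in users.items() if rules[group](attrs)}

before = members("Product Marketing", users, group_rules)

# When alice transfers departments, the group tracks the change by itself.
users["alice"]["department"] = "Engineering"
after = members("Product Marketing", users, group_rules)
```

Fire someone, transfer someone, promote someone—update the attribute once in its authoritative source, and every rule-driven group across the organization follows.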
Welcome to the sixth post of my series in response to Ian Glazer’s video on killing IAM in order to save it (AKA my Russian novel on identity, security, and context :)). We’ve looked at a lot of the issues surrounding identity today, and if you’ve been following along, my perspective on how to solve these issues is probably pretty clear by now (and if you haven’t, you can catch up here: one, two, three, four, and five). We at Radiant feel strongly that you need to federate your identity layer, just as we’ve all been busy federating access. But what does that look like? Here’s how the VDS, our RadiantOne virtualization engine, implements a federated identity service—we’ll be focusing on the highlighted part of this diagram today:
The common abstraction, data modeling, and mapping serve as the “backplane” for the whole system. As we see in the leftmost section of the diagram above, the system starts with the existing identity sources and transforms this identity data through three higher-level services built on top of the VDS virtualization layer.
First, the “aggregation” service regroups the different identity sources side by side. In directory talk, this is like “mounting” each different identity source into its own separate branch. Here, we don’t try to merge the different identities; we regroup them under a common virtual root, while keeping each namespace separated. Metaphorically, it’s like putting different objects (identity sources) into a common bag (the “virtual directory”). The main advantage of this structure is that it allows you to issue a single global search from the root of the tree to find an identity that may be defined in one (or more) of the aggregated identity sources.
Then, the “correlation” service determines if there are any commonalities across those different identity sources. Beyond the local/specific identifier for a given identity, the service discovers any correlation based on attributes and rules that can disambiguate an entity—a person or object—from across the different identity sources and representations. From a logical point of view, the correlation service is used to create a union of the different identity sources. The end result is a new identification scheme, where each entity in the system is uniquely—and uniformly—identified by a “global identifier” from across all identity silos. This global identifier does not exist in isolation; it’s also tethered back to each specific local identifier, so we always know where every piece of information lives. After the correlation stage, the entire set of existing identity sources is regrouped into a global namespace, where each identity is totally disambiguated.
Finally, the “integration” service links identities with their attributes for a complete global profile. For any given entity that exists in more than one data source (which we determined via the union operation described above), we can now take advantage of the common link—or global identifier—to “join” different attributes that are specific to each source. Through this join operation, we obtain a global profile out of each local description.
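The three services above can be sketched in a few lines of Python. This is only an illustration, not the VDS implementation—the source names, attributes, and the email-based correlation rule are all hypothetical stand-ins:

```python
# Two identity sources with different schemas and local identifiers
# (hypothetical example data).
sources = {
    "ad": [{"sAMAccountName": "jsmith", "mail": "js@example.com",
            "dept": "Sales"}],
    "hr": [{"emp_id": "E100", "mail": "js@example.com",
            "grade": "Senior"}],
}

# 1. Aggregation: mount each source side by side under a common root,
#    keeping each namespace (branch) separate.
aggregated = [(branch, entry)
              for branch, entries in sources.items()
              for entry in entries]

# 2. Correlation: disambiguate entities across sources by a shared
#    attribute (email here) and mint a global identifier for each.
def correlate(aggregated):
    gids, mapping = {}, []
    for branch, entry in aggregated:
        gid = gids.setdefault(entry["mail"], f"gid-{len(gids) + 1}")
        mapping.append((gid, branch, entry))
    return mapping

# 3. Integration: join each source's attributes under the global
#    identifier to build one complete profile per entity.
def integrate(mapping):
    profiles = {}
    for gid, branch, entry in mapping:
        profiles.setdefault(gid, {}).update(entry)
    return profiles

profiles = integrate(correlate(aggregated))
```

Two local records, one global profile: the AD entry contributes the department, the HR entry contributes the grade, and the global identifier ties them both back to their sources.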
I’ve been blogging in response to Ian Glazer’s video about killing IAM in order to save it (I’m in favor of saving, even if we don’t agree on the killing part). Get caught up on my earlier posts here, here, here, and here.
A Million Paths to Everywhere: The Internal Authentication Challenge
As we discussed in my most recent post, authenticating users and securely communicating authorization information with a cloud application—or any web-based portal—requires a common endpoint acting as the enterprise IdP. We also know you’ll need to be able to access multiple cloud applications, such as Salesforce, Workday, and Google Apps, as your enterprise moves toward this model. And we’ve seen that you’ll need many token translations on a per-application basis. But this is only one part of the requirement.
Another key function to support is being able to authenticate an incoming user against multiple internal authentication sources. Think about all the legacy applications and identity stores deployed across your infrastructure, with their various authentication methods and protocols—they’re all over the map, right? First, you encounter the AD domains, and get lost in all those forests. The authentication method here could be name/UPN and password, or based on Kerberos and Windows integrated authentication. But the user could also be stored in some SQL database with a proprietary hard-coded password encryption. Chasing the user across diverse forests and data stores and knowing which authentication method is appropriate for presenting and checking credentials is a full-time job—one that predates the challenge of cloud applications. In fact, the search for a common identity structure has been a primary headache for IAM for nearly as long as the category has existed. Multiple attempts have been made to solve these issues, from in-house script-and-sync to metadirectories to virtual directories. The new requirements for supporting the cloud have just made the problem more acute.
No matter what you call it (or how it works)—whether it’s an enterprise directory, metadirectory, or virtual directory—the logical mechanism you need is a federated form of identity. Why federated? Because you don’t want to reinvent this layer, which already exists in a highly distributed, heterogeneous way across your identity silos. Better to tap into what already exists, while giving your underlying data more scope and flexibility by bringing it—or a flexible representation of it—into an identity hub. Now, you could implement this hub in many different ways, but we believe that a virtualization layer, based on a global data model that rationalizes and reconciles the different local views, is the most effective way to do it. And in a world where you will connect to multiple applications using “federation” standards, you need to do more than just federate access via the SAML or OpenID Connect layer—you need to federate your identity layer, as well. And the way to get to a federated identity is through virtualization.
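A bare-bones Python sketch of what such a hub does for authentication routing. Everything here is hypothetical—the two checker functions are stand-ins for a real AD/Kerberos bind and a SQL lookup with its own password hashing—but the shape is the point: the hub knows which backend holds which identity and dispatches the credential check accordingly:

```python
# Stand-in for an AD/Kerberos bind (hypothetical credentials).
def ad_check(user, secret):
    return (user, secret) == ("CORP\\jsmith", "pw1")

# Stand-in for a SQL store with its own password verification.
def sql_check(user, secret):
    return (user, secret) == ("jsmith", "pw2")

# The hub's routing table: which backend is authoritative
# for which identifier.
routes = {
    "CORP\\jsmith": ad_check,
    "jsmith":       sql_check,
}

def authenticate(user, secret):
    """Dispatch the credential check to the authoritative backend."""
    backend = routes.get(user)
    return backend is not None and backend(user, secret)
```

Each application delegates one question—“is this user who they claim to be?”—to the hub, and the hub, not the application, keeps track of the forests, databases, and protocols behind the answer.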
All Roads Lead to the Hub: The Need for a Common Attribute Server and Better Provisioning
Fast forward and think about the n different applications in the cloud you need to provision to, and it’s like déjà vu all over again…with even more complications. So again, as both a final authoritative source of your identity and as an attribute server, you will need a rationalized view of your identity: a federated identity system.
Next time, we will look at the architecture of this federated identity, or “FID,” and then we’ll end this series with a deep dive into my favorite topic—context—along with graph and protocol support.