
Moving Beyond MIM: Modern Solutions for Legacy Identity Infrastructure

Explore how Radiant Logic’s real-time, event-driven synchronization platform provides a seamless solution for modernizing legacy identity systems like Microsoft Identity Manager (MIM). Learn to streamline identity processes, transform data across systems, and scale effortlessly with a modern, agile identity infrastructure.

Read the transcript

Okay, let’s go ahead and get started this morning.

Thank you everyone for joining me here.

We’re going to be talking about actually moving on from MIM,

the concept of being able to replace your existing MIM

architecture with a modern deployment of

a data synchronization platform.

So let me go ahead and get started here.

A couple mouse clicks and we’ll be on our way.

Again, thank you everyone for joining me today.

There will be a recording of this session available to

everyone that registered along with a set of these

slides, and we’ll be including a white paper on the

seamless migration powered by Radiant Logic off of MIM twenty sixteen.

This will give you some comprehensive information in

much more technical detail than I can cover today and hopefully

a road map to help you comfortably start to move away

from MIM and towards a more modern architecture

for undergirding your identity infrastructure.

So why am I saying we need to move off of MIM?

Well, at the root of all this is that we have an identity data problem.

This is a challenge in the environment today: we have an issue with identity data.

And the challenge we have with identity data is that it is the

number one vector used by attackers internally and

externally to compromise our networks and to

take advantage of opportunities to either distribute

ransomware, steal internal information, or in other ways,

disrupt the environment.

If everything we were doing today was sufficient,

if all the tools we had already integrated were sufficient,

we wouldn’t have a breach a day from organizations across the

country and across the world.

So we have to look at the underlying reason for this,

and this really can be traced back today to identity data.

This is the information that is used to make all the access and

authorization decisions in your environment.

It’s the fuel that powers all of your identity infrastructure,

but it’s also the vulnerability that causes your environment to

be susceptible to compromise.

So if you have a legacy platform in place that is

managing all of your identity data or a large portion of that

identity data, then you are in a compromised position.

And this is why we are suggesting actually at this

point moving on from MIM.

And for a number of reasons we’ll cover today,

but the primary one is that identity data is the foundation

for your security platform,

and you need to make sure that data is secure and you know

what that data is doing at all times and have visibility and

control over that.

So what’s in a name?

And I’m gonna take a little bit of a walk here down memory lane.

This will kind of give you an indicator of how long you’ve

been in the business.

You may remember Microsoft Identity Manager as ZoomIt, the original application purchased by Microsoft way back in the late nineteen nineties.

And then that became a number of different things at

Microsoft.

I used to work with a company that built its identity

management platform on top of Microsoft’s Forefront Identity Manager at the time,

and we had a challenge with them continuously renaming the

product as they were searching for a market and a way to make

this platform, understandable to people.

So if you remember Microsoft Metadirectory Services, you really go back a ways.

Microsoft Identity Integration Server was where I came on,

on the scene with this product.

ILM was the next iteration of the product,

and then FIM when Microsoft built their Forefront identity and security platform, and then most recently,

in two thousand sixteen,

Microsoft Identity Manager as we know it today.

So the reason I bring this up is that we have a challenge with this platform, not only in how old it is and how much it has been renamed over time, but in what this platform is doing today and the challenges it’s creating in your environment in the way it’s being used.

So what we’re gonna talk about today is Microsoft Identity

Manager then and now,

give you a little bit of a perspective about the product,

talking about what it looks like to replace MIM,

why it’s difficult,

what you need to be able to make that change,

how we can support you in doing that,

and the replacement product that we can put in place from

Radiant One that will allow you to gain most of the

functionality you have in MIM today and a lot of

modern technology and capabilities to expand the

breadth of your identity synchronization or foundation.

And then we’ll talk about modernization.

You’re having this opportunity to replace a legacy platform.

It’s a great time to take a look at what you’re doing and

modernize what you’re up to.

So let’s talk a little bit more about Microsoft Identity

Manager then and now.

It’s a classic ETL platform, extract, transform, and load.

This is what you see in a lot of data management systems.

The idea here is that I have sources of data on the left.

I have a connector service inside MIM, an agent that runs as a polling agent, that goes out and pulls data in from the source on a batch or periodic basis.

It brings that information into what Microsoft calls the

metaverse, that’s their aggregation platform,

allows you to map that data to a common construct like

mapping it into a meta directory.

That’s why it was called a meta directory originally.

And then you can, transform that information and push it

out using another set of connectors to a target.

So potentially Workday data and Fieldglass data could come into MIM, be processed, potentially be aggregated, and then pushed out to Active Directory.

Now there’s some challenges with this diagram we’ll get

into in a little bit,

but this is the general idea of the way that MIM worked.
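To make that extract, transform, load flow concrete, here is a minimal sketch in Python. The source names, the join attribute, and the in-memory stores are all hypothetical illustrations of the pattern, not MIM’s actual code:

```python
# Minimal batch-ETL identity sync sketch; "sources" and "target" are
# illustrative in-memory stores, and all names are hypothetical.

def extract(sources):
    """Poll every source and collect raw records (the 'E' in ETL)."""
    records = []
    for name, rows in sources.items():
        for row in rows:
            records.append({**row, "_source": name})
    return records

def transform(records, join_key="email"):
    """Correlate records into one aggregated 'metaverse' view keyed on a join attribute."""
    metaverse = {}
    for rec in records:
        metaverse.setdefault(rec[join_key], {}).update(
            {k: v for k, v in rec.items() if not k.startswith("_")}
        )
    return metaverse

def load(metaverse, target):
    """Push the aggregated view out to a target store (the 'L' in ETL)."""
    for key, attrs in metaverse.items():
        target[key] = attrs
    return target

sources = {
    "workday":    [{"email": "ada@example.com", "dept": "Eng"}],
    "fieldglass": [{"email": "ada@example.com", "contractor": False}],
}
target = load(transform(extract(sources)), {})
```

Note that this whole pipeline only runs when it is invoked, which is exactly the batch-schedule limitation discussed next.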

One of the challenges with MIM is just the sheer complexity of

deploying a product.

As you can see here,

there were many individual components of MIM, and these were all individual services running on the platform to get this system up.

I deployed MIM extensively in the twenty ten to twenty fifteen time frame and had a lot

of challenges with getting the system up, keeping it running,

getting all the components integrated,

getting everything just to work together happily.

I used to tell the story that I would work on the system all

week in a lab.

I get everything working on Friday.

My batch synchronizations would run.

Everything would be wonderful.

I would leave it alone over the weekend.

I would not do a thing to it.

I would come back on Monday, and it would not function, and it was not easy to troubleshoot.

There were a tremendous number of moving parts that could get out of sync, and things would stop working.

As I mentioned a couple times,

it’s a batch processing system so it has a schedule.

So, your data is only as fresh and only as accurate as the last time you ran that synchronization process. Because it does not have the ability to recognize real time changes in the system, it has to rely on going out on a periodic basis, dumping the data, reloading the data, re-correlating the data,

and then republishing the data out to the end.

And this gives you a gap in time.

And that gap in time is a window that creates security

vulnerabilities in your data.

It hampers your business because you’re not making decisions on accurate information.

It challenges what you’re doing in terms of being able to do data synchronization if it’s only good every four hours or so, and the data slowly degrades over that time period.

Now, you could potentially run it faster if you have a simpler

system but a lot of times,

the challenge was: I have so much processing to do, it takes four hours to get it done.

I can’t process more quickly than that.

So, we learned to live with that.

That was a limitation of MIM and that was one of the challenges.

We also learned to live with limited data mapping and normalization,

limited capabilities of actually transforming data from

one platform to another, from one protocol to another,

making the kind of really dramatic shifts in the

information that we need and the ability to work in a modern

cloud environment with modern connectors.

So that is the other challenge with MIM is the connectors,

the way it gets data out of systems.

It ships with a minimal set of connectors

for active directory, three named databases,

and then a generic JDBC connector that, as you know, like anything generic, takes additional effort to customize and integrate into your environment. This adds additional challenges: as generic platforms migrate or change, or systems upgrade, generic connectors tend to break, so you want the connector set as closely aligned as you can.

This same situation was repeated with LDAP directories.

It may seem like all LDAP is the same, it’s not actually.

The major LDAP directories actually act quite differently

and call things different things.

Think of inetOrgPerson versus the user schema between AD and traditional LDAP.
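As a small illustration of those schema differences, here is a hypothetical mapping table showing how a few common AD user attributes line up with inetOrgPerson names; the attribute selection is an assumption for the example, not a complete mapping:

```python
# Illustrative mapping between common AD `user` attributes and the
# inetOrgPerson schema used by traditional LDAP directories.
AD_TO_INETORGPERSON = {
    "sAMAccountName":  "uid",          # the login name differs by schema
    "givenName":       "givenName",    # same name in both schemas
    "sn":              "sn",
    "mail":            "mail",
    "displayName":     "displayName",
}

def remap(ad_entry, mapping=AD_TO_INETORGPERSON):
    """Rename attributes from the AD schema to the LDAP target schema."""
    return {mapping[k]: v for k, v in ad_entry.items() if k in mapping}
```

A generic connector has no such table built in, which is why each LDAP flavor needed its own customization effort.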

So those connectors for the three named systems were fine, but you only had a generic connector after that.

It did have a flat file connector, and they eventually integrated into Microsoft Entra ID through the Graph interface.

This was because MIM is actually what’s inside what we used to call DirSync.

I’m not sure what Microsoft is calling it today.

It was renamed, like MIM, about eight times, but the system Microsoft uses to migrate data from on premise AD domains into Entra ID is actually MIM under the covers.

Every other connector had to be custom written by your

integration partners.

And so more often than not,

you paid for every connector you needed in your environment.

So that diagram we had a minute ago of an SAP

Fieldglass and a Workday connector.

Those would have had to be custom built by an integrator

and then maintained by the integrator. And when something changed, or you came back and said we have additional fields in Workday now that we want to integrate, you’ve got to come back and rebuild that connector with the integration partner, and that just creates additional fragility in the system.

The biggest challenge though right now and the reason we’re

here today is Microsoft support for MIM.

This is the technical support that the company is offering to

this particular product.

Is Microsoft going to back MIM endlessly into the future so

the investment you have today is going to be safe?

And the answer from Microsoft’s own declarations is no, they’re not.

They are actually sunsetting MIM.

They are ending support of MIM.

It has, as a product, an end of life date of twenty twenty nine.

Now, twenty sixteen was the last major release, MIM twenty sixteen,

ending a seventeen year investment by Microsoft in the

synchronization platform for on premise data.

As you can imagine,

Microsoft around twenty sixteen was moving dramatically towards the cloud.

Microsoft wanted to be all in on Entra,

and having an on premise data synchronization tool really

wasn’t an area they were investing any additional effort into.

MIM had its last service pack in twenty nineteen.

That was the last time you got feature upgrades or any kind of modernization.

With this now, you’re losing MFA support on the platform and seeing discontinuation of additional features and functions. If you have a need, a bug, a problem, you’re not going to see that addressed. Security platform updates stopped in twenty twenty one. So as you see in the news, there are many vulnerabilities; SharePoint’s going through one right now, a zero day vulnerability that’s being taken advantage of.

This happens in code.

Things are built in there that we don’t know until they’re

uncovered, but with MIM in twenty twenty one,

you stop being able to get patches to MIM to actually be

able to secure that environment.

And in twenty twenty nine the product will actually be end of life.

You can’t even pay additional money to continue support after

twenty twenty nine.

So these are all really good reasons to move off of MIM, on top of the fact that the platform has limited connectivity, carries a tremendous amount of complexity, and is very difficult to work with.

So a MIM replacement, great. Why haven’t you done it?

Why haven’t you replaced MIM already? Well it is hard.

It’s very difficult to replace MIM in most organizations

because likely it’s a complex integration.

MIM was asked to do a lot of things.

MIM was asked to integrate with a lot of systems and because it

was not out of the box, very easy to use, very compatible,

or very well integrated,

it took a lot of custom work to build MIM into your environment

and to try and duplicate the business processes that MIM was

actually performing.

So you have a complex integration in your environment

that’s not easily understood or replaced.

And understanding it is a challenge because of a lack of knowledge.

This is a platform that was probably put in ten or fifteen years ago.

It was probably integrated into the system by multiple

different people.

I would doubt most of those people are still with the organization.

They have moved on.

Their skill set has waned in that they no longer work with MIM on a regular basis,

so they likely don’t understand all the intricacies and

complexities of MIM.

I today couldn’t go in and set up MIM and operate it the way I

did ten years ago.

And if I was the person who implemented it for a company and then moved on to many other projects, PAM, IGA, and you asked me to fix a MIM problem, I am actually hoping nothing goes wrong, because I don’t want the weekend call to fix this thing; if it goes down, I’m in trouble.

It’s also likely locked into legacy infrastructure.

There are a lot of old systems, old platforms, distributed multiple AD domain environments that MIM is working on,

applications that are just hanging on for one last

business process.

So I have old and brittle systems that other people may

not be around to still understand.

So I have a complex environment to work with.

But unfortunately,

because of what MIM does and the power of the platform and

what we needed to achieve, it’s business critical.

We have it integrated with systems that have to run.

This is why as the owner of the MIM platform,

on the periphery usually,

I go home at night a little bit uneasy that if it’s not running

tomorrow morning, I don’t know what to do,

but critical business processes are going to be affected and

the phone’s going to start ringing.

But I also have the challenge going back to my funding

organization and the management layers above me.

If it ain’t broke, don’t fix it.

If it’s running today, just keep your fingers crossed,

don’t breathe on it, and we’ll hope it runs tomorrow.

We’re not going to spend money on something that’s not

actually broken.

And this is part of the problem we have today with security.

A lot of the organizations we talk to say we’re not going to

invest more in identity security until we have a breach

and then we’re going to pour more money into it than you’ve

ever seen because we got visibility now in the public

that we had a problem but as long as we don’t see it,

we’re going to ignore it.

This is a challenge you really can’t ignore, going all the way back to the basic point that MIM is integrated into your identity data.

It is the foundation of your security platform.

It needs to be modernized, it needs to be secure,

it needs to be visible.

And then I have no budget. So I can’t do this without money.

What am I going to do if I don’t have any money to do this?

Well it is a challenge always with budget.

It’s do more with less.

You have ten projects a year you want to complete,

you get three of them budgeted, you get two of them kicked off,

and then we try this again next cycle.

But this is becoming more and more critical,

and Microsoft has made it critical by giving an end of

life for MIM.

So you have a lot of factors here driving you towards

replacing MIM.

We understand it’s difficult.

We’re going to offer you today some opportunities and some

capabilities in Radiant One that will make this process

easier for you, make it more attainable,

make it something you can actually take on.

So why is replacing MIM so critical?

Well again, the big one is it’s end of life.

If you have an end of life product in your environment, you’ve got major security challenges and issues, and it’s not something on the periphery that affects one system. This likely affects your whole identity infrastructure.

This infects everything that runs your environment,

so you need to address it.

And you’re addressing it because of security vulnerabilities.

This is the attack surface right now. It is identity.

That’s what people are after.

If it was ten years ago, I would say it’s firewalls.

And if you’ve got an end of life firewall,

you need to replace it because there are vulnerabilities on

that firewall that people are going to take advantage of and

your protection is down.

Well, you’re getting this now at your identity infrastructure level.

And as we mentioned, there’s a lack of expertise.

There are not enough people to actually manage and operate the

system, so you really need to take it out.

It also stifles innovation.

I have a legacy platform with limited capabilities and a limited ability to implement additional features and functionality in the platform.

So I am stuck now with what I have in place and I can’t innovate.

Think of the move to the cloud that started five or seven years ago, the migration we are finishing now.

That is heavily stifled by MIM deployments because MIM is not

very cloud friendly and easily able to operate in a cloud

environment or in a multiple identity as a

service environment.

So you have challenges in that model as you move to the cloud

and have MIM in place.

It has a limited reach.

It can’t get to all the modern applications that you’re used

to operating with.

You don’t have an Okta connector.

You’re gonna have to custom build a Workday connector.

You’re gonna have a lot of challenges here integrating

with your IGA platforms and other systems.

If you have SCIM two or REST interfaces that your applications speak,

that’s not an out of the box functionality for MIM.

You’re custom developing again on a platform that is end of

life that nobody knows how to use,

so you’re now basically pouring money down a money pit

and you’re investing in a lost cause,

so this is a real challenge to get that budgeted.

And then decreasing value.

More and more as you move your infrastructure,

as you modernize, you bring in new applications,

the functionality and benefits that MIM has in your

environment go down dramatically.

Because your reach is limited, you can’t increase that value,

so you’re putting, again, more effort into supporting a legacy

platform that is on its way out.

It’s going to be flopping around on the dock for a while before it goes, so better to address it now, early,

while you can and while you have control.

So if you’re gonna do a MIM replacement,

if you agree with all the fear and doubt that I put into you

about MIM, then how are you going to do this?

Why haven’t you done it so far and how are you going to be

able to get to it?

Well, you need a plan.

You need to know where you’re starting from when you do any project.

And now with MIM,

that’s critical to understanding what is MIM doing today?

What is MIM’s intention? What was it originally designed to do?

It’s designed to get information from this system

and that system and that system and take that information and

deliver it to this system and to that system.

Can you map that out?

Can you understand it?

Is there anybody left still around that knows what it does?

Is there anybody left in your organization that could open up

MIM and look at it and say, oh, I see what’s going on here.

That itself can be a challenge but you have to understand what

it’s doing if you’re going to replace that functionality.

There is a process of discovery.

There’s a process of documentation.

Now if you’re lucky everything was well documented,

you’ve got complete product guides,

you’ve got complete configuration information,

you’ve got detailed log data coming out,

you’ve got everything you need at your fingertips,

but I rarely find that to be the case in most organizations.

We are so busy fighting fires we’re not stopping to document

everything in detail,

especially as things change and are updated or fixes are put in.

If something broke,

I doubt the person who fixed it went back and redocumented everything.

So what are you starting with? What do you have in place?

And what do you need to accomplish?

What is the goal of MIM in your modern environment?

What is it still working on? Is it providing functionality that’s no longer of value?

Are you updating systems that no one talks to anymore?

Are you trying to manipulate data that’s no longer used in

your environment by any of the applications?

But you have many other applications that could benefit

from having access to that data.

So what do you need to accomplish in your modern

environment as you rebuild and replace this system

so you can take this opportunity to not only change

out the technology but you can change out the functionality

and the intent of the platform to be aligned with your modern goals.

And the big one, what can you afford to do now?

What is critical?

What will stop the business from operating if it fails?

And what can you get budget for to take care of now?

There may be some legacy functions in MIM that are not

really that critical to the system.

If they failed, it wouldn’t be terrible,

you’d have time to work around them,

they’re not things you need to take care of now, good.

Define that, set those aside, and focus on what is going to hurt the company when it goes down.

So also you need a solution if you’re going to replace MIM.

I can’t just tell you go out and replace MIM and leave you

alone in the desert without a cup of water.

So I’m going to offer you RadiantOne Global Synchronization. This is a RadiantOne component of our platform that provides data synchronization, which is the core functionality of MIM.

So, we’re able to give you the majority of the functionality,

everything you saw in the illustration a little bit earlier that MIM’s capable of doing, on steroids, in a modern infrastructure with modern technology.

So, we can do a one to many.

We can take information from one system and propagate it

into many endpoints.

We can take information from many systems, as you saw, a Workday, a Fieldglass, another input platform, and take that data and push it into an Active Directory or an IGA system.

We can do a one to one.

So I can move data from a database into a directory.

I can move cloud data down into an on premise LDAP infrastructure.

I can move data around my environment at will,

and I can do something MIM couldn’t do effectively,

which is bidirectional topologies.

I can move data from one direction to another,

and then I can reverse that data and move it back from the

source to the target.

I want to make sure when I’m configuring that I don’t create

a looping environment.

We have capabilities of blocking that,

but you are able to move data in any direction you want.
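One common way to block that kind of loop, sketched here in Python with entirely hypothetical names (this is an illustration of the idea, not RadiantOne’s actual mechanism), is to tag each change with the systems that have already seen it and drop echoes:

```python
# Sketch of loop prevention in a bidirectional topology: tag each change
# with its origin and the systems it has reached; skip echoes.

def make_sync(name, apply_fn):
    """Build a sync handler for one endpoint; apply_fn writes to its store."""
    def handle(change):
        if name in change["seen_by"]:
            return False              # echo of an update we already have: block the loop
        change["seen_by"].add(name)
        apply_fn(change["attrs"])     # apply the update to this endpoint
        return True
    return handle

store_a, store_b = {}, {}
sync_a = make_sync("A", store_a.update)
sync_b = make_sync("B", store_b.update)

change = {"attrs": {"title": "Engineer"}, "seen_by": {"A"}}  # originated at A
sync_b(change)   # applied to B
sync_a(change)   # would loop back to the source: blocked
```

The key design choice is that the loop check travels with the change itself, so any topology shape, one to many, many to one, or bidirectional, is covered by the same rule.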

We also give you data format, schema structure,

and protocol translation.

This is critical.

Data formatting is critical to look at because you have different sets of data labels and data types.

Even moving data and dates from Microsoft Active Directory into a database infrastructure or into an IGA platform, the data is in a different format.

It needs to be translated.

It needs to be made consumable in the language and the

schema that the target is needing to accept.

So we have complete capability of remapping that data.

I can also do controls on that data and alter that data.

I can build data strings from concatenation.

I can make calculations on the data.

I have complete control over everything that happens to each

of the attributes in that system.
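As an illustration of those attribute-level controls, here is a hypothetical sketch of concatenation and calculated-value rules expressed as plain Python functions; the attribute names and the example rules are assumptions for the illustration:

```python
from datetime import date

# Per-attribute transformation rules: a concatenation, a derived string,
# and a calculated value. Names and rules are hypothetical.
RULES = {
    "displayName": lambda rec: f"{rec['givenName']} {rec['sn']}",
    "email":       lambda rec: f"{rec['givenName']}.{rec['sn']}@example.com".lower(),
    "tenureYears": lambda rec: date.today().year - rec["hireYear"],
}

def apply_rules(record, rules=RULES):
    """Return a copy of the record with every rule's output attribute set."""
    out = dict(record)
    for attr, fn in rules.items():
        out[attr] = fn(record)
    return out
```

The point is that each attribute gets its own rule, so you can concatenate, calculate, or reshape one field without touching the rest of the record.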

I can change the structure of the platform.

I can come in from a flat file and I can distribute that into

a hierarchical model with multiple organizational units

within the organization.

Or I can distribute that across different database tables in the system.

So I have the ability to restructure where the data goes.

And critically here, I have protocol translation.

I can get data from a legacy database and push it into an

endpoint that only listens to SCIM.

I can take data from a highly documented REST API

platform and make that data available inside Intra or make

that data available inside a directory infrastructure or

some other application.

So I have the ability to move across those protocols

seamlessly without a challenge.
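As an illustration of that kind of protocol translation, here is a hypothetical Python sketch that reshapes a flat LDAP-style entry into a SCIM 2.0 core User resource; the attribute choices are assumptions for the example, not the product’s actual mapping:

```python
# Reshape a flat LDAP-style entry into a SCIM 2.0 core User resource,
# the structure a SCIM-only endpoint expects to receive.
def ldap_to_scim(entry):
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": entry["uid"],
        "name": {
            "givenName": entry.get("givenName", ""),
            "familyName": entry.get("sn", ""),
        },
        "emails": [{"value": entry["mail"], "primary": True}],
        "active": True,
    }
```

Usage is just `ldap_to_scim(entry)` on each record as it flows through; the same idea applies in reverse for pulling REST or SCIM data down into a directory structure.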

This is critical in a modern infrastructure.

We have standardized on protocols.

If you obey and use and integrate those protocols,

you have tremendous amounts of power in your environment.

We’ve also added something else that MIM doesn’t have,

and that’s message queuing and replay and reset.

We monitor every single message,

every single event that moves through Radiant Logic Global

Synchronization, and we give you the ability to catch

that message if it does not get applied properly on the

target for any number of reasons we’ll look at a little bit later.

I can replay that message, I can reset that message,

I can go back in and investigate

clearly through visual tools where did that break down.

I’m not just throwing things on the wire and hoping it got

there and assuming it did and then acting as if it did to

only find out later when my IGA platform does reconciliation

that my intended world and my actual world are so out of

whack that I have to reset everything from scratch and start over.

That is eliminated with message queuing and ability to replay.

Graphical configuration is a critical piece that makes it very easy for people to learn RadiantOne Global Synchronization very quickly,

start to get up to speed on it,

and also critically when you’re coming back to the platform you

haven’t used in a day, in a week, in a month,

due to the intuitive nature of the system,

the way things are laid out, the way things work,

and the stability of the platform,

it’s very easy to get back up to speed,

it’s very easy to get in and make adjustments,

and very easy to get back in and extend functionality of the platform.

As I traditionally say in almost all my webinars,

I don’t write code. I can’t.

I peaked in Pascal at a community college thirty years ago,

but I can configure Radiant Logic’s Global Synchronization through the GUI in a very complex

environment, do a tremendous amount of data formatting,

translation, protocol translation,

and deploy this platform very easily with a mouse.

And that for me is a gift.

The other piece that is critical here that you will not

see in other data synchronization systems is

near real time event detection.

I say near real time because in reality real time doesn’t exist; nothing happens instantaneously, outside of quantum entanglement. But within milliseconds, Radiant Logic can detect changes on a source of identity data, if that source supports it.

Once that change is detected, it can propagate that change as an event and push that update all the way through to all the additional targets in the topologies Radiant Logic is supporting.

So on a change, someone being added to HR,

someone being terminated,

someone’s security clearance changing, someone’s training being completed,

any of those events that would have an impact on access can

immediately be seen by all the downstream applications.

This gives you the capability to react in real time.

So when someone does something in the environment that is not authorized,

you can see that change and you can see that change propagating

through the system and you can act on it in real time.
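The event-driven pattern described above can be sketched as a simple publish/subscribe fan-out; the class and subscriber names are hypothetical, and the real platform’s implementation is of course far more involved:

```python
# Sketch of event-driven propagation: when a source change is detected,
# publish it, and every subscribed downstream target sees it immediately
# instead of waiting for the next batch run.
class ChangeBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        """Register a downstream target's handler."""
        self.subscribers.append(fn)

    def publish(self, event):
        """Fan the event out to every subscriber in the same call."""
        for fn in self.subscribers:
            fn(event)

bus = ChangeBus()
audit, directory = [], {}
bus.subscribe(audit.append)                                   # audit log target
bus.subscribe(lambda e: directory.update({e["user"]: e["status"]}))  # directory target

bus.publish({"user": "ada", "status": "terminated"})  # HR event fans out immediately
```

Contrast this with the batch model earlier: here the termination reaches every downstream system the moment it is published, rather than at the next scheduled run.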

It is critical that we move to a real time world because as we said,

what we’re doing today with batch processing is giving a

giant window for bad actors to get into our environment and

do their dirty work without being seen and potentially even

get out unseen.

So we need to be able to work in a real time world and that’s

what’s available from Radiant one.

So just to give you a sort of a look behind the covers here,

I’m going to get a little bit technical here for those of you

that are strictly on the business side and trying to

make a business justification for replacing MIM,

I will say this diagram may look a little bit busy,

but this is an illustration of the modern technology that

has been integrated into RadiantOne Global Synchronization.

So you can ensure that it is future proofing your

environment, it can operate in today’s world,

and it can operate in the world coming down the pike.

So you can see on the left hand side here, I have data sources,

I’ve got the Workday and the Fieldglass,

I’ve got all my web services up in the cloud,

I have SCIM interfaces, I have Okta up there and Intra,

I have legacy on premise directories, AD, databases,

all the places I may find identity data, flat files,

everything else you need.

I have a set of real time connectors for platforms that

support real time detection.

I have periodic connectors for the others that are able to

listen on all those sources.

Now, you’ll notice on the right hand side here,

I have very similar data.

I didn’t move these logos over but I have very similar data

here on my target platforms.

So a source can be a target, a target can be a source.

We don’t care, as long as we have a connector; it can be a connector that writes or a connector that reads.

So again, you have that ability to provide bi directional

synchronization of data.

Those real time connectors are again moving across all of your

standard protocols, even flat files,

even if you want to go back to web services and do some SPML on an older platform, we’ve got you, don’t worry.

I’m going to create a view of that data inside GlobalSync

that allows me to do that data manipulation,

that data transformation in a graphical model.

I can see exactly what I’m doing.

I can see how the rules I’m creating apply.

I can test that data and visualize that data before I

start putting it into production.

So I know exactly what’s happening.

I’m not writing a set of scripts that basically run and

I am not clear on what the outcome is.

I have a very visual indication of exactly what the impact of

this data transformation, data modeling platform is.

I have a set of message queues inside the system that are

gonna listen for those changes,

publish the data onto the queue,

run it through the sync engine,

which is gonna push it out to the target system,

and then I’m going to transform that data and map it to the

structure, the schema, the format of the target itself.

So each target has its own individual format and way that

it holds the data.

It’s an apple, it’s an orange, it’s a pineapple.

I have models inside my system of an apple, an orange, and a pineapple.

So again, when I’m mapping this data,

I’m inside Global Sync seeing exactly how that target looks,

exactly what the data is going to look like when it’s applied.

I can see the impact of those changes and then those get

pushed out to the end.

So this again is a very graphical interface,

very easy to operate with.

But not everything is roses and not everything works every time.

So what happens when one of my systems,

one of my targets is down?

What if I can't update an Active Directory because my

worldwide link to my systems in the Philippines

is down, and that Active Directory domain is not available?

In a lot of systems,

they will simply throw the update on the wire,

hope that it gets there, count it as done,

and go about their work because they don't do message

queuing internally.

With Radiant Logic, if I get a failed-to-apply message on the

endpoint, I'm going to re-queue that particular

event so it can be rerun in a matter of

minutes, or at some increment of time when that system may be back up,

and I can rerun that any number of times until it

finally times out.

If it does finally time out, or there's another reason the change

couldn't be applied,

the system rejected the call because it was awake, it was alive,

but it refused the data format or something else was wrong with the system,

that change is going to go back into our dead letter queue.

This is where messages that can’t be applied get stored.

We don’t throw these away. We don’t dump these.

You intended this information to change.

You want to know what did I intend to change and then why

didn’t it change and can I rerun that change once I fix the problem?

Can I see what the nature of the problem is to go back in

and reformat and remodel my controls?

What can I do to make this system work?
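The retry-then-dead-letter flow described above can be sketched as follows. This is an illustrative model of the pattern, assuming a `deliver()` callable that raises on failure; the retry count and names are not Radiant Logic's actual settings.

```python
# Guaranteed-delivery sketch: failed updates are re-queued for retry,
# and anything that still can't be applied lands in a dead letter
# queue instead of being thrown away.
from collections import deque

def process(event, deliver, max_retries=3):
    queue = deque([(event, 0)])
    dead_letter = []
    while queue:
        msg, attempts = queue.popleft()
        try:
            deliver(msg)
        except Exception as err:
            if attempts + 1 < max_retries:
                queue.append((msg, attempts + 1))   # re-queue for a later run
            else:
                dead_letter.append((msg, str(err)))  # keep it, never drop it
    return dead_letter

def always_down(msg):
    """Simulate a target that is unreachable on every attempt."""
    raise IOError("target down")

failed = process({"id": "u1", "op": "update"}, always_down)
```

The intended change survives in `failed` along with the reason, so an operator can inspect the problem and replay the change once it is fixed.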

So you have a number of layers here to guarantee delivery

because when you’re running a synchronization topology,

when you’re actually relying on the system to keep your data

accurate, you need to make sure that it is,

and when the world snaps out and bites you,

you need to be able to bite back.

So again, this is part of the modern infrastructure in Radiant Logic

Global Sync that allows you to have that kind of assurance of data delivery.

We also have fault tolerance and recovery.

We run this on a multi node model.

It can be run on an on premise set of virtual servers.

It can be run in a Kubernetes environment,

in a SaaS environment, in our cloud.

And it runs on a number of nodes that process data

independently.

I run the agents on one node so that I

limit the possibility of creating loops in the system,

and I process the data synchronization on one node

primarily to keep things simple and easy to configure.

If I do need to do processing across multiple nodes, that can

be supported with additional integration and configuration,

so for very high-volume, intense environments I can run

parallel processing of data synchronization

across all my nodes.

But what happens if a node goes down?

What happens if a virtual server drops?

What happens if I lose the

platform those capabilities are running on,

and that happened to be running my agents?

Now, my synchronization is off. My data is getting stale.

People are complaining. Things aren’t happening.

People aren't getting provisioned.

Everything is collapsing. Oh my gosh.

Worst possible scenario. Nope. Not at all.

We automatically took care of it for you.

We started all the agents again on another node.

This is a fully aware system.

Every system is a mirror of the other. They know each other.

They talk to each other.

They’re managed on the back end.

The agents start up immediately.

They know where the pointers were on the previous agents,

and they’re able to run the synchronization engine and

continue on as if nothing happened.
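The failover behavior above, where a surviving node picks up from the previous agent's pointers, can be modeled like this. It is a simplified sketch of the checkpoint idea, not Radiant Logic's internal design; the names are hypothetical.

```python
# Failover sketch: nodes share a sync checkpoint ("pointer"), so when
# the active node drops, another node resumes exactly where it left
# off, with no replayed and no skipped events.
class Node:
    def __init__(self, name, shared_state):
        self.name = name
        self.state = shared_state

    def process(self, events):
        # start from the shared pointer so no event is applied twice
        for ev in events[self.state["pointer"]:]:
            self.state["applied"].append((self.name, ev))
            self.state["pointer"] += 1

state = {"pointer": 0, "applied": []}
events = ["e1", "e2", "e3", "e4"]

Node("node1", state).process(events[:2])  # node1 applies e1, e2 then drops
Node("node2", state).process(events)      # node2 resumes at e3, no replay
```

Every event is applied exactly once even though the processing node changed mid-stream.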

You’ll get alerts that the system’s down,

but you won’t get alerts from end users saying, hey,

I can’t get into a system or my configuration isn’t working

or I terminated this user ten minutes ago and he’s still getting access.

That all ends here because of the ability for us to do fault

tolerance and data recovery.

We also have rules.

Everybody needs rules, conditions, actions, events,

all the things that change in your environment that you may

want to then move data around based on that change.

Have a set of conditions.

Somebody is added to the HR system.

That is a new hire. That’s great.

Someone is added to a new group,

and that kicks off a particular event and a series of changes.

Someone's security clearance is altered, and that action

causes a cascade of events downstream.

So I have the ability for these rules to be configured

again through a graphical interface or I can get into

complicated or very granular coding if I want to for

conditional systems.

But I can create a situation here where I’m manipulating

source variables.

I can change the actual attributes that are coming in.

So I come in as FNAME in all caps from the HR

system, but I need that to be givenName, in

lowercase, when it goes out the other side.

All those capabilities are there to create rules for manipulation.

I can calculate values.

I can create a full name by taking an FNAME and

concatenating it with an LNAME, calling it a full name,

and then putting it into a sAMAccountName if I want to.

So I have complete control over manipulating that data.

I can transform date formats.

I can use constants and populate data from other attributes.

I can use computed attributes.

I can do almost anything I want to this data based on rules.
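The transformations just described, case changes, concatenation, computed attributes, date reformatting, look roughly like this in plain Python. The attribute names (`FNAME`, `HIRE_DT`, and so on) are illustrative examples, not a real HR schema.

```python
# Illustrative attribute-transformation rule: map an all-caps HR feed
# into directory-style attributes, compute a display name and account
# name, and normalize the hire date format.
from datetime import datetime

def transform(source):
    given = source["FNAME"].lower().capitalize()
    surname = source["LNAME"].lower().capitalize()
    return {
        "givenName": given,                              # case-transformed
        "sn": surname,
        "displayName": f"{given} {surname}",             # calculated value
        "sAMAccountName": (given[0] + surname).lower(),  # computed attribute
        # transform the HR date format into ISO 8601
        "hireDate": datetime.strptime(
            source["HIRE_DT"], "%m/%d/%Y").date().isoformat(),
    }

out = transform({"FNAME": "JANE", "LNAME": "DOE", "HIRE_DT": "08/26/2025"})
```

The same rule shape extends to constants, lookups against other attributes, and any other per-attribute manipulation.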

And I can do this based on a source attribute.

So something on the source system with a particular

setting, I’m in the engineering department,

that kicks off a whole bunch of rules based on the fact I’m in engineering.

I move over to research,

that kicks off a whole another set of rules that take changes

in the environment based on that change attribute.

Or an event.

If I am terminated and you want to revoke all my access

immediately in real time,

Radiant Logic can kick off that event.

We can do an attribute event.

So if I change membership in a group,

that can kick off a series of actions that make

other changes in the environment,

apply other updates to other systems,

and remove access in other downstream platforms.

And I have a rule variable.

So I can do actions based on a derived

rule and a variable in a rule.

So I can calculate an expiration date based on a

number of factors or a risk score.

And based on that risk score,

I can apply changes in the system.
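The condition/action model above, an attribute change fires whichever rules match, can be sketched like this. The rule structure and action names are hypothetical, used only to show the shape of the idea.

```python
# Illustrative event-driven rules: each rule is a condition plus an
# action name; a change event fires every rule whose condition matches.
rules = [
    {"when": lambda e: e["attr"] == "department" and e["new"] == "research",
     "then": "apply-research-baseline"},
    {"when": lambda e: e["attr"] == "status" and e["new"] == "terminated",
     "then": "revoke-all-access"},
    {"when": lambda e: e["attr"] == "riskScore" and e["new"] >= 80,
     "then": "shorten-expiration"},   # rule variable driving an action
]

def evaluate(event):
    """Return the downstream actions triggered by one change event."""
    return [r["then"] for r in rules if r["when"](event)]

actions = evaluate({"attr": "status", "new": "terminated"})
```

A termination event matches exactly one rule here and produces the immediate access-revocation action.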

So it’s critical that you have an easy graphical interface for

configuring all the business processes you’re trying to

mimic inside Global Sync.

Now, this is a challenge inside MIM.

It was very difficult to see that level of complexity or

write code or scripts to do that.

And again, the person that wrote that ten years ago is probably not here.

The person capable of going back in and deciphering all that

information and understanding what was trying to be

accomplished and how it exactly worked and what workaround

somebody did one Tuesday night at midnight just to get

something running again is going to be very difficult.

And you're going to have a hard time just in real time knowing

what MIM is doing.

We log everything. We log the agents themselves.

Are they up? Are they available? Are they stopped?

Are they suspended? What is the health of my system?

Because those agents, those connections to the systems,

those connectors, are what's bringing all my data in.

So, we’re listening on those systems.

Now, we're not deploying agents on the endpoints.

You don't have to put an agent on any of your sources.

We do all of that from our end.

Agent is just a term that's used to describe a connector, but we

can manage and check the health of that.

We can track and see exactly what the capture connector is capturing.

If we have a change on the system,

we calculate that change that kicks off some set of events.

What exactly did we see change and where did that change come from?

And then what are we doing with that change?

How do we transform it?

How do we apply that on the far end?

For clustered deployments, parallel processing can be done across multiple sync engines, as mentioned.

So we have the ability now to log all this information

so you have visibility in the system.

You can see exactly what’s happening to the system.

So you have a good, solid,

robust platform that’s been deployed for a number of years.

I think we've had Global Sync out for at least the thirteen

years I’ve been at Radiant Logic.

We’ve continued to enhance and modernize the system.

We’ve added in support for all of the modern

protocols: SCIM 2.0, REST, other well-documented

APIs, and endpoint connectivity.

So we have the modern infrastructure with message

queuing and real time detection available as a replacement for

MIM.

So if you’re going to do that replacement,

if I've convinced you that it's too scary to hold on,

that you don't have anybody to call in the middle of the night when it

goes down, and that you need to get out from under this albatross,

then Radiant Logic Global Synchronization is the platform to do that.

And at the same time,

you have some opportunities here for modernization.

You have the ability to do parallel deployments.

You can deploy Radiant Logic Global Sync next to

MIM, and they can coexist.

You don’t have to rip out MIM on Friday and deploy Radiant on

Monday and hope everything works because that’s probably

a recipe for a bad

afternoon on Monday.

So what you wanna do is be able to build Radiant Logic in

parallel to the existing MIM infrastructure, run all the QA, all the testing,

make sure everything is working for a particular topology.

This source being transformed this way going into that target works.

Great.

Now, let's turn that on in Radiant Logic Global Sync in

production.

Let’s turn it off on MIM and let’s go to the next one on the list.

So, you have a periodic way to do this without disrupting your environment.

You have a chance of retiring legacy infrastructure.

You may be synchronizing data out of old AD domains that

are hanging on because there was some platform out there

that needed that data in a certain format that only

existed in that domain’s extended schema and you can’t

get rid of that domain from that merger because you can’t

get rid of that application and you don’t have time to rebuild the whole thing.

Great.

This is an excellent opportunity, because of the

transformation capabilities of Radiant, to be able to sunset that old domain.

Let's take the data that's in a current, modern directory,

an Entra or AD domain, transform that,

make it look like the data in the old domain's structure,

schema, and format,

and provide that information to the application so it can still

operate without you having to keep that old system going.

Redundant databases, homegrown solutions that you put

together, all these can be replaced now by Radiant Logic

Global Sync.

Highly scalable: up to a hundred topologies, source

and target configurations, per node, per deployment,

per infrastructure rollout.

So if you have more than that,

we can deploy multiple instances of Global Sync.

So you have the ability to scale up now much more than you

did with the MIM platform.

So, you can reach more now.

Your company has grown. Your targets have grown.

You have a much more complex environment.

You’re switching out an old identity provisioning platform

for a new one.

You have a lot of plumbing to do here.

You have a lot of capabilities in Radiant Logic to support that.

Also new sources and targets. You’ve got a Workday.

That’s a whole new set of identity data coming into the

system that is not the legacy platform you wired into MIM ten

years ago, which was PeopleSoft or

original Oracle or Siebel.

So now what do you do?

Well, with Radiant, you've got that modern technology.

It’s a great time to go back and say, hey,

let’s revisit all the really hard to integrate systems that

we really wanted to put into our platform and put those in the project.

Maybe I want to get data in and out of my PAM platform.

Maybe I want to do some work with my IGA system.

Maybe I have some zero trust capabilities where I need to

distribute data to policy endpoints to be able to

do decisions closer to the application to the user.

All these are opportunities now with Radiant Logic to take a

look at that and build now a modern project with a modern focus.

Let go of some of the legacy weight around your ankles

and roll out new systems here.

Rethink existing workflows, filters, joins,

transformations, clunky workarounds for limitations in

MIM.

This is a great time to get rid of those.

Extensive scripts and code and PowerShell that you wrote,

that somebody wrote, who's not here anymore,

that were done to solve one little problem.

Let’s see if we can streamline that,

that we can get this into the regular workflows in global synchronization.

I also have the ability to create sophisticated correlated

unions and joins of data.

I can build a unified profile inside Radiant One.

I can build now a global master user record

or an entitlement catalog where I can build a

unified source of all my identity data that then can be

monitored, managed, cleaned, and controlled,

and that data that can then be pushed out to where it needs to

be delivered or can be served up by Radiant One in a way

that now modernizes my whole infrastructure and future

proofs what I’m doing.

Identity data is the foundation for everything you do in the

identity space.

It is the fuel for your PAM platforms,

for your IDaaS platforms, for zero trust.

You need that data to be clean.

You need that data to be accurate.

You need that data to be available,

and that comes from Radiant One.

And then modern protocols, and SaaS and on-premise deployments:

you can run this on your own virtual machines on premise if you want to.

You can put it in your own, hosted cloud, AWS, Azure,

or you can run this up in our SaaS platform as a hosted service.

You have the ability now to bring this platform into the

rest of your infrastructure and world.

So just to give you an idea of where this all sits and how

Radiant One as an overall platform integrates with your environment.

As I mentioned earlier, identity data,

all these sources across the bottom here and these arrows go

both directions because every source can be a target.

I can pull information out of my HR system,

and I can have an update pushed back into HR.

Very commonly, someone is generated in HR,

that’s where their data originates,

but HR doesn't know what their email address is going to be when

the email address gets generated in Entra.

That information now can be written back to HR so that

profile in HR has accurate data.

When someone’s manager changes,

that information can be updated so that my manager in HR isn’t

my hiring manager from three years ago,

it’s actually my working manager today.

If it’s important to keep quality data synchronized,

you can do that here.
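The write-back pattern just described can be sketched in a few lines: HR originates the identity, the email address is minted downstream, and the generated value flows back so the HR profile stays accurate. The record shape and values are illustrative only.

```python
# Write-back sketch: values generated downstream (email, current
# manager) are synchronized back into the originating HR profile.
hr = {"u1": {"givenName": "Jane", "manager": "hiring-manager"}}

def write_back(profile_id, updates):
    """Apply downstream-generated attributes to the HR record."""
    hr[profile_id].update(updates)

# the directory mints the mailbox, then syncs it back to HR
write_back("u1", {"mail": "jane.doe@example.com"})
# a manager change in the directory also flows back, so HR shows the
# current working manager, not the hiring manager from years ago
write_back("u1", {"manager": "current-manager"})
```

After both updates, the HR profile reflects reality rather than its original snapshot.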

So remember, every source is also a target.

My targets are also all the consuming applications in the environments.

Every place I need to be able to do runtime authorization,

my zero trust policy based decisions.

This can be network segmentation.

This can be access to the network itself at the edge.

This can be application or database access.

I have PAM platforms that need accurate information updated in real time.

If someone gets an elevated privilege to their account and

it’s not obvious to PAM, it’s not gonna know that.

It’s not gonna manage that. It’s not gonna track that.

But the rules,

the policies inside GlobalSync can push that update right into

PAM and make sure you’re current.

Same thing with getting multiple identity sources into

IGA.

Having GlobalSync available to push that data into IGA makes

it a critical component to being able to make sure your

IGA system is deployed easily,

simply, with the least amount of effort and complexity possible.

You don’t want to do data integration at the functional layer.

You want to do data integration down here at the Radiant one layer.

Believe me, it saves tremendous amount of time, effort,

and it makes things much more flexible.

Everything on your access management side,

whether it’s Okta, Intra, Ping, ForgeRock,

whatever platform, Keycloak you’re using for access management,

that needs accurate identity data for authentication,

for authorization.

It needs to recognize changes when it happens.

That data needs to be pushed into those systems.

GlobalSync is the one to do it.

If it needs to be pulled in by the system as a query,

Radiant Logic is the source of that identity data.

And again, all the capabilities here with this centralized source of

auditing, of data hygiene, of data quality assurance,

of data cleanup is all available at this layer inside

the full Radiant One platform.

So you’re investing in a synchronization tool that’s

part of a bigger family that gives you a tremendous amount

of capability along with AI assisted analytics and

access review capabilities so that you can actually start

to deal with the scope and complexity of your identity

data with the power and control you’ve never had before.

So I’m going to open it up to questions to see if anybody has

any questions right now before we wrap up.

And I want to thank everyone again for joining me today.

Let me take a quick look here and see.

Got a couple questions that came in.

So can Radiant One consolidate multiple HR feeds

into a single stream for my IGA platform and create a single schema?

That’s a really classic challenge.

Because most IGA platforms would love to have one source

of identity data.

You have to do a tremendous amount of scripting and

programming inside those systems if you have multiple

sources of identity you need to merge together.

If you’ve got triangles and squares and half moons as your

sources, and you have a circle for your blocks as an IGA

platform, it’s very difficult to get that data together.

So, yes, this is an excellent use of Global Sync: to take that data

from those systems, to recognize a change in real

time, and to push that into the IGA system,

or to serve that data up in a way the IGA system

can consume when it runs its batch process on a regular basis.

So you have that capability of making that unified view available.

Can we set up a multi-master model in which I can sync

changes made from any of my twelve separate department

AD domains, so a change in one is updated in all?

Well, if you're still managing twelve AD domains,

I will tell you to stay on for the next slide here, because we

have some help for you there.

But, yes, in an environment like that,

if you have multiple sources of data that are really peers of

each other, I have a different department,

but everyone is in that directory and a change in one

department should be reflected in another,

you can set up that synchronization.

So you can have a topology where each department is a source,

every department is a target for change from any source,

and again we can build in looping controls to make sure

you don’t end up just going in circles chasing your own tail.
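The looping control mentioned above can be illustrated simply: tag each change with its originating domain and never echo it back to where it came from. This is a minimal sketch of the concept, not Radiant Logic's actual mechanism.

```python
# Loop-control sketch for peer-to-peer sync: fan a change out to every
# peer domain except its origin, so the topology never chases its tail.
def fan_out(change, origin, domains, apply):
    for domain in domains:
        if domain != origin:          # looping control: never echo back
            apply(domain, change)

applied = []
fan_out({"uid": "u1", "dept": "legal"}, origin="domA",
        domains=["domA", "domB", "domC"],
        apply=lambda dom, chg: applied.append(dom))
```

A change originating in `domA` reaches the other two domains and stops, rather than cycling back through its source.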

So that looks like the questions I have in place today.

Let me go ahead and then extend an invitation to you.

As I mentioned, if you’re managing multiple AD domains in twenty twenty five,

you want to join us for our next webinar, Thursday,

August twenty sixth, eleven am Pacific Time,

two pm Eastern Time.

Consolidate with confidence, reduce risk,

and enhance performance across your AD environment.

AD consolidation is still a thing.

A lot of us tackled it a while ago, or tried to, but I

run into organizations and prospects and customers

regularly that are still dealing with multiple AD domains.

I’m aware of a federal agency that has districts across the

US that are all very critical to doing the government’s business.

They all have their own AD infrastructure.

They’re trying to consolidate that now.

They're talking to Radiant Logic about that.

So if you have a challenge with multiple AD domains,

at least you need to get a virtual consolidation in place.

You need to do it without spending millions of dollars

and taking tens of years.

Join us to discover how Radiant One can simplify one of your

toughest IT challenges and accelerate your path to modernization,

as we have done for leading organizations across the industry.

So again, I want to say thank you to everyone who’s joined us today.

We are going to give you a set of the slides that I presented

here, this recording, and again our white paper

on MIM transformation, and we look forward to seeing you on

the twenty sixth for AD consolidation.

Thank you again.