[MUSIC PLAYING] Hi, I'm Julian Stephan. In today's session, I'll be chatting about Microsoft 365 DSC: how to set up a tenant from scratch, how to monitor and correct configuration drift, and some other scenarios that might be useful in your environment.
A little bit about me. I've got a 17-year career in the Microsoft productivity stack, focused on architecture, migrations, operations, and consulting. Currently, I'm a solutions architect for professional services in the delivery space at Quest Software. I've been with Quest for about a year and a half now; before that, I came over from the Quadrotech acquisition, where I was previously employed.
In my role today, I'm responsible for the architecture, design, and delivery of any flavor of Exchange migration, whether from on-premises to Office 365 or vice versa, as well as Active Directory and tenant-to-tenant migrations.
So on today's agenda, we're going to talk about what Microsoft 365 DSC is. We'll cover some desired state configuration concepts in general to look out for, and the building blocks of how it exists today.
We'll go into how to build and deploy a Microsoft 365 desired state configuration, and a future option of moving to a configuration-as-code model, using DevOps, GitHub, and similar tools to maintain your DSC model.
So what is Microsoft 365 DSC? If you've written PowerShell before, typically you'll have a set of variables filled with static information, a function that references those variables to do something, some logging at the end of your script or somewhere in the middle, and then a call to that function to execute it.
Desired state configuration is a little different. It's written in a declarative style: you declare what you want done in a particular workload. If you've written any C# or Java, the way variables are declared within the resources will feel similar to that model. You're declaring what you want the script to do, in ordered chunks, rather than scripting each step imperatively.
So instead of functions, you write configurations made up of resource instances. Your resources, which in Microsoft 365 DSC map to the Microsoft 365 workloads, are what get kept in the proper state: each resource contains the PowerShell code for its workload that puts the configuration you want in place and keeps it there.
These resources reside in PowerShell modules that you import onto the virtual or physical machine where you host the configuration. Under the covers, those modules connect to the same PowerShell endpoints you connect to today for the supported workloads: Exchange Online, SharePoint, Teams, Azure Active Directory, and so on.
The last piece of DSC in general is the Local Configuration Manager, or LCM for short. The LCM is an agent service built into PowerShell on Windows Server 2016 or higher.
It sits between your resources and the workloads: it connects to the workload modules and applies what you want to set within them. It runs on a polling interval that you configure on the VM where you set up your first configuration, and there are settings that control how it applies the configuration.
Once it applies a configuration, it can monitor for configuration drift and automatically re-apply the configuration if something is out of state. Alternatively, you can use an apply-and-monitor approach, where it applies your initial configuration and then writes any differences it finds to the event logs to alert you there.
You can also add some simple SMTP code, the standard PowerShell you would typically use to send email, to pull in those events and alert you, as an initial first step, of anything that has drifted.
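A minimal sketch of that alerting idea might look like the following. The SMTP server and addresses are placeholders, and it assumes the M365DSC event log exists on the machine hosting the configuration:

```powershell
# Hypothetical sketch: poll the M365DSC event log for recent drift events
# and mail them out. Server and addresses below are placeholder values.
$since  = (Get-Date).AddHours(-1)
$events = Get-WinEvent -FilterHashtable @{ LogName = 'M365DSC'; StartTime = $since } `
                       -ErrorAction SilentlyContinue

if ($events) {
    # Flatten each event into a timestamped line for the mail body.
    $body = ($events | ForEach-Object { $_.TimeCreated.ToString('u') + '  ' + $_.Message }) -join "`r`n"

    Send-MailMessage -SmtpServer 'smtp.contoso.com' `
                     -From 'dsc-monitor@contoso.com' `
                     -To   'm365-admins@contoso.com' `
                     -Subject 'Microsoft 365 DSC drift detected' `
                     -Body $body
}
```

You could schedule something like this with Task Scheduler at the same cadence as the LCM consistency check.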
So what are the use cases for Microsoft 365 DSC today? The large time savings on configuration, once implemented, is probably the major benefit of this model. You could apply it to a brand-new tenant, to a dev/test tenant that you're setting up from scratch to experiment with configuration settings, and I've also seen it used in tenant-to-tenant migration scenarios.
There are a couple of articles on the practical365.com website that go over certain use cases for DSC: one on tenant-to-tenant migrations, written by myself, and another by Sean McAvinue, one of the other Microsoft MVPs in the community, with his take on how you would compare and contrast two tenant configurations.
So you can take this and apply it, like I said, to a new tenant that's being stood up, or to an existing tenant configuration.
One thing to keep in mind if you've coded PowerShell in the past: PowerShell is a powerful language, and when you script something, it does exactly what you tell it to do.
So if you're going down this route with an existing tenant configuration in place, and you're taking settings from a source tenant and applying them to a dev/test, production, or target tenant, whatever the use case may be, make sure you don't overwrite settings you already have in place that are working for you.
Some organizations have certain policies in their source tenant that they want to move to the target tenant, and that's great. Just make sure, when you're reviewing your configuration, that it contains only things you want to set, not things you would overwrite by mistake, leaving settings wrong or opening up authentication where it doesn't need to be opened up.
This will also simplify your processes and runbooks. Large organizations especially, if they've had a Microsoft 365 tenant set up for a long time, have gone through multiple iterations of change controls to change certain settings in the tenant.
And most organizations will have a repository of scripts saved on some tool server or in a directory somewhere, hopefully backed up, that they reference on occasion when they need to make a change.
Moving to a DSC model, you have all of those settings in one place. How you organize it depends on how much integration you want across the tenant. There are also other factors: a small or medium organization may have a small group of administrators responsible for multiple workloads across the entire tenant, while a large enterprise may be siloed, with each team responsible for only a certain workload.
Either way, it gives you a central location for storing the administration and configuration of each workload, and it saves on the rewrites you'd otherwise do going forward, hunting down a script that does one thing, when you can have it all saved in one document and central source.
It also simplifies testing out your changes and tracking issues before your production rollouts. As I said earlier, you can use an apply-and-monitor approach: apply a configuration in your dev and test environment, leave it running, monitor for any particular changes, and see how the environment reacts before deciding whether it's something you want to set.
So, how to build and deploy an initial Microsoft 365 DSC configuration. For prerequisites, there's a PowerShell dependency: the Microsoft365DSC module is only supported on PowerShell versions 5.1 and 7.1. The reason is that it leverages the desired state configuration feature built into Windows, initially introduced in Windows Server 2012.
Because the supported PowerShell versions are 5.1 and 7.1, the VM where you store these configurations has to be Windows Server 2016 or higher, which ships with PowerShell 5.1 and can be upgraded to 7.1. Newer PowerShell versions are not supported.
So PowerShell 7.2, the current release, is not supported. The reason is that Microsoft made a directional decision to separate the DSC engine from the OS and make it open source, so that more folks in the community, along with the Graph API team that manages this particular module, can help make changes.
The roadmap is to support PowerShell 7.2 and later versions, where DSC is decoupled from the server OS, by the end of this year.
From a permissions standpoint, you have two or three different models you can go with. At minimum, you need valid credentials with the correct permissions to the supported Microsoft 365 workloads from which you want to extract and compile settings. You can use the user-credential approach, just a basic username and password, which supports the most workloads today.
But it's not necessarily the most secure way of doing things. Microsoft recommends the Azure application registration approach from Azure Active Directory, an application-only model where you grant only the specific permissions, from the Graph API or the other modules, that you need to apply your configuration.
One thing to watch out for from a configuration standpoint, seen especially in large organizations deploying multiple resources across multiple workloads, is granular permissions accumulating into higher permissions than might be necessary, which need to be watched.
What I've seen in configurations like this, when you move to a larger model supporting all of the workloads with a lot of resources in each, is that the configuration can get really large. Depending on its size, it can sometimes take 45 to 60 minutes to run through everything you have configured until it completes.
And since you can only use one app registration per tenant, when you add all of those API permissions to it, you end up with a bloated registration that has far more rights than your security folks might want it to have. That's something to look out for as you go on your DSC journey if you decide to go down this route.
Best practice for security from an authentication standpoint, when using an Azure application ID, is to choose between certificate-based authentication and a client secret. A certificate is the more secure option: a secret string can't be hashed out anywhere in your code, so it stays visible there.
When you save, compile, and build the configuration, a client secret remains viewable in the code; you can't hash it out. With a certificate thumbprint, you can. When the engine compiles the configuration, if you're protecting authentication with a certificate, it encrypts the thumbprint in the credentials, so you can't even see it in the compiled configuration.
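Setting up a certificate for app-only auth might be sketched like this. The subject name, file path, and validity period are placeholder choices of mine:

```powershell
# Hypothetical sketch: create a self-signed certificate for app-only
# authentication and capture its thumbprint. Subject and validity are
# placeholders; adjust to your own standards.
$cert = New-SelfSignedCertificate -Subject 'CN=M365DSC-Auth' `
                                  -CertStoreLocation 'Cert:\LocalMachine\My' `
                                  -KeyExportPolicy Exportable `
                                  -KeySpec Signature `
                                  -NotAfter (Get-Date).AddYears(2)

# Export the public key so it can be uploaded to the app registration
# in Azure AD under Certificates & secrets.
Export-Certificate -Cert $cert -FilePath 'C:\DSC\M365DSC-Auth.cer'

# This thumbprint is what you reference in your DSC configuration data.
$cert.Thumbprint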
This is a list, current as of today, of all the workloads in a Microsoft 365 tenant that are supported, shown on the left, along with the respective PowerShell module you'll need for each workload.
One thing to point out: Exchange Online obviously needs the Exchange Online module, OneDrive and SharePoint Online need the PnP PowerShell module, and the Security and Compliance Center, which hasn't been rebranded as Microsoft Purview from a PowerShell standpoint as of yet, still needs an IPPS session.
For some of the other workloads, it uses Graph to connect in and pull or set whatever data you might need. The Graph API team is the product group responsible for the guidance, and for the Microsoft 365 DSC module as a whole.
Their plan and strategy is reflected in the columns for each workload you see here: the Credential column means the workload accepts username and password, and the Service Principal column is your app registration, as I mentioned earlier, where you can use a certificate thumbprint, a path to a certificate, or an application secret.
The feature on the roadmap from an authentication standpoint, which the Graph API product team that controls the module is working on, is to move any modules that don't support modern authentication, a certificate thumbprint, or a client secret today over to the Graph API, so that you can use the correct authentication method for each.
For initial setup of the module, you just run the cmdlet Install-Module Microsoft365DSC in PowerShell, and you can use -Force to pull the module down from the PowerShell Gallery.
Once you've done that, you run the Update-M365DSCDependencies cmdlet. That pulls down all the dependent modules you'll need, on top of the DSC module you just downloaded, to connect to all the workloads you might need in a tenant.
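Put together, those two setup steps look something like this; the -AllowClobber switch is an optional convenience I've added for machines with older module versions:

```powershell
# Pull the Microsoft365DSC module from the PowerShell Gallery.
Install-Module Microsoft365DSC -Force -AllowClobber

# Pull the dependent workload modules (Exchange Online, PnP,
# Microsoft Graph, etc.) at the versions this release expects.
Update-M365DSCDependencies
```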
The screenshot on the right is an app registration I've created, and it shows a lot of the Graph API rights, across multiple workloads, that you would need in order to set your proper configuration.
Now, for exporting your configuration. As you'll see on the right, there's a link above to export.microsoft365dsc.com. That's a web UI you can go to and log in with your credentials to your tenant.
Once you authenticate, it brings up a page showing all the supported workloads you can put in scope, and all the particular resources within each workload, so you can pick whether to export them and use them in your configuration or not.
At the top, you'll see the default credential options: a certificate, a username and password, or a secret. In this example, I'm using an Azure application ID along with a certificate thumbprint to protect my data, so you just fill out those three variables at the top for your application and certificate auth.
The next cmdlet below is where the magic happens: the Export-M365DSCConfiguration command line. It lists all the components from the workloads that, in this example at least, I selected to export from the tenant, and at the end you see the application ID, certificate thumbprint, and tenant ID.
When you run this cmdlet, you get two files. You get a PS1 file that exports all of the information from the particular tenant for the components you specified. And there's also a configuration data file, with a PSD1 extension, that hosts the variables, meaning the application ID, certificate thumbprint, and tenant ID, which you can simply change in that model.
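An export along these lines might look like the following; the component names, GUIDs, and thumbprint are all placeholder values of mine:

```powershell
# Hypothetical sketch: export selected Exchange Online and Azure AD
# resources using app-only certificate authentication. The IDs and
# thumbprint below are placeholders.
Export-M365DSCConfiguration -Components @('EXOAcceptedDomain', 'EXOSharingPolicy', 'AADApplication') `
                            -ApplicationId '00000000-0000-0000-0000-000000000000' `
                            -CertificateThumbprint 'AB12CD34EF56AB12CD34EF56AB12CD34EF56AB12' `
                            -TenantId 'contoso.onmicrosoft.com'
```

Running this drops the M365TenantConfig.ps1 and the PSD1 configuration data file into the output folder.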
To start your configuration, as you'll see in this first screenshot, after you've run that export cmdlet, you just run the M365TenantConfig.ps1 that's in the folder. That creates what's called a localhost.mof file. The MOF file contains your fully built configuration from your PS1 file; if it's produced, it means no errors were found when building the configuration.
Then, to start importing that configuration, you run Start-DscConfiguration, as you see here, with the path to your directory, and you specify the -Wait, -Verbose, and -Force parameters.
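Those two steps might look like this; the folder path is a placeholder matching the default export folder name:

```powershell
# Compile the exported configuration into a MOF file (localhost.mof).
.\M365TenantConfig.ps1

# Push the compiled configuration to the LCM. -Wait blocks until done,
# -Verbose streams progress, -Force replaces any pending configuration.
Start-DscConfiguration -Path .\M365TenantConfig -Wait -Verbose -Force
```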
You'll then see yellow verbose text. If you see the yellow verbose text and no red errors, that means it's compiling the configuration, calling the Local Configuration Manager I mentioned earlier, and importing those settings into the LCM.
By default, once it executes that configuration, your LCM will enforce it and try to auto-apply any changes detected from your current configuration every 15 minutes. You can change the configuration mode to apply and monitor, as I mentioned earlier, or to monitor only, if you just want to monitor any changes that come up and not actually apply them until you've reviewed them.
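Changing that LCM behavior is done through a meta-configuration; a minimal sketch, assuming you want apply-and-monitor instead of the default auto-correct, could be:

```powershell
# Meta-configuration: tell the LCM to apply once, then only report drift.
[DSCLocalConfigurationManager()]
configuration LcmSettings
{
    Node localhost
    {
        Settings
        {
            ConfigurationMode              = 'ApplyAndMonitor'
            ConfigurationModeFrequencyMins = 15
            RefreshMode                    = 'Push'
        }
    }
}

LcmSettings                                            # emits LcmSettings\localhost.meta.mof
Set-DscLocalConfigurationManager -Path .\LcmSettings -Verbose
```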
From an import-to-another-tenant standpoint, there's the scenario where I want to take a configuration from my source tenant, export it, and import it into a dev tenant for testing, or import the whole thing into a production tenant.
To do that as a one-time configuration, and to get around the default configuration behavior, you run the two cmdlets in the bottom screenshot here: Stop-DscConfiguration -Force, which stops any resource deployment of what's in that MOF file I mentioned earlier, and Remove-DscConfigurationDocument, which removes the current configuration hosted in the LCM after it's been processed.
Once those are done, after your configuration has been compiled and deployed to your tenant, the LCM won't keep trying to apply your changes every 15 minutes, or even monitor them. If you just want a one-time push and to work with what's there after that, you follow those two steps below.
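That one-time-push cleanup could be sketched as follows; clearing all three stages is a cautious assumption of mine, the screenshot may only remove the current one:

```powershell
# Stop any configuration run currently in progress on the LCM.
Stop-DscConfiguration -Force

# Remove the stored configuration documents so the LCM has nothing
# left to re-apply or monitor on its next consistency check.
Remove-DscConfigurationDocument -Stage Current, Pending, Previous -Force
```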
From a monitoring standpoint, the screenshots here show an M365DSC event log that's created on the VM or physical server hosting your DSC configuration. You'll see these events, generated by each resource you've deployed from the supported workloads in the tenant.
Going through this first screenshot: it's looking for an Azure AD application, so this call was made to pull any Azure AD applications configured in your tenant. Then it lists the parameters you want in your desired state; where the parameter name says Ensure, it wants to ensure that resource is in place.
The current value, what was pulled from the tenant on the last check, is Absent, and the desired value, what's actually set in your configuration, is Present. So if you had this set to apply and auto-correct, it would take that value and make sure the application is actually present in the tenant if it's not there.
On the right is an export of some of the settings from your Local Configuration Manager, as I mentioned earlier. Things to look for: the configuration mode, where the default is apply and auto-correct.
The other two modes, as I mentioned, are apply and monitor, and monitor only. Right under that, the frequency in minutes is set to 15. Below that you'll see the refresh mode, push style or pull style; in Microsoft 365 DSC, when running from a machine, it's push mode only, and the refresh frequency on that is every 30 minutes.
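You can pull these same settings straight from PowerShell rather than the event log export; a quick check might look like:

```powershell
# Read the LCM settings on the local machine and show the fields
# discussed above: mode, consistency-check interval, and refresh mode.
Get-DscLocalConfigurationManager |
    Select-Object ConfigurationMode,
                  ConfigurationModeFrequencyMins,
                  RefreshMode,
                  RefreshFrequencyMins
```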
For cloning a tenant configuration, you can use this in a tenant-to-tenant migration scenario. Like I said, there's a Practical 365 article I've written that goes into detail on how to do this for a tenant-to-tenant migration; these are just the high-level steps.
What you do is install the DSC module I talked about earlier, and then the dependent workload modules needed to connect to the PowerShell instances of the supported workloads you want to use. You run Export-M365DSCConfiguration against the source tenant, using either your source credentials, if you're using credentials, or your source application ID with either the certificate applied on the source application or its client secret, whichever you choose.
Once that's exported, you run the generated M365TenantConfig PS1 file, which creates the MOF file containing the correctly compiled configuration, ready for import into a tenant.
Then, in your configuration data file, you change the source application ID to the application ID you used for the registration in the target. So you need a source app ID with those rights, and you also need one in the target tenant with those same rights if you want to import all the same settings; you just swap in the target app ID.
If you're using a secret or a certificate thumbprint, you just change the secret value, if that's what's configured on your Azure app, or the thumbprint value, whichever you're using. Save those changes, run Start-DscConfiguration, and that pushes it up to your target tenant. You've exported what you have from the source, and it imports right into the target tenant of your choice.
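End to end, the clone flow above might be sketched like this; the component list, variables, and tenant names are placeholders of mine:

```powershell
# 1. Export the source tenant (placeholder components, IDs, thumbprint).
Export-M365DSCConfiguration -Components @('EXOAcceptedDomain', 'TeamsMeetingPolicy') `
                            -ApplicationId $sourceAppId `
                            -CertificateThumbprint $sourceThumbprint `
                            -TenantId 'source.onmicrosoft.com'

# 2. In the exported PSD1 configuration data file, swap the source
#    ApplicationId, thumbprint (or secret), and TenantId for the
#    target tenant's values, then recompile:
.\M365TenantConfig.ps1

# 3. Push the compiled MOF to the target tenant.
Start-DscConfiguration -Path .\M365TenantConfig -Wait -Verbose -Force
```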
Configuration as code. I would almost call this the pull server 2.0. For those of you who have deployed a pull server in the past: in on-premises DSC, instead of a push model you can use what's called a pull model.
In a pull model, you host the configuration you've put together behind a listener on an IIS website that you create, and that website handles communication between the server hosting your configuration and whichever servers you want to apply that configuration to.
You can also set up high availability: you can use multiple pull servers behind a load balancer, removing the single point of failure you would have with a single server.
Where that evolves in the Microsoft 365 world is the config-as-code model. It replicates that pull server approach by combining Microsoft 365 DSC with either Azure DevOps or Git, plus an Azure Key Vault.
What happens is you go in and create an Azure DevOps instance. It doesn't have to be public; you can keep it private within your particular tenant, with conditional access so the instance is only reachable from within your tenant, so you don't have to publish it out to the world.
You create a repo, a Git repository, and that repository hosts all the files you need for your configuration: your configuration PS1 file, your configuration data PSD1 file, and any certificates you might need if you're using certificate-based auth. That all gets saved in the repo.
From the repo, you create two different pipelines. The first is an export pipeline: you go into your config file and make whatever changes you want to make.
The export pipeline then pulls in those changes automatically. You can set a trigger to kick off the pipeline manually to pull in the latest configuration, or you can set a trigger on the pipeline to run automatically as soon as it detects a change.
Once it gets through the export pipeline, it sends those changes to the deploy pipeline. In the deploy pipeline, you can do the same thing: have it go straight through, taking the changes from the export and deploying them right to your tenant.
Or you can take a review approach, where it sends a notification to the folks on the deploy pipeline that there's a potential release available with configuration changes to look at. The DevOps folks can then go in, approve those changes, and send them off to deploy to the tenant.
All of your credentials, certificates, and app secrets are saved in an Azure Key Vault instance. You create the Key Vault, integrate it into the pipelines, and all of your sensitive information stays within the Key Vault itself.
So, a scenario to walk through from a visual standpoint. Start in the middle, where it says admin1. Admin1 makes changes to what we're calling the master DSC config file, which is generated from that PS1 file.
Once those changes are made, you do a commit and sync in DevOps, which takes it out to that first repository. Once it hits that first repository, you can have your trigger in place to create the pull request, which the build pipeline will pick up and check out.
You can either review there, or send it off to your main repository for admins to review the code before you actually deploy those changes; or, if you don't want any review, an automatic trigger can send it straight to the build pipeline.
When it hits the build pipeline, it follows the same process I explained earlier: it effectively runs your Start-DscConfiguration in the build pipeline and compiles the MOF. It gets your credentials from the Key Vault, whether you're using a username and password, a client secret, or certificate authentication, and sends it off to the release pipeline.
Then it either deploys to your tenant automatically, if that's what you want, or you can have a final check-in where the admins validate and approve that the changes look good before the final push to the tenant.
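What the release pipeline actually runs might boil down to a short script like this; the vault name, secret name, and the assumption that the PSD1 references the thumbprint are all mine, and it assumes the Az modules are installed with the pipeline identity granted Key Vault access:

```powershell
# Hypothetical release-pipeline step: fetch the certificate thumbprint
# from Key Vault, recompile the committed config, and push it.
# 'kv-m365dsc' and 'DscCertThumbprint' are placeholder names.
$thumbprint = Get-AzKeyVaultSecret -VaultName 'kv-m365dsc' `
                                   -Name 'DscCertThumbprint' -AsPlainText

# The configuration data file is assumed to consume $thumbprint;
# recompile the configuration checked out from the repo.
.\M365TenantConfig.ps1

# Push the compiled configuration to the tenant.
Start-DscConfiguration -Path .\M365TenantConfig -Wait -Verbose -Force
```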
So thank you for watching today's presentation. For more information on future developments coming to Microsoft 365 Desired State Configuration, check out the Microsoft365DSC.com website, and also check out future Practical 365 articles on the practical365.com website written by any of our authors. Thank you very much.