[MUSIC PLAYING] All right, good afternoon. Welcome, everybody. My name is Thijs and I'm going to be walking you through a session called Notes from the Field: Microsoft Sentinel in Real Life. The goal of this session is really to give a hands-on view of what Microsoft Sentinel is, and not just to tell you what you'd learn in a typical Microsoft talk, but to showcase from real-world experience how it's done.
Just as a reminder, I am currently live in the session as well and I'm ready to answer questions in the Q&A. And be sure to jump into the live Q&A after the session as well, where we can talk about Sentinel in more detail.
So maybe just a small introduction from my side. My name is Thijs. I am a Microsoft MVP located in Belgium and I currently work for a company called The Collective Consulting.
This is a Microsoft partner focused on security. My role within The Collective is team leader of the SOC. So our security operations center is something I manage, and day in and day out, I work with the SOC. And the SOC is entirely based on top of Microsoft Sentinel. That's why I can showcase some really nice real-world examples of how Sentinel can be used in a SOC.
You can find me on Twitter and LinkedIn, but I'm also writing a book with a couple of other MVPs, and I blog on my own blog, but also [INAUDIBLE] on Practical 365. So be sure to check those out as well.
So, a small overview of what we're going to be doing today. We're going to first have a small overview of the architecture of Microsoft Sentinel. What does it look like? What are the different components? And then I'm going to dive right in with some tips and tricks on data ingestion and how to get data into Microsoft Sentinel.
We're going to see how to create incidents, how these incidents end up in Microsoft Sentinel, and also how to do incident response on those, with some small tips and tricks from our own SOC. Then we'll look into automating that incident response, using playbooks and logic apps to automate some of the things you do manually. And finally, I'm going to show a sample architecture of a real-world example of Microsoft Sentinel, really to help you see how Microsoft Sentinel can be used in a company as a real SIEM.
So I said SIEM. It's important to distinguish between SIEM and SOAR. A SIEM is a security information and event management system, and a SOAR is a security orchestration, automation, and response product.
What's the difference between the two? Well, into the SIEM, we'll push all of the logs that we have in the environment. All of your security logs will end up in the SIEM, where we can correlate them, query them, and have an overview of the entire environment.
A SOAR is where we will do incident response, but also automation of that incident response. So not only manual actions, but also automating those actions. It is important to note that some products are a SIEM, a SOAR, or both. And Microsoft Sentinel, in our case, is both.
So within this session, you will see both pushing data into the SIEM, getting logs into the Sentinel system, and using Microsoft Sentinel to respond to those incidents and to automate the incident response. That's just an important thing to know: Sentinel is both a SIEM and a SOAR product. Now let's see what it looks like. It's important to know that Microsoft Sentinel is a cloud-native SIEM, entirely based on top of existing Azure resources.
So at the heart of Microsoft Sentinel, what do we have? We have Log Analytics. You might know Log Analytics as an existing Azure resource; it's been used, for example, for Azure virtual machines to dump their performance logs into, but also in Application Insights to get insights into the exceptions in an environment. Log Analytics is where the logging lives, and Microsoft Sentinel is a solution on top of Log Analytics. So it builds on top of that.
What does it build on top of that? First of all, we have data collection. Through data collection, we will get data from outside, from different sources, and push that into Sentinel. That's the SIEM part, of course, getting all the data into a central place.
Then we also have detection and investigation. That's still the SIEM part. We'll look into creating rules to create incidents and alerts, which will notify the SOC team so they can respond. And this is where the SOAR comes in: in the investigation.
This investigation can be: OK, we see malicious activity in the data. Let's hand this over to the SOC and the SOC will investigate it. They can look into it and provide a verdict: is this truly malicious activity or is it not? And then finally, we can respond with an automated response through Azure Logic Apps, which I'll talk about in a bit.
So here we can see an overview: on the left, data collection, then the incident, the incident response, and then the automation. These are also the four parts we'll be returning to throughout this presentation. So first of all, data sources.
It is important to know that within Microsoft Sentinel, we have lots of different kinds of data sources. Which kind of data source you use depends on what type of data you want to put in, and it will also dictate, one, the cost and, two, how easy it is to integrate. And there is a big difference between first-party data sources, which are natively integrated, and the rest.
These are mainly Microsoft cloud sources like Azure AD, Office 365, and Microsoft Defender, but also some third-party products which have a native integration, which means that through the portal, we can click a few buttons and immediately data starts flowing in. A common misconception is that Microsoft Sentinel is only based on Microsoft logs, and only Microsoft cloud logs, but that's not true. Microsoft Sentinel is essentially a really open system with tons of possibilities, and it's not limited to what Microsoft provides you. And that's, for me, a great advantage: we're not limited to a fixed set of data connectors, but we can expand on it if we like.
How do we do that? Well, one option is CEF and syslog support. Syslog and CEF, you probably know them. CEF stands for Common Event Format, and it's used to get data from your firewalls, your switches, your access points, and put it into a central place, which can be a Linux box. And that Linux box will forward those logs into Microsoft Sentinel. That's something [INAUDIBLE] used for almost every other SIEM as well, not just Microsoft Sentinel. It's a really well-known protocol to send those logs into the cloud.
Now a really important one, at the bottom, is the API. Microsoft Sentinel is actually one of the products with a ton of API support and a ton of possibilities. What does this mean? We can use an API to ingest data into our SIEM. And that's great because, again, we can build custom integrations. If you have a certain data source which might not be natively supported by Microsoft, you can build that integration yourself.
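To make this concrete: data ingested through the API lands in a custom table, and once it's there you can query it with KQL like any built-in table. The sketch below assumes a hypothetical custom table called MyVendor_CL; with the classic custom logs ingestion API, custom tables get a _CL suffix and custom string columns get an _s suffix.

```kusto
// Querying API-ingested data. "MyVendor_CL" and its columns are hypothetical
// names for illustration; with the classic custom logs ingestion API, custom
// tables end in _CL and custom string columns end in _s.
MyVendor_CL
| where TimeGenerated > ago(24h)
| project TimeGenerated, Computer, EventMessage_s
| take 100
```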
Now let's take a quick dive into the portal to showcase what this looks like within Sentinel itself. If we open up the Microsoft Azure portal, we can search for Sentinel, and you will find the Sentinel workspaces you have at the moment. If you want, it's really easy to create a new one.
Creating a new one doesn't cost you any money. We'll touch upon the cost in the next slide, but creating one is completely free. We can select an existing one, and here we are greeted with the overview dashboard of Microsoft Sentinel, where we have an overview of what's happening in the environment.
Right off the bat, you have this overview screen. I personally never use it, just because it doesn't provide any useful insights to me. Yes, it has the number of events, alerts, and incidents within the environment, but for a SOC analyst, this doesn't really provide insightful information you might use day in, day out. So I usually skip this page and go to the page I want, which for us is the data connectors.
So here we have the data connector overview, and we immediately have a nice view of the connectors available and the ones that are currently in use. Right off the bat, you can see there are currently 124 data connectors available and supported by Microsoft. We can look into some examples.
You can see some of them are from Microsoft, like Azure AD, but some are not from Microsoft, like Aruba ClearPass from HP. So we can take an example: for Azure AD, we can click on the data connector and open the connector page. This connector page showcases how easy it is to implement those first-party logs, where we don't have to create a complex setup with authentication, API keys, or a separate syslog server. It's all natively integrated.
So with a few clicks of a button, we can select the types of logs we want, like this, and click Apply. And this is really easy. This is also a really nice feature of Sentinel: native integration with the different Microsoft products. So without much effort, you can easily integrate the Microsoft cloud products with Sentinel.
Now as I said, it's not only Microsoft. We can, for example, look into your favorite firewall, let's say Fortinet. And here we can see a native integration between the two as well. We can open the connector page, and here you will see it's not click, click, click and we're through.
Here, we have to set up a Linux machine as a log collector, configure it, and configure your Fortinet to send the logs from your firewall to the Linux box, and the Linux box will then send them into Microsoft Sentinel. This showcases how there are different kinds of connectors, which can differ in the setup you do. But once it's set up, data will start flowing into Microsoft Sentinel.
Now right off the bat, something really confusing is the Content Hub, and you can see it here at the top. It says, more content at the Content Hub. This is a move Microsoft has been making in the past few months: they're also adding lots of content there. Within the Content Hub, we have different solutions.
A solution is a combination of different resources within Microsoft Sentinel, like a data connector, an incident rule, or an automation. And while scrolling through them, you will find that we have data connectors in here as well, as seen right here. The confusing part is that there are data connectors in the Content Hub which are not available in the data connectors list. So if you're searching for a new data connector, you always have to search both the data connector list and the Content Hub.
But as a nice example of how open the system is: if we search for LastPass, this is a data connector I made, which gets information from the cloud password manager through an API and pushes it into Microsoft Sentinel. This is something I made in a couple of days, just to showcase how easy it is to build content yourself on top of Microsoft Sentinel and use it. And that's, I think, one of the biggest advantages. You're not limited to what Microsoft provides you. If you have certain log sets, you can create a data connector yourself and push them in.
What's the difference between a built-in connector and something you create yourself? The main factor is the support. With a natively available data connector, you have support from either Microsoft or the vendor. Of course, if you build it yourself, you don't have that support.
Now of course, one thing to look out for is which data connectors you enable, because cost in Microsoft Sentinel is based on the data you put into it. If we go to the next slide, you will see that we pay for ingestion but also retention. Ingestion means that for each megabyte we push into Microsoft Sentinel, we will pay. But also for each megabyte we retain longer than 90 days, we will pay.
And here you can see you need to be really careful about the cost of Microsoft Sentinel, because for each megabyte you push into it, you will pay. This is different from other SIEMs. With traditional on-premises SIEMs, you buy a license in bulk and you just throw all of it in. That's one of the pitfalls of Microsoft Sentinel: if you just push all the data in there, it's going to be really expensive.
So cost is something you need to monitor closely, because otherwise it might explode and you'll have a really big bill. In a little bit, I will also provide some tips and tricks on how to prioritize data connectors to make sure you don't blow up the cost. But ingestion and retention, those are the two main parts of your Microsoft Sentinel bill.
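As a sketch of how you can keep an eye on those two cost drivers yourself, the built-in Usage table in Log Analytics records billable ingestion per table. Quantity is reported in megabytes, so dividing by 1,000 gives an approximation in gigabytes:

```kusto
// Billable ingestion per table over the last 30 days.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by DataType
| order by IngestedGB desc
```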
We also have an archive. We have a set of default logs within Sentinel, and these are available for everything: you can query them, create incidents based on them, everything is in real time. With archive logs, you can push logs into the archive after x amount of days, and this means they will be in cold storage.
You won't be able to actively query them, but you pay much less for the retention. This is a great way to save money with Microsoft Sentinel. If you have logs you keep for archival purposes and don't need actively, you can push those logs into the archive, and your bill will be much lower compared to keeping them in the default logs.
Then finally, we also have automation. Automation is built on top of another existing Azure resource called Azure Logic Apps, and here we pay for every run of a Logic App. So each run you do with automation will be added to your bill. Now, the really important note to make here is that every run costs something like 0.0002 cents, so it is negligible.
We pay almost no money for the automation, but it's important to know that it's billed separately, and you will see it separately on the bill. So here you can see we have four main pillars that Sentinel cost is calculated upon. One good example: we don't pay for querying.
So just creating a query and getting data out of the system, you won't be billed for. That is a really important note. While Microsoft Sentinel is billed for usage, you don't pay for each query you make. So don't worry about how many times you're querying the data. That has no effect at all on your bill.
So we have seen data connectors, and we have seen the cost. How do we get started? Well, I have a few tips I use personally with all of our customers getting started with Microsoft Sentinel as a SIEM.
Because like I said before, if you're going to use Microsoft Sentinel like an on-premises SIEM and push all of the data into it, your bill will be really, really high. So you need to think about what data you want to put in first, and for how long, so you know you're getting value for the money you're paying for Sentinel.
So where do I get started? Well, for every customer, I always start with the Microsoft cloud logs. Most of my customers have an E5 security license, which means that if they deploy the entire Microsoft stack, they will be covered for something like 90% of the visibility within the environment.
My reason for choosing those Microsoft security logs is twofold. One, I get a really big set of visibility through one set of products. Two, Microsoft likes you to use Microsoft Sentinel as your SIEM and Microsoft Defender as your XDR, so they will provide you benefits and free ingestion of data when you use both.
So that's why it's interesting to start with those Microsoft logs: Microsoft provides a hefty discount when you ingest them. That's why I say get started with the Microsoft cloud logs. It makes sure you can start small without a hefty bill, but you can explore the product, see how it goes, and see how you like it.
After you have done the Microsoft Cloud Logs, you can look into other security products. Not everything is Microsoft, of course. You will have things like a firewall, a proxy. Those things are not within a Microsoft product at all, but it's still valuable to bring them into a SIEM to have a single pane of glass.
So getting those logs into Sentinel is the second thing I do, because they still provide a lot of value to the SOC. And then it's looking into: what else do I have? What other products do I have? You can look into on-premises logs, like your switches or your VMware cluster, and get those logs in, or software-as-a-service products like LastPass or Salesforce, and get those audit logs into Sentinel.
But one thing I like to do there is check the data and see what kind of data I have. Let's take the firewall, for example. Within the firewall, you have different types of logs. You have an admin log showing who has disabled this rule or removed a certain firewall rule, but you also have NetFlow logs, where each packet going in and out of the firewall is logged. You can choose to get both of those logs into Sentinel.
Now of course, NetFlow logs are really, really noisy. They will have an extremely big size, which will blow up the cost, and it might not be interesting to push them into Sentinel because you might not need them within the SIEM. That's where prefiltering comes into play, and that's the final tip I can give you: through prefiltering, we can say, OK, which data do I want to get into Sentinel and which do I not want to copy into Sentinel?
And it's important that you really think about what data you want in the system, because not all data is valuable. If you're just going to be putting the data into it for the sake of it, then it doesn't make sense. Because again, Sentinel is billed for usage, so if you're going to push a load of data into it, your bill is going to explode and you're not going to have a great time.
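One way to implement that prefiltering is with ingestion-time transformations, where a small KQL snippet in a data collection rule drops rows before they are ingested and billed. A minimal sketch, assuming a hypothetical SeverityLevel column on the incoming stream:

```kusto
// Transformation KQL: "source" refers to the incoming data stream in the
// data collection rule. SeverityLevel is a hypothetical column; drop
// informational noise before it is ingested and billed.
source
| where SeverityLevel != "Informational"
```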
Now how can we manage that? If we're talking about managing costs and managing the amount of data you put in, we have two ways. One of them is that we can set up daily ingestion limits.
This means we limit the amount of data which is put into Microsoft Sentinel each day. So we can cap the number of gigabytes being sent. The other one is that we can reactively monitor the data being put in, and if it reaches a certain threshold, we can send an alert, like an email or something.
Now, I am totally not a fan of using daily ingestion limits, and I'll explain why. If you set up a daily ingestion limit, say 500 megabytes, and suddenly you get a really big spike, which could be due to an active attack, for example, then the moment the limit is reached, data will stop flowing into Microsoft Sentinel, and you won't get any data into your SIEM. For the SOC, that can be catastrophic, because you might be missing really, really important data.
So just because I don't want to miss that important data, I won't set up daily ingestion limits, but I will monitor what kind of data I'm getting. And if a certain data connector is pushing too much data into Sentinel, I will cut the ingestion, but then I am in control, and I'm making sure I don't impact the SOC in terms of critical events coming into the system.
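That reactive monitoring can itself be a scheduled query on the Usage table. A sketch, with a placeholder threshold of 5 GB per day that you would tune to your own baseline:

```kusto
// Fire an alert when billable ingestion over the last day exceeds a threshold.
Usage
| where TimeGenerated > ago(1d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000
| where IngestedGB > 5 // placeholder threshold, tune to your baseline
```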
Now, once we get the logs into Sentinel, it's time to create incidents. Within Microsoft Sentinel, an incident is something the SOC will investigate, and it is surfaced in a couple of places within the UI, which I'll touch upon later. But first, how do we get incidents? Well, we have three different ways.
One of them is through analytic rules. An analytic rule is a way to run a certain query on top of the data based on certain conditions, and if the conditions are met, we can create an incident. Those analytic rules are based on KQL, the Kusto Query Language, which is the main language of Microsoft Sentinel, and I'll show you a quick example in a bit.
The second one is alert and incident synchronization. This means we take the incidents from other products and push them into Microsoft Sentinel. A great example is Microsoft Defender. Microsoft Defender creates its own incidents, and we can choose to synchronize the incidents from Microsoft Defender into Microsoft Sentinel. This doesn't mean that Microsoft Sentinel creates them itself; we're just syncing them to have a single pane of glass.
And then the last one is Fusion. Fusion is the machine learning algorithm from Microsoft that creates incidents based on the data you put into it. What does this look like? Well, Microsoft markets it as: we get a bunch of data, we provide analytics, we see what is normal and what is not normal, and based on certain data points, an incident is created.
Now unfortunately, while the marketing speak sounds great, push data in and get a truthful incident out, the truth is that in the three years I've been using Sentinel, I have never seen a valuable incident from Fusion. So unfortunately it's really limited, and it only creates false positives for me.
It takes two incidents which were created in the SOC before and bundles them into a single incident. And the thing is, within our SOC, we have always already researched those two individual incidents separately, so there is no need to create that incident on top. But still, the machine learning from Microsoft will create such an incident for us to investigate. So unfortunately, it's really limited, and I don't rely on it.
And that's also a really important point: some products are marketed as if you just push data in and incidents come out. That's not the case with Microsoft Sentinel. You really have to work on it and create your own analytic rules to make sure the right incidents are created and that you put the value into it. Only then will it create valuable insights and incidents.
Now, one thing you can do is bring your own machine learning. Microsoft gives you the ability to bring your own ML model, which you have built yourself, and plug it into Microsoft Sentinel. Then you have a really well-trained and detailed model based entirely on your environment, which you can push data through.
While this sounds really nice, not all companies will have, for example, three data scientists within the security team. This is really meant for large security teams who have a large body of knowledge and have specific people developing those machine learning algorithms. So again, really nice marketing talk, but in general, machine learning is not something I focus on while using Microsoft Sentinel.
What we do focus on is the analytic rules. With those analytic rules, we can create incidents ourselves based on the conditions we choose. Let's dive into the portal to showcase a few examples. Yes, we are currently in the portal; we were in the data connectors and Content Hub before. We can go to Analytics next.
Within Analytics, we have two important tabs: Active Rules and Rule Templates. If we click on Rule Templates, these are analytic rules Microsoft provides to you as examples for you to use. This is a great way to get started, because all of those rules are built on top of KQL, and it might be really difficult for you to start using the KQL language at first. By using those rule templates, you can use the Microsoft knowledge first and then tweak it afterwards.
As an example, we will search for a rule which looks at when an MFA request is denied. If you're using Azure AD MFA, a user has the option to click approve or deny. When he clicks deny, this means there's no active sign-in by that user, and maybe you should investigate that incident, because there might be a potential password breach.
So when we have found the rule template we want to use, we can click Create Rule. We can see a few properties. The first one is the name of the rule, which will also be the name of the incident it creates, then the description, and then we have the different tactics and the severity. The most important bits are within the rule logic.
And here we can see the KQL. This KQL kind of looks like SQL; it's often compared to SQL because we always call our table, and on that table, we apply different operators, like a where clause which will, for example, look for a specific status. And within the status, as in the example here, we look for people who have denied the MFA request. So this is really the heart of the query: the KQL query defines when the incident is created.
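The rule logic for that denied-MFA detection looks roughly like this. This is a sketch based on the Azure AD SigninLogs schema, not the exact Microsoft template; the result code and details string may differ slightly in your tenant:

```kusto
// Sign-ins where the user explicitly denied the MFA prompt.
// ResultType 500121 means authentication failed during the strong auth request.
SigninLogs
| where ResultType == 500121
| where Status.additionalDetails =~ "MFA denied; user declined the authentication"
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
```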
If we scroll down, we will also find the scheduling. How often should this rule be run? Should this be running every minute, every day, every five minutes, every hour? These are all things you can configure depending on the criticality of this event. Next, we can click on incident settings and we can decide do we want to create an incident, yes or no.
Automated response is something we'll touch upon in a little bit. Then we can click Review, and we will be able to create the analytic rule, like this. After creating that rule from the template, we can look for it within the active rules. And we can see, as an example, we have this rule created.
Besides everything within the portal, we also have the Microsoft Sentinel GitHub. On that open-source GitHub repository, which is completely free to use, we can also search for rule templates, and there are even more rule templates on GitHub compared to what's in the portal. Now, those rule templates are a great way to get started, but unfortunately, most of them are pretty noisy. They will generate too many incidents, in my opinion, and will overwhelm the SOC with too many incidents for the SOC analysts to investigate.
So for me, this is a great way to get started, but you don't have to be afraid to tweak those rules. And again, it's an advantage and a disadvantage at the same time. Microsoft Sentinel is completely open. You can tweak it all to your liking to make sure you only get the incidents you like.
But the disadvantage is you have to tweak it all and put in the work. So what I always recommend for a new customer: try to look at those templates, see if you like any of them, just create them, and see what kind of incidents you get. But certainly tweak them to make sure that you are not overloaded with incidents. Because what I often see with different customers is incident fatigue.
Some people think that the more incidents you create, the more secure your environment is. But what I have seen is that a SOC analyst is human, and if he does something 200 times, he gets used to the same behavior and executes it the same way every time. If he gets 200 false positives, the next time he gets the same incident, he will probably assume it's a false positive and might gloss over it too quickly.
And for me, I want to look at it as less is more. To me, it's more important to have 10 incidents which are valuable and which will be true positives than 200 incidents each day, because that will not provide the information you need. It's a common misconception; if you're not looking into all of the incidents Microsoft Sentinel creates, you should really look into which of your rules are noisy and try to tune them down.
Now, how can I tune it down? How can I decrease the number of incidents we get? I typically use a few different steps. The first one is to optimize the KQL queries as much as possible and try to minimize the incidents you create in the first place.
After that, we can look into enrichment. With enrichment, I mean automating some of the basic steps, like going out to a whois server to check who is the ISP of a certain IP address and add it into the incident. Besides that, we can use automation to automate some of the actions.
And finally, we can use Fusion, that machine learning from Microsoft. And here, I do like Fusion. I like that built-in machine learning because I can use it to augment some of the queries that I write. So what I try to do is optimize my KQL as smartly as possible, use enrichment and automation, and then combine my incidents with the signals from Microsoft's machine learning to create smart incidents which are worth investigating.
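To illustrate the first step, optimizing the KQL itself: instead of raising an incident per event, aggregate and threshold so only meaningful volumes surface. A hypothetical example on failed sign-ins, where the threshold of 20 is a placeholder you would tune per environment:

```kusto
// Aggregate failed sign-ins per user per hour and only surface accounts
// above a threshold, instead of creating an incident per failure.
SigninLogs
| where ResultType == 50126 // invalid username or password
| summarize FailedAttempts = count(), SourceIPs = make_set(IPAddress)
    by UserPrincipalName, bin(TimeGenerated, 1h)
| where FailedAttempts > 20 // placeholder threshold
```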
Now, what does incident response look like within Microsoft Sentinel? This is where we get into the SOAR part of things. So if we go back to our test environment, we can go into the Incidents tab right here.
Here, we can adjust how many hours back we want to look for incidents and which statuses to filter on. Because it's a test environment, I don't have that much data in it, but I want to showcase what the incidents overview looks like, because each incident created will get an entry in this list. From this list, you can click on an incident and view its details.
Here, in the details page, you can see the description of the incident and the title. You can assign it to yourself and update the status to say you are now actively working on the incident. You can see right here we have the statuses new, active, and closed.
So we start working on it. I can see the entities: the IP address involved, the user involved. And once I'm done with the investigation, I can close it out, provide a verdict, whether it's a true positive, false positive, or benign positive, add a comment like, this alert rule is wrong, and close the incident.
So that's a really quick overview of the incident workflow. We can also pivot into the raw logs of Microsoft Sentinel by clicking on Alerts right here. This will actually create a KQL query which dives into the raw data beneath the alert, to really showcase what has been going on and why this incident was created.
Another way to look into an incident is to use the investigation graph. And to do that, we can click on Investigate here and this will provide a visual overview of the two alerts and we can interact with that, move around, and we can also query different things based on the data. Now again, this is something that Microsoft loves to use during a sales conversation because it allows you to visualize an attack.
In the real world, I have never seen a SOC analyst use this, just because it doesn't provide the information you need. It's a really visual approach, but it doesn't show the detailed information required during an investigation. So while this is nice looking, in a real-world SOC, analysts will probably not use this functionality.
Diving deeper into the slides: do I like Sentinel for case management? The thing is, as you have seen, within Sentinel, case management and incident response are a bit limited, and too limited, in my opinion. One reason is the lack of customization. As you have seen, there's no option to add additional statuses, for example.
We have new, active, and closed, and that's it. If you want a status like on hold or waiting for feedback, something like that, you don't have that in Microsoft Sentinel. So unfortunately, to me, case management is too limited, and we would need to be able to customize the experience before I would use it. Besides that, the reporting is also too limited in terms of SLA reporting and looking into true and false positive rates. There could be lots more possibilities there.
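That said, because incidents are logged to the SecurityIncident table, you can build some of that missing reporting yourself in KQL. A sketch of a closure-classification breakdown; the table records every modification of an incident, so we take the latest row per incident first:

```kusto
// True/false/benign positive breakdown for incidents closed in the last 30 days.
SecurityIncident
| where TimeGenerated > ago(30d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber // latest state per incident
| where Status == "Closed"
| summarize Incidents = count() by tostring(Classification)
```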
So in essence, I don't recommend using Microsoft Sentinel for case management when doing incident response. For looking into a case, assigning it, documenting it, I advise you to use the partner ecosystem. There are lots of native integrations between Sentinel and ticketing tools. We have integrations into Zendesk, ServiceNow, and Jira, which can synchronize the incidents from your SIEM into your ticketing tool.
And this has two main advantages. One of them is that we have complete customization of those ticketing tools, and we can configure them as we like. But they will also integrate with other teams. Your company will most likely already be using an existing ticketing tool, so plugging your SIEM into that ticketing tool is something I really recommend: one, to have a single pane of glass across the environment, not just for the security team but for the entire organization, and two, to be able to customize the case management experience.
While doing incident response, I often see the curse of portals. What do I mean by the curse of portals? Most of the data you have within the Microsoft stack is available both through the portal and through KQL. At the start, it's really easy to use the portal to click through all of the data and use that data for investigation.
But in the long term, this will not benefit you because clicking through different portals is really time-consuming. And once you are in a SOC with a ton of incidents and a ton of different things to investigate, KQL and querying the data is really where we should be going. So this is just the tip: I know it's difficult at the start, but try to stay away from the portals and force yourself to use KQL to get the data you need, because in the end, you will be much faster in your incident response.
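As a sketch of that approach: instead of clicking through the sign-in blade in the portal, a single KQL query against the SigninLogs table gives you all failed sign-ins for an account at once. This assumes the Entra ID sign-in logs connector is enabled, and the account name is a placeholder:

```kql
// Failed sign-ins for one account over the last 24 hours.
// "alice@contoso.com" is a placeholder; in SigninLogs, ResultType "0" means success.
SigninLogs
| where TimeGenerated > ago(24h)
| where UserPrincipalName =~ "alice@contoso.com"
| where ResultType != "0"
| project TimeGenerated, AppDisplayName, IPAddress, ResultType, ResultDescription
| order by TimeGenerated desc
```

One query like this replaces several portal pages of filtering and clicking, and you can save it, parameterize it, or turn it into an analytics rule.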
Now, when showcasing Defender and Sentinel, I often see confusion. When should I use Defender and when should I use Sentinel? For that, I have a few tips and tricks as well. One of them, and it's pretty obvious, is that Microsoft Defender and Sentinel cover different types of attack vectors.
So if you look into this framework right here, we will see that Microsoft Defender at the top also does identify and protect. That's a really important difference. Microsoft Defender, through antivirus and anti-spam, will stop threats and make sure they never enter the environment.
So that's the protect phase. Microsoft Sentinel, on the other hand, only does detection and response, which means that within Sentinel, within a SIEM, we are not proactive but reactive: something suspicious has already happened, and we are responding to it. So that's a really important difference, but we can also see that there is common ground, where Defender can also do detection and response.
And there we can see we can actually use them side by side and use them both to their strengths. This is also where the integration between the two comes in: we have a native data connector between Sentinel and Defender, with which we can synchronize all of that data from Defender into Sentinel.
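Once that connector is enabled, the synchronized Defender alerts land in Sentinel's SecurityAlert table and can be queried like any other data. A minimal sketch, assuming the connector is active (the product-name filter is an assumption, since the exact product name strings differ per Defender component):

```kql
// Alerts synchronized from the Defender stack over the last 7 days,
// grouped by originating product and severity.
// ProductName string values vary per Defender component, hence the loose match.
SecurityAlert
| where TimeGenerated > ago(7d)
| where ProductName has "Defender"
| summarize Alerts = count() by ProductName, AlertSeverity
| order by Alerts desc
```

A query like this is a quick way to verify the connector is flowing and to see which Defender product generates the most noise.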
Now one tip I want to give you: while most of the data you get from Defender into Sentinel is free, getting the raw events into Sentinel is not. If you just enable that integration, it can be really expensive, and you need to be aware that it will probably cost you hundreds or thousands of dollars to get that data into Sentinel, which might not be worth it to you. So what does the workflow look like for a SOC analyst? This is an example right here, where we have the SOC analyst on the right. He will always start by looking into Microsoft Sentinel first, because it's a single pane of glass where we have different connectors pushing data into Microsoft Sentinel.
Microsoft Sentinel, like I said, is not only for Microsoft data. We can also push other cloud logs, network logs into it. But once you have an alert or an incident from Defender, we can pivot into the Defender stack to use it to its strengths.
Because in my opinion, it makes no sense to completely ditch Defender if you're using Sentinel, because the insights you get from the Defender portal are sometimes much easier to use compared to Sentinel. So I really recommend using them both to their strengths. Use Sentinel as the starting point, the single pane of glass, and pivot to Defender if you need to.
Of course, with all of that data, a lot of manual labor comes with it. So what can we use? We can use automation. Automation within Sentinel is done through Azure Logic Apps. Azure Logic Apps are often compared to Power Automate, or Microsoft Flow, which is essentially LEGO code.
What can we do with Logic Apps? For me, there are three different use cases, three different reasons to do automation. The first is notification and synchronization: when a new incident comes in, for example, send an email, or send a ticket into Jira. The second is enrichment: when I get an incident about a specific IP or an account, I gather data from different sources and push that into Sentinel.
And the third one is automated actions, which means we automate the response to some of the incidents. A great example is a password reset. When we see suspicious behavior, let's not wait until a SOC analyst can investigate and respond. Let's do an automated action where the Logic App automatically blocks that behavior.
Now, getting started with Logic Apps is easy because of that LEGO of code. But I do want to give a warning: if you're getting started, I suggest a modular approach, where we use different modules and build on top of each other. By using a modular approach, we avoid duplicating code.
And this is an example on the right. We have two modules, a module for IPs and a module for accounts. With those modules, we can easily automate some of these things without having to duplicate code across different Logic Apps, because multiple Logic Apps will need the same code for, say, IPs and accounts. By using modules, we avoid some of that duplication.
Getting toward the end of the session, before we wrap up, I want to provide some insights into a real-world SOC architecture and what that looks like in our SOC, for example. Here we can see an example. It looks tricky at first, but let's walk through it. This is a multitenant approach where we have different customers, each with their own Sentinel environment, and all of these environments are connected through Azure Lighthouse.
From Azure Lighthouse, we synchronize those incidents into your ticketing tool of choice, which could be Atlassian, [INAUDIBLE], or ServiceNow, and this provides a single pane of glass to do investigation and response. Most of the SOC analysts will interact with these Sentinel environments through Lighthouse, but we always have the option to access the individual portals, like the Defender portal, to use the Defender portal to its strengths.
If we take another view, we can see we have Microsoft Sentinel in the middle. We have the different data sources right here. And one really important one is Azure DevOps. And we use Azure DevOps for automated deployment.
This has two main benefits. One of them is that it speeds up the process if you have to deploy a resource to multiple Sentinel environments. But it also provides a four-eyes principle, where we can have somebody approve the code changes we make before they are pushed into production. So even if you have a single Sentinel environment, I still recommend Azure DevOps, or an alternative, for that four-eyes approval.
At the top, we use MISP. With MISP, we can push additional threat intelligence feeds into Sentinel and make our SIEM even smarter. And within our SOC, we use Atlassian Jira as the ITSM tool to do case management.
Now, before we close up, I want to provide four tips and tricks to get you started. The first is automation and APIs. As I've said multiple times, Sentinel is a really open system with lots of API possibilities. So I really recommend that you use the APIs and the automation, because there are so many resources and examples available for you to use, and it will really ease your first steps into Microsoft Sentinel.
Next up, KQL. You cannot do Sentinel without KQL. So if you're just getting started, learning KQL is probably the number one step to take.
Third one is the cost. The cost of Microsoft Sentinel is tricky to calculate, so be aware of it. The tip I can give you is to use the built-in tools, the monitoring and workbooks, to make sure you keep an eye on the cost and that it doesn't spike too much.
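One of those built-in sources is the Usage table, which every Log Analytics workspace has. A sketch of a query to see which tables drive your billable ingestion (Quantity is reported in megabytes, so the division to gigabytes is an approximation):

```kql
// Billable ingestion per table over the last 30 days, converted to GB.
// The Usage table reports Quantity in MB; dividing by 1,000 approximates GB.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by DataType
| order by IngestedGB desc
```

Running this regularly, or pinning it into a workbook, shows you immediately which data source is responsible when the bill spikes.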
And then last but not least, the partner ecosystem. We have lots of community folks developing on top of Microsoft Sentinel, but also partners developing integrations from their own products, like Fortinet and Cisco, into Microsoft Sentinel. So before you build anything custom, make sure to check the existing resources, the partner resources, and see if you can reuse some of those applications.
Now I have come to the end of my session and I hope I have provided lots of great tips to get you started. As a reminder, we are just getting started with the live Q&A. So feel free to join me in the live Q&A and I can answer any questions live that you might have. To close up, I just want to thank Wes for having this opportunity to speak, and I hope to see you all next year. Thank you.