In my previous blog post, I compared two approaches you can take to identify insider threats in your Windows environment with user behavior analytics: rule-based and pattern-based detection. Specifically, I explained why pattern-based detection has advantages over relying solely on static rules to detect risky user activities.
Spoiler alert: a pattern-based detection strategy will drastically reduce the number of false positives you receive and result in a more manageable number of actionable alerts.
Identifying what you don’t know
But how do you identify anomalous activity when you don’t specifically know what you’re looking for? The answer lies in modeling various aspects of your user behavior so you have a baseline of their typical activity that can be used to more easily detect abnormal actions.
You can model user behavior along a number of vectors. Three of the most useful are described below.
Three types of behavior modeling
1. Time-based modeling
This involves mapping the times that a user typically performs activities, such as logging on to Active Directory or accessing files. Having a time-based behavior baseline allows you to identify abnormalities such as a logon in the middle of the night or file access far outside the user's normal working hours.
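As a rough illustration (not Change Auditor's actual algorithm), a time-of-day baseline can be learned from a user's historical logon hours and then used to flag events outside that pattern. The history, threshold and function names here are hypothetical:

```python
from collections import Counter

def build_hour_baseline(logon_hours, min_share=0.05):
    """Learn a per-user set of typical logon hours.

    Hours that account for at least `min_share` of the user's
    historical logons are treated as part of the normal pattern.
    """
    counts = Counter(logon_hours)
    total = len(logon_hours)
    return {hour for hour, n in counts.items() if n / total >= min_share}

def is_time_anomaly(hour, baseline):
    """Flag an event whose hour falls outside the learned baseline."""
    return hour not in baseline

# Hypothetical history: a user who normally logs on between 8:00 and 18:00.
history = [8, 9, 9, 10, 12, 13, 14, 17, 18, 9, 10, 11]
baseline = build_hour_baseline(history)

print(is_time_anomaly(9, baseline))   # within normal hours -> False
print(is_time_anomaly(3, baseline))   # 3 a.m. logon -> True
```

In practice the baseline would be rebuilt periodically so it tracks legitimate shifts in a user's schedule.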
2. Categorical modeling
Categorical modeling looks for activity that relates to objects that a user interacts with, such as a computer, a geographical location or a file. This baseline allows you to identify atypical behavior, such as access from an unfamiliar computer or location, or interaction with files the user has never touched before.
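A minimal sketch of categorical baselining: record the set of objects each user has historically interacted with, then flag any interaction with an object not in that set. The users, machine names and helper functions are hypothetical:

```python
def build_category_baseline(events):
    """Learn, per user, the set of objects (computers, locations,
    files) seen in historical activity."""
    baseline = {}
    for user, obj in events:
        baseline.setdefault(user, set()).add(obj)
    return baseline

def is_new_object(user, obj, baseline):
    """Flag interaction with an object never before seen for this user."""
    return obj not in baseline.get(user, set())

# Hypothetical history of (user, computer) interactions.
history = [
    ("alice", "WKS-ALICE-01"),
    ("alice", "FILESRV-01"),
    ("bob", "WKS-BOB-07"),
]
baseline = build_category_baseline(history)

print(is_new_object("alice", "FILESRV-01", baseline))  # known server -> False
print(is_new_object("alice", "DC-02", baseline))       # new machine -> True
```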
3. Continuous modeling
Continuous modeling spots patterns of events that occur within a specific timeframe — usually a spike in the volume of a particular activity. This baseline allows you to identify suspicious increases in activity, such as a sudden burst of failed logons or an abnormal spike in file or account deletions.
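One simple way to detect a volume spike (again, an illustrative sketch rather than the product's actual method) is to compare the latest interval's activity count against the mean and standard deviation of the user's history. The counts and threshold below are hypothetical:

```python
from statistics import mean, stdev

def is_volume_spike(history, current, z_threshold=3.0):
    """Flag a count far above the user's historical average.

    `history` is a list of per-interval activity counts (e.g. file
    deletions per hour); `current` is the latest interval's count.
    """
    if len(history) < 2:
        return False  # not enough data to build a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > z_threshold

# Hypothetical per-hour file-deletion counts for one user.
counts = [2, 3, 1, 4, 2, 3, 2, 3]

print(is_volume_spike(counts, 3))    # normal volume -> False
print(is_volume_spike(counts, 40))   # sudden burst -> True
```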
The benefit of having a baseline of a user's behavior along so many different vectors and parameters is that you can use it to identify a wide variety of potentially abnormal user activities that might indicate suspicious behavior.
Risk scores at multiple levels
Each of the above examples would be assigned its own unique risk score based on the importance and rarity of the activity. A risk score is an objective way of rating the criticality of an action on a scale of 1 to 100. An activity that receives a score of 95 is likely to result in an alert, while an activity that receives a risk score of 30 will be discarded as a false positive. For example, multiple failed logons would generally receive a lower score based on how frequently they occur as a global activity, while a significant number of user account deletions would be assigned a higher score based on the sensitivity of the action. Ultimately, though, individual user actions rarely raise a security alert in isolation. Only a pattern of indicators, correlated within a short time period and receiving a high enough cumulative risk score, will raise an alert that requires investigation.
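The correlation logic described above can be sketched as follows: per-event risk scores are accumulated per user inside a sliding time window, and an alert is raised only when the cumulative score crosses a threshold. All scores, timestamps and thresholds here are invented for illustration:

```python
def correlate_alerts(indicators, window_minutes=60, alert_threshold=150):
    """Correlate threat indicators per user within a sliding window
    and alert only when the cumulative risk score is high enough.

    `indicators` is a list of (user, minute_timestamp, risk_score).
    """
    alerts = []
    by_user = {}
    for user, ts, score in sorted(indicators, key=lambda i: i[1]):
        window = by_user.setdefault(user, [])
        window.append((ts, score))
        # Drop indicators that have fallen out of the correlation window.
        window[:] = [(t, s) for t, s in window if ts - t <= window_minutes]
        if sum(s for _, s in window) >= alert_threshold:
            alerts.append((user, ts))
            window.clear()  # reset after raising an alert
    return alerts

events = [
    ("alice", 10, 30),   # low-score event, suppressed on its own
    ("bob", 15, 60),     # off-hours logon (hypothetical score)
    ("bob", 20, 55),     # access from an unfamiliar machine
    ("bob", 40, 70),     # deletion spike pushes the total over 150
    ("alice", 500, 40),  # isolated indicator hours later, suppressed
]
print(correlate_alerts(events))  # -> [('bob', 40)]
```

Note how alice's two low-score, uncorrelated indicators never produce an alert, while bob's three correlated indicators do — the behavior the scoring model is designed to achieve.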
It is important that scoring happens at multiple levels so it can take into account the context of related user actions:

- At the event level, individual audit events that are not particularly unusual for the user or for the environment never become threat indicators.
- At the indicator level, threat indicators that are not correlated with other indicators in the same time period are eliminated as false positives.
- At the pattern level, only correlated indicators whose cumulative risk score is high enough raise an alert.

Scoring at multiple levels ensures that innocuous user actions are suppressed and only highly suspicious patterns of activity are highlighted.
A combination of user behavior modeling and sophisticated risk scoring reduces noise and ensures that there is a manageable number of alerts produced every week, allowing for investigation of the most suspicious user behaviors.
Change Auditor Threat Detection employs pattern-based user behavior analytics to model individual user behavior and detect anomalous activity that could be indicative of suspicious or compromised users. Change Auditor Threat Detection will be generally available in September 2018.
My next blog post will discuss the benefits of user behavior analytics and how embedded user and entity behavior analytics (UEBA) can help enrich your user threat detection program.
Ready to tackle UEBA?