Modeling User Behavior to Identify Insider Threats

In my previous blog post, I talked about the different approaches you can take to identify insider threats in your Windows environment with user behavior analytics: rule-based versus pattern-based detection. Specifically, I explained why pattern-based detection has advantages over relying solely on static rules to detect risky user activities.

Spoiler alert: a pattern-based detection strategy will drastically reduce the number of false positives you receive and result in a more manageable number of actionable alerts.

Identifying what you don’t know

But how do you identify anomalous activity when you don't know specifically what you're looking for? The answer lies in modeling various aspects of each user's behavior so that you have a baseline of their typical activity, which makes abnormal actions easier to detect.

There are a number of vectors along which you can model user behavior:

  1. Time-based modeling
  2. Categorical modeling
  3. Continuous modeling

Three types of behavior modeling

1. Time-based modeling

This involves mapping the times that a user typically performs activities, such as logging on to Active Directory or accessing files. Having a time-based behavior baseline allows you to identify abnormalities such as:

  • When a user logs on to Active Directory interactively at an unusual time (for that user)
  • When a privileged user makes changes to AD — such as enabling a disabled user account — at an unusual time
  • When a user accesses sensitive files at an unusual time
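
To make this concrete, below is a minimal sketch of time-based baselining in Python. The event records, user name and 5% rarity threshold are all hypothetical; a real implementation would read events from your audit trail and use a more robust statistical model.

```python
from collections import Counter, defaultdict

# Hypothetical logon events as (user, hour_of_day) pairs. In practice
# these would come from your audit trail (e.g., AD logon events).
history = [
    ("alice", 9), ("alice", 10), ("alice", 9), ("alice", 11),
    ("alice", 10), ("alice", 9),
]

# Baseline: per-user frequency of activity by hour of day.
baseline = defaultdict(Counter)
for user, hour in history:
    baseline[user][hour] += 1

def is_unusual_time(user, hour, threshold=0.05):
    """Flag an hour that accounts for less than `threshold` of the
    user's historical activity."""
    counts = baseline[user]
    total = sum(counts.values())
    if total == 0:
        return True  # no history yet, so treat any activity as unusual
    return counts[hour] / total < threshold

print(is_unusual_time("alice", 3))   # True: alice has never logged on at 3 a.m.
print(is_unusual_time("alice", 10))  # False: a normal working hour for alice
```

Note that the baseline is per user: 3 a.m. is only unusual because it is unusual for alice specifically, not because of a static, global rule.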

2. Categorical modeling

Categorical modeling looks at the objects a user interacts with, such as a computer, a geographical location or a file. This baseline allows you to identify atypical behavior, including:

  • When a user logs on to Active Directory from an abnormal location
  • When a user accesses a server that they have never accessed before
  • When a user opens files from a folder location they haven’t accessed previously
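
One simple way to implement a categorical baseline is to track the set of values each user has previously been associated with in each category and flag first-seen values. The users, computers and paths below are hypothetical; a production system would also age out stale entries and weight rare categories differently.

```python
from collections import defaultdict

# Baseline: the set of values each user has been seen with, per category
# (computer, location, folder, ...), built from historical audit events.
seen = defaultdict(lambda: defaultdict(set))

def observe(user, category, value):
    """Record a value in the user's baseline for this category."""
    seen[user][category].add(value)

def is_new_value(user, category, value):
    """True if the user has never been associated with this value before."""
    return value not in seen[user][category]

# Build a toy baseline from hypothetical past activity.
observe("bob", "computer", "WKS-042")
observe("bob", "location", "London")
observe("bob", "folder", r"\\fileserver\finance")

print(is_new_value("bob", "computer", "SRV-DB01"))  # True: a never-accessed server
print(is_new_value("bob", "location", "London"))    # False: a known location
```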

3. Continuous modeling

Continuous modeling spots patterns of events that occur within a specific timeframe, usually a spike in the volume of a particular activity. This baseline allows you to identify suspicious increases in activity, such as:

  • When an abnormally large number of files are deleted within a one-hour period
  • When a significantly high number of Active Directory changes are made
  • When a user account fails to authenticate an abnormally high number of times
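
A minimal version of continuous modeling is a per-user volume baseline with a standard deviation test. The hourly counts and the three-sigma threshold below are illustrative assumptions; a real detection would use longer histories and more sophisticated time-series models.

```python
import statistics

# Hypothetical hourly counts of file deletions for one user, oldest first.
# The final entry is the hour we want to evaluate.
hourly_deletes = [2, 0, 1, 3, 2, 1, 0, 2, 1, 48]

history, current = hourly_deletes[:-1], hourly_deletes[-1]
mean = statistics.mean(history)
stdev = statistics.pstdev(history)

# Flag the current hour if it sits more than three standard deviations
# above the user's own historical mean (a simple, common spike test).
is_spike = current > mean + 3 * stdev
print(f"mean={mean:.1f}, stdev={stdev:.1f}, current={current}, spike={is_spike}")
```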

The benefit of having a baseline of a user's behavior along so many different vectors and parameters is that you can use it to identify a wide variety of abnormal user activities that might indicate an insider threat.

Risk scores at multiple levels

Each of the above examples would be assigned its own unique risk score based on the importance and rarity of the activity. A risk score is an objective way of rating the criticality of an action on a scale of 1 to 100. An activity that receives a score of 95 is likely to result in an alert, while an activity that receives a risk score of 30 will be discarded as a false positive. For example, multiple failed logons would generally receive a lower score based on how frequently they occur as a global activity, while a significant number of user account deletions would be assigned a higher score based on the sensitivity of the action. Ultimately, though, individual user actions rarely raise a security alert in isolation. Only a pattern of indicators that are correlated within a short time period and receive a high enough cumulative risk score will raise an alert that requires investigation.
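
As a rough illustration of how sensitivity and global frequency might combine into a 1-to-100 score, here is a hypothetical formula in Python. The weights, frequencies and the formula itself are invented for this example and are not Change Auditor's actual scoring model.

```python
# Illustrative only: an event's risk combines the sensitivity of the
# action with how rare that action is across the environment.

SENSITIVITY = {                      # how much the action matters (0-1)
    "failed_logon": 0.3,
    "user_account_deletion": 0.9,
}

GLOBAL_FREQUENCY = {                 # share of all audit events (0-1)
    "failed_logon": 0.20,            # common everywhere, so less interesting
    "user_account_deletion": 0.001,  # rare, so more interesting
}

def risk_score(action):
    """Scale sensitivity by rarity onto a 1-100 range."""
    rarity = 1.0 - GLOBAL_FREQUENCY[action]
    return max(1, round(100 * SENSITIVITY[action] * rarity))

print(risk_score("failed_logon"))           # 24: likely discarded as noise
print(risk_score("user_account_deletion"))  # 90: likely to feed an alert
```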

It is important that scoring happens at multiple levels so it can take into account the context of related user actions:

  • Event scoring — Giving each raw event an initial risk score based on the abnormality of its parameters, such as the computer, time or file path.
  • Threat indicator scoring — Grouping similar events as threat indicators and scoring them again to identify abnormal patterns that extend over a period of time.
  • Alert scoring — Correlating threat indicators into an aggregate alert, and assigning a score based on the uniqueness of its composition and the severity of the activities involved.
  • User risk scoring — The user risk score is an aggregate of all the alert scores for that user. This score can be used to highlight the most suspicious users and behaviors in your environment at any given time.
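
The sketch below shows how scores might roll up across these four levels. Every threshold and aggregation rule here is an assumption made for illustration; the point is simply that low-scoring events are filtered out early and isolated indicators never become alerts.

```python
def indicator_score(event_scores, min_event_score=40):
    """Keep only events that are unusual enough, then score the group."""
    significant = [s for s in event_scores if s >= min_event_score]
    return max(significant) if significant else 0  # innocuous events drop out

def alert_score(indicator_scores, min_indicators=2):
    """Correlate indicators within one time window into a single alert."""
    active = [s for s in indicator_scores if s > 0]
    if len(active) < min_indicators:
        return 0  # an isolated indicator is treated as a false positive
    return min(100, round(sum(active) / len(active) + 10 * (len(active) - 1)))

def user_risk_score(alert_scores):
    """Aggregate all of a user's alert scores, capped at 100."""
    return min(100, sum(alert_scores))

# Toy pattern: unusual-time logon (70), first-seen server (65), spike (80).
indicators = [indicator_score([70]), indicator_score([65]), indicator_score([80])]
print(alert_score(indicators))               # 92: a correlated, high-risk pattern
print(alert_score([indicator_score([70])]))  # 0: one indicator alone is suppressed
print(user_risk_score([92]))                 # 92: flags this user for investigation
```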

Scoring at multiple levels ensures that innocuous user actions are suppressed and only highly suspicious patterns of activity are highlighted. Individual audit events that are not particularly unique for the user or for the environment do not create threat indicators. And threat indicators that are not correlated with other indicators in the same time period are eliminated as false positives.

A combination of user behavior modeling and sophisticated risk scoring reduces noise and ensures a manageable number of alerts every week, allowing you to investigate the most suspicious user behaviors.

Change Auditor Threat Detection employs pattern-based user behavior analytics to model individual user behavior and detect anomalous activity that could indicate a suspicious or compromised user. Change Auditor Threat Detection will be generally available in September 2018.

My next blog post will discuss the benefits of user behavior analytics and how embedded user and entity behavior analytics (UEBA) can help enrich your user threat detection program.

