Using Pattern-Based Threat Detection to Identify Suspicious User Activity

Hackers, intruders, malware, disgruntled employees, rogue users. You can’t keep every threat out of your network, so you need a way to effectively detect their activity inside it. Detecting suspicious user activity early can make a world of difference: on average, it takes 191 days to identify a data breach and 66 days to contain it, at an average cost of $3.62 million per breach (Ponemon 2017).

The challenge in detecting suspicious behavior, however, is not a lack of data. Between native Windows logging and third-party auditing solutions, there is no shortage of raw event information recording every discrete action a user takes (when they log on to the network, when they open a file, etc.). For example, in a recent test, a healthcare company with 80,000 users captured 193 million audit events related to Active Directory, authentication and file activity in a single month, while a technology company with 7,000 users captured 30 million events in the same timeframe.

Historically, the main use for all of this data was forensic: after an incident was detected, the audit data was searched to help piece together the details of the attack and its scope. Increasingly, though, organizations are looking for ways to put this sea of data to more proactive use. The vast majority of the audit data you collect represents normal, legitimate activity, such as successful logons and authorized data access events; suspicious activity accounts for a very small percentage. This highlights the inherent challenge: how do you extract the concerning activities from a sea of noise?

Rule-based detection

The most common approach is to create rules that recognize a specific activity and proactively alert you when it occurs. Rules are appropriate for actions that always deserve scrutiny, such as when a user is added to a built-in administrators group like Domain Admins (a minimal sketch of such a rule follows the list below). There are two main problems with this approach:

  • Actions that are always suspicious are in the minority. More often, an action is suspicious only in the appropriate context.
  • To create a rule that recognizes a particular action, you need prior experience or knowledge of that action causing incidents. Rules can’t capture activities you aren’t anticipating.
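
To make the rule-based approach concrete, here is a minimal sketch of the kind of rule that does work well. The event schema (a dict with "action", "user" and "group" keys) is a hypothetical stand-in; real Windows audit events carry far more detail.

```python
# A minimal sketch of a rule for an action that always deserves scrutiny.
# The event schema is hypothetical, for illustration only.

PRIVILEGED_GROUPS = {"Domain Admins", "Enterprise Admins", "Administrators"}

def check_group_membership_rule(event):
    """Fire an alert whenever a user is added to a built-in admin group."""
    if (event.get("action") == "member_added_to_group"
            and event.get("group") in PRIVILEGED_GROUPS):
        return f"ALERT: {event['user']} was added to {event['group']}"
    return None

# This action warrants review in any context, so a simple rule fits well.
print(check_group_membership_rule(
    {"action": "member_added_to_group", "user": "jdoe", "group": "Domain Admins"}))
```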

Let’s explore the first limitation a little further. In a rule-based approach to detecting suspicious user behavior, you might create rules to alert on actions like the following (a sketch of one such threshold rule appears after the list):

  • A disabled user account that was recently enabled
  • A high number of consecutive failed logons
  • A large number of files modified in a short period of time
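
A threshold rule like the second one might look like the following sketch. The event stream and field names are hypothetical assumptions for illustration; production rules would run against your actual audit feed.

```python
# A minimal sketch of a threshold rule for consecutive failed logons.
from collections import defaultdict

FAILED_LOGON_THRESHOLD = 10
consecutive_failures = defaultdict(int)

def check_failed_logon_rule(event):
    """Alert once a user reaches N consecutive failed logons."""
    user = event["user"]
    if event["action"] == "logon_failed":
        consecutive_failures[user] += 1
        if consecutive_failures[user] == FAILED_LOGON_THRESHOLD:
            return f"ALERT: {user} failed {FAILED_LOGON_THRESHOLD} consecutive logons"
    else:
        consecutive_failures[user] = 0  # a successful logon resets the streak
    return None
```

Notice that this rule fires identically whether the failures come from a forgotten password or a brute-force tool; it has no context, which is exactly the problem described next.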

All of these rules have the potential to catch what would be a single action in a larger string of suspicious activities. However, the rule is going to alert every time the action is detected. 95% of the time, ten consecutive failed authentications by a user are due to a forgotten or incorrectly entered password and do not indicate a brute-force attack. A large number of modified files in a short time period can just as easily be the result of a user running a payroll application to update monthly financial records as it can be the result of malware.

The problem is that these rules will generate a ton of alerts, the majority of which will be false positives. When you come to learn that most of the alerts you follow up on are just noise, you understandably stop following up on them, and legitimate indicators get buried in that noise. A study by FireEye found that 37 percent of organizations worldwide get more than 10,000 alerts a month, about 14 alerts an hour, and that U.S. companies receive five times that amount on average. Rule-based threat detection solutions generate so many alerts that organizations have no choice but to ignore most of them.

So how do you remain proactive but reduce the noise?

The answer is pattern-based user threat detection. Instead of only looking for specific actions in your audit data, you model a baseline of each user’s behavior and use that baseline to discover anomalous activities. Only when a pattern of egregious anomalies is detected for a user, anomalies that taken together earn a high enough risk score, is an alert raised for your review. For example: a user fails to log on to Active Directory ten times in a row, then successfully logs on from a machine they’ve never logged on from before, and subsequently accesses an abnormally high number of files and enables a dormant privileged user account. This series of actions is far more indicative of an active insider threat or network breach.
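
The following sketch illustrates the scoring idea. The anomaly names, weights and threshold are illustrative assumptions; a real UEBA engine derives per-user baselines statistically rather than from hand-picked numbers.

```python
# A minimal sketch of pattern-based risk scoring over detected anomalies.

ANOMALY_WEIGHTS = {
    "failed_logon_burst": 20,     # e.g., ten failed logons in a row
    "new_machine_logon": 25,      # logon from a never-before-seen host
    "abnormal_file_access": 30,   # file activity far above the user's baseline
    "dormant_admin_enabled": 40,  # a dormant privileged account re-enabled
}
RISK_THRESHOLD = 100  # no single anomaly crosses it on its own

def score_user(anomalies):
    """Raise one alert only when a user's combined anomalies cross the threshold."""
    risk = sum(ANOMALY_WEIGHTS.get(name, 0) for name in anomalies)
    if risk >= RISK_THRESHOLD:
        return f"ALERT: risk score {risk} from pattern {anomalies}"
    return None

# A forgotten password alone stays quiet ...
print(score_user(["failed_logon_burst"]))  # -> None
# ... but the full pattern from the example above raises a single alert.
print(score_user(["failed_logon_burst", "new_machine_logon",
                  "abnormal_file_access", "dormant_admin_enabled"]))
```

The key design point is that individual anomalies accumulate risk silently; only the combined pattern is deemed worth a human’s attention.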

The benefit of user behavior modeling, anomaly detection and multiple levels of risk scoring isn’t just the quality of the alerts that are raised, but also the volume. Recall the healthcare organization referenced earlier: over the month in which more than 190 million audit events were analyzed, only 123 alerts were raised with a pattern-based approach. That’s fewer than five alerts a day, a volume that is far more manageable.

Rules have their place, but they are only a small piece of a larger user threat detection strategy.

Change Auditor Threat Detection employs pattern-based user and entity behavior analytics (UEBA) to model individual user behavior and detect anomalous activity that could indicate suspicious or compromised users. It will be generally available in September 2018.

My next blog post will discuss what’s involved in modeling user behavior and how behavior baselines can be used to identify anomalous user activity.
