GitHub Repository: Azure/Azure-Sentinel-Notebooks
Path: blob/master/tutorials-and-examples/feature-tutorials/EventClustering.ipynb
Kernel: Python 3

msticpy - Event Clustering

Often, large sets of events contain many very repetitive and uninteresting system processes. However, these frequently have values (e.g. commandline or path content) that vary on each execution, which makes it difficult to find outlying events using standard sorting and grouping techniques. We process the data to extract patterns and use clustering to group these repetitive events into a single row (with an execution count). This makes it easier to spot unusual events.

You must have msticpy installed with the "ml" components to run this notebook:

%pip install --upgrade msticpy[ml]
%pip install seaborn
# Imports
import sys
import warnings

from msticpy.common.utility import check_py_version
MIN_REQ_PYTHON = (3, 6)
check_py_version(MIN_REQ_PYTHON)

from IPython import get_ipython
from IPython.display import display, HTML, Markdown
import ipywidgets as widgets

import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import networkx as nx

import pandas as pd
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 50)
pd.set_option('display.max_colwidth', 100)

from msticpy.data import QueryProvider
from msticpy.nbtools import *
from msticpy.sectools import *
from msticpy.nbtools.foliummap import FoliumMap

WIDGET_DEFAULTS = {'layout': widgets.Layout(width='95%'),
                   'style': {'description_width': 'initial'}}

# Some of our dependencies (networkx) still use deprecated Matplotlib
# APIs - we can't do anything about it so suppress them from view
from matplotlib import MatplotlibDeprecationWarning
warnings.simplefilter("ignore", category=MatplotlibDeprecationWarning)

Contents

Processes on Host - Clustering

Sometimes you don't have a source process to work with. Other times it's just useful to see what else is happening on the host. This section retrieves all processes on the host within the time bounds set in the query times widget.

You can display the raw output of this by looking at the processes_on_host dataframe. Just copy this into a new cell and hit Ctrl-Enter.

Usually, though, the results include a lot of very repetitive and uninteresting system processes, so we attempt to cluster these to make the view easier to navigate. To do this we process the raw event list output to extract a few features that reduce strings (such as the commandline) to numerical values. The default below uses the following features:

  • commandlineTokensFull - a count of common delimiter characters in the commandline (matched by the regex r'[\s\-\\/.,"\'|&:;%$()]'). The aim is to capture the commandline structure while ignoring variations on what is essentially the same pattern (e.g. temporary path GUIDs, target IP addresses or host names, etc.)

  • pathScore - this sums the ordinal (character) value of each character in the path (so /bin/bash and /bin/bosh would have similar scores).

  • isSystemSession - 1 if this is a root/system session, 0 if anything else.
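To make the first two features concrete, here is a minimal sketch of how they can be computed. These are simplified stand-ins for illustration only; msticpy's add_process_features implements the real feature logic.

```python
import re

def cmdline_token_count(cmdline: str) -> int:
    # Count common delimiter characters - a rough proxy for commandline
    # *structure* that ignores the variable content between delimiters.
    return len(re.findall(r'[\s\-\\/.,"\'|&:;%$()]', cmdline))

def path_score(path: str) -> int:
    # Sum of character ordinals: similar paths get similar scores.
    return sum(ord(ch) for ch in path)

print(cmdline_token_count('reg.exe add "HKLM\\Software\\X" /v Y'))
print(path_score("/bin/bash"))  # 821
print(path_score("/bin/bosh"))  # 835 - close to /bin/bash, as intended
```

Note how the two near-identical paths score within 14 of each other, while a completely different path would land far away.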

Then we run a clustering algorithm (DBSCAN in this case) on the process list. The result groups similar (noisy) processes together and leaves unique process patterns as single-member clusters.

Clustered Processes (i.e. processes that have a cluster size > 1)

from msticpy.analysis.eventcluster import dbcluster_events, add_process_features

processes_on_host = pd.read_csv('data/processes_on_host.csv',
                                parse_dates=["TimeGenerated"],
                                infer_datetime_format=True)
feature_procs = add_process_features(input_frame=processes_on_host)

# you might need to play around with the max_cluster_distance parameter.
# decreasing this gives more clusters.
(clus_events, dbcluster, x_data) = dbcluster_events(
    data=feature_procs,
    cluster_columns=['commandlineTokensFull', 'pathScore', 'isSystemSession'],
    time_column="TimeGenerated",
    max_cluster_distance=0.0001)

print('Number of input events:', len(feature_procs))
print('Number of clustered events:', len(clus_events))

clus_events[['ClusterSize', 'processName']][clus_events['ClusterSize'] > 1].plot.bar(
    x='processName', title='Process names with Cluster > 1', figsize=(12, 3));
Number of input events: 363 Number of clustered events: 62
[Figure: Process names with Cluster > 1]
# Looking at the variability of commandlines and process image paths
import seaborn as sns
sns.set(style="darkgrid")

proc_plot = sns.catplot(y="processName", x="commandlineTokensFull",
                        data=feature_procs.sort_values('processName'),
                        kind='box', height=10)
proc_plot.fig.suptitle('Variability of Commandline Tokens', x=1, y=1)

proc_plot = sns.catplot(y="processName", x="pathLogScore",
                        data=feature_procs.sort_values('processName'),
                        kind='box', height=10, hue='isSystemSession')
proc_plot.fig.suptitle('Variability of Path', x=1, y=1);
[Figure: Variability of Commandline Tokens]
[Figure: Variability of Path]

The top graph shows that, for a given process, some have wide variability in their command line content while the majority have little or none. Looking at a few examples - cmd.exe, powershell.exe, reg.exe, net.exe - we can recognize several common command-line tools.

The second graph shows processes by full process path content. We wouldn't normally expect to see variation here - as is the case with most. There is also quite a lot of variance in the score, making it a useful proxy feature for a unique path name (this means that proc1.exe and proc2.exe with the same commandline score won't be collapsed into the same cluster).

Any process with a spread of values here means the same process name (though not necessarily the same file) is being run from different locations.
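A quick way to surface such processes is to count distinct full paths per process name with pandas. The toy frame below is hypothetical; in the notebook you would run the same groupby against processes_on_host, whose processName and NewProcessName columns it mimics.

```python
import pandas as pd

# Hypothetical sample of process events (not from the notebook's data)
procs = pd.DataFrame({
    "processName": ["svchost.exe", "svchost.exe", "cmd.exe"],
    "NewProcessName": [r"C:\Windows\System32\svchost.exe",
                       r"C:\Users\bob\AppData\svchost.exe",  # same name, odd path
                       r"C:\Windows\System32\cmd.exe"],
})
# Process names executed from more than one distinct full path
path_counts = procs.groupby("processName")["NewProcessName"].nunique()
print(path_counts[path_counts > 1])
```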

display(clus_events.sort_values('ClusterSize')[['TimeGenerated', 'LastEventTime',
                                                'NewProcessName', 'CommandLine',
                                                'ClusterSize', 'commandlineTokensFull',
                                                'pathScore', 'isSystemSession']])
# Look at clusters for individual process names
def view_cluster(exe_name):
    display(clus_events[['ClusterSize', 'processName', 'CommandLine', 'ClusterId']]
            [clus_events['processName'] == exe_name])

view_cluster('reg.exe')
# Show all clustered processes
from msticpy.analysis.eventcluster import plot_cluster

# Create label with unqualified path
labelled_df = processes_on_host.copy()
labelled_df['label'] = labelled_df.apply(lambda x: x.NewProcessName.split("\\")[-1],
                                         axis=1)

%matplotlib inline
# %matplotlib notebook
plt.rcParams['figure.figsize'] = (15, 10)

plot_cluster(dbcluster, labelled_df, x_data,
             plot_label='label', plot_features=[0, 1], verbose=False, cut_off=3,
             xlabel='CmdLine Tokens', ylabel='Path Score');
[Figure: Clustered processes - CmdLine Tokens vs. Path Score]

Timeline showing clustered vs. original data

# Show timeline of events - clustered events
nbdisplay.display_timeline(
    data=clus_events,
    # overlay_data=processes_on_host,
    title='Distinct Host Processes (bottom) and All Processes (top)')

Contents

Host Logons

Since the number of logon events may be large and, in the case of system logons, very repetitive, we use clustering to try to identify logons with unique characteristics.

In this case we use the numeric score of the account name and the logon type (i.e. interactive, service, etc.). The results of the clustered logons are shown below along with a more detailed, readable printout of the logon event information. The data here will vary depending on whether this is a Windows or Linux host.

from msticpy.analysis.eventcluster import dbcluster_events, add_process_features, char_ord_score

host_logons = pd.read_csv('data/host_logons.csv',
                          parse_dates=["TimeGenerated"],
                          infer_datetime_format=True)
logon_features = host_logons.copy()
logon_features['AccountNum'] = host_logons.apply(lambda x: char_ord_score(x.Account),
                                                 axis=1)
logon_features['LogonHour'] = host_logons.apply(lambda x: x.TimeGenerated.hour,
                                                axis=1)

# you might need to play around with the max_cluster_distance parameter.
# decreasing this gives more clusters.
(clus_logons, _, _) = dbcluster_events(data=logon_features,
                                       time_column='TimeGenerated',
                                       cluster_columns=['AccountNum', 'LogonType'],
                                       max_cluster_distance=0.0001)
print('Number of input events:', len(host_logons))
print('Number of clustered events:', len(clus_logons))
print('\nDistinct host logon patterns:')
display(clus_logons.sort_values('TimeGenerated'))
Number of input events: 14 Number of clustered events: 3 Distinct host logon patterns:
# Display logon details
nbdisplay.display_logon_data(clus_logons)

Comparing All Logons with Clustered Results Relative to the Alert Timeline

clus_logons
# Show timeline of events - all logons + clustered logons
# ref_event marks the time of the first clustered logon
logon_data = {"Clustered": {"data": clus_logons},
              "All Logons": {"data": host_logons}}
nbdisplay.display_timeline(data=logon_data,
                           source_columns=['Account', 'LogonType'],
                           ref_event=clus_logons.iloc[0],
                           title='All Host Logons',
                           legend="inline")

View Process Session and Logon Events in Timelines

This shows the timeline of the clustered logon events with the process tree obtained earlier. This allows you to get a sense of which logon was responsible for the process tree session, and whether any additional logons (e.g. creating a process as another user) might be associated with the alert timeline.

Note: you should use the pan and zoom tools to align the timelines, since the data may cover different time ranges.

# Show timeline of events - all events
nbdisplay.display_timeline(data=clus_logons,
                           source_columns=['Account', 'LogonType'],
                           title='Clustered Host Logons', height=200)
process_tree = pd.read_csv('data/process_tree.csv',
                           parse_dates=["TimeGenerated"],
                           infer_datetime_format=True)
nbdisplay.display_timeline(data=process_tree,
                           title='Alert Process Session', height=200)
nbdisplay.display_timeline(data=clus_logons,
                           group_by="Account",
                           source_columns=['Account', 'LogonType'],
                           title='Clustered Host Logons',
                           legend="right", yaxis=True)
# Counts of Logon types by Account
host_logons[['Account', 'LogonType', 'TimeGenerated']].groupby(
    ['Account', 'LogonType']).count()