GitHub Repository: Azure/Azure-Sentinel-Notebooks
Path: blob/master/Guided Hunting - Anomalous Office365 Exchange Sessions.ipynb
Kernel: Python 3.10 - SDK v2

Sessionize, Model and Visualise Office Exchange Data

Notebook Version: 1.0
Python Version: Python 3.10
Required Packages: msticpy, pandas, kqlmagic

Data Sources Required:

  • Log Analytics - OfficeActivity

Configuration Required:

This notebook presumes you have your Microsoft Sentinel workspace settings configured in a config file. If you do not have this in place, please read the msticpy configuration docs and run the ConfiguringNotebookEnvironment notebook from this repository first.

Description:

Various types of security logs can be broken up into sessions/sequences where each session can be thought of as an ordered sequence of events. It can be useful to model these sessions in order to understand what the usual activity is like so that we can highlight anomalous sequences of events.

In this hunting notebook, we treat the Office Exchange PowerShell cmdlets ("Set-Mailbox", "Set-MailboxFolderPermission" etc) as "events" and then group the events into "sessions" on a per-user basis. We demonstrate the sessionizing, modelling and visualisation on the Office Exchange Admin logs, however the methods used in this notebook can be applied to other log types as well.

A new subpackage, anomalous_sequence, was recently added to msticpy. It lets you sessionize, model, and visualise your data via some high-level functions. For more details on how to use this subpackage, please read the docs and/or refer to the more documentation-heavy companion notebook. The subpackage documentation also includes suggested guidance on applying this library to other log types.

High level sections of the notebook:

  • Sessionize your Office Exchange logs data using built-in KQL operators

  • Use the anomalous_sequence subpackage of msticpy to model the sessions

  • Use the anomalous_sequence subpackage of msticpy to visualize the scored sessions

Notebook initialization

The next cell:

  • Checks for the correct Python version

  • Checks versions and optionally installs required packages

  • Imports the required packages into the notebook

  • Sets a number of configuration options

This should complete without errors. If you encounter errors or warnings, please refer to the troubleshooting notebooks in the Azure-Sentinel-Notebooks repository.

from pathlib import Path
from IPython.display import display, HTML

REQ_PYTHON_VER = "3.10"
REQ_MSTICPY_VER = "2.12.0"

display(HTML("<h3>Starting Notebook setup...</h3>"))

# If the installation fails, try to install manually using
# %pip install --upgrade msticpy
extra_imports = [
    "msticpy.analysis.anomalous_sequence.utils.data_structures, Cmd",
    "msticpy.analysis.anomalous_sequence, anomalous",
    "msticpy.analysis.anomalous_sequence.model, Model",
    "typing, List",
    "typing, Dict",
]

import msticpy as mp

mp.init_notebook(
    namespace=globals(),
    extra_imports=extra_imports,
)

Using LogAnalytics Query Provider

msticpy has a QueryProvider class which you can use to connect to your Log Analytics data environment.

# Collect Microsoft Sentinel Workspace details from our config file and use them to connect
try:
    # Update to WorkspaceConfig(workspace="WORKSPACE_NAME") to use a workspace
    # other than your default one.
    # Run WorkspaceConfig().list_workspaces() to see a list of configured workspaces
    ws_config = mp.WorkspaceConfig()
    md("Workspace details collected from config file")
    qry_prov = mp.QueryProvider(data_environment="AzureSentinel")
    qry_prov.connect(connection_str=ws_config.code_connect_str)
except RuntimeError:
    md(
        """You do not have any Workspaces configured in your config files.
        Please run https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb
        to set up these files before proceeding""",
        "bold",
    )

Create Sessions from your Office Exchange logs

What is a Session?

In this context, a session is an ordered sequence of events/commands. The anomalous_sequence subpackage can handle 3 different formats for each of the sessions:

  1. A sequence of just events/commands. ["Set-User", "Set-Mailbox"]

  2. A sequence of events/commands with accompanying parameters. [Cmd(name="Set-User", params={"Identity", "Force"}), Cmd(name="Set-Mailbox", params={"Identity", "AuditEnabled"})]

  3. A sequence of events/commands with accompanying parameters and their corresponding values. [Cmd(name="Set-User", params={"Identity": "blahblah", "Force": "true"}), Cmd(name="Set-Mailbox", params={"Identity": "blahblah", "AuditEnabled": "false"})]

The Cmd datatype can be accessed from msticpy.analysis.anomalous_sequence.utils.data_structures
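To make the three formats concrete, here is a small sketch. The real Cmd type lives at the msticpy path above; a namedtuple stand-in is used here purely so the snippet runs without msticpy installed.

```python
# The three accepted session formats, made concrete. The real Cmd type lives in
# msticpy.analysis.anomalous_sequence.utils.data_structures; a namedtuple
# stand-in is used here so the snippet runs without msticpy installed.
from collections import namedtuple

Cmd = namedtuple("Cmd", ["name", "params"])

# 1. commands only: a plain list of strings
session_cmds = ["Set-User", "Set-Mailbox"]

# 2. commands with parameters: params is a set of parameter names
session_params = [
    Cmd(name="Set-User", params={"Identity", "Force"}),
    Cmd(name="Set-Mailbox", params={"Identity", "AuditEnabled"}),
]

# 3. commands with parameters and values: params is a dict of name -> value
session_param_values = [
    Cmd(name="Set-User", params={"Identity": "blahblah", "Force": "true"}),
    Cmd(name="Set-Mailbox", params={"Identity": "blahblah", "AuditEnabled": "false"}),
]
```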

How will we sessionize the data?

We discuss two possible approaches:

  1. Use the sessionize module from msticpy's anomalous_sequence subpackage

  2. Sessionize directly inside your KQL query to retrieve data from Log Analytics

In this notebook, we use the second approach (KQL) to sessionize the Office Exchange logs. In order to do the sessionizing using KQL, we make use of the row_window_session function.

However, if you are interested in using msticpy's sessionizing capabilities, then please read the docs and/or refer to this more documentation heavy notebook.
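As a rough, library-independent illustration of the session-break rule applied below, here is a pandas sketch on toy data. It applies only the "user changes" and "more than 2 minutes apart" rules, not the 20-minute session cap; the column and command names are illustrative.

```python
# Toy pandas sketch of the sessionizing rule: start a new session whenever the
# user changes or consecutive events are more than 2 minutes apart.
import pandas as pd

events = pd.DataFrame(
    {
        "UserId": ["alice", "alice", "alice", "bob"],
        "TimeGenerated": pd.to_datetime(
            ["2024-01-01 10:00", "2024-01-01 10:01", "2024-01-01 10:30", "2024-01-01 10:31"]
        ),
        "Operation": ["Set-User", "Set-Mailbox", "Set-Mailbox", "Set-User"],
    }
).sort_values(["UserId", "TimeGenerated"])

# time gap to the previous event of the same user (NaT at each user's first event)
gap = events.groupby("UserId")["TimeGenerated"].diff()

# a new session starts at each user's first event or after a gap of > 2 minutes
new_session = gap.isna() | (gap > pd.Timedelta(minutes=2))
events["session_id"] = new_session.cumsum()

# each session becomes an ordered list of commands
sessions = events.groupby("session_id")["Operation"].apply(list)
```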

Use Kusto to Sessionize your Logs Data

The cell below contains a Kusto query against the OfficeActivity table. In this example, we want the sessions to be on a per UserId - ClientIP basis. In addition, we require that each session be no longer than 20 minutes in total, with consecutive commands no more than 2 minutes apart. (These requirements are somewhat arbitrary and can be adjusted for different data sets and use cases.)

Here are some high level steps to the query:

  • Add a time filter which goes back far enough so you have enough data to train the model.

  • Filter to the desired type of logs.

  • Exclude some known automated users (optional)

  • Sort the rows by UserId, ClientIp, TimeGenerated in ascending order

  • Use the native KQL function row_window_session to create an additional "begin" column to aid creating the sessions

  • Summarize the commands (and optionally parameters) by UserId, ClientIp, begin

  • Optionally exclude sessions which have only 1 command

Note that in KQL, comments are made using //

# write kql query
query = """
let time_back = 60d;
OfficeActivity
| where TimeGenerated >= ago(time_back)
//
// filter to the event type of interest
| where RecordType == 'ExchangeAdmin'
//
// exclude some known automated users (optional)
| where UserId !startswith "NT AUTHORITY"
| where UserId !contains "prod.outlook.com"
//
// create new dynamic variable with the command as the key, and the parameters as the values
| extend params = todynamic(strcat('{"', Operation, '" : ', tostring(Parameters), '}'))
| project TimeGenerated, UserId, ClientIP, Operation, params
//
// sort by the user related columns and the timestamp column in ascending order
| sort by UserId asc, ClientIP asc, TimeGenerated asc
//
// calculate the start time of each session into the "begin" variable,
// with each session max 20 mins in length and each event at most 2 mins apart.
// A new session is created each time one of the user related columns changes.
| extend begin = row_window_session(TimeGenerated, 20m, 2m, UserId != prev(UserId) or ClientIP != prev(ClientIP))
//
// summarize the operations and the params by the user related variables and the "begin" variable
| summarize cmds=makelist(Operation), end=max(TimeGenerated), nCmds=count(),
    nDistinctCmds=dcount(Operation), params=makelist(params) by UserId, ClientIP, begin
//
// optionally specify an order to the final columns
| project UserId, ClientIP, nCmds, nDistinctCmds, begin, end, duration=end-begin, cmds, params
//
// optionally filter out sessions which contain only one event
//| where nCmds > 1
"""
# execute the query
sessions_df = qry_prov.exec_query(query=query)
# Tip: once this cell has run, comment it out to avoid re-running
# the query and slowing down the notebook
try:
    print(sessions_df.shape)
except AttributeError:
    # fall back to the raw kqlmagic result if exec_query did not return a dataframe
    sessions_df = _kql_raw_result_.to_dataframe()
    print(sessions_df.shape)
# drop the param columns for a more compact preview
# (errors="ignore" covers the case where the param session columns have not been created yet)
sessions_df.drop(
    columns=["params", "param_value_session", "param_session"], errors="ignore"
).head()

Convert Sessions to Correct Format for the Model

Recall the three allowed session formats described above.

So let's see what needs to be done to the sessions_df.

The "cmds" column is already in a suitable format of type (1). This is because it is a list of strings. However, if you are interested in including the parameters (and possibly the values) in the modelling stage, then we need to make use of the Cmd datatype.

In particular, we need to define a custom cleaning function which will transform the "params" column slightly to become a list of the Cmd datatype. This cleaning function is specific to the format of the exchange demo data. Therefore, you may need to tweak it slightly before you can use it on other data sets.

# define a helper function for converting the sessions with params
# (and values) into a suitable format
def process_exchange_session(
    session_with_params: List[Dict[str, List[Dict[str, str]]]],
    include_vals: bool,
) -> List[Cmd]:
    """
    Convert an exchange session with params to an allowed format.

    param session_with_params: example format:
        [
            {'Set-Mailbox': [{'Name': 'MessageCopyForSentAsEnabled', 'Value': 'True'},
                             {'Name': 'Identity', 'Value': '[email protected]'}]}
        ]
    param include_vals: if True, the output will be in a format which includes
        the values, else it will contain just the parameters
    return: list of the Cmd data type which includes either just the parameters,
        or also the corresponding values
    """
    new_ses = []
    for cmd in session_with_params:
        c = list(cmd.keys())[0]
        par = list(cmd.values())[0]
        new_pars = set()
        if include_vals:
            new_pars = dict()
        for p in par:
            if include_vals:
                new_pars[p["Name"]] = p["Value"]
            else:
                new_pars.add(p["Name"])
        new_ses.append(Cmd(name=c, params=new_pars))
    return new_ses
# let's create suitable sessions for params, and suitable sessions for
# params + values, by applying the custom function
sessions = sessions_df.cmds.values.tolist()
param_sessions = []
param_value_sessions = []
for ses in sessions_df.params.values.tolist():
    new_ses_set = process_exchange_session(session_with_params=ses, include_vals=False)
    new_ses_dict = process_exchange_session(session_with_params=ses, include_vals=True)
    param_sessions.append(new_ses_set)
    param_value_sessions.append(new_ses_dict)
# let's see the differences between the three types of sessions
ind = 0
print(sessions[ind][:3])
print(param_sessions[ind][:3])
print(param_value_sessions[ind][:3])
# let's add these reformatted sessions as columns to a dataframe
data = sessions_df
data["session"] = sessions
data["param_session"] = param_sessions
data["param_value_session"] = param_value_sessions
data.head()

Model the sessions

We will give a brief description of how the modelling works under the hood for each of the three session types.

  • Commands only

    • We treat the sessions as an ordered sequence of commands.

    • We apply the Markov Assumption where we assume each command depends only on the command immediately before it.

    • This means the likelihood of each session can be computed by multiplying a sequence of transition probabilities together.

    • We use a sliding window (e.g. of length 3) throughout each session and then use the likelihood of the rarest window as the score for the session.

  • Commands with Parameters

    • All of the above ("commands only" case) except for one difference.

    • This time, we include the parameters in the modelling.

    • We make the assumption that the presence of each parameter is independent conditional on the command.

    • We therefore model the presence of the parameters as independent Bernoulli random variables (conditional on the command).

    • So to compute the likelihood of a session, each transition probability (of the commands) will be accompanied by a product of probabilities (for the parameters).

    • A subtlety to note is that we take the geometric mean of the product of parameter probabilities, so that we don't penalise commands which happen to have more parameters set than average.

    • We use the same sliding window approach used with the "commands only" case.

  • Commands with Parameters and their Values

    • All of the above ("commands with parameters" case) except for one difference.

    • This time, we include the values in the modelling.

    • Some rough heuristics are used to determine which parameters have values which are categorical (e.g. "true" and "false" or "high", "medium" and "low") vs values which are arbitrary strings (such as email addresses). There is the option to override the "modellable_params" directly in the Model class.

    • So to compute the likelihood of a session, each transition probability (of the commands) will be accompanied by a product of probabilities (for the parameters and categorical values).

    • We use the same sliding window approach used with the "commands only" case.
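The "commands only" scoring described above can be sketched in a few lines. This is a toy implementation, not the msticpy internals: it uses unsmoothed maximum-likelihood transition probabilities, omits the end-token handling, and the training sessions are made up.

```python
# Toy sketch of "commands only" scoring: Markov transition probabilities plus
# a rarest-sliding-window session score. Not the msticpy internals.
from collections import Counter
from math import prod

train = [
    ["Set-User", "Set-Mailbox", "Set-Mailbox"],
    ["Set-User", "Set-Mailbox", "Set-Mailbox"],
    ["Set-User", "Remove-MailboxPermission", "Set-Mailbox"],
]

pair_counts = Counter()  # counts of (previous command, next command)
prev_counts = Counter()  # counts of each command appearing as a predecessor
for ses in train:
    for a, b in zip(ses, ses[1:]):
        pair_counts[(a, b)] += 1
        prev_counts[a] += 1

def transition_prob(a, b):
    # MLE of P(b | a); a real model would smooth these estimates
    return pair_counts[(a, b)] / prev_counts[a] if prev_counts[a] else 0.0

def rarest_window_likelihood(session, window=3):
    # each length-`window` slice's likelihood is the product of its transition
    # probabilities; the session score is the rarest (minimum) slice
    if len(session) < window:
        return float("nan")  # too short to score with this window length
    windows = [session[i:i + window] for i in range(len(session) - window + 1)]
    return min(prod(transition_prob(a, b) for a, b in zip(w, w[1:])) for w in windows)
```

A common sequence scores a higher likelihood than one containing a rare transition, which is exactly the signal used to surface anomalous sessions.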

Important note:

If you set the window length to be k, then only sessions which have at least k-1 commands will have a valid (not np.nan) score. The reason for the -1 is because we append an end token to each session by default, so a session of length k-1 gets treated as length k during the scoring.

There are 3 high level functions available in this library

  1. score_sessions

  2. visualise_scored_sessions

  3. score_and_visualise_sessions

We will demonstrate the usage of the first two functions, but the "score_and_visualise_sessions" function can be used in a similar way.

If you want to see more detail about any of the arguments to the functions, you can simply run: help(name_of_function)

We will first demonstrate the high level function for modelling the sessions.

We will do this for the "Commands with Parameters and their Values" session type.

But because we created columns for all three session types, you can set the "session_column" parameter in the "score_sessions" function below to any of the following:

  1. session

  2. param_session

  3. param_value_session

# This function returns a dataframe with two additional columns appended:
# "rarest_window3_likelihood" and "rarest_window3"
modelled_df = anomalous.score_sessions(
    data=data,
    session_column="param_value_session",
    window_length=3,
)
# Let's view the resulting dataframe in ascending order of the computed likelihood metric
modelled_df.sort_values("rarest_window3_likelihood").head()
# we can view individual sessions in more detail
modelled_df.sort_values("rarest_window3_likelihood").rarest_window3.iloc[0]

Now we demonstrate the visualization component of the library

We do this using the "visualise_scored_sessions" function. This function returns an interactive timeline plot which allows you to zoom into different sections etc.

  • The time of the session will be on the x-axis.

  • The computed likelihood metric will be on the y-axis.

  • Lower likelihoods correspond to rarer sessions.

Important note:

During the scoring/modelling stage, if you set the window length to be k, then only sessions which have at least k-1 commands will appear in the interactive timeline plot. This is because sessions with fewer than k-1 commands will have a score of np.nan. The reason for the -1 is because we append an end token to each session by default, so a session of length k-1 gets treated as length k during the scoring.

# visualise the scored sessions in an interactive timeline plot
anomalous.visualise_scored_sessions(
    data_with_scores=modelled_df[modelled_df["rarest_window3_likelihood"].notnull()],
    time_column="begin",  # appears on the x-axis
    score_column="rarest_window3_likelihood",  # appears on the y-axis
    window_column="rarest_window3",  # represents the session in the tool-tips
    source_columns=["UserId", "ClientIP"],  # any additional columns to appear in the tool-tips
)