Guided Hunting - Anomaly detection with Isolation Forest on Windows Logon data
Notebook Version: 1.0
Python Version: Python 3.8 - AzureML
Required Packages: Msticpy, Msticnb, matplotlib, ipywidgets
Platforms Supported: Azure Machine Learning Notebooks
Data Source Required: Yes
Data Source: SecurityEvents
Description
In this sample guided-scenario notebook, we will demonstrate how to hunt for anomalous user logon activity using an Isolation Forest model.
We will start by reading historical Windows logon data from the Microsoft Sentinel workspace, then preprocess the dataset using a series of data-preparation steps such as aggregation, summarization, data type conversion, and deriving new fields. Next we will perform feature engineering and select a subset of features from the prepared data to build the Isolation Forest model. Finally, we will run the model to score the results and identify anomalies with higher scores.
The Isolation Forest algorithm splits the data into two parts based on a random threshold value and continues splitting recursively until each data point is isolated. Anomalies are then detected using isolation (how far a data point is from the rest of the data): the Isolation Forest takes the average path length (the number of splits needed to isolate a sample) across all trees for a given instance and uses it to decide whether the instance is an anomaly (shorter average path lengths indicate anomalies).
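The scoring idea above can be sketched with scikit-learn's `IsolationForest` on toy data (the data and parameters here are illustrative assumptions, not the notebook's actual inputs):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy data: 100 tightly clustered points plus one obvious outlier.
rng = np.random.RandomState(42)
X = np.vstack([rng.normal(0, 1, size=(100, 2)), [[8.0, 8.0]]])

# Shorter average path lengths across the trees give lower scores,
# i.e. stronger anomalies.
model = IsolationForest(n_estimators=100, contamination=0.01,
                        random_state=42).fit(X)
scores = model.score_samples(X)   # lower = more anomalous
labels = model.predict(X)         # -1 = anomaly, 1 = inlier

print(labels[-1], scores[-1])
```

The isolated point at (8, 8) needs far fewer random splits to separate from the cluster, so its score is well below the rest and it is labelled `-1`.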
Disclaimer: Some of the sections in this notebook, such as the PCA plot visualization, interpreting SHAP values, and customizing the anomaly algorithm score, require prior knowledge of data science algorithms and experience interpreting results and visualizations, which is typically the domain of data scientists. We have included one-line notes about the context and reasoning where possible, but you may need to refer to other resources to grasp the concepts.

Image Credits: Detecting and preventing abuse on LinkedIn using isolation forests
Please run the cells sequentially to avoid errors.
Please do not use "run all cells".
Notebook initialization
The next cell:
Checks for the correct Python version
Checks versions and optionally installs required packages
Imports the required packages into the notebook
Sets a number of configuration options.
This should complete without errors. If you encounter errors or warnings, look at the following two notebooks:
If you are running in the Microsoft Sentinel Notebooks environment (Azure Notebooks or Azure ML) you can run live versions of these notebooks:
You may also need to do some additional configuration to successfully use functions such as Threat Intelligence service lookup and Geo IP lookup. There are more details about this in the ConfiguringNotebookEnvironment notebook and in these documents:
Authentication to LA Workspace
Use the following syntax if you are authenticating using an Azure Active Directory AppId and Secret:
instead of
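As a sketch, an AppId/secret connection can be expressed as a connection string passed to the query provider instead of the interactive device-code flow. All IDs below are placeholders, and the exact URI pattern should be checked against the msticpy documentation:

```python
# Hypothetical IDs for illustration only -- substitute your own values.
tenant_id = "72f988bf-0000-0000-0000-000000000000"
client_id = "d3590ed6-0000-0000-0000-000000000000"
client_secret = "my-app-secret"   # keep in Key Vault, not in the notebook
workspace_id = "802d39e1-0000-0000-0000-000000000000"

# App-registration (client credentials) connection string, following the
# loganalytics URI pattern used by msticpy/Kqlmagic.
connect_str = (
    f"loganalytics://tenant('{tenant_id}').clientid('{client_id}')"
    f".clientsecret('{client_secret}').workspace('{workspace_id}')"
)

# qry_prov.connect(connect_str)   # instead of qry_prov.connect(WorkspaceConfig())
print(connect_str.startswith("loganalytics://"))
```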
Note: you may occasionally see a JavaScript error displayed at the end of the authentication - you can safely ignore this.
On successful authentication you should see a popup schema button. To find your Workspace Id go to Log Analytics. Look at the workspace properties to find the ID.
Data Preparation
In this step, we will prepare the Windows logon events and do some preprocessing before data modelling. For this case, we primarily consider logon event IDs 4624 and 4625 with specific logon types.
Events 4624 and 4625 correspond to successful and failed sign-ins respectively. You can read more about these event IDs in the links below.
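As an illustration, once the query results land in a pandas dataframe, the filter described above might look like this (the column names follow the Sentinel SecurityEvent schema, but the sample rows are made up):

```python
import pandas as pd

# Minimal mock of a SecurityEvent result set -- rows are fabricated examples.
events = pd.DataFrame({
    "TimeGenerated": pd.to_datetime(
        ["2023-01-01 10:00", "2023-01-01 10:05", "2023-01-01 10:07"]),
    "EventID": [4624, 4625, 4688],          # 4688 is process creation, not logon
    "Account": ["contoso\\alice", "contoso\\bob", "contoso\\alice"],
    "LogonType": [3, 3, None],
})

# Keep only successful (4624) and failed (4625) logons of a specific type
# (LogonType 3 = network logon, used here as an example).
logons = events[events["EventID"].isin([4624, 4625]) & (events["LogonType"] == 3)]
print(len(logons))
```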
Historical Data Processing
For this model, we can consider up to 21 days of historical data. If you want to generate these anomalies on a recurring basis, then depending on the scale and volume of the data, you can set up an intermediate pipeline to save historical data into a custom table and load results from it. Check out the blog for ways to export historical data at scale using a notebook: Export Historical Log Data from Microsoft Sentinel. For this demo, we are retrieving data from the original table. We have also provided a demo dataset if you want to test the notebook without connecting to your workspace.
Feature Engineering
In this step, we are creating additional features/columns.
We have selected four columns (features) with numeric data points:
FailedLogons
SuccessfulLogons
ComputersSuccessfulAccess
SrcIpSuccessfulAccess
and also deriving additional columns by calculating the mean, standard deviation, and z-scores for each of them. Converting to z-scores is not strictly necessary for numerical features, as Isolation Forests are scale invariant, but this pre-processing is done so these features can be used later in visualizations such as PCA. We have also applied log scaling as part of the data pre-processing steps; this is not required either, but based on various data studies in production environments we have seen that it gives finer results. You can skip or add this step based on your own data study and analysis of the results.
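A minimal sketch of this feature-derivation step, using assumed per-user aggregate values for the four selected features:

```python
import numpy as np
import pandas as pd

# Hypothetical per-user aggregates matching the four selected features;
# the last row simulates an unusually active account.
df = pd.DataFrame({
    "FailedLogons": [1, 2, 1, 250],
    "SuccessfulLogons": [10, 12, 11, 400],
    "ComputersSuccessfulAccess": [1, 1, 2, 40],
    "SrcIpSuccessfulAccess": [1, 2, 1, 35],
})

for col in df.columns.tolist():
    # Log scaling compresses heavy-tailed counts; log1p handles zeros safely.
    df[f"{col}_log"] = np.log1p(df[col])
    # Z-score: how many standard deviations each value sits from the mean.
    df[f"{col}_zscore"] = (df[col] - df[col].mean()) / df[col].std()

print(df.filter(like="zscore").round(2))
```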
Data Modelling
In this step we will specify features to be modelled and run isolation forest algorithm against the data.
Isolation Forest Anomaly detection
In this step, we will select a subset of the features generated in the previous step and use it for data modelling. We will then fit an Isolation Forest model on the data with the selected features and calculate the anomalies.
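A hedged sketch of this modelling step with scikit-learn; the feature values and model parameters below are illustrative assumptions, not the notebook's actual settings:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical engineered feature matrix (one row per user per day);
# the last row simulates an anomalous account.
df = pd.DataFrame({
    "FailedLogons": [1, 2, 1, 0, 300],
    "SuccessfulLogons": [10, 12, 11, 9, 500],
    "ComputersSuccessfulAccess": [1, 1, 2, 1, 60],
    "SrcIpSuccessfulAccess": [1, 2, 1, 1, 45],
})

feature_cols = ["FailedLogons", "SuccessfulLogons",
                "ComputersSuccessfulAccess", "SrcIpSuccessfulAccess"]

model = IsolationForest(n_estimators=200, contamination=0.1, random_state=0)
df["anomaly"] = model.fit_predict(df[feature_cols])    # -1 = anomaly, 1 = inlier
df["score"] = -model.score_samples(df[feature_cols])   # higher = more anomalous

outliers = df[df["anomaly"] == -1]
print(outliers[["score"]])
```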
Data Visualization
In this step, we will explore various ways to visualize the outliers identified in the previous step.
Histogram
3D ScatterPlot using PCA
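The projection behind the 3D scatter plot can be sketched as follows; the input matrix here is random standardized data standing in for the z-scored features:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Stand-in for the z-scored feature matrix; PCA is scale sensitive,
# which is one reason the z-scored features are useful here.
X = rng.normal(size=(200, 4))

# Reduce 4 features to 3 principal components for plotting.
pca = PCA(n_components=3)
coords = pca.fit_transform(X)   # shape (200, 3): the axes of the 3D scatter

print(coords.shape, pca.explained_variance_ratio_.sum())
# A 3D scatter can then be drawn with matplotlib's 3D axes using
# coords[:, 0], coords[:, 1], coords[:, 2], colouring points by anomaly label.
```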
Correlation Plot
A correlation plot gives an idea of how different features are correlated with each other. For example, you will observe a linear correlation between FailedLogons and DistinctSrcIp/DistinctSrcHostName. If there are strong correlations between multiple features, you can experiment with removing some of them and see the impact on the outlier results.
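A small illustration of how such a correlation shows up, using synthetic counts where DistinctSrcIp is deliberately constructed to move with FailedLogons:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(1)
failed = rng.poisson(5, 100)
df = pd.DataFrame({
    "FailedLogons": failed,
    # Built from FailedLogons plus noise, so the two are linearly correlated.
    "DistinctSrcIp": failed + rng.poisson(1, 100),
    # Independent counts, so correlation with the others stays near zero.
    "SuccessfulLogons": rng.poisson(20, 100),
})

# Pairwise Pearson correlations; this matrix is what the heatmap visualizes.
corr = df.corr()
print(corr.round(2))
```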
Interpreting Global and Local anomalies with SHAP
Global interpretability and local interpretability are two different ways to understand how a machine learning model makes predictions. Global interpretability looks at the model as a whole and tries to understand which features are most important for the model's overall performance. Local interpretability, on the other hand, focuses on a single prediction and tries to understand which features are most important for that particular prediction.
SHAP values are generally the difference between the expected output and the partial dependence plot at the feature's value. You can read more scientific details about SHAP values in the documentation: Reading-SHAP-values-from-partial-dependence-plots
Global Interpretation - Summary plot for Feature Importance
Below we have three different visualizations for global interpretation.
For these visualizations, we will pass the entire shap_values matrix instead of a single entry.
Here is a short description of these visualization types and how to interpret them.
Force plot: A SHAP force plot shows the contribution of each feature to the final prediction for a single data point. The plot has a horizontal axis that shows the SHAP value, which indicates how much a feature contributed to the prediction (positive values indicate that the feature contributed positively, and negative values indicate that the feature contributed negatively). The plot also has horizontal bars for each feature, which indicate the value of the feature for the data point being analyzed. The bars are colored to show whether the feature value is high (in red) or low (in blue). The width of each bar shows how much the feature contributed to the final prediction.
Bar plot: A SHAP bar plot, on the other hand, shows the feature importance for a set of data points. The plot has a horizontal axis that shows the mean SHAP value for each feature across the set of data points being analyzed. The plot also has vertical bars for each feature, which indicate the feature importance (how much the feature contributed to the prediction on average) and the direction of the effect (positive or negative). The bars are colored to show whether the feature had a positive (in red) or negative (in blue) effect on the prediction.
To interpret the SHAP force plot or bar plot, you should look for features with high absolute SHAP values or feature importance. These are the features that have the greatest impact on the prediction. The direction of the SHAP value or feature importance indicates whether the feature has a positive or negative effect on the prediction. For example, a high positive SHAP value or feature importance for the "number of failed login attempts" feature indicates that a high number of failed login attempts is associated with a higher probability of being an anomaly.
Local Interpretation - Feature importance
Below we have three different visualizations for local interpretation.
For these visualizations, we will pass shap_values at the index of a single outlier/inlier instead of the whole matrix.
Example Outlier
Example Inlier
Populate dataset with SHAP values
Here we will apply SHAP to the data and concatenate the SHAP values with the original dataframe, so each row in the dataframe carries its SHAP values for easier analysis without visualization if required.
Conclusion
In this notebook, we started with Windows logon event data with the goal of finding users with anomalous logon patterns. This notebook is targeted at data scientists, who can use it to tweak the various stages of execution, from feature engineering to data visualization, to explore the data in various ways. We have released another version of this notebook, targeted at SOC analysts/threat hunters, who want to investigate the anomalies produced by this model and triage any malicious activity.