GitHub Repository: Azure/Azure-Sentinel-Notebooks
Path: blob/master/tutorials-and-examples/feature-tutorials/PivotFunctions-Introduction.ipynb
Kernel: Python (condadev)

MSTICPy Pivot Functions

We recently released a new version of MSTICPy with a feature called Pivot functions. You must have msticpy installed to run this notebook:

%pip install --upgrade msticpy

MSTICPy versions >= 1.0.0

This feature has three main goals:

  • Making it easy to discover and invoke MSTICPy functionality

  • Creating a standardized way to call pivotable functions

  • Letting you assemble multiple functions into re-usable pipelines.

Here are a couple of examples showing calling different kinds of enrichment functions from the IpAddress entity:

>>> from msticpy.datamodel.entities import IpAddress, Host
>>> IpAddress.util.ip_type(ip_str="157.53.1.1")
            ip  result
    157.53.1.1  Public
>>> IpAddress.util.whois("157.53.1.1")
    asn  asn_cidr  asn_country_code  asn_date    asn_description  asn_registry  nets .....
    NA   NA        US                2015-04-01  NA               arin          [{'cidr': '157.53.0.0/16'...
>>> IpAddress.util.geoloc(value="157.53.1.1")
    CountryCode  CountryName    State  City  Longitude  Latitude  Asn...
    US           United States  None   None  -97.822    37.751    None...

This second example shows a pivot function that does a data query for host logon events from a Host entity.

>>> Host.AzureSentinel.list_host_logons(host_name="VictimPc")
                Account  EventID                     TimeGenerated                Computer  SubjectUserName  SubjectDomainName
    NT AUTHORITY\SYSTEM     4624  2020-10-01 22:39:36.987000+00:00  VictimPc.Contoso.Azure        VictimPc$            CONTOSO
    NT AUTHORITY\SYSTEM     4624  2020-10-01 22:39:37.220000+00:00  VictimPc.Contoso.Azure        VictimPc$            CONTOSO
    NT AUTHORITY\SYSTEM     4624  2020-10-01 22:39:42.603000+00:00  VictimPc.Contoso.Azure        VictimPc$            CONTOSO

The pivot functionality exposes operations relevant to a particular entity as methods (or functions) of that entity. These operations include:

  • Data queries

  • Threat intelligence lookups

  • Other data lookups such as geo-location or domain resolution

  • and other local functionality

You can also add other functions from 3rd party Python packages or ones you write yourself as pivot functions.

Terminology

Before we get into things let's clear up a few terms.

Entities

These are Python classes that represent real-world objects commonly encountered in CyberSec investigations and hunting. E.g. Host, URL, IP Address, Account, etc.

Pivoting

This comes from the common practice in CyberSec investigations of navigating from one suspect entity to another. E.g. you might start with an alert identifying a potentially malicious IP Address, from there you 'pivot' to see which hosts or accounts were communicating with that address. From there you might pivot again to look at processes running on the host or Office activity for the account.

Background Reading

This article is available in Notebook form so that you can try out the examples. [TODO]

There is also full documentation of the Pivot functionality on our ReadtheDocs page


Life before pivot functions

Before Pivot functions, your ability to use the various bits of functionality in MSTICPy was always bounded by your knowledge of where a certain function was (or your enthusiasm for reading the docs).

For example, suppose you had an IP address that you wanted to do some simple enrichment on.

ip_addr = "20.72.193.242"

First you'd need to locate and import the functions. There might also be (as in the GeoIPLiteLookup class) some initialization step you'd need to do before using the functionality.

from msticpy.sectools.ip_utils import get_ip_type
from msticpy.sectools.ip_utils import get_whois_info
from msticpy.sectools.geoip import GeoLiteLookup

geoip = GeoLiteLookup()

Next you might have to check the help for each function to work out its parameters.

help(get_ip_type)
Help on function get_ip_type in module msticpy.sectools.ip_utils:

get_ip_type(ip: str = None, ip_str: str = None) -> str
    Validate value is an IP address and deteremine IPType category.

    (IPAddress category is e.g. Private/Public/Multicast).

    Parameters
    ----------
    ip : str
        The string of the IP Address
    ip_str : str
        The string of the IP Address - alias for `ip`

    Returns
    -------
    str
        Returns ip type string using ip address module

Then finally run the functions

get_ip_type(ip_addr)
'Public'
get_whois_info(ip_addr)
('MICROSOFT-CORP-MSN-AS-BLOCK, US', {'nir': None, 'asn_registry': 'arin', 'asn': '8075', 'asn_cidr': '20.64.0.0/10', 'asn_country_code': 'US', 'asn_date': '2017-10-18', 'asn_description': 'MICROSOFT-CORP-MSN-AS-BLOCK, US', 'query': '20.72.193.242', 'nets': [{'cidr': '20.34.0.0/15, 20.48.0.0/12, 20.36.0.0/14, 20.40.0.0/13, 20.33.0.0/16, 20.128.0.0/16, 20.64.0.0/10', 'name': 'MSFT', 'handle': 'NET-20-33-0-0-1', 'range': '20.33.0.0 - 20.128.255.255', 'description': 'Microsoft Corporation', 'country': 'US', 'state': 'WA', 'city': 'Redmond', 'address': 'One Microsoft Way', 'postal_code': '98052', 'emails': ['[email protected]', '[email protected]', '[email protected]'], 'created': '2017-10-18', 'updated': '2017-10-18'}], 'raw': None, 'referral': None, 'raw_referral': None})
geoip.lookup_ip(ip_addr)
([{'continent': {'code': 'NA', 'geoname_id': 6255149, 'names': {'de': 'Nordamerika', 'en': 'North America', 'es': 'Norteamérica', 'fr': 'Amérique du Nord', 'ja': '北アメリカ', 'pt-BR': 'América do Norte', 'ru': 'Северная Америка', 'zh-CN': '北美洲'}}, 'country': {'geoname_id': 6252001, 'iso_code': 'US', 'names': {'de': 'USA', 'en': 'United States', 'es': 'Estados Unidos', 'fr': 'États-Unis', 'ja': 'アメリカ合衆国', 'pt-BR': 'Estados Unidos', 'ru': 'США', 'zh-CN': '美国'}}, 'location': {'accuracy_radius': 1000, 'latitude': 47.6032, 'longitude': -122.3412, 'time_zone': 'America/Los_Angeles'}, 'registered_country': {'geoname_id': 6252001, 'iso_code': 'US', 'names': {'de': 'USA', 'en': 'United States', 'es': 'Estados Unidos', 'fr': 'États-Unis', 'ja': 'アメリカ合衆国', 'pt-BR': 'Estados Unidos', 'ru': 'США', 'zh-CN': '美国'}}, 'subdivisions': [{'geoname_id': 5815135, 'iso_code': 'WA', 'names': {'en': 'Washington', 'es': 'Washington', 'fr': 'Washington', 'ja': 'ワシントン州', 'ru': 'Вашингтон', 'zh-CN': '华盛顿州'}}], 'traits': {'ip_address': '20.72.193.242', 'prefix_len': 18}}], [IpAddress(Address=20.72.193.242, Location={ 'AdditionalData': {}, 'CountryCode': 'US', ...)])

At which point you'd discover that the output from each function was somewhat raw and it would take a bit more work if you wanted to combine it in any way (say in a single table).
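To get a single table out of these raw results you had to normalize them yourself. A minimal sketch of that manual stitching, using plain pandas (the values are taken from the outputs above; the column names are illustrative):

```python
import pandas as pd

ip_addr = "20.72.193.242"

# Raw results come back in different shapes: a string, a (str, dict) tuple, a dict
ip_type = "Public"                                  # from get_ip_type
whois_desc, whois_detail = (
    "MICROSOFT-CORP-MSN-AS-BLOCK, US",
    {"asn": "8075", "asn_country_code": "US"},
)                                                   # from get_whois_info (abbreviated)
geo = {"CountryCode": "US", "Latitude": 47.6032, "Longitude": -122.3412}

# Flatten everything into one row of a DataFrame by hand
row = {"ip": ip_addr, "type": ip_type, "asn_desc": whois_desc,
       "asn": whois_detail["asn"], **geo}
summary_df = pd.DataFrame([row])
```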

We'll see how pivot functions address these problems in the remainder of the notebook.

Getting Started with Pivot functions

Typically we use MSTICPy's init_notebook function, which handles checking versions and importing some commonly-used packages and modules (both MSTICPy and 3rd party packages like pandas).

from msticpy.nbtools.nbinit import init_notebook
init_notebook(namespace=globals());
msticpy version installed: 1.0.0rc4 latest published: 0.9.0
Latest version is installed.
Processing imports....
Checking configuration....
No errors found.
No warnings found.
Setting notebook options....

There are some preliminary steps needed before you can use pivot functions. The main one is loading the Pivot class. Pivot functions are added to the entities dynamically. The Pivot class will try to discover relevant functions from queries, Threat Intel providers and various utility functions.

In some cases, notably data queries, the data query functions are themselves created dynamically, so these need to be loaded before you create the Pivot class. (You can always create a new instance of this class, which forces re-discovery, so don't worry if you mess up the order of things.)

Note in most cases we don't need to connect/authenticate to a data provider prior to loading Pivot

Let's load our data query provider for AzureSentinel

az_provider = QueryProvider("AzureSentinel")
Please wait. Loading Kqlmagic extension...

Now we can load and instantiate the Pivot class.

Why do we need to pass namespace=globals()? Pivot searches through the current objects defined in the Python/notebook namespace. This is most relevant for QueryProviders. In most other cases (like GeoIP and ThreatIntel providers), it will create new ones if it can't find existing ones.

from msticpy.datamodel.pivot import Pivot
pivot = Pivot(namespace=globals())
Using Open PageRank. See https://www.domcop.com/openpagerank/what-is-openpagerank

Easy discovery of functionality

Find the entity name you need

The simplest way to do this is simply to enumerate (dir) the contents of the MSTICPy entities sub-package. This should have already been imported by the init_notebook function that we ran earlier.

The items at the beginning of the list with proper capitalization are the entities.

dir(entities)
['Account', 'Alert', 'Algorithm', 'AzureResource', 'CloudApplication', 'Dns', 'ElevationToken', 'Entity', 'File', 'FileHash', 'GeoLocation', 'Host', 'HostLogonSession', 'IpAddress', 'Malware', 'NetworkConnection', 'OSFamily', 'Process', 'RegistryHive', 'RegistryKey', 'RegistryValue', 'SecurityGroup', 'Threatintelligence', 'UnknownEntity', 'Url', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'account', 'alert', 'azure_resource', 'cloud_application', 'difflib', 'dns', 'entity', 'entity_enums', 'entity_graph', 'file', 'file_hash', 'find_entity', 'geo_location', 'host', 'host_logon_session', 'ip_address', 'malware', 'network_connection', 'process', 'registry_key', 'registry_value', 'security_group', 'threat_intelligence', 'unknown_entity', 'url']

We're going to make this a little easier in a forthcoming update with this helper function.

Warning: post-0.9.0 functionality

This will throw an error in v0.9.0 of MSTICPy
entities.find_entity("ip")
Match found 'IpAddress'
msticpy.datamodel.entities.ip_address.IpAddress
entities.find_entity("azure")
No exact match found for 'azure'. Closest matches are 'AzureResource', 'Url', 'Malware'

Listing pivot functions available for an entity

Note you can always address an entity using its qualified path, e.g. entities.IpAddress, but if you are going to use one or two entities a lot it will save a bit of typing to import them explicitly.

from msticpy.datamodel.entities import IpAddress, Host

Once you have the entity you can use the get_pivot_list() function to see which pivot functions are available for it.

IpAddress.get_pivot_list()
['AzureSentinel.SecurityAlert_list_alerts_for_ip', 'AzureSentinel.SigninLogs_list_aad_signins_for_ip', 'AzureSentinel.AzureActivity_list_azure_activity_for_ip', 'AzureSentinel.AzureNetworkAnalytics_CL_list_azure_network_flows_by_ip', 'AzureSentinel.OfficeActivity_list_activity_for_ip', 'AzureSentinel.AzureNetworkAnalytics_CL_get_host_for_ip', 'AzureSentinel.Heartbeat_get_heartbeat_for_ip', 'AzureSentinel.Heartbeat_get_info_by_ipaddress', 'AzureSentinel.Syslog_list_logons_for_source_ip', 'AzureSentinel.ThreatIntelligenceIndicator_list_indicators', 'AzureSentinel.ThreatIntelligenceIndicator_list_indicators_by_ip', 'AzureSentinel.ThreatIntelligenceIndicator_list_indicators_by_hash', 'AzureSentinel.ThreatIntelligenceIndicator_list_indicators_by_filepath', 'AzureSentinel.ThreatIntelligenceIndicator_list_indicators_by_domain', 'AzureSentinel.ThreatIntelligenceIndicator_list_indicators_by_email', 'AzureSentinel.ThreatIntelligenceIndicator_list_indicators_by_url', 'ti.lookup_ip', 'ti.lookup_ipv4', 'ti.lookup_ipv4_OTX', 'ti.lookup_ipv4_Tor', 'ti.lookup_ipv4_VirusTotal', 'ti.lookup_ipv4_XForce', 'ti.lookup_ipv6', 'ti.lookup_ipv6_OTX', 'util.whois', 'util.ip_type', 'util.ip_rev_resolve', 'util.geoloc', 'util.geoloc_ips']

Some of the function names are a little unwieldy but, in many cases, this is necessary to avoid name collisions. You might notice from the list that the functions are grouped into containers - "AzureSentinel", "ti" and "util" in the above example.

Although this makes the function name even longer, we thought that it helped to keep related functionality together - so you don't get a TI lookup when you thought you were running a query.

Fortunately Jupyter notebooks/IPython support tab completion so you should not normally have to remember these names.

[Image: tab completion showing the available pivot functions on an entity]

The containers ("AzureSentinel", "util", etc.) are also callable functions - they just return the list of functions they contain.

IpAddress.util()
whois            function
ip_type          function
ip_rev_resolve   function
geoloc           function
geoloc_ips       function

Now we're ready to run any of the functions for this entity

IpAddress.util.ip_type(ip_addr)
entities.IpAddress.util.whois(ip_addr)
entities.IpAddress.util.ip_rev_resolve(ip_addr)
entities.IpAddress.util.geoloc(ip_addr)
entities.IpAddress.ti.lookup_ip(ip_addr)

Notice that we didn't need to worry about either the parameter name or format (more on this in the next section). Also, whatever the function, the output is always returned as a pandas DataFrame.

For Data query functions you do need to worry about the parameter name

Data query functions are a little more complex than most other functions and specifically often support many parameters. Rather than try to guess which parameter you meant, we require you to be explicit.

To use a data query, we need to authenticate to the provider.

az_provider.connect(WorkspaceConfig(workspace="CyberSecuritySoc").code_connect_str)

If you are not sure of the parameters required by the query you can use the built-in help

Host.AzureSentinel.SecurityAlert_list_related_alerts?
Signature: Host.AzureSentinel.SecurityAlert_list_related_alerts(*args, **kwargs) -> Union[pandas.core.frame.DataFrame, Any]
Docstring:
Retrieves list of alerts with a common host, account or process

Parameters
----------
account_name: str (optional)
    The account name to find
add_query_items: str (optional)
    Additional query clauses
end: datetime (optional)
    Query end time
host_name: str (optional)
    The hostname to find
path_separator: str (optional)
    Path separator (default value is: \\)
process_name: str (optional)
    The process name to find
query_project: str (optional)
    Column project statement (default value is: | project-rename StartTimeUtc = StartTime, EndTim...)
start: datetime (optional)
    Query start time (default value is: -30)
subscription_filter: str (optional)
    Optional subscription/tenant filter expression (default value is: true)
table: str (optional)
    Table name (default value is: SecurityAlert)
File:      c:\users\ian\anaconda3\envs\condadev\lib\functools.py
Type:      function
Host.AzureSentinel.SecurityAlert_list_related_alerts(host_name="victim00").head(5)

We also have a preview of a notebook tool that lets you browse entities and their pivot functions, search for a function by keyword, and view the help for that function. This is going to be released shortly.

Warning: post-0.9.0 functionality

This will throw an error in v0.9.0 of MSTICPy
Pivot.browse()
VBox(children=(HBox(children=(VBox(children=(HTML(value='<b>Entities</b>'), Select(description='entity', layou…

Standardized way of calling Pivot functions

Due to various factors (historical, underlying data, developer laziness and forgetfulness, etc.) the functionality in MSTICPy can be inconsistent in the way it uses input parameters.

Also, many functions will only accept inputs as a single value, a list, a DataFrame, or some unpredictable combination of these.

Pivot functions allow you to largely forget about this - you can use the same function whether you have:

  • a single value

  • a list (or any iterable) of values

  • a DataFrame with the input value in one of the columns.

Let's take an example.

Suppose we have a set of IP addresses pasted from somewhere that we want to use as input.

0, 172.217.15.99, Public
1, 40.85.232.64, Public
2, 20.38.98.100, Public
3, 23.96.64.84, Public
4, 65.55.44.108, Public
5, 131.107.147.209, Public
6, 10.0.3.4, Private
7, 10.0.3.5, Private
8, 13.82.152.48, Public

We need to convert this into a Python data object of some sort. To do this we can use another Pivot utility %%txt2df. This is a Jupyter/IPython magic function so you can just paste your data in a cell. Use %%txt2df --help in an empty cell to see the full syntax.
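If you prefer not to use the magic, the same conversion can be done with standard pandas (a rough equivalent, not the %%txt2df implementation):

```python
import io
import pandas as pd

raw = """idx, ip, type
0, 172.217.15.99, Public
1, 40.85.232.64, Public
6, 10.0.3.4, Private
"""

# skipinitialspace strips the blanks after each comma (including in the header row)
ip_df = pd.read_csv(io.StringIO(raw), sep=",", skipinitialspace=True)
```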

In the example below we specify a comma separator, indicate that the data has a header row, and save the converted data as a DataFrame named "ip_df".

Warning this will overwrite any existing variable of this name

%%txt2df --sep , --headers --name ip_df
idx, ip, type
0, 172.217.15.99, Public
1, 40.85.232.64, Public
2, 20.38.98.100, Public
3, 23.96.64.84, Public
4, 65.55.44.108, Public
5, 131.107.147.209, Public
6, 10.0.3.4, Private
7, 10.0.3.5, Private
8, 13.82.152.48, Public

For our example we'll also create a standard Python list from the ip column.

ip_list = list(ip_df.ip)
print(ip_list)
['172.217.15.99', '40.85.232.64', '20.38.98.100', '23.96.64.84', '65.55.44.108', '131.107.147.209', '10.0.3.4', '10.0.3.5', '13.82.152.48']

How did this work before?

If you recall the earlier example of get_ip_type, passing it a list or DataFrame doesn't result in anything useful.

get_ip_type(ip_list)
['172.217.15.99', '40.85.232.64', '20.38.98.100', '23.96.64.84', '65.55.44.108', '131.107.147.209', '10.0.3.4', '10.0.3.5', '13.82.152.48'] does not appear to be an IPv4 or IPv6 address
'Unspecified'

Pivot versions are (somewhat) agnostic to input data format

However, the pivotized version can accept and correctly process a list

IpAddress.util.ip_type(ip_list)

In the case of a DataFrame, things are a little more complicated - we have to tell the function the name of the column that contains the input data.

IpAddress.util.whois(ip_df) # won't work!
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-29-786e8d7fe15b> in <module>
----> 1 IpAddress.util.whois(ip_df)   # won't work!

e:\src\microsoft\msticpy\msticpy\datamodel\pivot_register.py in pivot_lookup(*args, **kwargs)
    172         # {"data": input_df, "src_column": input_column}
    173         input_df, input_column, param_dict = _create_input_df(
--> 174             input_value, pivot_reg, parent_kwargs=kwargs
    175         )
    176

e:\src\microsoft\msticpy\msticpy\datamodel\pivot_register.py in _create_input_df(input_value, pivot_reg, parent_kwargs)
    326             "Please specify the column when calling the function."
    327             "You can use one of the parameter names for this:",
--> 328             _DF_SRC_COL_PARAM_NAMES,
    329         )
    330     # we want to get rid of data=xyz parameters from kwargs, since we're adding them

KeyError: ("'ip_column' is not in the input dataframe", 'Please specify the column when calling the function.You can use one of the parameter names for this:', ['column', 'input_column', 'input_col', 'src_column', 'src_col'])
IpAddress.util.whois(ip_df, column="ip") # correct

Note: for most functions you can ignore the parameter name and just specify it as a positional parameter. You can also use the original parameter name of the underlying function or the placeholder name "value".

The following are all equivalent:

IpAddress.util.ip_type(ip_list) IpAddress.util.ip_type(ip_str=ip_list) IpAddress.util.ip_type(value=ip_list) IpAddress.util.ip_type(data=ip_list)

When passing both a DataFrame and column name use:

IpAddress.util.ip_type(data=ip_df, column="col_name")

You can also pass an instance of an entity as an input parameter. The pivot code knows which attribute or attributes of an entity will provide the input value.

ip_entity = IpAddress(Address="40.85.232.64")
IpAddress.util.ip_type(ip_entity)

Iterable/DataFrame inputs and single-value functions

Many of the underlying functions only accept single values as inputs. Examples of these are the data query functions - typically they expect a single host name, IP address, etc.

Pivot knows about the type of parameters that the function accepts. It will adjust the input to match the expectations of the underlying function. If a list or DataFrame is passed as input to a single-value function Pivot will split the input and call the function once for each value. It then combines the output into a single DataFrame before returning the results.
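Conceptually, the expansion works like this (a simplified sketch of the pattern, not the actual Pivot internals):

```python
import pandas as pd

def single_value_func(ip: str) -> pd.DataFrame:
    # Stands in for a query/lookup that only accepts one value at a time
    result = "Private" if ip.startswith("10.") else "Public"
    return pd.DataFrame([{"ip": ip, "result": result}])

def pivot_wrapper(values) -> pd.DataFrame:
    # Call the single-value function once per input value,
    # then combine the per-value outputs into a single DataFrame
    if isinstance(values, str):
        values = [values]
    return pd.concat((single_value_func(v) for v in values), ignore_index=True)

out = pivot_wrapper(["8.8.8.8", "10.0.3.4"])
```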

You can read a bit more about how this is done in the Appendix TODO

Data queries - where does the time range come from?

The Pivot class has a built-in time range. This is used by default for all queries. Don't worry - you can change it easily.

Pivot.current.timespan
TimeStamp(start=2021-03-10 18:33:43.314239, end=2021-03-11 18:33:43.314239, period=-1 day, 0:00:00)

You can edit the time range interactively

Pivot.current.edit_query_time()
VBox(children=(HTML(value='<h4>Set time range for pivot functions.</h4>'), HBox(children=(DatePicker(value=dat…

Or by setting the timespan property directly

from msticpy.common.timespan import TimeSpan

# TimeSpan accepts datetimes or datestrings
timespan = TimeSpan(start="02/01/2021", end="02/15/2021")
Pivot.current.timespan = timespan

In an upcoming release there is also a convenience function for setting the time directly with Python datetimes or date strings

Warning: post-0.9.0 functionality

This will throw an error in v0.9.0 of MSTICPy

Pivot.current.set_timespan(start="2020-02-06 03:00:00", end="2021-02-15 01:42:42")

You can also override the built-in time settings by specifying start and end as parameters.

dt1 = Pivot.current.timespan.start
dt2 = Pivot.current.timespan.end
Host.AzureSentinel.SecurityAlert_list_related_alerts(host_name="victim00", start=dt1, end=dt2)
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-36-a726e25f79ba> in <module>
----> 1 Host.AzureSentinel.SecurityAlert_list_related_alerts(host_name="victim00", start=dt1, end=dt2)

NameError: name 'dt1' is not defined

Supplying extra parameters

The Pivot layer will pass any unused keyword parameters to the underlying function. This does not usually apply to positional parameters - if you want parameters to get to the function, you have to name them explicitly. In this example the add_query_items parameter is passed to the underlying query function.

entities.Host.AzureSentinel.SecurityEvent_list_host_logons(
    host_name="victimPc",
    add_query_items="| summarize count() by LogonType"
)

Pivot Pipelines

Because all pivot functions accept DataFrames as input and produce DataFrames as output, it is possible to chain pivot functions into a pipeline.

Joining input to output

You can join the input to the output. This usually only makes sense when the input is a DataFrame. It lets you keep the previously accumulated results and tag on the additional columns produced by the pivot function you are calling.

The join parameter supports "inner", "left", "right" and "outer" joins (be careful with the latter though!) See pivot joins documentation
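The join types behave like pandas merge semantics on the input and output frames. A minimal illustration with hypothetical data, using plain pandas rather than the pivot layer:

```python
import pandas as pd

input_df = pd.DataFrame({"ip": ["8.8.8.8", "10.0.3.4"]})
# Pretend the pivot function only returned a row for the public address
output_df = pd.DataFrame({"ip": ["8.8.8.8"], "country": ["US"]})

# inner: drops input rows with no matching result
inner = input_df.merge(output_df, on="ip", how="inner")
# left: keeps all input rows, with NaN where the function returned nothing
left = input_df.merge(output_df, on="ip", how="left")
```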

Although joining is useful in pipelines you can use it on any function whether in a pipeline or not.

entities.IpAddress.util.whois(ip_df, column="ip", join="inner")

Pipelines

Pivot pipelines are implemented as pandas custom accessors. Read more about Extending pandas here

When you load Pivot it adds the mp_pivot accessor. This appears as an attribute to DataFrames.

>>> ips_df.mp_pivot
<msticpy.datamodel.pivot_pd_accessor.PivotAccessor at 0x275754e2208>
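For illustration, here is a toy accessor (hypothetical name my_acc) registered via pandas' standard extension mechanism, with a run method loosely analogous to mp_pivot.run:

```python
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("my_acc")
class MyAccessor:
    """Toy accessor: appears as df.my_acc on every DataFrame."""

    def __init__(self, df: pd.DataFrame):
        self._df = df

    def run(self, func, column: str) -> pd.DataFrame:
        # Apply `func` to one column and return a new DataFrame
        # with the results added - loosely like mp_pivot.run
        return self._df.assign(result=self._df[column].map(func))

df = pd.DataFrame({"ip": ["8.8.8.8"]})
out = df.my_acc.run(len, column="ip")
```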

The main pipelining function run is a method of mp_pivot. run requires two parameters - the pivot function to run and the column to use as input. See mp_pivot.run documentation

Here is an example of using it to call 4 pivot functions, each using the output of the previous function as input and using the join parameter to accumulate the results from each stage.

Let's step through it line by line.

  1. The whole thing is surrounded by a pair of parentheses - this is just to let us split the whole expression over multiple lines without Python complaining.

  2. Next we have ips_df - this is just the starting DataFrame, our input data.

  3. Next we call the mp_pivot.run() accessor method on this DataFrame. We pass it the pivot function that we want to run and the input column name. This column name is the column in ips_df where our input IP addresses are. We've also specified a join type of inner. In this case the join type doesn't really matter since we know we get exactly one output row for every input row.

  4. We're using the pandas query function to filter out unwanted entries from the previous stage - in this case we only want Public IP addresses. This illustrates that you can intersperse standard pandas functions in the same pipeline. We could also have added a column selector expression ([["col1", "col2"...]]) if we wanted to filter the columns passed to the next stage.

  5. We are calling a further pivot function - whois. Remember the "column" parameter always refers to the input column, i.e. the column from previous stage that we want to use in this stage.

  6. We are calling geoloc to get geo location details joining with a left join - this preserves the input data rows and adds null columns in any cases where the pivot function returned no result.

  7. This is the same as step 6 except that it runs a data query to see if we have any alerts that contain these IP addresses. Remember, in the case of data queries we have to name the specific query parameter that we want the input to go to. In this case, each row value in the "ip" column from the previous stage will be sent to the query.

  8. Finally we close the parentheses to form a valid Python expression. The whole expression returns a DataFrame so we can add further pandas operations here (like .head(5) shown here).

ip_list = [
    "192.168.40.32", "192.168.1.216", "192.168.153.17", "3.88.48.125",
    "10.200.104.20", "192.168.90.101", "192.168.150.50", "172.16.100.31",
    "192.168.30.189", "10.100.199.10",
]
ips_df = pd.DataFrame(ip_list, columns=["IP"])
(
    ips_df
    .mp_pivot.run(entities.IpAddress.util.ip_type, column="IP", join="inner")
    .query("result == 'Public'").head(10)
    .mp_pivot.run(entities.IpAddress.util.whois, column="ip", join="left")
    .mp_pivot.run(entities.IpAddress.util.geoloc, column="ip", join="left")
    .mp_pivot.run(entities.IpAddress.AzureSentinel.SecurityAlert_list_alerts_for_ip, source_ip_list="ip", join="left")
).head(5)

Other pipeline functions

In addition to run, the mp_pivot accessor also has the following functions:

  • display - this simply displays the data at the point called in the pipeline. You can add an optional title, filtering, and the number of rows to display.

  • tee - this forks a copy of the dataframe at the point it is called in the pipeline. It will assign the forked copy to the name given in the var_name parameter. If there is an existing variable of the same name it will not overwrite it unless you add the clobber=True parameter.

In both cases the pipelined data is passed through unchanged. See Pivot functions help for more details.

Use of these is shown below

...
.mp_pivot.run(entities.IpAddress.util.geoloc, column="ip", join="left")
.mp_pivot.display(title="Geo Lookup", cols=["IP", "City"])  # << display an intermediate result
.mp_pivot.tee(var_name="geoip_df", clobber=True)  # << save a copy called 'geoip_df'
.mp_pivot.run(entities.IpAddress.AzureSentinel.SecurityAlert_list_alerts_for_ip, source_ip_list="ip", join="left")

In the next release we've also implemented:

  • tee_exec - this executes a function on a forked copy of the DataFrame. The function must be a pandas function or custom accessor. A good example of the use of this might be creating a plot or summary table to display partway through the pipeline.
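The tee pattern is essentially a pass-through with a side effect. A rough plain-pandas equivalent (hypothetical helper, not the msticpy implementation):

```python
import pandas as pd

def tee_exec(df: pd.DataFrame, func, *args, **kwargs) -> pd.DataFrame:
    # Run `func` on a copy for its side effect (plot, summary, save),
    # then pass the original DataFrame through unchanged
    func(df.copy(), *args, **kwargs)
    return df

captured = {}
df = pd.DataFrame({"x": [1, 2, 3]})
out = df.pipe(tee_exec, lambda d: captured.update(total=d.x.sum()))
```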

Extending Pivot - adding your own (or someone else's) functions

You can add pivot functions of your own. You need to supply:

  • the function

  • some metadata that describes where the function can be found and how the function works

Full details of this are described here.

The published version of Pivot doesn't let you add functions defined inline (i.e. in the notebook itself) but this will be possible in the next release.

Assume that we've created this function in a Python module my_module.py

%%writefile my_module.py
"""U-case and hash"""
from hashlib import md5


def my_func(input: str):
    md5_hash = "-".join(hex(b)[2:] for b in md5("hello".encode("utf-8")).digest())
    return {"Title": input.upper(), "Hash": md5_hash}
Writing my_module.py

Create a definition file

%%writefile my_func.yml
pivot_providers:
  my_func_defn:
    src_func_name: my_func
    src_module: my_module
    entity_container_name: cyber
    input_type: value
    entity_map:
      Host: HostName
    func_input_value_arg: input
    func_new_name: upper_hash_name
Writing my_func.yml
from msticpy.datamodel.pivot_register_reader import register_pivots

register_pivots("my_func.yml")
Host.cyber.upper_hash_name("host_name")

In the next release, this will be available as a simple function that can be used to add a function defined in the notebook.

Warning: post-0.9.0 functionality

This will throw an error in v0.9.0 of MSTICPy
from hashlib import md5


def my_func2(input: str):
    md5_hash = "-".join(hex(b)[2:] for b in md5("hello".encode("utf-8")).digest())
    return {"Title": input.upper(), "Hash": md5_hash}


Pivot.add_pivot_function(
    func=my_func2,
    container="cyber",  # which container it will appear in on the entity
    input_type="value",
    entity_map={"Host": "HostName"},
    func_input_value_arg="input",
    func_new_name="il_upper_hash_name",
)

Host.cyber.il_upper_hash_name("host_name")

Conclusion

We've taken a short tour through MSTICPy pivot functions, looking at how they make the functionality in the package easier to discover and use. I'm particularly excited about the pipeline functionality. In the next release we're going to make it possible to define reusable pipelines in configuration files and execute them with a single function call. This should help streamline some common patterns in notebooks for Cyber hunting and investigation.

Please send any feedback or suggestions for improvements to [email protected] or create an issue on https://github.com/microsoft/msticpy.

Happy hunting!

Get some input data

query = """
SecurityAlert
| where AlertName == "Time series anomaly detection for total volume of traffic"
| project AlertName, Description, Entities
| extend Entities = todynamic(Entities)
| mvexpand with_itemindex=Index Entities
| extend IP = Entities["Address"]
"""
ips = az_provider.exec_query(query)
ips_df = ips[["IP"]].drop_duplicates()
entities.IpAddress.util.ip_type(data=ips_df, column="IP", join="inner")

Pivot functions that we want to execute

entities.IpAddress.util.ip_type
entities.IpAddress.util.whois
entities.IpAddress.util.geoloc
entities.IpAddress.AzureSentinel.SecurityAlert_list_related_alerts

We could do this

df = entities.IpAddress.util.ip_type(data=ips, column="IP", join="inner")
df2 = entities.IpAddress.util.whois(data=df, column="IP", join="inner")
df3 = entities.IpAddress.util.geoloc(data=df2, column="IP", join="inner")
df3 = entities.IpAddress.AzureSentinel.SecurityAlert_list_related_alerts(data=df3, column="IP", join="inner")

.... but there's a better way

ips_df.mp_pivot
<msticpy.datamodel.pivot_pd_accessor.PivotAccessor at 0x275754e2208>
(
    ips_df
    .mp_pivot.run(entities.IpAddress.util.ip_type, column="IP", join="inner")
    .query("result == 'Public'").head(10)
    .mp_pivot.run(entities.IpAddress.util.whois, column="ip", join="left")
    .mp_pivot.run(entities.IpAddress.util.geoloc, column="ip", join="left")
    .mp_pivot.run(entities.IpAddress.AzureSentinel.SecurityAlert_list_alerts_for_ip, source_ip_list="ip")
).head(5)
ips_df
entities.IpAddress.util.whois(data=ips_df, column="IP")

Simple pipeline

Note:

  • inline query to filter to only "Public" IPs

  • mp_pivot.display function to display intermediate results

(
    ips_df
    .mp_pivot.run(entities.IpAddress.util.ip_type, column="IP", join="inner")
    .query("result == 'Public'").head(10)
    .mp_pivot.run(entities.IpAddress.util.whois, column="ip", join="left")
    .mp_pivot.run(entities.IpAddress.util.geoloc, column="ip", join="left")
    .mp_pivot.display(title="Geo Lookup", cols=["IP", "City"])  # << display an intermediate result
    .mp_pivot.run(entities.IpAddress.AzureSentinel.SecurityAlert_list_alerts_for_ip, source_ip_list="ip", join="left")
).head(5)
ip_test_df = (
    ips_df
    .mp_pivot.run(entities.IpAddress.util.ip_type, column="IP", join="inner")
    .query("result == 'Public'").head(10)
    .mp_pivot.run(entities.IpAddress.util.whois, column="ip", join="left")
    .mp_pivot.run(entities.IpAddress.util.geoloc, column="ip", join="left")
).head(5)
# %%debug
ip_test_df.mp_pivot.run(entities.IpAddress.AzureSentinel.SecurityAlert_list_alerts_for_ip, source_ip_list="ip", join="left")

Inline filtering and tee function

Save intermediate results to a DataFrame

(
    ips_df
    .mp_pivot.run(entities.IpAddress.util.ip_type, column="IP", join="inner")
    .query("result == 'Public'").head(10)
    .mp_pivot.run(entities.IpAddress.util.whois, column="ip", join="left")
    .mp_pivot.tee(var_name="whois_df", clobber=True)
    .mp_pivot.run(entities.IpAddress.util.geoloc, column="ip", join="left")
    .mp_pivot.display(title="Geo Lookup", cols=["IP", "City"])  # << display an intermediate result
    .mp_pivot.run(entities.IpAddress.AzureSentinel.SecurityAlert_list_alerts_for_ip, source_ip_list="ip")
).head(5)

Add a display function

(
    ips_df
    .mp_pivot.run(entities.IpAddress.util.ip_type, column="IP", join="inner")
    .query("result == 'Public'").head(5)
    .mp_pivot.run(entities.IpAddress.util.whois, column="ip", join="left")
    .mp_pivot.run(entities.IpAddress.util.geoloc, column="ip", join="left")
    .mp_pivot.display(title="Geo Lookup", cols=["IP", "City"])  # << display an intermediate result
    .mp_pivot.run(entities.IpAddress.AzureSentinel.SecurityAlert_list_alerts_for_ip, source_ip_list="ip")
    .mp_pivot.display(title="Alerts Sample", head=5)
    .mp_timeline.plot(
        title="IPs with alerts",
        source_columns=["AlertName", "MatchingIps"],
    )
);

Post-0.9.0 feature

Saving and re-using pipelines as yaml

You can specify a pipeline using YAML syntax and execute directly with a DataFrame input.

Here is an example pipeline file with two pipelines, each with multiple steps.

pipelines:
  pipeline1:
    description: Pipeline 1 description
    steps:
      - name: get_logons
        step_type: pivot
        function: util.whois
        entity: IpAddress
        comment: Standard pivot function
        params:
          column: IpAddress
          join: inner
      - name: disp_logons
        step_type: pivot_display
        comment: Pivot display
        params:
          title: "The title"
          cols:
            - Computer
            - Account
          query: Computer.str.startswith('MSTICAlerts')
          head: 10
      - name: tee_logons
        step_type: pivot_tee
        comment: Pivot tee
        params:
          var_name: var_df
          clobber: True
      - name: tee_logons_disp
        step_type: pivot_tee_exec
        comment: Pivot tee_exec with mp_timeline.plot
        function: mp_timeline.plot
        params:
          source_columns:
            - Computer
            - Account
      - name: logons_timeline
        step_type: pd_accessor
        comment: Standard accessor with mp_timeline.plot
        function: mp_timeline.plot
        params:
          source_columns:
            - Computer
            - Account
  pipeline2:
    description: Pipeline 2 description
    steps:
      - name: get_logons
        step_type: pivot
        function: util.whois
        entity: IpAddress
        comment: Standard pivot function
        params:
          column: IpAddress
          join: inner
      - name: disp_logons
        step_type: pivot_display
        comment: Pivot display
        params:
          title: "The title"
          cols:
            - Computer
            - Account
          query: Computer.str.startswith('MSTICAlerts')
          head: 10
      - name: tee_logons
        step_type: pivot_tee
        comment: Pivot tee
        params:
          var_name: var_df
          clobber: True

Create a sample YAML pipeline

from msticpy.datamodel.pivot_pipeline import Pipeline

pipelines_yml = """
pipelines:
  pipeline1:
    description: Pipeline 1 description
    steps:
      - name: get_ip_type
        step_type: pivot
        function: util.ip_type
        entity: IpAddress
        comment: Get IP Type
        params:
          column: IP
          join: inner
      - name: filter_public
        step_type: pd_accessor
        comment: Filter to only public IPs
        function: query
        pos_params:
          - result == "Public"
      - name: whois
        step_type: pivot
        function: util.whois
        entity: IpAddress
        comment: Get Whois info
        params:
          column: IP
          join: inner
"""

We can store this in a file:

with open("pipelines.yml", "w") as fh:
    fh.write(pipelines_yml)

Load the pipeline and print out what it would look like in code

pipelines = list(Pipeline.from_yaml(pipelines_yml))
print(pipelines[0].print_pipeline())
# Pipeline 1 description
(
    input_df
    # Get IP Type
    .mp_pivot.run(IpAddress.util.ip_type, column='IP', join='inner')
    # Filter to only public IPs
    .query('result == "Public"')
    # Get Whois info
    .mp_pivot.run(IpAddress.util.whois, column='IP', join='inner')
)

Run the pipeline

pipeline1 = pipelines[0]
result_df = pipeline1.run(data=ips_df, verbose=True)
result_df.head(3)
Steps: 0%| | 0/3 [00:00<?, ?it/s]
step = get_ip_type
PipelineExecStep(accessor='mp_pivot.run', pos_params=[], params={'func': <function get_ip_type at 0x0000028BAFA17048>, 'column': 'IP', 'join': 'inner'}, text=".mp_pivot.run(IpAddress.util.ip_type, column='IP', join='inner')", comment='Get IP Type')
step = filter_public
PipelineExecStep(accessor='query', pos_params=['result == "Public"'], params={}, text='.query(\'result == "Public"\')', comment='Filter to only public IPs')
step = whois
PipelineExecStep(accessor='mp_pivot.run', pos_params=[], params={'func': <function get_whois_df at 0x0000028BAFA89F78>, 'column': 'IP', 'join': 'inner'}, text=".mp_pivot.run(IpAddress.util.whois, column='IP', join='inner')", comment='Get Whois info')

Adding your own pivot functions

A simple example

def my_func(input: str):
    return {
        "title": input.upper(),
        "text": "something",
    }

Pivot.add_pivot_function(
    func=my_func,
    container="cyber",
    input_type="value",
    entity_map={"Host": "HostName"},
    func_input_value_arg="input",
    func_new_name="upper_name",
)
entities.Host.cyber.upper_name("host_name")
(
    ips_df
    .mp_pivot.run(entities.IpAddress.util.ip_type, column="IP", join="inner")
    .query("result == 'Public'").head(2)
    .mp_pivot.run(entities.IpAddress.util.whois, column="ip", join="left")
    .mp_pivot.run(entities.IpAddress.util.geoloc, column="ip", join="left")
    .mp_pivot.display(title="Geo Lookup", cols=["IP", "City"])  # << display an intermediate result
    .mp_pivot.run(entities.IpAddress.AzureSentinel.SecurityAlert_list_alerts_for_ip, source_ip_list="ip")
    .mp_pivot.display(title="Alerts Sample", head=2)
    .mp_pivot.run(entities.Host.cyber.upper_name, column="Severity")
).head(3)

A more realistic example.

This function extracts individual elements from a list column into separate rows; in this case, the nets column returned by the whois lookup.

(
    ips_df
    # Get IP Type
    .mp_pivot.run(IpAddress.util.ip_type, column='IP', join='inner')
    # Filter to only public IPs
    .query(expr='result == "Public"')
    # Get Whois info
    .mp_pivot.run(IpAddress.util.whois, column='IP', join='inner')
).head(1)
def extract_nets(data, col):
    """Expand each entry of the list column `col` into its own row."""
    out_series = []
    for net in data[col]:
        for entry in net:
            out_series.append(pd.Series(entry))
    return pd.DataFrame(out_series)

Pivot.add_pivot_function(
    func=extract_nets,
    container="whois",
    input_type="dataframe",
    entity_map={"IpAddress": "Address"},
    func_df_param_name="data",
    func_df_col_param_name="col",
    func_new_name="extract_nets",
)
from msticpy.datamodel.entities import IpAddress

(
    ips_df
    # Get IP Type
    .mp_pivot.run(IpAddress.util.ip_type, column='IP', join='inner')
    # Filter to only public IPs
    .query(expr='result == "Public"')
    # Get Whois info
    .mp_pivot.run(IpAddress.util.whois, column='IP', join='inner')
    .mp_pivot.run(IpAddress.whois.extract_nets, column='nets')
)
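In plain pandas, the same list-column expansion can be sketched with `explode` plus `json_normalize`. The `whois_df` below is illustrative stand-in data, not real whois output:

```python
import pandas as pd

# Illustrative stand-in for whois results: one row per IP,
# with "nets" holding a list of dicts
whois_df = pd.DataFrame({
    "IP": ["157.53.1.1"],
    "nets": [[{"cidr": "157.53.0.0/16", "name": "NET-A"},
              {"cidr": "157.53.1.0/24", "name": "NET-B"}]],
})

# One row per list element, then expand each dict into columns
exploded = whois_df.explode("nets").reset_index(drop=True)
nets_df = pd.json_normalize(exploded["nets"].tolist()).join(exploded["IP"])
```

The result has one row per net entry, with the parent IP carried along on each row, which is essentially what the registered `extract_nets` pivot produces.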

Appendix - how do pivot wrappers work?

In Python you can create functions that return other functions. Along the way, they can change how the arguments and output are processed.

Take this simple function that just applies proper capitalization to an input string.

def print_me(arg):
    print(arg.capitalize())

print_me("hello")
Hello

If we try to pass a list to this function we get the expected exception, since lists don't support capitalize:

print_me(["hello", "world"])
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-66-94b3e61eb86f> in <module>
----> 1 print_me(["hello", "world"])

AttributeError: 'list' object has no attribute 'capitalize'

We could create a wrapper function that checks the input and iterates over the individual items if arg is a list. This works, but we don't want to have to do this for every function that we want to accept flexible input!

def print_me_list(arg):
    if isinstance(arg, list):
        for item in arg:
            print_me(item)
    else:
        print_me(arg)

print_me_list("hello")
print_me_list(["how", "are", "you", "?"])
Hello
How
Are
You
?

Instead we can create a function wrapper. The outer function dont_care_func defines an inner function, list_or_str, and then returns it. The inner function list_or_str implements the same "is-this-a-string-or-a-list" logic that we saw in the previous example. Crucially, though, it isn't hard-coded to call print_me; it calls whatever function is passed in via the outer function dont_care_func.

# Our magic wrapper
def dont_care_func(func):
    def list_or_str(arg):
        if isinstance(arg, list):
            for item in arg:
                func(item)
        else:
            func(arg)
    return list_or_str

How do we use this?

We simply pass the function that we want to wrap to dont_care_func. Recall that this function just returns an instance of the inner function. In this particular case, the name func inside the returned function is bound to the actual function print_me.

print_stuff = dont_care_func(print_me)

Now we have a wrapped version of print_me that can handle different types of input. Magic!

print_stuff("hello") print_stuff(["how", "are", "you", "?"])
Hello
How
Are
You
?

We can also define further functions and create wrapped versions of those by passing them to dont_care_func.

def shout_me(arg):
    print(arg.upper(), "\U0001F92C!", end=" ")

shout_stuff = dont_care_func(shout_me)
shout_stuff("hello") shout_stuff(["how", "are", "you", "?"])
HELLO 🤬! HOW 🤬! ARE 🤬! YOU 🤬! ? 🤬!

The wrapper functionality in Pivot is a bit more complex than this, but it essentially operates the same way.
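As a rough illustration (not Pivot's actual code), a decorator-style wrapper that lets a single-value function also accept lists of values, and collects the results rather than printing them, might look like this. The names `flexible_input` and `capitalize` are hypothetical:

```python
from functools import wraps

def flexible_input(func):
    # Wrap a single-value function so it also accepts a list/tuple/set
    # of values, returning a list of per-item results.
    @wraps(func)
    def wrapper(arg):
        if isinstance(arg, (list, tuple, set)):
            return [func(item) for item in arg]
        return func(arg)
    return wrapper

@flexible_input
def capitalize(text):
    return text.capitalize()

print(capitalize("hello"))           # Hello
print(capitalize(["how", "are"]))    # ['How', 'Are']
```

Pivot's real wrappers do considerably more, handling DataFrame inputs, mapping entity attributes to function parameters, and joining results back to the input, but the closure mechanism is the same.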