Path: blob/master/tutorials-and-examples/feature-tutorials/PivotFunctions.ipynb
MSTICPy Pivot Functions
What are Pivot Functions?
MSTICPy has a lot of functionality distributed across many classes and modules. However, there is no simple way to discover where these functions are and what types of data the function is relevant to.
Pivot functions bring this functionality together grouped around Entities.
Entities are representations of real-world objects commonly found in CyberSec investigations. Some examples are: IpAddress, Host, Account, URL.
You can also chain pivot functions together to create a processing pipeline that does multiple operations on data:
We'll see examples of how to do these pivoting queries later in the notebook.
MSTICPy has had entity classes from the very early days but, until now, these have only been used sporadically in the rest of the package.
The pivot functionality exposes operations relevant to a particular entity as methods of that entity. These operations can include:
Data queries
Threat intelligence lookups
Other data lookups such as GeoLocation or domain resolution
and other local functionality
What is Pivoting?
The name comes from the common practice of Cyber investigators navigating between related entities. For example, an entity/investigation chain might look like the following:
| Step | Source | Operation | Target |
|---|---|---|---|
| 1 | Alert | Review alert -> | Source IP (A) |
| 2 | Source IP (A) | Lookup TI -> | Related URLs, Malware names |
| 3 | URL | Query web logs -> | Requesting hosts |
| 4 | Host | Query host logons -> | Accounts |
At each step there are one or more directions that you can take to follow the chain of related indicators of activity in a possible attack.
Bringing these functions into a few, well-known locations makes it easier to use MSTICPy to carry out this common pivoting pattern in Jupyter notebooks.
Getting started
The pivoting library depends on a number of data providers used in MSTICPy. These normally need to be loaded and initialized before starting the Pivot library.
This is mandatory for data query providers such as the AzureSentinel, Splunk or MDE data providers. These usually need initialization and authentication steps to load query definitions and connect to the service.
Note: you do not have to authenticate to the data provider before loading Pivot.
However, some providers are populated with additional queries only after connecting
to the service. These will not be added to the pivot functions unless you create a new Pivot object.
This is optional with providers such as Threat Intelligence (TILookup) and GeoIP. If you do not initialize these before starting Pivot they will be loaded with the defaults as specified in your msticpyconfig.yaml. If you want to use a specific configuration for any of these, you should load and configure them before starting Pivot.
Load one or more data providers
Initialize the Pivot library
You can either pass an explicit list of providers to Pivot or let it look for them in the notebook global namespace. In the latter case, the Pivot class will use the most recently-created instance of each that it finds.
What happens at initialization?
Any instantiated data providers are searched for relevant queries. Any queries found are added to the appropriate entity or entities.
TI provider is loaded and entity-specific lookups (e.g. IP, Url, File) are added as pivot functions
Miscellaneous MSTICPy functions and classes (e.g. GeoIP, IpType, Domain utils) are added as pivot functions to the appropriate entity.
You can add additional functions as pivot functions by creating a registration template and importing the function. Details of this are covered later in the document.
Pivot function list
Because we haven't yet loaded the Pivot library, nothing is listed.
Initializing the Pivot library
You will usually see some output as provider libraries are loaded.
Note: Although you can assign the created Pivot object to a variable you normally don't need to do so.
You can access the current Pivot instance using the class attribute `Pivot.current`.
See the list of providers loaded by the Pivot class
Notice that TILookup was loaded even though we did not create an instance of TILookup beforehand.
After loading the Pivot class, entities have pivot functions added to them
Pivot functions are grouped into containers
Data queries are grouped into a container with the name of the data provider to which they belong. E.g. AzureSentinel queries are in a container of that name; Splunk queries would be in a "Splunk" container.
TI lookups are put into a "ti" container
All other built-in functions are added to the "other" container.
The containers themselves are callable and will return a list of their contents. Containers are also iterable - each iteration returns a tuple (pair) of name/function values.
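The callable/iterable behavior described above can be modeled with a toy class (purely illustrative; `PivotContainer` here is a sketch, not the actual msticpy implementation):

```python
# Toy model of a pivot-function container: calling it returns a list of its
# contents; iterating it yields (name, function) tuples.
class PivotContainer:
    def __init__(self, **funcs):
        self._funcs = dict(funcs)

    def __call__(self):
        # Calling the container lists its contents
        return sorted(self._funcs)

    def __iter__(self):
        # Iterating yields (name, function) pairs
        return iter(self._funcs.items())

# Hypothetical pivot functions, for illustration only
util = PivotContainer(ip_type=lambda ip: "Public", whois=lambda ip: {"asn": "AS0"})

print(util())                      # ['ip_type', 'whois']
print([name for name, _ in util])  # ['ip_type', 'whois']
```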
In notebooks/IPython you can also use tab completion to get to the right function.
This is an alternative way of listing the pivots for an entity.
Using the Pivot Browser
Pivot also has a utility that allows you to browse entities and the pivot functions attached to them. You can search for functions with desired keywords, view help for the specific function and copy the function signature to paste into a code cell.
Running a pivot function
Pivot functions have flexible input types. They can be used with the following types of parameters:
entity instances (e.g. where you have an IpAddress entity with a populated address field)
single values (e.g. a DNS domain name)
lists of values (e.g. a list of IpAddresses)
pandas DataFrames (where one or more of the columns contains the input parameter data)
Pivot functions normally return results as a DataFrame (although some complex functions, such as Notebooklets, can return composite results objects containing multiple dataframes and other object types).
Signature: IpAddress.util.ip_type(ip: str = None, ip_str: str = None) -> str
Docstring:
Validate value is an IP address and determine IPType category.
(IPAddress category is e.g. Private/Public/Multicast).
Parameters
----------
ip : str
The string of the IP Address
ip_str : str
The string of the IP Address - alias for `ip`
Returns
-------
str
Returns ip type string using ip address module
File: e:\src\microsoft\msticpy\msticpy\sectools\ip_utils.py
Type: function
Parameter names
Positional parameter - If the function only accepts one parameter you can usually just supply it without a name - as a positional parameter (see first and third examples below)
Native parameter - You can also use the native parameter name - i.e. the name that the underlying function expects and that will be shown in the help(function) output
Generic parameter - You can also use the generic parameter name "value" in most cases.
If in doubt, use help(entity.container.func) or entity.container.func?
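The three parameter styles above can be sketched with a simplified wrapper. This is an assumption about how such mapping could work, not msticpy's actual wrapper code; `make_pivot_func` is hypothetical:

```python
# Sketch of a wrapper that accepts a positional value, the native parameter
# name, or the generic name "value" (simplified; not the msticpy internals).
def make_pivot_func(func, native_param):
    def wrapper(*args, **kwargs):
        if args:                      # positional: func("10.1.1.1")
            val = args[0]
        elif native_param in kwargs:  # native name: func(ip="8.8.8.8")
            val = kwargs[native_param]
        else:                         # generic name: func(value="10.0.0.1")
            val = kwargs["value"]
        return func(val)
    return wrapper

def ip_type(ip):  # stand-in for the underlying function
    return "Private" if ip.startswith("10.") else "Public"

pivot_ip_type = make_pivot_func(ip_type, native_param="ip")
print(pivot_ip_type("10.1.1.1"))        # Private
print(pivot_ip_type(ip="8.8.8.8"))      # Public
print(pivot_ip_type(value="10.0.0.1"))  # Private
```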
Using an entity as a parameter
Behind the scenes the Pivot api is using a mapping of entity attributes to supply the right value to the function parameter.
Using a list (or other iterable) as a parameter
Many of the underlying functions will accept either single values or collections (usually in DataFrames) of values as input. Even in cases where the underlying function does not accept iterables as parameters, the Pivot library will usually be able to iterate through each value and collate the results to hand you back a single dataframe.
Note: there are some exceptions to this - usually where the underlying function
is long-running or expensive and has opted not to accept iterated calls.
Notebooklets are an example of these.
Where the function has multiple parameters you can supply a mixture of iterables and single values.
In this case, the single-valued parameters are re-used on each call, paired with the item in the list(s) taken from the multi-valued parameters
You can also use multiple iterables for multiple parameters.
In this case the iterables should be the same length. If they are different lengths, the iterations stop after the shortest list/iterable is exhausted.
For example:
The function will execute with the pairings (1, "a"), (2, "b") and (3, "c"); (4, _) will be ignored.
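The pairing rule above can be shown with plain Python: single-valued parameters are re-used on each call, while multiple iterables are zipped together, stopping at the shortest (`fake_pivot` is a stand-in function, not a real pivot):

```python
# Single-valued parameter ("fixed") is re-used on each call; the two
# iterables are paired element-wise, and the surplus value 4 is ignored.
def fake_pivot(num, letter, fixed):
    return f"{fixed}:{num}{letter}"

nums = [1, 2, 3, 4]
letters = ["a", "b", "c"]

results = [fake_pivot(n, l, fixed="x") for n, l in zip(nums, letters)]
print(results)  # ['x:1a', 'x:2b', 'x:3c']
```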
Using DataFrames as input
Using a dataframe as input requires a slightly different syntax since you not only need to pass the dataframe as a parameter but also tell the function which column to use for input.
To specify the column to use, you can use the name of the parameter that the underlying function expects or one of these generic names:
column
input_column
input_col
src_column
src_col
Note these generic names are not shown in the function help
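The DataFrame-input mechanics can be sketched as follows. This is a simplified assumption about the behavior, not msticpy's implementation; `run_pivot_on_df` and `ip_type` are illustrative stand-ins:

```python
import pandas as pd

# Sketch: the caller passes a DataFrame plus the name of the input column;
# the wrapper pulls each value from that column and collates the per-value
# results into a single output DataFrame.
def ip_type(ip):
    return "Private" if ip.startswith("10.") else "Public"

def run_pivot_on_df(func, data, column):
    rows = [{"input": val, "result": func(val)} for val in data[column]]
    return pd.DataFrame(rows)

src_df = pd.DataFrame({"src_ip": ["10.0.0.1", "8.8.8.8"]})
out = run_pivot_on_df(ip_type, data=src_df, column="src_ip")
print(out["result"].tolist())  # ['Private', 'Public']
```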
Joining input to output data
You might want to return a data set that is joined to your input set. To do that use the "join" parameter.
The value of join can be:
inner
left
right
outer
To preserve all rows from the input, use a "left" join. To keep only rows that have a valid result from the function use "inner" or "right"
Note while most functions only return a single output row for each input row
some return multiple rows. Be cautious using "outer" in these cases.
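The join semantics described above follow standard pandas merge behavior, which can be demonstrated directly (the IPs and severities here are made-up sample data):

```python
import pandas as pd

# "left" keeps every input row (missing results become NaN);
# "inner" keeps only the rows that produced a result.
input_df = pd.DataFrame({"ip": ["10.0.0.1", "8.8.8.8", "1.1.1.1"]})
results_df = pd.DataFrame(
    {"ip": ["8.8.8.8", "1.1.1.1"], "severity": ["high", "low"]}
)

left = input_df.merge(results_df, on="ip", how="left")    # 3 rows, 1 NaN
inner = input_df.merge(results_df, on="ip", how="inner")  # 2 rows
print(len(left), len(inner))  # 3 2
```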
DataQuery Pivot functions
A significant difference between the functions that we've seen so far and data query functions is that the latter do not accept generic parameter names.
When you use a named parameter in a data query pivot, you must specify the name that the query function is expecting. If in doubt, use "?" prefix to show the function help.
Example:
Setting time parameters for queries interactively
Use the edit_query_time function to set/change the time range used by queries.
With no parameters it defaults to a period of [UtcNow - 1 day] to [UtcNow].
Or you can specify a timespan to use with the TimeSpan class.
Setting the timespan programmatically
You can also just set the timespan directly on the pivot object
What queries do we have?
Adding additional parameters
The example below shows using the host entity as an initial parameter (Pivot uses the attribute mapping to assign the value of host.fqdn to the host_name function parameter).
The second parameter is a list of event IDs specified explicitly.
Signature: Host.AzureSentinel.list_host_events_by_id(*args, **kwargs) -> Union[pandas.core.frame.DataFrame, Any]
Docstring:
Retrieves list of events on a host
Parameters
----------
add_query_items: str (optional)
Additional query clauses
end: datetime
Query end time
event_list: list (optional)
List of event IDs to match
(default value is: has)
host_name: str
Name of host
host_op: str (optional)
The hostname match operator
(default value is: has)
query_project: str (optional)
Column project statement
start: datetime
Query start time
table: str (optional)
Table name
(default value is: SecurityEvent)
File: c:\users\ian\anaconda3\envs\condadev\lib\functools.py
Type: function
Using iterables as parameters to data queries
Some data queries accept "list" items as parameters (e.g. many of the IP queries accept a list of IP addresses). These work as expected, with a single query call sending the whole list as a single parameter.
Signature: IpAddress.AzureSentinel.list_aad_signins_for_ip(*args, **kwargs) -> Union[pandas.core.frame.DataFrame, Any]
Docstring:
Lists Azure AD Signins for an IP Address
Parameters
----------
add_query_items: str (optional)
Additional query clauses
end: datetime (optional)
Query end time
ip_address_list: list
The IP Address or list of Addresses
start: datetime (optional)
Query start time
(default value is: -5)
table: str (optional)
Table name
(default value is: SigninLogs)
File: c:\users\ian\anaconda3\envs\condadev\lib\functools.py
Type: function
Using iterable values where the query function was designed to only accept single values
In this case the pivot function will iterate through the values of the iterable, making a separate query for each and then joining the results.
We can see that this function only accepts a single value for "account_name".
Signature: Account.AzureSentinel.list_aad_signins_for_account(*args, **kwargs) -> Union[pandas.core.frame.DataFrame, Any]
Docstring:
Lists Azure AD Signins for Account
Parameters
----------
account_name: str
The account name to find
add_query_items: str (optional)
Additional query clauses
end: datetime (optional)
Query end time
start: datetime (optional)
Query start time
(default value is: -5)
table: str (optional)
Table name
(default value is: SigninLogs)
File: c:\users\ian\anaconda3\envs\condadev\lib\functools.py
Type: function
Combining multiple iterables and single-valued parameters
The same rules as outlined earlier for multiple parameters of different types apply to data queries.
Using DataFrames as input
This is similar to using dataframes for other pivot functions.
We must use the data parameter to specify the input dataframe. You supply the column name from your input dataframe as the value of the parameters expected by the function.
Now we have our dataframe:

- we specify `account_df` as the value of the `data` parameter
- in our source (input) dataframe, the column that we want to use as the input value for each query is `User` - we specify that column name as the value of the function parameter
On each iteration, the column value from a subsequent row will be extracted and given as the parameter value for the function parameter.
Note:
If the function parameter type is a "list" type - i.e. it expects a list of values
the parameter value will be sent as a list and only a single query is executed.
If the query function has multiple "list" type parameters, these will be
populated in the same way.
Note2:
If you have multiple parameters fed by multiple input columns AND one or more
of the function parameters is not a list type, then the query will be broken
into queries for each row, each sub-query getting its values from a single row
of the input dataframe.
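The two notes above can be sketched in a few lines. This is an assumed simplification of the batching logic, not msticpy's code; `run_query` is a hypothetical stand-in that records what each query would receive:

```python
import pandas as pd

# A list-type query parameter receives the whole column in ONE query;
# a scalar parameter forces one query per input row.
queries_run = []

def run_query(param_is_list, values):
    if param_is_list:
        queries_run.append(list(values))  # single query, whole list
    else:
        for val in values:                # one sub-query per row
            queries_run.append([val])

ips = pd.Series(["10.0.0.1", "8.8.8.8"])
run_query(param_is_list=True, values=ips)
run_query(param_is_list=False, values=ips)
print(queries_run)
# [['10.0.0.1', '8.8.8.8'], ['10.0.0.1'], ['8.8.8.8']]
```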
Threat Intelligence Lookups
These work in the same way as the functions described earlier. However, there are a few peculiarities of the Threat Intel functions:
Provider-specific functions
Queries for individual providers are broken out into separate functions. You will see multiple lookup_ipv4 functions, for example: one with no suffix and one for each individual TI provider with a corresponding suffix. This is a convenience to let you use a specific provider more quickly. You can still use the generic function (lookup_ipv4) and supply a providers parameter to indicate which providers you want to use.
IPV4 and IPV6
Some providers treat these interchangeably and use the same endpoint for both. Other providers do not explicitly support IPV6 (e.g. the Tor exit nodes provider). Still others (notably OTX) use different endpoints for IPv4 and IPv6.
If you are querying IPv4 you can use either the lookup_ip function or one of the lookup_ipv4 functions. In most cases, you can also use these functions for a mixture of IPv4 and v6 addresses. However, in cases where a provider does not support IPv6 or uses a different endpoint for IPv6 queries you will get no responses.
Entity mapping to IoC Types
This table shows the mapping between an entity type and IoC types:
| Entity | IoCType |
|---|---|
| IpAddress | ipv4, ipv6 |
| Dns | domain |
| File | filehash (incl. md5, sha1, sha256) |
| Url | url |
Note: Where you are using a File entity as a parameter, there is a complication.
A file entity can have multiple hash values (md5, sha1, sha256 and even sha256 authenticode).
The `file_hash` attribute of File is used as the default parameter.
In cases where a file has multiple hashes the highest priority hash (in order
sha256, sha1, md5, sha256ac) is returned.
If you are not using File entities as parameters (and are specifying the input values
explicitly or via a DataFrame or iterable), you can ignore this.
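The hash-priority rule above can be sketched as a small helper (an assumed illustration of the described behavior; `default_file_hash` is hypothetical, not a msticpy function):

```python
# Pick the highest-priority hash a File entity has, in the priority
# order described above: sha256, sha1, md5, sha256ac.
PRIORITY = ["sha256", "sha1", "md5", "sha256ac"]

def default_file_hash(hashes):
    for algo in PRIORITY:
        if algo in hashes:
            return hashes[algo]
    return None

print(default_file_hash({"md5": "m-hash", "sha1": "s-hash"}))  # s-hash
```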
Lookup from a DataFrame
To specify the source column you can use either "column" or "obs_column"
Chaining pivot and other functions
Because pivot functions can take dataframes as inputs and return them as outputs, you can create chains of pivot functions. You can also add other items to the chain that input or output dataframes.
For example, you could build a chain that included the following:
take IP addresses from firewall alerts
look up the IPs in Threat Intel providers, filtering for those that have high severity
look up any remote logon events sourced from those IPs
display a timeline of the logons
To make building these types of pipelines easier we've implemented some pandas helper functions. These are available in the mp_pivot property of pandas DataFrames, once Pivot is imported.
mp_pivot.run
run lets you run a pivot function as a pandas pipeline operation.
Let's take an example of a simple pivot function using a dataframe as input
We can use mp_pivot.run to do this:
The pandas extension takes care of the data=my_df parameter. We still have to add any other required parameters (like the column specification in this case). When it runs, it returns its output as a DataFrame and the next operation (drop_duplicates()) runs on this output.
Depending on the scenario you might want to preserve the existing dataframe contents (most of the pivot functions only return the results of their specific operation - e.g. whois returns ASN information for an IP address). You can carry the columns of the input dataframe over to the output from the pivot function by adding a join parameter to the mp_pivot.run() call. Use a "left" join to keep all of the input rows regardless of whether the pivot function returned a result for that row. Use an "inner" join to return only rows where the input had a positive result in the pivot function.
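The chaining style that mp_pivot.run enables can be illustrated with plain pandas `.pipe`: each stage takes a DataFrame and returns a DataFrame, so stages compose cleanly. Here `fake_whois` is a made-up stand-in for a pivot function, not a real lookup:

```python
import pandas as pd

# Each pipeline stage consumes and produces a DataFrame, so pandas
# operations and pivot-style functions chain naturally.
def fake_whois(df, column):
    out = df.copy()
    # Fabricate an "ASN" from the first octet, purely for illustration
    out["asn"] = ["AS" + ip.split(".")[0] for ip in out[column]]
    return out

ips = pd.DataFrame({"ip": ["10.0.0.1", "10.0.0.1", "8.8.8.8"]})
result = (
    ips
    .drop_duplicates()              # 2 unique IPs remain
    .pipe(fake_whois, column="ip")  # add the whois-style column
)
print(result["asn"].tolist())  # ['AS10', 'AS8']
```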
There are also a couple of convenience functions. These only work in an IPython/Jupyter environment.
mp_pivot.display
mp_pivot.display will display the intermediate results of the dataframe in the middle of a pipeline. It does not change the data at all, but does give you the chance to display a view of the data partway through processing. This is useful for debugging but its main purpose is to give you a way to show partial results without having to break the pipeline into pieces and create unnecessary throw-away variables that will add bulk to your code and clutter to your memory.
display supports some options that you can use to modify the displayed output:
title - displays a title above the data
cols - a list of columns to display (others are hidden)
query - you can filter the output using a df.query() string. See https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html?highlight=query#pandas.DataFrame.query for more details
head - limits the display to the first `head` rows
These options do not affect the data being passed through the pipeline - only how the intermediate output is displayed.
mp_pivot.tee
mp_pivot.tee behaves a little like the Linux "tee" command. It allows the data to pass through unchanged but allows you to create a variable that is a snapshot of the data at that point in the pipeline. It takes a parameter var_name and assigns the current DataFrame instance to that name. So, when your pipeline has run you can access partial results (again, without having to break up your pipeline to do so).
By default, it will not overwrite an existing variable of the same name unless you specify clobber=True in the call to tee.
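A toy version of this snapshot behavior can be written with `.pipe` (an assumed simplification: real mp_pivot.tee assigns into the notebook namespace, whereas this sketch uses a plain dict):

```python
import pandas as pd

# Toy tee: pass the DataFrame through unchanged while saving a snapshot
# under a name, refusing to overwrite unless clobber=True.
snapshots = {}

def tee(df, var_name, clobber=False):
    if var_name in snapshots and not clobber:
        raise ValueError(f"{var_name} exists; pass clobber=True to overwrite")
    snapshots[var_name] = df.copy()
    return df  # data passes through unchanged

df = pd.DataFrame({"a": [1, 2, 3]})
out = df.pipe(tee, var_name="midpoint").query("a > 1")
print(len(out), len(snapshots["midpoint"]))  # 2 3
```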
mp_pivot.tee_exec
behaves similarly to the "tee" function above except that it will try to execute the DataFrame accessor function on the input DataFrame. The name of the function (as a string) can be passed as the value of the df_func named parameter, or as the first positional argument. The function must be a method of a pandas DataFrame - this includes built-in functions such as .query, .sort_values or a custom function added as a custom pd accessor function (see Extending pandas)
mp_pivot.tee_exec allows the input data to pass through unchanged but will also send a snapshot of the data at that point in the pipeline to the named function. You can also pass arbitrary other named arguments to the tee_exec. These arguments will be passed to the df_func function.
Example
The example below shows the use of mp_pivot.run and mp_pivot.display.
This takes an existing DataFrame - suspicious_ips - and:
displays the top 5 rows of the dataframe
checks for threat intelligence reports on any of the IP addresses
uses pandas `query` to filter only the high-severity hits
calls the whois pivot function to obtain ownership information for these IPs (note that we join the results of the previous step here using `join='left'` so our output will be all TI result data plus whois data)
calls a pivot data query to check for Azure Active Directory logins that have an IP address source that matches any of these addresses
The final step uses another MSTICPy pandas extension to plot the login attempts on a timeline chart.
Example output from pipelined functions
This is what the pipelined functions should output (although the results will obviously not be the same for your environment).

Adding custom functions to the pivot interface
To do this you need the following information
| Item | Description | Required |
|---|---|---|
| src_module | The module containing the class or function | Yes |
| class | The class containing the function | No |
| src_func_name | The name of the function to wrap | Yes |
| func_new_name | Rename the function | No |
| input_type | The input type that the wrapped function expects (dataframe, iterable, value) | Yes |
| entity_map | Mapping of entity and attribute used for function | Yes |
| func_df_param_name | The param name that the function uses as input param for DataFrame | If DF input |
| func_df_col_param_name | The param name that function uses to identify the input column name | If DF input |
| func_out_column_name | Name of the column in the output DF to use as a key to join | If DF output |
| func_static_params | dict of static name/value params always sent to the function | No |
| func_input_value_arg | Name of the param that the wrapped function uses for its input value | No |
| can_iterate | True if the function supports being called multiple times | No |
| entity_container_name | The name of the container in the entity where the func will appear | No |
The entity_map controls where the pivot function will be added. Each entry requires an Entity name (see msticpy.datamodel.entities) and an entity attribute name. This is only used if an instance of the entity is used as a parameter to the function. For IpAddress in the example below, the pivot function will try to extract the value of the Address attribute when an instance of IpAddress is used as a function parameter.
This means that you can specify different attributes of the same entity for different functions (or even for two instances of the same function).
The func_df_param_name and func_df_col_param_name are needed only if the source function takes a dataframe and column name as input parameters.
func_out_column_name is relevant if the source function returns a dataframe. In order to join input data with output data this needs to be the column in the output that has the same value as the function input (e.g. if you are processing IP addresses and the column name in the output DF containing the IP is named "ip_addr", put "ip_addr" here.)
When you have this information create or add this to a yaml file with the top-level element pivot_providers.
Example from the msticpy ip_utils who_is function
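A registration template built from the field table above might look like the following sketch. The exact module path, function name, and parameter values here are illustrative and may differ in your msticpy version - check the msticpy documentation for the current who_is registration:

```yaml
# Hypothetical pivot registration sketch (field names from the table above;
# values are illustrative, not guaranteed to match your msticpy version).
pivot_providers:
  who_is:
    src_module: msticpy.sectools.ip_utils
    src_func_name: get_whois_df
    func_new_name: whois
    input_type: dataframe
    entity_map:
      IpAddress: Address
    func_df_param_name: data
    func_df_col_param_name: ip_column
    func_input_value_arg: ip_address
    can_iterate: true
```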
Once you have your yaml definition file you can call
Note, this is not persistent. You will need to call this each time you start a new session.
register_pivot_providers docstring
Adding ad hoc pivot functions
You can also add ad hoc functions as pivot functions. This is probably a less common scenario but may be useful for testing and development.
You can either create a PivotRegistration object and supply that (along with the func parameter) to this method.
Alternatively, you can supply the pivot registration parameters as keyword arguments: