
msticpy's Introduction

MSTIC Jupyter and Python Security Tools

GitHub Actions build Azure Pipelines build Downloads BlackHat Arsenal 2020

Microsoft Threat Intelligence Python Security Tools.

msticpy is a library for InfoSec investigation and hunting in Jupyter Notebooks. It includes functionality to:

  • query log data from multiple sources
  • enrich the data with Threat Intelligence, geolocations and Azure resource data
  • extract Indicators of Activity (IoA) from logs and unpack encoded data
  • perform sophisticated analysis such as anomalous session detection and time series decomposition
  • visualize data using interactive timelines, process trees and multi-dimensional Morph Charts

It also includes some time-saving notebook tools such as widgets to set query time boundaries, select and display items from lists, and configure the notebook environment.

Timeline

The msticpy package was initially developed to support Jupyter Notebook authoring for Azure Sentinel. While Azure Sentinel remains a major focus of our work, we are extending the data query/acquisition components to pull log data from other sources (currently Splunk, Microsoft Defender for Endpoint and Microsoft Graph are supported, and we are actively working on support for other SIEM platforms). Most of the components can also be used with data from any source: pandas DataFrames are the ubiquitous input and output format of almost all components. There is also a data provider that makes it easy to load and process data from local CSV files and pickled DataFrames.
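As an illustration of the DataFrame-centric design (using plain pandas rather than the msticpy data provider), a local CSV export loads straight into the common format that the rest of the library consumes:

```python
import io

import pandas as pd

# A small CSV standing in for a local log export; DataFrames are the
# common input/output format across msticpy components.
csv_data = io.StringIO(
    "TimeGenerated,Account,EventID\n"
    "2021-09-01T10:00:00Z,alice,4624\n"
    "2021-09-01T10:05:00Z,bob,4625\n"
)
df = pd.read_csv(csv_data, parse_dates=["TimeGenerated"])
# df can now be fed to enrichment, analysis and visualization components.
```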

The package addresses three central needs for security investigators and hunters:

  • Acquiring and enriching data
  • Analyzing data
  • Visualizing data

We welcome feedback, bug reports, suggestions for new features and contributions.

Installing

For core install:

pip install msticpy

If you are using MSTICPy with Azure Sentinel you should install with the "azsentinel" extra package:

pip install msticpy[azsentinel]

or for the latest dev build

pip install git+https://github.com/microsoft/msticpy

Upgrading

To upgrade msticpy to the latest public non-beta release, run:

pip install --upgrade msticpy

Note: it is good practice to keep a copy of your msticpyconfig.yaml outside your msticpy installation folder and reference it via an environment variable. This prevents you from losing your configuration every time you update your msticpy installation.
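For example, msticpy looks for the MSTICPYCONFIG environment variable to locate the config file (the `~/.config` location here is just one reasonable choice):

```shell
# Store the config outside the msticpy package folder
mkdir -p ~/.config
cp msticpyconfig.yaml ~/.config/msticpyconfig.yaml

# Point msticpy at it (add this line to ~/.bashrc to persist)
export MSTICPYCONFIG=~/.config/msticpyconfig.yaml
```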

Documentation

Full documentation is at ReadTheDocs

Sample notebooks for many of the modules are in the docs/notebooks folder, with accompanying documentation.

You can also browse through the sample notebooks referenced at the end of this document to see some of the functionality used in context. You can play with some of the package functions in this interactive demo on mybinder.org.



Log Data Acquisition

QueryProvider is an extensible query library targeting Azure Sentinel/Log Analytics, Splunk, OData and other log data sources. It also has special support for Mordor data sets and using local data.

Built-in parameterized queries allow complex queries to be run from a single function call. Add your own queries using a simple YAML schema.
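As a sketch of the idea (not msticpy's actual implementation), a parameterized query is a template whose time range and other parameters are substituted at call time:

```python
from datetime import datetime, timezone
from string import Template

# Illustrative only: msticpy builds queries like this from YAML query
# definitions, substituting start/end and other parameters at call time.
QUERY = Template(
    "SecurityAlert | where TimeGenerated >= datetime($start) "
    "and TimeGenerated <= datetime($end)"
)

def build_query(start: datetime, end: datetime) -> str:
    """Render the query template for a given time range."""
    return QUERY.substitute(start=start.isoformat(), end=end.isoformat())

start = datetime(2021, 9, 1, tzinfo=timezone.utc)
end = datetime(2021, 9, 2, tzinfo=timezone.utc)
print(build_query(start, end))
```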

Data Queries Notebook

Data Enrichment

Threat Intelligence providers

The TILookup class can look up IoCs across multiple TI providers. Built-in providers include AlienVault OTX, IBM XForce, VirusTotal and Azure Sentinel.

The input can be a single IoC observable or a pandas DataFrame containing multiple observables. Depending on the provider, you may require an account and an API key. Some providers also enforce throttling (especially for free tiers), which might affect performing bulk lookups.
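A minimal sketch of client-side throttling for bulk lookups, using a stub provider function in place of a real TI API (the function names and quota here are illustrative, not msticpy's API):

```python
import time
from typing import Callable, Dict, List

def bulk_lookup(
    observables: List[str],
    lookup: Callable[[str], Dict],
    max_per_minute: int = 4,
) -> Dict[str, Dict]:
    """Look up observables, sleeping between calls to respect a quota."""
    results = {}
    delay = 60.0 / max_per_minute
    for i, obs in enumerate(observables):
        if i:                  # no delay before the first request
            time.sleep(delay)  # crude client-side throttle
        results[obs] = lookup(obs)
    return results

# Stub standing in for a real TI provider call.
fake_provider = lambda obs: {"ioc": obs, "verdict": "unknown"}
hits = bulk_lookup(["8.8.8.8", "1.1.1.1"], fake_provider, max_per_minute=120)
```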

TIProviders and TILookup Usage Notebook

GeoLocation Data

The GeoIP lookup classes allow you to match the geo-locations of IP addresses using either GeoLiteLookup (MaxMind GeoLite2) or IPStackLookup (IPStack). Results can be plotted on a Folium map.

GeoIP Lookup and GeoIP Notebook
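The enrichment itself is essentially a join of event data against a location table, as this pandas sketch (with made-up locations, standing in for a real GeoIP database) illustrates:

```python
import pandas as pd

# Toy location table standing in for a GeoIP database (values are made up).
geo = pd.DataFrame(
    {"ip": ["8.8.8.8", "1.1.1.1"],
     "country": ["United States", "Australia"],
     "lat": [37.4, -33.9], "lon": [-122.1, 151.2]}
)
events = pd.DataFrame({"ip": ["8.8.8.8", "8.8.8.8", "1.1.1.1"],
                       "count": [3, 1, 7]})

# Enrich events with locations - the same left-join shape the GeoIP
# lookup classes produce when given a DataFrame of IP addresses.
enriched = events.merge(geo, on="ip", how="left")
```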

Azure Resource Data, Storage and Azure Sentinel API

The AzureData module contains functionality for enriching data about Azure hosts with additional host details exposed via the Azure API. The AzureSentinel module allows you to query incidents and retrieve detector and hunting queries. AzureBlobStorage lets you read and write data from blob storage.

Azure Resource APIs, Azure Sentinel APIs, Azure Storage

Security Analysis

This subpackage contains several modules helpful for working on security investigations and hunting:

Anomalous Sequence Detection

Detect unusual sequences of events in your Office, Active Directory or other log data. You can extract sessions (e.g. activity initiated by the same account) and identify and visualize unusual sequences of activity. For example, detecting an attacker setting a mail forwarding rule on someone's mailbox.
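Session extraction can be sketched in pandas: group events by account and start a new session whenever the gap to the previous event exceeds a threshold. This is illustrative of the idea only, not the module's actual algorithm:

```python
import pandas as pd

# Toy log: events per account; a gap > 30 minutes starts a new session.
df = pd.DataFrame(
    {"account": ["alice"] * 4 + ["bob"] * 2,
     "time": pd.to_datetime(
         ["2021-09-01 09:00", "2021-09-01 09:05", "2021-09-01 11:00",
          "2021-09-01 11:10", "2021-09-01 09:30", "2021-09-01 09:31"])}
).sort_values(["account", "time"])

# True where the gap to the previous event (per account) exceeds 30 min.
gap = df.groupby("account")["time"].diff() > pd.Timedelta(minutes=30)
# Cumulative count of gaps gives a per-account session number.
df["session"] = gap.astype(int).groupby(df["account"]).cumsum()
```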

Anomalous Sessions and Anomalous Sequence Notebook

Time Series Analysis

Time series analysis allows you to identify unusual patterns in your log data taking into account normal seasonal variations (e.g. the regular ebb and flow of events over hours of the day, days of the week, etc.). Using both analysis and visualization highlights unusual traffic flows or event activity for any data set.
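The idea can be sketched with pandas and NumPy: remove the daily cycle by differencing against the same hour the previous day, then flag large residuals. This is a simplified stand-in for the decomposition-based analysis the module performs:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)
idx = pd.date_range("2021-09-01", periods=7 * 24, freq="h")
# Synthetic hourly counts: a daily sinusoidal cycle plus noise...
counts = 100 + 40 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24)
counts += rng.normal(0, 5, len(idx))
counts[100] += 300  # ...and one injected spike

ts = pd.Series(counts, index=idx)
# Remove daily seasonality by differencing against the same hour yesterday,
# then flag residuals more than 4 sigma from the mean.
resid = ts.diff(24).dropna()
anoms = resid[(resid - resid.mean()).abs() > 4 * resid.std()]
```

The spike shows up twice in the differenced series (once against the previous day, once as the following day's baseline), which is why two anomalous points are flagged.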

Time Series anomalies

Time Series

Visualization

Event Timelines

Display any log events on an interactive timeline. Built on the Bokeh visualization library, the timeline control enables you to visualize one or more event streams, interactively zoom into specific time slots and view event details for plotted events.

Timeline

Timeline and Timeline Notebook

Process Trees

The process tree functionality has two main components:

  • Process Tree creation - taking a process creation log from a host and building the parent-child relationships between processes in the data set.
  • Process Tree visualization - this takes the processed output and displays an interactive process tree using Bokeh plots.

A set of utility functions lets you extract individual and partial trees from the processed data set.
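The core of tree creation is a self-join of the process log on parent/child process IDs, as this simplified pandas sketch shows (column names are hypothetical; real logs also need timestamps and logon IDs to disambiguate PID reuse):

```python
import pandas as pd

# Minimal process-creation log (hypothetical column names).
procs = pd.DataFrame(
    {"pid": [4, 100, 200, 300],
     "ppid": [0, 4, 100, 100],
     "name": ["system", "explorer.exe", "cmd.exe", "powershell.exe"]}
)

# Link each process to its parent row - the core join behind
# process tree creation.
tree = procs.merge(
    procs.add_suffix("_parent"),
    left_on="ppid", right_on="pid_parent", how="left",
)
# Children of each parent process, in log order.
children = tree.groupby("name_parent")["name"].apply(list)
```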

Process Tree

Process Tree and Process Tree Notebook

Data Manipulation and Utility functions

Pivot Functions

Lets you use MSTICPy functionality in an "entity-centric" way. All functions, queries and lookups that relate to a particular entity type (e.g. Host, IpAddress, Url) are collected together as methods of that entity class. So, if you want to do things with an IP address, just load the IpAddress entity and browse its methods.
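The entity-centric idea can be sketched in plain Python: functions that operate on an entity type are attached as methods of that entity's class. The names here are illustrative, not msticpy's actual API:

```python
import ipaddress

class IpAddress:
    """Toy entity class standing in for msticpy's IpAddress entity."""
    def __init__(self, address: str):
        self.address = address

def ip_type(self) -> str:
    """Classify the address as Private/Global/Other."""
    ip = ipaddress.ip_address(self.address)
    if ip.is_private:
        return "Private"
    return "Global" if ip.is_global else "Other"

# "Pivot registration": bolt the function onto the entity class so that
# everything you can do with an IP hangs off the IpAddress entity.
IpAddress.ip_type = ip_type

print(IpAddress("10.0.0.1").ip_type())  # Private
```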

Pivot Functions and Pivot Functions Notebook

base64unpack

Base64 and archive (gz, zip, tar) extractor. It attempts to identify base64-encoded strings and decode them. If the result looks like one of the supported archive types, it unpacks the contents. The results of each decode/unpack are re-checked for further base64 content, up to a specified depth.
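A minimal sketch of the recursive decode idea, using only the standard library (no archive handling):

```python
import base64
import binascii

def unpack_b64(data: str, max_depth: int = 3) -> str:
    """Decode nested base64 strings up to max_depth levels deep."""
    for _ in range(max_depth):
        try:
            decoded = base64.b64decode(data, validate=True).decode("utf-8")
        except (binascii.Error, UnicodeDecodeError):
            return data      # not (or no longer) base64 - stop here
        data = decoded       # re-check the result for further encoding
    return data

# "hello" encoded twice:
double = base64.b64encode(base64.b64encode(b"hello")).decode()
print(unpack_b64(double))    # hello
```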

Base64 Decoding and Base64Unpack Notebook

iocextract

Uses regular expressions to look for Indicator of Compromise (IoC) patterns - IP addresses, URLs, DNS domains, hashes and file paths. Input can be a single string or a pandas DataFrame.
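A simplified sketch of regex-based IoC extraction (msticpy's patterns are far more comprehensive, covering domains, hashes of several lengths, file paths and defanged forms):

```python
import re

# Deliberately simplified patterns for illustration only.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "url": re.compile(r"https?://[^\s\"'<>]+"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_iocs(text: str) -> dict:
    """Return all matches for each IoC pattern found in text."""
    return {name: pat.findall(text) for name, pat in IOC_PATTERNS.items()}

sample = "Beacon to http://evil.example/p.php from 10.1.2.3"
found = extract_iocs(sample)
```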

IoC Extraction and IoCExtract Notebook

eventcluster (experimental)

This module summarizes large numbers of events into clusters of distinct patterns. High-volume repeating events can often make it difficult to see unique and interesting items.

Clustering

This is an unsupervised learning module implemented using scikit-learn's DBSCAN.
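A much simpler stand-in for the idea - masking variable tokens and counting repeated command-line patterns - can be sketched with the standard library (the real module extracts numeric features and feeds them to DBSCAN):

```python
import re
from collections import Counter

def to_pattern(cmd: str) -> str:
    """Reduce a command line to a pattern by masking GUIDs and digits."""
    cmd = re.sub(r"[0-9a-fA-F]{8}-[0-9a-fA-F-]{27,}", "<guid>", cmd)
    return re.sub(r"\d+", "<n>", cmd)

events = [
    "ping 10.0.0.1",
    "ping 10.0.0.2",
    "ping 10.0.0.99",
    "certutil -urlcache -f http://evil.example/a.exe",
]
# High-volume repeats collapse into one pattern; rare events stand out.
clusters = Counter(to_pattern(e) for e in events)
```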

Event Clustering and Event Clustering Notebook

auditdextract

Module to load and decode Linux audit logs. It collapses messages sharing the same message ID into single events, decodes hex-encoded data fields and performs some event-specific formatting and normalization (e.g. for process start events it will re-assemble the process command line arguments into a single string).
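The collapse-by-message-ID step can be sketched with toy auditd-style records (field names simplified for illustration):

```python
from collections import defaultdict

# Toy auditd-style records: several messages share one audit event id.
records = [
    {"msg_id": "1626:100", "type": "SYSCALL", "exe": "/bin/cat"},
    {"msg_id": "1626:100", "type": "EXECVE", "a0": "cat", "a1": "/etc/passwd"},
    {"msg_id": "1626:101", "type": "SYSCALL", "exe": "/bin/ls"},
]

# Collapse messages with the same id into a single event...
events = defaultdict(dict)
for rec in records:
    events[rec["msg_id"]].update(rec)

# ...and re-assemble the command line from EXECVE argument fields (a0, a1, ...).
for evt in events.values():
    args = [evt[k] for k in sorted(evt)
            if k.startswith("a") and k[1:].isdigit()]
    if args:
        evt["cmdline"] = " ".join(args)
```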

syslog_utils

Module to support an investigation of a Linux host with only syslog logging enabled. This includes functions for collating host data, clustering logon events and detecting user sessions containing suspicious activity.

cmd_line

A module to support the detection of known malicious command line activity or suspicious patterns of command line activity.

domain_utils

A module to support investigation of domain names and URLs with functions to validate a domain name and screenshot a URL.

Notebook widgets

These are built from the Jupyter ipywidgets collection and group common functionality useful in InfoSec tasks such as list pickers, query time boundary settings and event display into an easy-to-use format.

Time span Widget

Alert browser


Example MSTICPy notebooks

MSTICPy Notebooks

More Notebooks on Azure Sentinel Notebooks GitHub

Azure Sentinel Notebooks

Example notebooks:

View directly on GitHub or copy and paste the link into nbviewer.org

Notebook examples with saved data

See the following notebooks for more examples of the use of this package in practice:

Supported Platforms and Packages


Contributing

For (brief) developer guidelines, see this wiki article Contributor Guidelines

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

msticpy's People

Contributors

2xyo, anoryx, aramirezmartin, ashwin-patil, ccianelli22, ctoma73, d3vzer0, dependabot-preview[bot], dependabot[bot], dqirvin, florianbracq, fr0gger, grantv9, ianhelle, juju4, kant, karishma-dixit, liamkirton, lucky-luk3, microsoftopensource, nbareil, pareid, pensivepaddle, petebryan, roopeshvs, rrevuelta, ryan-aus, sbs2001, tatsuya-hasegawa, tonybaloney



msticpy's Issues

Datetime parameter not working with queries.

Created from issues raised on DL:

Currently working with some custom queries in msticpy. However, the datetime parameter does not seem to work. The query runs (gives no errors), but it reports no query results. If I copy the query and run it in Log Analytics or MDE, it gives results. The example is with MDE, but the same applies to LA.

Custom query:
(screenshot)

Calling it from a query provider in a notebook:
(screenshot)

The same query run in MDE:
(screenshot)

Inconsistent connection behaviour between regular Python and IPyNB

I have received a tenant_id, client_id, client_secret, workspace_id and workspace_name.

When I use Kqlmagic to connect like the following:

%kql loganalytics://tenant='tenant_id';clientid='client_id';clientsecret='client_secret';workspace='workspace_id';alias='workspace_name'

It works perfectly.

But when I use msticpy to connect using the following:

qry_prov = QueryProvider(data_environment='LogAnalytics')
qry_prov.connect(connection_str="loganalytics://tenant='tenant_id';clientid='client_id';clientsecret='client_secret';workspace='workspace_id';alias='workspace_name'")

It fails to connect.
I am assuming it is because using this data environment I can only supply a tenant_id and workspace_id.

When I try

qry_prov = QueryProvider('MDATP')
qry_prov.connect(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)

It connects successfully, but then I can only access qry_prov.MDATP.* and not the qry_prov.SecurityAlert.list_alerts() that I require.

Replace requests with httpx

Requests is in maintenance mode and has requirements on outdated packages. It also does not support async.

We should replace requests with httpx 0.18 throughout the code base.

init_notebook return AttributeError

Describe the bug
Error message shown: AttributeError: 'list' object has no attribute 'items' when running init_notebook(namespace=globals());

To Reproduce
Successfully installed msticpy-1.3.1
Steps to reproduce the behavior:
❯ python
Python 3.9.6 (default, Jun 29 2021, 05:25:02)
[Clang 12.0.5 (clang-1205.0.22.9)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

from msticpy.nbtools.nbinit import init_notebook
init_notebook(namespace=globals());

  1. See error

Desktop (please complete the following information):

  • OS: OSX 11.5.2

Additional context

<IPython.core.display.HTML object>
Processing imports....
Imported: pd (pandas), IPython.get_ipython, IPython.display.display, IPython.display.HTML, IPython.display.Markdown, widgets (ipywidgets), pathlib.Path, plt (matplotlib.pyplot), matplotlib.MatplotlibDeprecationWarning, np (numpy), msticpy, msticpy.data.QueryProvider, msticpy.nbtools.foliummap.FoliumMap, msticpy.common.utility.md, msticpy.common.utility.md_warn, msticpy.common.wsconfig.WorkspaceConfig, msticpy.datamodel.pivot.Pivot, msticpy.datamodel.entities
Checking configuration....
Setting notebook options....
Traceback (most recent call last):
File "", line 1, in
File "/Users/User/notebook/.venv/lib/python3.9/site-packages/msticpy/nbtools/nbinit.py", line 293, in init_notebook
prov_dict = load_user_defaults()
File "/Users/User/notebook/.venv/lib/python3.9/site-packages/msticpy/nbtools/user_config.py", line 74, in load_user_defaults
prov_dict = _load_query_providers(user_defaults)
File "/Users/User/notebook/.venv/lib/python3.9/site-packages/msticpy/nbtools/user_config.py", line 83, in _load_query_providers
for prov_name, qry_prov_entry in user_defaults.get("QueryProviders").items():
AttributeError: 'list' object has no attribute 'items'

proc_tree_build_winlx some objects not datetime

When creating a proc tree with proc_tree_build_winlx, some of the dataframes that get created do not create a 'TimeGenerated_orig_par' column. This causes the column to hold the type Object, generating an exception when it's treated as a datetime object later in the code.

Data is pulled in from Splunk windows event logs and used as follows

import pandas as pd
from msticpy.sectools import proc_tree_build_winlx as ptree
from msticpy.sectools import proc_tree_schema

SPLWIN_EVENT_SCH = proc_tree_schema.ProcSchema(
    time_stamp="date",
    process_name="New_Process_Name",
    process_id="New_Process_ID",
    parent_name="Creator_Process_Name",
    parent_id="Creator_Process_ID",
    logon_id="Subject_Logon_ID",
    target_logon_id="Target_Logon_ID",
    cmd_line="Process_Command_Line",
    user_name="Subject_Account_Name",
    path_separator="\\",
    user_id="Subject_User_Sid",
    event_id_column="EventCode",
    event_id_identifier="4688",
    host_name_column="host",
)

p_tree_win = ptree.extract_process_tree(procs, SPLWIN_EVENT_SCH)

Generating: AttributeError: Can only use .dt accessor with datetimelike values
At line: proc_data[timestamp_col].dt.round("10us").dt.strftime(TS_FMT_STRING)

I believe it becomes a problem on this line:
merged_procs.loc[root_procs_crit, "TimeGenerated_orig_par"] = time_zero
Because not all rows will contain datetime objects as a result

I also had to make the following changes to get it as far as I did I believe because of my custom schema

    # Extract synthetic rows for the parents of root processes
    inferred_parents = (
        merged_procs[root_procs_crit][
            [
                schema.event_id_column,
                schema.host_name_column,
                schema.parent_id,
                "EffectiveLogonId_par",
                "Creator_Process_Name", # Changed from "ParentProcessName"
                "parent_proc_lc",
            ]
        ]
        .rename(
            columns={
                schema.parent_id: schema.process_id,
                "Creator_Process_Name": schema.process_name, # Changed key from "ParentProcessName"
                "parent_proc_lc": "new_process_lc",
                "EffectiveLogonId_par": schema.logon_id,
            }
        )
        .assign(TimeGenerated=time_zero, EffectiveLogonId=merged_procs[schema.logon_id])
        .drop_duplicates()
    )

I included my sample DF as json output since it was easy to redact versus pickle.
evtlogs.zip

Timeline with an event duration

Hi,

I would like to display a horizontal bar (start - end) for each event in a timeline.

Example: in this case, I would like to visualize how long a domain has been seen in passive DNS data:

import pandas as pd 
import json
from msticpy.nbtools import nbdisplay
data = [{"rrtype": "A", "time_last": 1445413846, "time_first": 1444238290,
   "count": 15, "rrname": "raybaneyeglasses.us.com.", "rdata":
   "151.237.189.86"},
  {"rrtype": "A", "time_last": 1445478986, "time_first": 1443590863,
   "count": 56, "rrname": "abercrombieandfitchoutletsonline.com.", "rdata":
   "151.237.189.86"},
  {"rrtype": "A", "time_last": 1445061327, "time_first": 1444404624,
   "count": 13, "rrname": "furlaoutletsonline.in.net.", "rdata":
   "151.237.189.86"},
  {"rrtype": "A", "time_last": 1444735169, "time_first": 1444302797,
   "count": 42, "rrname": "burberryoutlet.top.", "rdata": "151.237.189.86"},
  {"rrtype": "A", "time_last": 1444948864, "time_first": 1444948864,
   "count": 2, "rrname": "discountnfljerseys.top.", "rdata": "151.237.189.86"}]

df = pd.DataFrame.from_records(data)
df['time_first'] = pd.to_datetime(df['time_first'], unit="s")
df['time_last'] = pd.to_datetime(df['time_last'], unit="s")
nbdisplay.display_timeline(data=df, time_column="time_first", source_columns=["rrname", "rrtype", "rdata", "time_last"], group_by="rrname")

So instead of this:
(screenshot of current output)

I would prefer this:
(screenshot of desired output)

Describe the solution you'd like

Add a parameter time_column_end to msticpy.nbtools.timeline.display_timeline:

nbdisplay.display_timeline(data=df, time_column="time_first", time_column_end="time_last", source_columns=["rrname", "rrtype", "rdata", "time_last"], group_by="rrname")

Issues creating process tree with custom schema

Describe the bug
I am trying to create a schema from some macOS logs created by the Elastic agent. I have attempted to implement a custom schema as described here. I am getting an error

UnboundLocalError: local variable 'extr_proc_tree' referenced before assignment

that does not appear to be any variable I have control over and am not sure if this is a bug or an error in how I am using this.

To Reproduce
Steps to reproduce the behavior:

  1. Clone the repo here - https://github.com/jaimeatwork/macOS-ATTACK-DATASET
  2. Use a jupyter notebook and virtualenv as described in your installation instructions
  3. Read the data with the following code
import json
import pandas as pd
import yaml
import glob
from msticpy.sectools import ptree
from msticpy.sectools.proc_tree_builder import build_process_tree
from msticpy.nbtools import process_tree 

# Assumes this path is correct, may need to adjust depending on where you cloned the repo
files = glob.glob("./**/*.json")
attacks = []
for file in files:
    with open(file, 'r') as f:
        attack = {}
        data = json.loads(f.read())
        attack[file] = data
        attacks.append(attack)

# picked a single file with 10 events to build a sample tree
for attack in attacks:
    for k, v in attack.items():
        if k == "./Persistence/persistence_emond_process_execution.json":
            df = pd.json_normalize(v['documents'])



from msticpy.sectools.proc_tree_builder import build_process_tree, LX_EVENT_SCH
from msticpy.sectools import proc_tree_build_winlx as winlx
from copy import copy

cust_lx_schema = copy(LX_EVENT_SCH)
cust_lx_schema.time_stamp = "_source.@timestamp"
cust_lx_schema.host_name_column = "_source.host.hostname"
cust_lx_schema.process_id = "_source.process.pid"
cust_lx_schema.parent_id = "_source.process.parent.pid"
cust_lx_schema.cmd_line = "_source.process.command_line"
cust_lx_schema.event_id_column = "_source.data_stream.dataset"
cust_lx_schema.cmd_line = "_source.process.command_line"
cust_lx_schema.event_id_identifier = "endpoint.events.process"
cust_lx_schema.path_separator = "/"

process_tree.plot_process_tree(df, schema=cust_lx_schema)
  1. Observe the error

error output


---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
/var/folders/6j/kg1frr2j61l2_xx9m01j_x3w0000gn/T/ipykernel_64665/961226578.py in <module>
     13 cust_lx_schema.path_separator = "/"
     14 
---> 15 process_tree.plot_process_tree(df, schema=cust_lx_schema)

~/.local/share/virtualenvs/macOS-ATTACK-DATASET-JrS6VCr6/lib/python3.8/site-packages/msticpy/nbtools/process_tree.py in plot_process_tree(data, schema, output_var, legend_col, show_table, **kwargs)
    187     pid_fmt = kwargs.pop("pid_fmt", "hex")
    188 
--> 189     data, schema, levels, n_rows = _pre_process_tree(data, schema, pid_fmt=pid_fmt)
    190     if schema is None:
    191         raise ProcessTreeSchemaException("Could not infer data schema from data set.")

~/.local/share/virtualenvs/macOS-ATTACK-DATASET-JrS6VCr6/lib/python3.8/site-packages/msticpy/nbtools/process_tree.py in _pre_process_tree(proc_tree, schema, pid_fmt)
    298     missing_cols = _check_proc_tree_schema(proc_tree)
    299     if missing_cols:
--> 300         proc_tree = build_process_tree(procs=proc_tree, schema=schema)
    301 
    302     if schema is None:

~/.local/share/virtualenvs/macOS-ATTACK-DATASET-JrS6VCr6/lib/python3.8/site-packages/msticpy/sectools/proc_tree_builder.py in build_process_tree(procs, schema, show_summary, debug)
    254     elif schema == MDE_EVENT_SCH:
    255         extr_proc_tree = mde.extract_process_tree(procs, debug=debug)
--> 256     merged_procs_keys = _add_tree_properties(extr_proc_tree)
    257 
    258     # Build process paths

UnboundLocalError: local variable 'extr_proc_tree' referenced before assignment

Expected behavior
Expect a plot of processes with parent and child relationships according to example images


Desktop (please complete the following information):

  • OS: macOS
  • Browser Chrome
  • Version 93



Access Reviews

Is your feature request related to a problem? Please describe.
I really like the idea behind Access Reviews, but it is very manual: during the review process, the reviewer has to manually go and check all the permissions and access that a user has before they can Approve/Deny. The process is different when doing access reviews for applications, as it lists all the users with access to a particular application.

Describe the solution you'd like
It would be time-saving and efficient if the user's permissions were listed within the review portal, so the reviewer did not have to click around trying to establish the user's access and whether it is approved. It would then be possible to have corresponding Approve/Deny buttons for each permission.

Describe alternatives you've considered
A corresponding feature that uses AI and correlation to flag suspicious permissions based on various metrics, such as results from previous Access Reviews, current role, other users with similar roles, and sign-in activity.

Additional context
N/A

Do not throw MsticpyVTNoDataError exception when no data is returned from vt_lookup.lookup_iocs - vtlookupv3

Describe the bug

  1. investigate a list of observables
  2. enrich this list of observables with vt_lookup.lookup_iocs from vtlookupv3
  3. get an exception if an observable is not present on virustotal instead of the result

To Reproduce

import pandas as pd
from msticpy import init_notebook
from msticpy.sectools.vtlookupv3 import VTLookupV3, VTEntityType
from msticpy.common.provider_settings import get_provider_settings

import nest_asyncio
nest_asyncio.apply()

init_notebook(namespace=globals());
vt_key = get_provider_settings("TIProviders")["VirusTotal"].args["AuthKey"]
vt_lookup = VTLookupV3(vt_key)

samples_raw = """018ac8f95d5e14b92011cdbfc8c48056ca4891161ed6bdd268770a5b56bb327f
DONOTEXISTDONOTEXISTDONOTEXIST"""
samples = pd.DataFrame({"target":samples_raw.split(),"target_type":"file"})
vt_lookup.lookup_iocs(observables_df=samples, observable_column="target")

Minimized stacktrace

APIError                                  Traceback (most recent call last)
APIError: ('NotFoundError', 'Resource not found.')
The above exception was the direct cause of the following exception:
MsticpyVTNoDataError                      Traceback (most recent call last)
MsticpyVTNoDataError: An error occurred requesting data from VirusTotal

Expected behavior

A dataframe with 2 rows

  1. a first row with the data for 018ac8f95d5e14b92011cdbfc8c48056ca4891161ed6bdd268770a5b56bb327f
  2. a second row with NaN or None values for DONOTEXISTDONOTEXISTDONOTEXIST (or an equivalent to inform the analyst that the observable was not found on VT)

Not finding an observable on VT is fundamentally not an error, it is an event that is potentially expected and this should not prevent retrieving the results for the other observables.

Additional context

msticpy version installed: 1.3.1

MDE should support multiple clouds and regional endpoints

MDE provider currently uses these API definitions

        self.req_body = {
            "client_id": None,
            "client_secret": None,
            "grant_type": "client_credentials",
            "resource": "https://api.securitycenter.microsoft.com",
        }
        self.oauth_url = "https://login.windows.net/{tenantId}/oauth2/token"
        self.api_root = "https://api.securitycenter.microsoft.com/"
        self.api_ver = "api"

We need to move these to use cloud-specific endpoints.
https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/gov?view=o365-worldwide#api

It would be ideal to support regional endpoints as well.
https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/exposed-apis-list?view=o365-worldwide

Have base Entity handle time stamps

Timestamps associated with entities should be implemented in the base entity, to help follow the DRY principle and maintain v3 compatibility.

In addition timestamps should be added to entity edges to better represent temporal connection of entities.

Interactive Timeline control

Create UI/container to allow user to build timeline visualization from multiple data sets:

  • Add/remove data set
  • Set time bounds for data set
  • Add observations (text notes) to timeline

build_process_tree stuck in while loop

Describe the bug
I tried to run build_process_tree on Mordor data. It seems to get stuck in a loop. I did some checking and it seems like "max_depth" in _build_proc_tree() is always -1.

To Reproduce

df2 = mdr_data.small.windows.credential_access.host.empire_shell_rubeus_asktgt_createnetonly()
df2.rename(columns={'Hostname': 'Computer', 'ActivityID': 'TenantId'}, inplace=True)
df_new = df2[['TenantId', 'Computer', 'EventID', '@timestamp', 'NewProcessName', 'NewProcessId', 'ParentProcessName', 'ProcessId', 'SubjectLogonId', 'TargetLogonId', 'CommandLine', 'SubjectUserName', 'SubjectUserSid']].drop_duplicates()
df_new['@timestamp'] = pd.to_datetime(df_new['@timestamp'])
p_tree_win = ptree.build_process_tree(df_new, schema=schema, show_progress=True)

Screenshots

(screenshot)

MSTICPy 1.4.0/1.4.1 import error: AttributeError: 'AzureCloudConfig' object has no attribute 'cloud'

MSTICPy fails to load when Azure key exists in msticpyconfig.yaml but has no settings.

Reported by Sreedar Ande @ microsoft

To Reproduce
Steps to reproduce the behavior:

  1. Create/edit msticpyconfig.yaml
  2. Add or replace "Azure" key at top level and give it the value of "{}"
Azure: {}
AzureSentinel:
  Workspaces:
      ...
  1. import msticpy
  2. See error
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/msticpy/common/azure_auth_core.py in <module>
    248 
    249 # externally callable function using the class above
--> 250 _AZ_CACHED_CONNECT = _AzCachedConnect()
        global _AZ_CACHED_CONNECT = undefined
        global _AzCachedConnect = <class 'msticpy.common.azure_auth_core._AzCachedConnect'>
    251 az_connect_core = _AZ_CACHED_CONNECT.connect
    252 
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/msticpy/common/azure_auth_core.py in __init__(self=<msticpy.common.azure_auth_core._AzCachedConnect object>)
    217         """Initialize the class."""
    218         self.az_credentials: Optional[AzCredentials] = None
--> 219         self.cred_cloud: str = self.current_cloud
        self.cred_cloud = undefined
        global str = undefined
        self.current_cloud = undefined
    220 
    221     @property
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/msticpy/common/azure_auth_core.py in current_cloud(self=<msticpy.common.azure_auth_core._AzCachedConnect object>)
    222     def current_cloud(self) -> str:
    223         """Return current cloud."""
--> 224         return AzureCloudConfig().cloud
        global AzureCloudConfig.cloud = undefined
    225 
    226     def connect(self, *args, **kwargs):
AttributeError: 'AzureCloudConfig' object has no attribute 'cloud'

Expected behavior
No error

Module azure-cli-core dependency incompatibility

Describe the bug
When installing msticpy's latest version v0.8.0, the module azure-cli-core has an incompatibility issue with the azure-mgmt-resource dependency.

ERROR: azure-cli-core 2.12.0 has requirement azure-mgmt-resource==10.2.0, but you'll have azure-mgmt-resource 15.0.0 which is incompatible.

To Reproduce
In a new Python virtual environment, install the latest release from GitHub.

pip install git+https://github.com/microsoft/msticpy@v0.8.0

Expected behavior
I expect the installation to occur without errors.

Output

[...]
Building wheels for collected packages: msticpy
  Building wheel for msticpy (setup.py) ... done
  Created wheel for msticpy: filename=msticpy-0.8.0-py3-none-any.whl size=315577 sha256=ecfc2754cd6f3f3dc22b135ae938fc7ac98a8059b15f6f0e1d6ad60e6d867728
  Stored in directory: /tmp/pip-ephem-wheel-cache-bfkkevlx/wheels/f0/85/c4/d76217db44f43c45c8fe70ea7ab54b5223f2367fe7eae80a57
Successfully built msticpy
ERROR: azure-cli-core 2.12.0 has requirement azure-mgmt-resource==10.2.0, but you'll have azure-mgmt-resource 15.0.0 which is incompatible.
Installing collected packages: Pygments, backcall, ipython-genutils, traitlets, ptyprocess, pexpect, parso, jedi, wcwidth, prompt-toolkit, decorator, pickleshare, ipython, tornado, pyzmq, six, python-dateutil, jupyter-core, jupyter-client, ipykernel, retrying, plotly, psutil, pyjwt, numpy, pytz, pandas, Markdown, urllib3, idna, chardet, certifi, requests, scipy, pillow, pyparsing, cycler, kiwisolver, matplotlib, seaborn, tabulate, colorama, jmespath, pyyaml, argcomplete, knack, pkginfo, portalocker, msal, msal-extensions, oauthlib, requests-oauthlib, isodate, msrest, pycparser, cffi, cryptography, adal, azure-core, azure-mgmt-core, pyopenssl, msrestazure, applicationinsights, azure-cli-telemetry, azure-common, azure-mgmt-resource, pynacl, bcrypt, paramiko, humanfriendly, azure-cli-core, prettytable, lxml, soupsieve, beautifulsoup4, MarkupSafe, Jinja2, Werkzeug, click, itsdangerous, flask, pyperclip, Kqlmagic, attrs, azure-identity, azure-keyvault-secrets, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-monitor, azure-mgmt-network, azure-mgmt-subscription, typing-extensions, packaging, bokeh, wrapt, deprecated, dnspython, branca, folium, maxminddb, async-timeout, multidict, yarl, aiohttp, geoip2, ipwhois, prometheus-client, terminado, pyrsistent, jsonschema, nbformat, Send2Trash, argon2-cffi, jupyterlab-pygments, mistune, pandocfilters, nest-asyncio, async-generator, nbclient, defusedxml, testpath, webencodings, bleach, entrypoints, nbconvert, notebook, widgetsnbextension, ipywidgets, jeepney, SecretStorage, keyring, networkx, threadpoolctl, joblib, scikit-learn, splunk-sdk, patsy, statsmodels, requests-file, tldextract, tqdm, msticpy
Successfully installed Jinja2-2.11.2 Kqlmagic-0.1.113.post1 Markdown-3.2.2 MarkupSafe-1.1.1 Pygments-2.7.1 SecretStorage-3.1.2 Send2Trash-1.5.0 Werkzeug-1.0.1 adal-1.2.4 aiohttp-3.6.2 applicationinsights-0.11.9 argcomplete-1.12.0 argon2-cffi-20.1.0 async-generator-1.10 async-timeout-3.0.1 attrs-20.2.0 azure-cli-core-2.12.0 azure-cli-telemetry-1.0.6 azure-common-1.1.25 azure-core-1.8.1 azure-identity-1.3.1 azure-keyvault-secrets-4.2.0 azure-mgmt-compute-17.0.0 azure-mgmt-core-1.2.0 azure-mgmt-keyvault-7.0.0 azure-mgmt-monitor-1.0.1 azure-mgmt-network-16.0.0 azure-mgmt-resource-15.0.0 azure-mgmt-subscription-0.6.0 backcall-0.2.0 bcrypt-3.2.0 beautifulsoup4-4.9.1 bleach-3.2.1 bokeh-2.2.1 branca-0.4.1 certifi-2020.6.20 cffi-1.14.3 chardet-3.0.4 click-7.1.2 colorama-0.4.3 cryptography-3.1 cycler-0.10.0 decorator-4.4.2 defusedxml-0.6.0 deprecated-1.2.10 dnspython-2.0.0 entrypoints-0.3 flask-1.1.2 folium-0.11.0 geoip2-4.0.2 humanfriendly-8.2 idna-2.10 ipwhois-1.2.0 ipykernel-5.3.4 ipython-7.18.1 ipython-genutils-0.2.0 ipywidgets-7.5.1 isodate-0.6.0 itsdangerous-1.1.0 jedi-0.17.2 jeepney-0.4.3 jmespath-0.10.0 joblib-0.16.0 jsonschema-3.2.0 jupyter-client-6.1.7 jupyter-core-4.6.3 jupyterlab-pygments-0.1.1 keyring-21.4.0 kiwisolver-1.2.0 knack-0.7.2 lxml-4.5.2 matplotlib-3.3.2 maxminddb-2.0.2 mistune-0.8.4 msal-1.0.0 msal-extensions-0.1.3 msrest-0.6.19 msrestazure-0.6.4 msticpy-0.8.0 multidict-4.7.6 nbclient-0.5.0 nbconvert-6.0.5 nbformat-5.0.7 nest-asyncio-1.4.0 networkx-2.5 notebook-6.1.4 numpy-1.19.2 oauthlib-3.1.0 packaging-20.4 pandas-1.1.2 pandocfilters-1.4.2 paramiko-2.7.2 parso-0.7.1 patsy-0.5.1 pexpect-4.8.0 pickleshare-0.7.5 pillow-7.2.0 pkginfo-1.5.0.1 plotly-4.10.0 portalocker-1.7.1 prettytable-0.7.2 prometheus-client-0.8.0 prompt-toolkit-3.0.7 psutil-5.7.2 ptyprocess-0.6.0 pycparser-2.20 pyjwt-1.7.1 pynacl-1.4.0 pyopenssl-19.1.0 pyparsing-2.4.7 pyperclip-1.8.0 pyrsistent-0.17.3 python-dateutil-2.8.1 pytz-2020.1 pyyaml-5.3.1 pyzmq-19.0.2 requests-2.24.0 
requests-file-1.5.1 requests-oauthlib-1.3.0 retrying-1.3.3 scikit-learn-0.23.2 scipy-1.5.2 seaborn-0.11.0 six-1.15.0 soupsieve-2.0.1 splunk-sdk-1.6.14 statsmodels-0.12.0 tabulate-0.8.7 terminado-0.9.1 testpath-0.4.4 threadpoolctl-2.1.0 tldextract-2.2.3 tornado-6.0.4 tqdm-4.49.0 traitlets-5.0.4 typing-extensions-3.7.4.3 urllib3-1.25.10 wcwidth-0.2.5 webencodings-0.5.1 widgetsnbextension-3.5.1 wrapt-1.12.1 yarl-1.5.1

Desktop (please complete the following information):

  • OS: Ubuntu 20
  • Version: Python 3.8.2, pip 20.0.2

Support ElasticSearch as a data provider

Describe the solution you'd like
Support for ElasticSearch data sources in the same way other data providers are supported in MSTICPy

This has been discussed separately from this issue and will be expanded on as work starts.

Update VirusTotalV3 module to support broader API set

Currently the VT3 module supports only the simple file, IP, domain, and URL lookup APIs, plus a generic get_object API.
We'd like to support some of the richer APIs to retrieve file behaviors and contacted IPs and domains.
We should also add support for outputting the detonation/behavior process tree in a format consumable by the Process Tree visualization.

Expand AzureSentinel to include support for other API functions

Is your feature request related to a problem? Please describe.
AzureSentinel doesn't currently support all the Sentinel REST API features.

Describe the solution you'd like
Expand AzureSentinel to support additional features, such as creating and updating Bookmarks and Watchlists.

Drop support for Python 3.6

Is your feature request related to a problem? Please describe.
Python 3.6 will reach end of life at the end of 2021. There will be limited support for updates and security fixes.
There are also features of later releases that we would like to use in development.

Describe the solution you'd like
We are proposing to make Python 3.8 the minimum supported version.

Describe alternatives you've considered
Python 3.7 should still work, but there would be no guarantees.

Regularize NBWidgets

Currently the widgets have a variety of parameters and properties that don't quite match each other and don't follow the ipywidgets property names.
This mainly applies to the selection widgets.

To Dos:

  • All widgets should expose a layout property - which is the overall HBox/VBox container widget
  • All widgets should expose a value property - ideally with both a getter and a setter
  • ipywidgets in the control should be exposed as public attributes so that they can be manipulated if necessary
  • Use the following parameter conventions:
    • description for title
    • options for select options
    • width (if appropriate)
    • height (if appropriate)
  • For selection widgets, provide a way to set the selected option programmatically.
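The conventions above could look something like this minimal sketch (the class and parameter defaults here are illustrative, not the final API):

```python
import ipywidgets as widgets


class SelectItem:
    """Illustrative selection widget following the proposed conventions."""

    def __init__(self, description="Select an item", options=None,
                 width="50%", height="150px"):
        # The inner ipywidgets control is a public attribute so it can
        # be manipulated directly if necessary.
        self.select = widgets.Select(
            options=options or [],
            description=description,
            layout=widgets.Layout(width=width, height=height),
        )
        # `layout` is the overall container widget.
        self.layout = widgets.VBox([self.select])

    @property
    def value(self):
        """Return the currently selected option."""
        return self.select.value

    @value.setter
    def value(self, new_value):
        """Programmatically set the selected option."""
        self.select.value = new_value
```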

Add support for OSQuery

MSTICPy should expand its supported security data sources to include OSQuery - https://www.osquery.io/

The ideal solution would include OSQuery as a data connector in the same pattern as existing data connectors.

Unclear error messages with Pivots and dataframe

When using pivot functions with a DataFrame, errors caused by incorrect column names are unclear.
If a column name that isn't present in the DataFrame is specified, the pivot falls back to using the default value rather than telling the user that the column isn't in the DataFrame. This then leads to an error stating that the default value isn't in the DataFrame (if that is the case).
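One possible shape for a clearer check, as a sketch (the function name and error wording are illustrative):

```python
import pandas as pd


def resolve_column(data: pd.DataFrame, column: str) -> str:
    """Return the column to use, raising a clear error rather than
    silently falling back to a default column name."""
    if column in data.columns:
        return column
    raise LookupError(
        f"Column '{column}' is not in the input DataFrame. "
        f"Available columns: {list(data.columns)}"
    )
```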

(screenshot)

Allow workspace details to be passed at __init__ for AzureSentinel

Is your feature request related to a problem? Please describe.
Currently you have to specify the Azure Sentinel workspace you want to interact with on each and every call to Azure Sentinel, which is cumbersome.

Describe the solution you'd like
Allow a set of workspace details to be passed in at __init__ that are then used for all future calls.
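A rough sketch of the proposed pattern (class and method names here are illustrative placeholders, not the real msticpy API):

```python
class AzureSentinel:
    """Illustrative sketch: store workspace details once at __init__
    so later calls can omit them."""

    def __init__(self, workspace_id=None, tenant_id=None):
        self.workspace_id = workspace_id
        self.tenant_id = tenant_id

    def _resolve(self, workspace_id=None, tenant_id=None):
        # Per-call arguments still override the stored defaults.
        return (workspace_id or self.workspace_id,
                tenant_id or self.tenant_id)

    def get_incidents(self, workspace_id=None, tenant_id=None):
        ws_id, ten_id = self._resolve(workspace_id, tenant_id)
        if not ws_id or not ten_id:
            raise ValueError("No workspace/tenant details supplied.")
        return f"querying incidents in workspace {ws_id}"
```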

Missing files in source package on PyPI

In the source dist on PyPI, setup.py refers to requirements.txt and other files that are not present in the package, causing the setup to fail.

     Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-download-4kdqlqg5/msticpy/setup.py", line 39, in <module>
        with open("requirements.txt", "r") as fh:
    FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'

Missing files probably need to be added to https://github.com/microsoft/msticpy/blob/master/MANIFEST.in
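For example, MANIFEST.in would need an entry along these lines (the full file list should be checked against everything setup.py opens at build time):

```text
# Hypothetical MANIFEST.in addition - verify against setup.py's reads.
include requirements.txt
```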

Lookup IOC raw_result_fmt Error

Describe the bug
I was following the directions from the document - https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html#notebook - and received an error when performing the IOC lookup.

To Reproduce
Steps to reproduce the behavior:

  1. Build a NB that contains the required dependencies and has a valid msticpyconfig.yaml

Then try to execute the following code from the directions:
(screenshot)

  2. See output/error:

(screenshot)

Expected behavior
I expected to see detail.raw_result_fmt displayed after the "Raw Results" print statement; instead, there was an error message.

Desktop (please complete the following information):

  • OS: OSX 10.15.6
  • VSCode 1.49.1

TI Browser throws exception if no results

The TI browser throws an exception if the results data doesn't contain any results of the requested severities. Ideally the browser should display with an empty result set.

(screenshot)

Driver for Azure Resource Graph

Hi mstic team,

I wrote a custom driver/data environment that leverages the Azure Python SDK to hit the Azure Resource Graph. This lets us query for resources using the power of the YAML templates. I know about azure_data, but I think the Resource Graph has access to a much wider body of security relevant information.

(screenshot)

Is this something that folks would be interested in if I opened a PR? If so, is there anyone in particular I could work with to get that over the line?

If this isn't the right channel to ask, feel free to nuke this issue.

All the best,
Ryan

Timeline view : group_by legend and event display are in opposite orders

Is your feature request related to a problem? Please describe.
A picture is worth a thousand words. From official documentation example :
(screenshot)
From top to bottom:

  • in the legend: Bordeaux, blue
  • in the display: blue, Bordeaux

Describe the solution you'd like
Make the legend and display orders consistent; it would help a lot when visualizing more events.
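If the timeline is rendered with Bokeh, one possible approach (a sketch, not a confirmed fix) is to reverse the legend entries after the glyphs are added so the two orders match:

```python
from bokeh.plotting import figure

# Hypothetical sketch: reverse the legend entries so the legend order
# matches the top-to-bottom order of the plotted series.
plot = figure()
plot.scatter([1, 2], [1, 1], legend_label="blue")
plot.scatter([1, 2], [2, 2], legend_label="bordeaux")

legend = plot.legend[0]
legend.items = list(reversed(legend.items))
```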

I can work on a PR this weekend.

Error is thrown when AzureSentinel config is not present in msticpyconfig.yaml file

Describe the bug
An error is thrown when AzureSentinel config is not present in the msticpyconfig.yaml file.

To Reproduce
Steps to reproduce the behavior:

  1. Use a notebook from MSTICpy project that contains the following code block:
    from msticpy import init_notebook
    init_notebook(namespace=globals());
  2. Create a msticpyconfig.yaml file that has the AzureSentinel portion commented out:
# AzureSentinel:
#   # Note if you do not specify any settings in the Args key for the AzureSentinel
#   # provider, it will default to using your default Azure Sentinel workspace.
#   Args:
#     WorkspaceID: "your-azure-sentinel-workspace-id"
#     TenantID: "your-azure-sentinel-tenant-id"
#   Primary: False
#   Provider: "AzSTI"
  3. Execute the code block/cell
  4. See the error output attached. The error is output, but the config file is still parsed and follow-on commands can still be executed in subsequent cells:
# iplocation = GeoLiteLookup()
# loc_result, ip_entity = iplocation.lookup_ip(ip_address='90.156.201.97')

Expected behavior
If the AzureSentinel config is commented out or not included in the config file, silence the warning. Only show a warning if the AzureSentinel config is present but incorrectly populated.

Screenshots
(screenshots)

Follow-on commands to show config file was successfully parsed:
(screenshot)

Desktop (please complete the following information):

  • OS: MacBookPro MacOS 11.2.3
  • Browser: N/A, VScode
  • Version Insiders - latest build

Installation error

Describe the bug
Running https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/TroubleShootingNotebooks.ipynb in Azure ML got following error during installation of msticpy (full installation log):

Collecting iniconfig
  Using cached iniconfig-1.1.1.tar.gz (8.1 kB)
  Installing build dependencies ... error
  ERROR: Command errored out with exit status 1:
   command: /anaconda/envs/azureml_py36/bin/python /anaconda/envs/azureml_py36/lib/python3.6/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-k0pbth3e/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=41.2.0' wheel 'setuptools_scm>3'
       cwd: None

Python version: 3.6.9

To Reproduce
Steps to reproduce the behavior:

  1. Run https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/TroubleShootingNotebooks.ipynb in Azure ML Notebooks
    error

Data provider documentation re-write

We currently have most of the DP documentation crammed into a single doc.
This mixes provider-specific docs with generic items like creating queries.
As we expand the number of providers this makes the page difficult to navigate and find the relevant information.

We should restructure something like this:

  • Introduction and general use - loading providers, listing, browsing queries, etc.
  • MS Sentinel provider
  • Splunk provider
  • Kusto/Azure Data Explorer provider
  • etc.
  • Advanced querying
    • Multi-instance providers
    • Splitting large queries by time period
  • Writing your own queries - the query template format
  • Using queries from pivot functions
  • Query reference (it would be nice if we could generate links to the actual queries from these)

Switch to using notebook %pip magic when in IPython env for check_and_install_missing_packages

In Azure ML, using pip in a subprocess or using "!pip" from the notebook causes cross-kernel problems for the conda environments.
Specifically, if you !pip install a package in the py36 conda environment and then switch to the py38 environment, you can neither import the package nor !pip install it. Pip finds the package in the py36 environment and claims it is already installed.
The package is, however, not available to the py38 kernel.

Change the code in check_and_install_missing_packages to:

  • detect if running in IPython
  • use get_ipython().run_line_magic
  • run %pip install pkg
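A sketch of that logic, assuming a fallback to a pip subprocess when not running under IPython:

```python
import subprocess
import sys


def install_package(package: str) -> None:
    """Install a package with %pip when running under IPython,
    falling back to a pip subprocess otherwise."""
    try:
        ipython = get_ipython()  # defined only inside IPython sessions
    except NameError:
        ipython = None
    if ipython is not None:
        # %pip installs into the environment of the running kernel.
        ipython.run_line_magic("pip", f"install {package}")
    else:
        subprocess.run(
            [sys.executable, "-m", "pip", "install", package], check=True
        )
```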

Use kqlmagic native time ranges in queries

We currently submit queries to Kqlmagic with time ranges embedded in the query text. It is more efficient to pass the time range as API parameters.

As well as including the start/end time in the query string, we should use the Kqlmagic time-bound parameters.

Kqlmagic syntax - https://github.com/microsoft/jupyter-Kqlmagic/blob/e483b34c605a2d332e078ea74e6015dad5f89d77/HISTORY.md#new-feature-timespan-query-property

For time period spec see https://en.wikipedia.org/wiki/ISO_8601
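For reference, an ISO 8601 interval string can be built from the query start/end times like this (a simple sketch, independent of whichever Kqlmagic parameter name is used):

```python
from datetime import datetime, timezone


def iso8601_interval(start: datetime, end: datetime) -> str:
    """Format a start/end pair as an ISO 8601 interval string."""
    return f"{start.isoformat()}/{end.isoformat()}"


start = datetime(2021, 1, 1, tzinfo=timezone.utc)
end = datetime(2021, 1, 2, tzinfo=timezone.utc)
# e.g. "2021-01-01T00:00:00+00:00/2021-01-02T00:00:00+00:00"
```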

ProcessTree does not work with MS Defender process events.

Describe the bug
Using the MDE provider, the DeviceProcess data does not work correctly with the MSTICPy ProcessTree. There are two issues:

  • Timestamps are not converted to TZ-aware datetime objects in the returned dataframe
  • The column schema no longer matches the schema expected by the ProcessTree builder code.

When trying to display the process tree you get the error:

AttributeError: Can only use .dt accessor with datetimelike values

However, even if you manually convert the columns to datetimes, the process still fails with a schema mismatch.

The schema returned from the MDE provider is:

Timestamp:  object
DeviceId:  object
DeviceName:  object
ActionType:  object
FileName:  object
FolderPath:  object
SHA1:  object
SHA256:  object
MD5:  object
FileSize:  int64
ProcessVersionInfoCompanyName:  object
ProcessVersionInfoProductName:  object
ProcessVersionInfoProductVersion:  object
ProcessVersionInfoInternalFileName:  object
ProcessVersionInfoOriginalFileName:  object
ProcessVersionInfoFileDescription:  object
ProcessId:  int64
ProcessCommandLine:  object
ProcessIntegrityLevel:  object
ProcessTokenElevation:  object
ProcessCreationTime:  object
AccountDomain:  object
AccountName:  object
AccountSid:  object
AccountUpn:  object
AccountObjectId:  object
LogonId:  int64
InitiatingProcessAccountDomain:  object
InitiatingProcessAccountName:  object
InitiatingProcessAccountSid:  object
InitiatingProcessAccountUpn:  object
InitiatingProcessAccountObjectId:  object
InitiatingProcessLogonId:  int64
InitiatingProcessIntegrityLevel:  object
InitiatingProcessTokenElevation:  object
InitiatingProcessSHA1:  object
InitiatingProcessSHA256:  object
InitiatingProcessMD5:  object
InitiatingProcessFileName:  object
InitiatingProcessFileSize:  int64
InitiatingProcessVersionInfoCompanyName:  object
InitiatingProcessVersionInfoProductName:  object
InitiatingProcessVersionInfoProductVersion:  object
InitiatingProcessVersionInfoInternalFileName:  object
InitiatingProcessVersionInfoOriginalFileName:  object
InitiatingProcessVersionInfoFileDescription:  object
InitiatingProcessId:  int64
InitiatingProcessCommandLine:  object
InitiatingProcessCreationTime:  object
InitiatingProcessFolderPath:  object
InitiatingProcessParentId:  int64
InitiatingProcessParentFileName:  object
InitiatingProcessParentCreationTime:  object
InitiatingProcessSignerType:  object
InitiatingProcessSignatureStatus:  object
ReportId:  int64 
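A workaround sketch for the first issue, converting the timestamp columns listed above to TZ-aware datetimes (the column-name remapping needed for the second issue is not shown):

```python
import pandas as pd


def convert_mde_timestamps(data: pd.DataFrame) -> pd.DataFrame:
    """Convert MDE string timestamp columns to TZ-aware datetimes."""
    time_cols = [
        "Timestamp",
        "ProcessCreationTime",
        "InitiatingProcessCreationTime",
        "InitiatingProcessParentCreationTime",
    ]
    data = data.copy()
    for col in time_cols:
        if col in data.columns:
            data[col] = pd.to_datetime(data[col], utc=True)
    return data
```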

[FeatureRequest] Cross query multiple instances

Is your feature request related to a problem? Please describe.
For most companies, there are likely to be multiple log instances due to data residency or other architectural constraints.
Analysts may want to execute queries and playbooks across those instances in one pass instead of one by one: to get a global view, to speed up analysis, and so on.

Describe the solution you'd like

  1. Each data provider should support configuration of multiple instances - currently only the Sentinel provider does.
  2. Each data provider should allow async queries so that multiple queries can be executed in the background - this is believed to be missing for Kqlmagic.
  3. Add support in msticpy for executing one query against a set of instances. This should likely add an instance column to the results to allow differentiation.

Describe alternatives you've considered

The current alternative is to execute the playbook once per instance, or to loop over instances and merge the data manually.
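Item 3 could take a shape like the following sketch, where the provider objects and their exec_query method are assumptions for illustration:

```python
import pandas as pd


def query_all_instances(providers: dict, query: str) -> pd.DataFrame:
    """Run one query against several provider instances and merge the
    results, tagging each row with its source instance."""
    frames = []
    for name, provider in providers.items():
        result = provider.exec_query(query)    # assumed provider method
        result = result.assign(Instance=name)  # differentiation column
        frames.append(result)
    return pd.concat(frames, ignore_index=True)
```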

MSTICPy config should support Azure cloud config

Describe the bug
Currently MSTICPy is mostly ignorant of the Azure cloud environment. A number of components require authentication to cloud-specific AAD endpoints.

To Reproduce
Steps to reproduce the behavior:

  1. Load an MSTICPy notebook
  2. Configure workspace/tenant for an Azure gov cloud workspace
  3. Try to authenticate
  4. It will default to the public cloud and the authentication will fail

Expected behavior
We need a single point at which to configure the Azure cloud setting. All relevant components should use this config item when authenticating and performing other operations.
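The AAD authority endpoint differs per cloud, so a single cloud setting could drive a lookup along these lines (the mapping keys and function are illustrative; only the authority URLs are well-known values):

```python
# Well-known AAD authority hosts per Azure cloud.
# The key names here are illustrative, not a confirmed config schema.
AAD_AUTHORITIES = {
    "global": "https://login.microsoftonline.com",
    "usgov": "https://login.microsoftonline.us",
    "cn": "https://login.chinacloudapi.cn",
}


def get_authority(cloud: str = "global") -> str:
    """Return the AAD authority for the configured cloud."""
    try:
        return AAD_AUTHORITIES[cloud]
    except KeyError as err:
        raise ValueError(
            f"Unknown cloud '{cloud}'. Valid values: {list(AAD_AUTHORITIES)}"
        ) from err
```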


check and update MaxMindDB appears to be failing silently

For the function _check_and_update_db, I am troubleshooting the following:

     iplocation = GeoLiteLookup(api_key=value, db_folder=Path('./max_mind_db/'), force_update=True)

  • API key is valid
  • GeoLiteLookup is able to pull the current path and run IP lookups against the stored DB
  • Output is "Latest local Maxmind City Database present is older than 30 days. Attempting to download new database to max_mind_db"
  • When VScode electron app reaches out to maxmind, I whitelist it
  • No additional error codes or failures are presented and I would have expected this elif block of code to fire since force_update is set to True
    elif force_update and auto_update:
        print(
            "force_update is set to True.",
            f"Attempting to download new database to {db_folder}",
        )
        try:
            db_is_current = self._download_and_extract_archive(url, db_folder)
        except MsticpyConfigException as no_key_err:
            warnings.warn(" ".join(no_key_err.args))
  • Result - I don't see the message "force_update is set to True. Attempting to download new database to {db_folder}" and the MaxMind DB is not updated and keeps previous timestamp

Any help would be greatly appreciated! Been fun test driving this python tooling, thanks infosec Jupyterthon :)
