
Generalised Interpretable Shapelets for Irregular Time Series
[arXiv]

A generalised approach to the shapelet method used in time series classification, in which a time series is described by its similarity to each of a collection of 'shapelets'. Given a collection of well-chosen shapelets, you can then look at those similarities and conclude, for example: "This time series is probably of class X, because it has a very high similarity to shapelet Y."

We extend the method by:

  • Generalising it to irregularly sampled, partially observed multivariate time series.
  • Differentiably optimising the shapelet lengths. (Previously a discrete parameter.)
  • Imposing interpretability via regularisation.
  • Introducing generalised discrepancy functions for domain adaptation.

This gives a way to classify time series, whilst being able to answer questions about why that classification was chosen, and even being able to give new insight into the data. (For example, we demonstrate the discovery of a kind of spectral gap in an audio classification problem.)

Despite the similar names, shapelets have nothing to do with wavelets.
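
To make the basic idea concrete, the sketch below computes a classical shapelet transform feature for a regularly sampled series in PyTorch: the minimum squared L2 discrepancy between a shapelet and every sliding window of the series. This is illustrative only (the names are ours, and it is not the torchshapelets API), but it shows why the feature is differentiable in the shapelet.

    import torch

    def shapelet_similarity(series, shapelet):
        # series: (length, channels); shapelet: (shapelet_length, channels).
        # All sliding windows of the series, via unfold:
        windows = series.unfold(0, shapelet.size(0), 1)  # (num_windows, channels, shapelet_length)
        windows = windows.transpose(1, 2)                # (num_windows, shapelet_length, channels)
        discrepancies = ((windows - shapelet) ** 2).sum(dim=(1, 2))
        # A small discrepancy means a close match; the minimum over all windows
        # is the shapelet transform feature, and gradients flow to the shapelet.
        return discrepancies.min()

    # One feature per shapelet; a classifier then acts on these features.
    series = torch.randn(100, 3)  # 100 time steps, 3 channels
    shapelets = [torch.randn(20, 3, requires_grad=True) for _ in range(4)]
    features = torch.stack([shapelet_similarity(series, s) for s in shapelets])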


Library

We provide a PyTorch-compatible library, torchshapelets, for computing the generalised shapelet transform; it is included in this repository.
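
For a flavour of what 'generalised' means here, the sketch below interpolates an irregularly sampled series at arbitrary query times, so that a shapelet's window start and length become continuous quantities and the discrepancy remains differentiable in the shapelet, the length, and the data. This is an illustrative sketch under assumed names (linear interpolation, an L2 discrepancy on a fixed grid), not the torchshapelets API.

    import torch

    def linear_interpolate(times, values, query_times):
        # times: (length,), increasing; values: (length, channels); query_times: (num_queries,).
        # Index of the first observation time greater than each query time
        # (broadcasting comparison, so no torch.searchsorted is needed).
        indices = (times.unsqueeze(0) <= query_times.unsqueeze(1)).sum(dim=1)
        indices = indices.clamp(1, times.size(0) - 1)
        t0, t1 = times[indices - 1], times[indices]
        v0, v1 = values[indices - 1], values[indices]
        weight = ((query_times - t0) / (t1 - t0)).unsqueeze(-1)
        return v0 + weight * (v1 - v0)

    def l2_discrepancy(times, values, shapelet, start, length):
        # shapelet: (num_points, channels), its values on an even grid over the
        # window [start, start + length]; `length` may be a learnable parameter.
        grid = torch.linspace(0., 1., shapelet.size(0))
        series_vals = linear_interpolate(times, values, start + grid * length)
        return ((series_vals - shapelet) ** 2).mean()

    times = torch.sort(torch.rand(50)).values       # irregular observation times
    values = torch.randn(50, 3)
    shapelet = torch.randn(16, 3, requires_grad=True)
    length = torch.tensor(0.3, requires_grad=True)  # differentiable shapelet length
    l2_discrepancy(times, values, shapelet, 0.1, length).backward()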

Results

[Figure] Accuracies on ten different datasets.

[Figure] The first 14 MFC coefficients for an audio recording from the Speech Commands dataset, along with the learnt shapelet, and the difference between them.

[Figure] Interpreting why a class was chosen, based on similarity to a shapelet, on the PenDigits dataset.

[Figure] Using a pseudometric uncovers a spectral gap in an audio classification problem.

Citation

@article{kidger2020shapelets,
    author={Kidger, Patrick and Morrill, James and Lyons, Terry},
    title={{Generalised Interpretable Shapelets for Irregular Time Series}},
    year={2020},
    journal={arXiv:2005.13948}
}

Reproducing the experiments

Requirements

  • python==3.7.4
  • numpy==1.18.3
  • scikit-learn==0.22.2
  • six==1.15.0
  • scipy==1.4.1
  • sktime==0.3.1
  • torch==1.4.0
  • torchaudio==0.4.0
  • tqdm==4.46.0
  • signatory==1.2.0.1.4.0 [This must be installed after PyTorch]

The following are also needed if you wish to run the interpretability notebooks:

  • jupyter==1.0.0
  • matplotlib==3.2.1
  • seaborn==0.10.1
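
For example (assuming pip in a Python 3.7 environment, and remembering that signatory must come after PyTorch), the dependencies can be installed via:

  • pip install numpy==1.18.3 scikit-learn==0.22.2 six==1.15.0 scipy==1.4.1 sktime==0.3.1 tqdm==4.46.0
  • pip install torch==1.4.0 torchaudio==0.4.0
  • pip install signatory==1.2.0.1.4.0
  • pip install jupyter==1.0.0 matplotlib==3.2.1 seaborn==0.10.1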

Finally, the torchshapelets package (in this repository) must be installed via: python torchshapelets/setup.py develop

Downloading the data

  • python get_data/uea.py
  • python get_data/speech_commands.py

Running the experiments

First make a folder at experiments/results, which is where the results of the experiments will be stored. Each model is saved after training for later analysis, so make this a symlink if you need to save on space. All experiments can be run via:

  • python experiments/uea.py <argument>
  • python experiments/speech_commands.py <argument2>

where <argument> is one of:

  • all: run every experiment. Not recommended, will take forever.
  • hyperparameter_search_old: do hyperparameter searches for the performance of the classical shapelet transform on the UEA datasets.
  • hyperparameter_search_l2: do hyperparameter searches for the performance of the generalised shapelet transform on the UEA datasets with missing data.
  • comparison_test: actually use the hyperparameter searches (hardcoded to the results we found) for the UEA comparison between classical and generalised shapelets.
  • missing_and_length_test: actually use the hyperparameter searches (hardcoded to the results we found) for the test about learning lengths and missing data.
  • pendigits_interpretability: run models for just PenDigits, and then save the resulting shapelets.

and <argument2> is one of:

  • all: Run every experiment. Not recommended, will take forever.
  • old: Run just the classical shapelet transform.
  • new: Run just the generalised shapelet transform.
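
For example, a full run of the UEA comparison, from data download to viewing the results (see 'Model evaluation' below), looks like:

  • python get_data/uea.py
  • python experiments/uea.py comparison_test
  • python experiments/parse_results.py uea_comparison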

Note that the code uses a lot of memory, and takes a long time to run. It's very much research code, not production code. See LIMITATIONS.md for some discussion of why.

Model evaluation

Once an experiment has been completed, model performance can be viewed using the experiments/parse_results.py script. Simply run the file with an argument corresponding to the name of a folder in experiments/results. For example, if we have run the UEA comparison test, then the results can be viewed by running:

  • python experiments/parse_results.py uea_comparison

Also see the notebooks in the notebooks directory for an investigation into the interpretability of these models.
