
CoInception

Learning Robust and Consistent Time Series Representations: A Dilated Inception-Based Approach.

Requirements

The attached requirements.txt file lists the required packages:

  • Python 3.9.16
  • torch==2.0.0
  • scikit_learn==0.24.2
  • pywavelets==1.4.1
  • pandas
  • scipy
  • statsmodels
  • matplotlib
  • Bottleneck

The dependencies can be installed with this single-line command:

pip install -r requirements.txt
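After installation, you may want to confirm that the pinned versions above were actually installed. A minimal sketch using the standard library is shown below; the distribution names in PINNED are an assumption mirroring the list above (e.g. `scikit-learn` for `scikit_learn`, `PyWavelets` for `pywavelets`) and should be adjusted to match the actual requirements.txt.

```python
# Sketch: check that pinned packages are installed with the expected versions.
# PINNED mirrors the README's list; names/versions here are assumptions.
from importlib import metadata

PINNED = {
    "torch": "2.0.0",
    "scikit-learn": "0.24.2",
    "PyWavelets": "1.4.1",
}

def check_pins(pins):
    """Return {package: (expected, found_or_None)} for every mismatch."""
    mismatches = {}
    for name, expected in pins.items():
        try:
            found = metadata.version(name)
        except metadata.PackageNotFoundError:
            found = None  # package is not installed at all
        if found != expected:
            mismatches[name] = (expected, found)
    return mismatches

if __name__ == "__main__":
    for pkg, (want, got) in check_pins(PINNED).items():
        print(f"{pkg}: expected {want}, found {got}")
```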

Datasets

All datasets are publicly available online. Download them into the data/ folder as follows:

cd ..
mkdir data/
cd data
  • 128 UCR datasets: After downloading and unzipping the archive, rename the folder to UCR.
  • 30 UEA datasets: After downloading and unzipping the archive, rename the folder to UEA.
  • 3 ETT datasets: Download the three files ETTh1.csv, ETTh2.csv, and ETTm1.csv.
  • Electricity dataset: After downloading and unzipping the archive, run the preprocessing script CoInception/preprocessing/preprocess_electricity.py and place the output at ../data/electricity.csv.
  • Yahoo dataset: First register to use the dataset, then download and unzip the archive, run the preprocessing script CoInception/preprocessing/preprocess_yahoo.py, and place the output at ../data/yahoo.pkl.
  • KPI dataset: After downloading and unzipping the archive, run the preprocessing script CoInception/preprocessing/preprocess_kpi.py and place the output at ../data/kpi.pkl.
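Once everything is downloaded and preprocessed, a quick sanity check of the data/ layout can save a failed run later. The sketch below lists the expected entries as described in this README; the EXPECTED names are an assumption based on the instructions above, not something read from the codebase.

```python
# Sketch: verify the data/ folder matches the layout described in the README.
# EXPECTED is assumed from the download instructions above.
from pathlib import Path

EXPECTED = [
    "UCR",              # folder holding the 128 UCR datasets
    "UEA",              # folder holding the 30 UEA datasets
    "ETTh1.csv",
    "ETTh2.csv",
    "ETTm1.csv",
    "electricity.csv",  # produced by preprocess_electricity.py
    "yahoo.pkl",        # produced by preprocess_yahoo.py
    "kpi.pkl",          # produced by preprocess_kpi.py
]

def missing_entries(data_dir):
    """Return the expected dataset files/folders absent from data_dir."""
    root = Path(data_dir)
    return [name for name in EXPECTED if not (root / name).exists()]

if __name__ == "__main__":
    for name in missing_entries("data"):
        print(f"missing: data/{name}")
```

Only the datasets you actually plan to run are needed, so a few "missing" lines are harmless.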

Training and Evaluating

Run this one-line command for both training and evaluation:

python train.py <dataset_name> <run_name> --loader <loader> --batch-size <batch_size> --repr-dims <repr_dims> --gpu <gpu> --eval --save_ckpt

Example:

python -u train.py Chinatown UCR --loader UCR --batch-size 8 --repr-dims 320 --max-threads 8 --seed 42 --eval

The detailed descriptions of the arguments are as follows:

Parameter name   Description
dataset_name     (required) The dataset name
run_name         (required) The folder name used to save the model, outputs, and evaluation metrics; can be any word
loader           The data loader used to load the experimental data: UCR, UEA, forecast_csv, forecast_csv_univar, anomaly, or anomaly_coldstart
batch_size       The batch size (default: 8)
repr_dims        The representation dimensions (default: 320)
gpu              The GPU number used for training and inference (default: 0)
eval             Whether to perform evaluation after training
save_ckpt        Whether to save a checkpoint (default: False)

(For descriptions of more arguments, run python train.py -h.)
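When sweeping several datasets, it can help to build the train.py command line programmatically instead of editing it by hand. A minimal sketch is below; the flag names mirror the table above, and the dataset names in the example loop are arbitrary UCR examples.

```python
# Sketch: assemble the train.py invocation from the arguments in the table
# above. Flag names follow this README; verify against `python train.py -h`.

def build_command(dataset, run_name, loader, batch_size=8, repr_dims=320,
                  gpu=0, evaluate=True, save_ckpt=False):
    """Return the argv list for one train.py run."""
    cmd = ["python", "train.py", dataset, run_name,
           "--loader", loader,
           "--batch-size", str(batch_size),
           "--repr-dims", str(repr_dims),
           "--gpu", str(gpu)]
    if evaluate:
        cmd.append("--eval")
    if save_ckpt:
        cmd.append("--save_ckpt")
    return cmd

if __name__ == "__main__":
    # Print the commands for a small sweep; pass the lists to subprocess.run
    # (or write them into a shell script) to actually launch the runs.
    for ds in ["Chinatown", "Coffee"]:
        print(" ".join(build_command(ds, "UCR", "UCR")))
```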

Scripts: The scripts for reproducing the results are provided in the scripts/ folder.

Acknowledgement

This codebase partially inherits from the repositories below; we would like to thank their authors:

  • TS2Vec: TS2Vec: Towards Universal Representation of Time Series (AAAI-22)
  • TNC: Unsupervised Representation Learning for TimeSeries with Temporal Neighborhood Coding (ICLR 2021)
  • T-Loss: Unsupervised Scalable Representation Learning for Multivariate Time Series (NeurIPS 2019)

Contributors

  • anhduy0911
