
mtscomp's Introduction

Multichannel time series lossless compression in Python


This library implements a simple lossless compression scheme adapted to time-dependent, high-frequency, high-dimensional signals. It is being developed within the International Brain Laboratory with the aim of being the compression library used for all large-scale electrophysiological recordings based on Neuropixels. The signals are typically recorded at 30 kHz with a 10-bit depth, and contain several hundred channels.

Compression scheme

The requested features for the compression scheme were as follows:

  • Lossless compression only (decompressed data should be byte-for-byte identical to the original).
  • Written in pure Python (no C extensions) with minimal dependencies so as to simplify distribution.
  • Scalable to high sample rates, large numbers of channels, and long recording times.
  • Faster than real time (i.e. it should take less time to compress than to record).
  • Multithreaded so as to leverage multiple CPU cores.
  • On-the-fly decompression and random read accesses.
  • As simple as possible.

The compression scheme is the following:

  • The data is split into chunks along the time axis.
  • The time differences are computed for all channels.
  • These time differences are compressed with zlib.
  • The compressed chunks (and initial values of each chunk) are appended in a binary file.
  • Metadata about the compression, including the chunk offsets within the compressed binary file, is saved in a secondary JSON file.

Saving the offsets allows for on-the-fly decompression and random data access: one simply has to determine which chunks should be loaded, and load them directly from the compressed binary file. The compressed chunks are decompressed with zlib, and the original data is recovered with a cumulative sum (the inverse of the time difference operation).
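
A minimal sketch of this scheme on a single in-memory chunk may help (the helper names are assumptions, not the actual mtscomp internals; integer overflow handling and the chunked binary container are omitted):

import zlib
import numpy as np

def compress_chunk(chunk):
    """Delta-encode a (n_samples, n_channels) array along time, then zlib-compress the diffs."""
    first = chunk[0].copy()                      # initial values, needed to invert the diff
    diffs = np.diff(chunk, axis=0)               # time differences for all channels
    return first, zlib.compress(diffs.tobytes())

def decompress_chunk(first, compressed, n_channels, dtype=np.int16):
    """Invert the scheme: zlib-decompress, then cumulative sum starting from the initial values."""
    diffs = np.frombuffer(zlib.decompress(compressed), dtype=dtype).reshape(-1, n_channels)
    rest = first + np.cumsum(diffs, axis=0)
    return np.vstack([first[np.newaxis, :], rest]).astype(dtype)

# Round trip on random int16 data: the decompressed array is exactly equal to the original.
x = np.random.randint(-100, 100, size=(30000, 4)).astype(np.int16)
first, comp = compress_chunk(x)
assert np.array_equal(decompress_chunk(first, comp, n_channels=4), x)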

With large-scale neurophysiological recordings, we achieved a compression ratio of 3x.

As a consistency check, the compressed file is by default automatically and transparently decompressed and compared to the original file on a byte-by-byte basis.
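
For reference, such a byte-by-byte comparison between an original file and its re-decompressed copy could be sketched as follows (hypothetical helper, not mtscomp's own check):

def files_identical(path_a, path_b, block_size=1 << 20):
    """Compare two files byte by byte, reading them in 1 MB blocks."""
    with open(path_a, 'rb') as fa, open(path_b, 'rb') as fb:
        while True:
            a, b = fa.read(block_size), fb.read(block_size)
            if a != b:
                return False
            if not a:  # both files exhausted at the same position
                return True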

Dependencies

  • Python 3.7+
  • NumPy
  • tqdm [for the progress bar]

For development only:

  • flake8
  • pytest
  • pytest-cov
  • coverage

Installation

pip install mtscomp

Command-line interface

Example:

# Compression: specify the number of channels, sample rate, dtype, optionally save the parameters
# as default in ~/.mtscomp with --set-default
mtscomp data.bin -n 385 -s 30000 -d int16 [--set-default]
# Decompression
mtsdecomp data.cbin -o data.decomp.bin

Usage:

usage: mtscomp [-h] [-d DTYPE] [-s SAMPLE_RATE] [-n N_CHANNELS] [-p CPUS]
               [-c CHUNK] [-nc] [-v] [--set-default]
               path [out] [outmeta]

Compress a raw binary file.

positional arguments:
  path                  input path of a raw binary file
  out                   output path of the compressed binary file (.cbin)
  outmeta               output path of the compression metadata JSON file
                        (.ch)

optional arguments:
  -h, --help            show this help message and exit
  -d DTYPE, --dtype DTYPE
                        data type
  -s SAMPLE_RATE, --sample-rate SAMPLE_RATE
                        sample rate
  -n N_CHANNELS, --n-channels N_CHANNELS
                        number of channels
  -p CPUS, --cpus CPUS  number of CPUs to use
  -c CHUNK, --chunk CHUNK
                        chunk duration
  -nc, --no-check       no check
  -v, --debug           verbose
  --set-default         set the specified parameters as the default



usage: mtsdecomp [-h] [-o [OUT]] [--overwrite] [-nc] [-v] cdata [cmeta]

Decompress a raw binary file.

positional arguments:
  cdata                 path to the input compressed binary file (.cbin)
  cmeta                 path to the input compression metadata JSON file (.ch)

optional arguments:
  -h, --help            show this help message and exit
  -o [OUT], --out [OUT]
                        path to the output decompressed file (.bin)
  --overwrite, -f       overwrite existing output
  -nc, --no-check       no check
  -v, --debug           verbose

High-level API

Example:

import numpy as np
from mtscomp.mtscomp import compress, decompress

# Compress a .bin file into a pair .cbin (compressed binary file) and .ch (JSON file).
compress('data.bin', 'data.cbin', 'data.ch', sample_rate=20000., n_channels=256, dtype=np.int16)
# Decompress a pair (.cbin, .ch) and return an object that can be sliced like a NumPy array.
arr = decompress('data.cbin', 'data.ch')
X = arr[start:end, :]  # decompress the data on the fly directly from the file on disk
arr.close()  # Close the file when done

Low-level API

Example:

import numpy as np
from mtscomp import Writer, Reader

# Define a writer to compress a flat raw binary file.
w = Writer(chunk_duration=1.)
# Open the file to compress.
w.open('data.bin', sample_rate=20000., n_channels=256, dtype=np.int16)
# Compress it into a compressed binary file, and a JSON header file.
w.write('data.cbin', 'data.ch')
w.close()

# Define a reader to decompress a compressed array.
r = Reader()
# Open the compressed dataset.
r.open('data.cbin', 'data.ch')
# The reader can be sliced as a NumPy array: decompression happens on the fly. Only chunks
# that need to be loaded are loaded and decompressed.
# Here, we load everything in memory.
array = r[:]
# Or we can decompress into a new raw binary file on disk.
r.tofile('data_dec.bin')
r.close()

Implementation details

  • Multithreading: since Python's zlib releases the GIL, the library uses multiple threads when compressing a file. The chunks are grouped into batches containing as many chunks as there are threads. After each batch, the chunks are written to the binary file in the right order (the threads of a batch do not necessarily finish in order), as shown in the sketch below.
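
A minimal sketch of this batching strategy (assumed helper names, not the actual mtscomp code), relying on the fact that zlib.compress() releases the GIL:

import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_batches(chunks, n_threads=4):
    """Compress byte chunks in batches of n_threads, yielding results in the original order."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for i in range(0, len(chunks), n_threads):
            batch = chunks[i:i + n_threads]
            # map() returns results in submission order, so the compressed chunks can be
            # written to the output file sequentially even if the threads finish out of order.
            yield from pool.map(zlib.compress, batch)

# Usage: compressed = list(compress_batches([c.tobytes() for c in chunk_arrays], n_threads=8))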

Performance

Performance on a Neuropixels dataset (30 kHz, 385 channels) with an Intel 10-core i9-9820X CPU @ 3.3 GHz:

  • Compression ratio: -63% (compressed files are nearly 3x smaller)
  • Compression time (20 threads): 88 MB/s, 4x faster than real time
  • Decompression time (single-threaded at the moment): 22 MB/s, 3x faster than real time

mtscomp's People

Contributors

k1o0, oliche, rossant


mtscomp's Issues

compression ratio decreases with file size

Hi,

I noticed in my data that the compression ratio decreases a lot with increasing file size. While I get ~2.5x compression with a ~20 GB file, I only achieve ~1.4x with a 350 GB file. Is that to be expected, or can I tweak the parameters to achieve a better compression ratio? Thanks for your help.
PS: For context, I am using Neuropixels 1.0 and I spike sort a whole session by concatenating all its data (using Kilosort 3). I then want to compress the drift-corrected file that Kilosort generates.

Any help is appreciated.

Cheers

Laurenz

Once in production, release versioned test datasets

  • Add data/v1/data.bin, .cbin, and .ch files to the repository after the first production release (and similarly at each release and format version change).
  • Add automatic tests that check that the library can properly write and read these files.

This will ensure that the different format versions are properly taken into account.
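
A possible pytest sketch for such a check (the file layout, channel count, and dtype are assumptions for illustration, not the actual repository structure):

from pathlib import Path
import numpy as np
from mtscomp.mtscomp import decompress

def test_read_v1_dataset():
    data_dir = Path(__file__).parent / 'data' / 'v1'
    # Assumed reference dataset: 385 int16 channels stored as a flat raw binary file.
    expected = np.fromfile(str(data_dir / 'data.bin'), dtype=np.int16).reshape(-1, 385)
    arr = decompress(str(data_dir / 'data.cbin'), str(data_dir / 'data.ch'))
    try:
        np.testing.assert_array_equal(arr[:], expected)
    finally:
        arr.close()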

Compute hashes of compressed file and store in meta-data

There have been some local copy issues lately, and it became difficult to know whether the problem came from a) a corruption of the uncompressed file, b) a compression issue, or c) a corruption of the compressed file.

SpikeGLX stores the SHA1 hash of the binary file, so we were able to show that the issues came from a corruption of the uncompressed file, but it would be useful to add a hash to the metadata file so that the integrity of the compressed file can be checked if needed.

Regarding implementation, one option could be to use hashlib.sha1().update(), which allows computing the hash online (as the chunks are written) without incurring any extra I/O.
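
A possible sketch of this online hashing, with a hypothetical helper (not part of mtscomp) that hashes the compressed chunks as they are written; the resulting digest could then be stored in the .ch metadata:

import hashlib

def write_with_sha1(path, compressed_chunks):
    """Write compressed chunks to disk while updating a SHA-1 digest on the fly."""
    sha1 = hashlib.sha1()
    with open(path, 'wb') as f:
        for chunk in compressed_chunks:
            f.write(chunk)
            sha1.update(chunk)   # no extra pass over the file is needed
    return sha1.hexdigest()      # e.g. to be stored in the metadata JSON file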

Possible extension to handle directories?

Hi!
We developed a thin wrapper around mtscomp in order to compress files from directories. I wonder if this is something that could be added to mtscomp directly? Maybe in the form of additional executables, so that the semantics of the current executables stay the same.
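
For illustration, such a wrapper could look like the following hypothetical sketch built on the public compress() function (not part of mtscomp itself):

from pathlib import Path
import numpy as np
from mtscomp.mtscomp import compress

def compress_directory(directory, sample_rate=30000., n_channels=385, dtype=np.int16):
    """Compress every .bin file found in a directory into a .cbin/.ch pair."""
    for path in sorted(Path(directory).glob('*.bin')):
        compress(str(path), str(path.with_suffix('.cbin')), str(path.with_suffix('.ch')),
                 sample_rate=sample_rate, n_channels=n_channels, dtype=dtype)

# Example (parameters must match the recordings):
# compress_directory('/data/session1', sample_rate=30000., n_channels=385, dtype=np.int16)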

Can not compress catgt files (tcat)

Hey,
I've been using mtscomp to compress ~200 Neuropixels data files (AP .bin) with great efficiency. For some reason, I cannot compress the next level of analysis (after processing with CatGT). These are tcat files, which have seemingly good .meta files. The error says that something is wrong with the meta file (example meta file attached).
Do you have any insight into this?
Thanks!
Yonatan
