SCARR

SCARR is a Side-Channel Analysis (SCA) framework written in Python that is optimized for performance over compressed data sets to minimize storage needs. At this early stage, SCARR should be considered experimental, i.e., API changes may happen at any time and without prior notice. In addition, only a limited set of analysis algorithms and pre-processing options is currently implemented, but this will change in the future as we hope to continue SCARR as an active open-source project.

SCARR is mainly intended for educational and research purposes. If you are an individual and find SCARR useful, please contribute, give us a shout-out, or consider buying us a coffee (this project currently runs on coffee only). If you are an organization and benefit from this development, please consider making an unrestricted gift to the Hardware Security Research Lab at Oregon State University (led by Vincent Immler) to promote SCARR's continued development.

SCARR Features

SCARR is designed to support the following:

  • Fast out-of-core computations (processed data can be larger than available memory)
  • Processed data can be int or float (raw oscilloscope data or digitally pre-processed)
  • Multiple tiles from EM-measurements are stored in the same data set to identify Regions-of-Interest (ROIs)
  • Advanced indexing for fast Trace-of-Interest (TOI) and Point-of-Interest (POI) selections
  • Analysis algorithms currently include: SNR, TVLA, CPA, MIA (more to come, check here)

SCARR also aims at maximizing I/O efficiency, including the asynchronous prefetch of (compressed) data sets.
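The out-of-core idea above can be illustrated with a streaming (Welford-style) mean/variance accumulation, where only one batch of traces is in memory at a time. This is a sketch of the concept, not SCARR's implementation; the batch generator, trace length, and trace count are made up for the example.

```python
# Illustration of out-of-core batch processing (not SCARR's implementation):
# traces are consumed batch by batch, so only one batch resides in memory while
# running statistics are accumulated over an arbitrarily large data set.
import numpy as np

N_SAMPLES = 100  # points per trace (illustrative)

def batches(n_traces, batch_size=5000):
    # Stand-in generator; a real workload would read each batch from disk.
    rng = np.random.default_rng(0)
    for start in range(0, n_traces, batch_size):
        yield rng.normal(size=(min(batch_size, n_traces - start), N_SAMPLES))

count = 0
mean = np.zeros(N_SAMPLES)
m2 = np.zeros(N_SAMPLES)  # running sum of squared deviations (Welford)
for batch in batches(20_000):
    for trace in batch:
        count += 1
        delta = trace - mean
        mean += delta / count
        m2 += delta * (trace - mean)
variance = m2 / (count - 1)  # per-sample variance across all traces
```

Per-sample means and variances like these are building blocks of SNR and related statistics; SCARR additionally overlaps I/O (asynchronous prefetch) with compute, which this sketch omits.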

Install

SCARR can be installed with pip3 from GitHub directly:

pip3 install "git+https://github.com/decryptofy/scarr.git"

Alternatively, you can clone the repository to also get the most recent versions of the SCARR Jupyter notebooks:

git clone git@github.com:decryptofy/scarr.git
cd scarr
git submodule update --init jupyter

Afterwards, you can install SCARR by typing:

python3 -m pip install .

Please note: for now, the reference OS for SCARR is Ubuntu 22.04 LTS with its default Python 3.10. See here why we will not support older Python versions.

To make use of Jupyter notebooks, you may want to use VS Code and its Jupyter plugin, or PyCharm, but any other option to run Jupyter notebooks should work, too.

Usage Warning

Some computations in SCARR can push your hardware to its limits and beyond. Caution is advised when running sustained compute loads, as much consumer-grade hardware is not built for this. For laptops especially, please follow the best practices in USAGE.md. Additionally, SCARR does not perform any memory checking. If a computation exceeds the available memory, then, depending on OS-level settings, SCARR or other applications may be terminated by the OS, resulting in potential data loss. During heavy computations, it is time for coffee, as you cannot use your computer for anything but SCARR. See also: DISCLAIMER.md

Getting Started with SCARR

After installing SCARR and reviewing the usage warning, please proceed as follows:

  • Select a Jupyter notebook from the jupyter subdirectory or its corresponding repository.
  • Download the corresponding example data set(s) from Box.com: click here
  • Run the Jupyter notebook to use SCARR

Important note for downloading from Box.com: we are currently working on making the download process more convenient and reliable. Until then, please avoid downloading entire directories that contain trace files, as Box.com will attempt to create a .zip archive before the download (and time out while doing so). Instead, select and download data sets individually.

SCARR's File Format for Side-Channel Analysis Data

Zarr is a great file format, and its DirectoryStore serves as SCARR's native format. Each data set is represented by a directory that contains the following basic structure:

  • traces:
    • directory.zarr/X/Y/traces
  • metadata:
    • directory.zarr/X/Y/ciphertext
    • directory.zarr/X/Y/plaintext
    • (optional) directory.zarr/X/Y/key

Traces can be stored compressed or uncompressed; a chunking of (5000, 1000) is recommended. All metadata is left uncompressed and chunked as (5000, 16) for AES-128. X and Y are the logical coordinates of EM side-channel measurements; power measurements use the same structure with /0/0/ in place of /X/Y/.

We are actively supporting the "Zarr-Python Benchmarking & Performance" group to further speed-up Zarr.

Working with Other File Formats

SCARR only works with its native format, and we have no plans to support other file formats. If you have previously recorded data in other formats, you will need to convert these data sets to Zarr. We collect example scripts for this conversion here, e.g., to convert separate .npy files into a combined .zarr. These scripts are neither actively maintained nor optimized.

Platform Compatibility

SCARR is developed with High-Performance Computing (HPC) considerations in mind. Optimal performance depends on many aspects of its configuration and the underlying platform. The default batch size (the number of traces processed in parallel at a given point in time) is 5000; depending on the platform and the chosen analysis, other values between 1000 and 10000 may give better results. Please also take into account the following:

  • We recommend CPUs with 8 or more physical (performance) cores, preferably with AVX512
  • SCARR is optimized for CPUs with SMT (Hyper-Threading); otherwise, mp.pool parameters are not optimal
  • A combination of performance and efficiency cores is not specifically considered in mp.pool either
  • Fast, low-latency memory should be used (e.g., DDR5-6400 and CL < 32)
  • SCARR should not be used on NUMA platforms as this degrades performance in unexpected ways
  • SCARR is designed to run on Linux/Unix; Windows may work but is not supported
  • ulimits need to be adjusted when processing many tiles/byte-positions at the same time
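To illustrate the SMT point above: Python reports logical cores, so a pool sized from os.cpu_count() implicitly assumes SMT doubles the core count. The halving heuristic below is purely illustrative and is not SCARR's actual mp.pool sizing.

```python
# Illustrative pool sizing on an SMT machine (NOT SCARR's actual logic):
# os.cpu_count() reports logical cores; with SMT/Hyper-Threading enabled this is
# roughly twice the number of physical cores that do the real compute work.
import multiprocessing as mp
import os

logical_cores = os.cpu_count() or 1
physical_cores = max(1, logical_cores // 2)  # rough guess; only valid with SMT on

def heavy_kernel(batch_id: int) -> int:
    # Stand-in for a compute-bound side-channel analysis kernel on one batch.
    return sum(i * i for i in range(100_000))

if __name__ == "__main__":
    # One worker per (estimated) physical core avoids oversubscribing the CPU.
    with mp.Pool(processes=physical_cores) as pool:
        results = pool.map(heavy_kernel, range(2 * physical_cores))
```

On CPUs mixing performance and efficiency cores, this halving guess breaks down, which is one reason such platforms are not specifically considered in SCARR's mp.pool settings.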

Contributing (inbound=outbound)

We want to keep this a no-nonsense project and promote contributions, while minimizing risks to the well-being of the project. If you would like to contribute bug fixes, improvements, and new features back to SCARR, please take a look at our Contributor Guide to see how you can participate in this open source project.

Consistent with Section D.6. of the GitHub Terms of Service as of November 16, 2020, and the Mozilla Public License, v. 2.0., the project maintainer for this project accepts contributions using the inbound=outbound model. When you submit a pull request to this repository (inbound), you are agreeing to license your contribution under the same terms as specified under License (outbound).

Note: this is modeled after the terms for contributing to Ghidra. Our reasoning for this licensing is explained here.

License

This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/. This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0.

Authors

SCARR was initiated and designed by Vincent Immler out of a necessity to support his teaching and research at Oregon State University. Under his guidance, two undergraduate students at Oregon State University, Jonah Bosland and Stefan Ene, developed the majority of the initial implementation during the summer of 2023. Peter Baumgartner helped us with the testing and analysis on NUMA platforms.

Additional contributions by (new contributors, add yourself here):

  • Matt Ruff
  • Kevin Yuan
  • Alexander Merino
  • Tristan Long
  • Kayla Barton

Copyright

Copyright for SCARR (2023-2024) by Vincent Immler.

Citation

If you use SCARR in your research, please cite our paper: "High-Performance Design Patterns and File Formats for Side-Channel Analysis" by Jonah Bosland, Stefan Ene, Peter Baumgartner, Vincent Immler. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2024(2), 769–794.

DOI: click here

Acknowledgements

Jonah Bosland has been funded through the Office of Naval Research (ONR) Workforce Development grant (N000142112552) during June-November 2023. Stefan Ene has been funded through the Summer 2023 Research Experience for Undergraduates (REU) program by the School of EECS at Oregon State University.
