
μNAS

μNAS (micro-NAS or mu-NAS) is a neural architecture search system that specialises in finding ultra-small models suitable for deployment on microcontrollers: think < 64 KB of memory and storage. μNAS achieves this by explicitly targeting three primary resource bottlenecks: model size, latency and peak memory usage.

For a full description of methodology and experimental results, please see the accompanying paper "μNAS: Constrained Neural Architecture Search for Microcontrollers".

Changelog from arXiv v1:

  • Corrected the reported number of MACs for the DS-CNN baseline on the Speech Commands dataset.
  • Fixed Speech Commands hyperparameters and updated found models.
  • Added a smaller CIFAR-10 model to the comparison table.
  • Added search times to the comparison table.
  • Updated the discussion of pruning, search convergence and the use of soft constraints.

Usage

Setup

μNAS uses Python 3.7+ with the environment described by Pipfile: to create an environment with all the correct packages preinstalled, simply run pipenv install in the cloned repository.

To run

The search is configured using Python configuration files (see configs for examples and config.py for the configuration file schema), which specify the search algorithm, how candidate models will be trained (including any pruning configuration) and the resource bounds. μNAS is invoked via driver.py, which immediately delegates to the configured search algorithm.

For example, to search for MNIST models with Aging Evolution and structured pruning, run the following:

pipenv run python driver.py configs/cnn_mnist_struct_pru.py --name "example_mnist"
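For orientation, a configuration file is ordinary Python that builds the objects described by the schema in config.py. The sketch below is illustrative only: the field names and values are assumptions on my part, not the actual schema — consult the files in configs/ for working examples.

```python
# Illustrative sketch only: the names below are assumptions, not the real
# schema from config.py; see the files in configs/ for working examples.
search_algorithm = "aging_evolution"        # assumed: which search to run
training = {                                # assumed: candidate training setup
    "epochs": 30,
    "pruning": {"target_sparsity": 0.8},    # optional structured pruning
}
bounds = {                                  # assumed: resource constraints
    "peak_memory_kb": 64,
    "model_size_kb": 64,
    "latency_macs": 1_000_000,
}
```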

Navigating the code

  • cnn/mlp: contains a search space description for convolutional neural networks / multilayer perceptrons, together with all allowed morphisms (changes) to a candidate architecture.

  • configs: example search configurations.

  • dataset: loaders for various datasets, conforming to the interface in dataset/dataset.py.

  • dragonfly_adapters: (Bayesian optimisation only) extra code to interoperate with Dragonfly. We found that we had to rely on the internal implementation of the framework for it to correctly use our customised kernel, search space and genetic algorithm optimiser for acquisition functions, so the module contains a fair amount of monkey-patching.

  • resource_models: an independent library that allows representing and computing resource usage of arbitrary computation graphs.

  • search_algorithms: implements aging evolution and Bayesian optimisation search algorithms; each search algorithm is also responsible for scheduling model training and correctly serialising & restoring the search state. Both use ray under the hood to parallelise the search.

  • teachers: a collection of teacher models for distillation.

  • test: automated sanity tests for search space implementations.

  • model_trainer.py: code for training candidate models.

  • pruning.py: implements Dynamic Model Pruning with Feedback as a Keras callback, used during training.

  • generate_tflite_models.py: generates random small models for latency benchmarking on a microcontroller.

  • search_state_processor.py: loads and visualises μNAS search state files.

  • architecture.py/config.py/search_space.py/schema_types.py: base classes for candidate architectures, search configuration and free variables of the search space.
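As context for pruning.py: Dynamic Model Pruning with Feedback periodically re-derives a sparsity mask while the dense weights keep receiving gradient updates, so pruned weights can be reactivated. The callback below is a simplified magnitude-pruning sketch of that idea (my own code, not the μNAS implementation, and not the full DPF error-feedback scheme):

```python
import numpy as np
import tensorflow as tf

class MagnitudePruningCallback(tf.keras.callbacks.Callback):
    """Simplified iterative magnitude pruning (not the full DPF scheme):
    every `update_every` batches, the smallest-magnitude fraction of each
    Dense kernel is zeroed, while all weights keep receiving gradients."""

    def __init__(self, sparsity=0.5, update_every=100):
        super().__init__()
        self.sparsity = sparsity
        self.update_every = update_every
        self._step = 0

    def on_train_batch_end(self, batch, logs=None):
        self._step += 1
        if self._step % self.update_every != 0:
            return
        for layer in self.model.layers:
            if not isinstance(layer, tf.keras.layers.Dense):
                continue
            weights = layer.get_weights()
            kernel = weights[0]
            k = int(np.ceil(self.sparsity * kernel.size))
            # Threshold = magnitude of the k-th smallest kernel entry.
            threshold = np.partition(np.abs(kernel).ravel(), k - 1)[k - 1]
            kernel[np.abs(kernel) <= threshold] = 0.0
            weights[0] = kernel
            layer.set_weights(weights)
```

Because the mask is recomputed from scratch each time rather than fixed, weights that grow back above the threshold between updates survive the next pruning round.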

Notes on deploying found models

In the interest of storage, μNAS does not save the final weights of discovered models (though it can be modified to do so): μNAS uses aging evolution and does not share trained weights across candidate models, which encourages finding models that can be trained to good accuracy from scratch. You can easily instantiate a Keras model from a found architecture (see the API in architecture.py).

μNAS assumes a runtime where each operator is executed one at a time and in full, such as TensorFlow Lite Micro. You can quantise and convert Keras models to the TFLite format using the helper functions in utils.py. Note that:

  • μNAS only calculates the resource usage of the model itself and does not account for framework-specific overheads.

  • μNAS assumes that one of the input buffers to an Add operator can be reused as an output buffer if it is not used elsewhere (to minimise peak memory usage); this optimisation is not available in TF Lite Micro at the time of writing.

  • The operator execution order that gives the smallest peak memory usage is not recorded in the model: use tflite-tools to optimise your tflite model prior to deploying.
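The helpers in utils.py are not reproduced here, but a generic equivalent using the public TensorFlow Lite converter API might look like the following (the function name convert_to_int8_tflite is my own, not from utils.py):

```python
import numpy as np
import tensorflow as tf

def convert_to_int8_tflite(model, sample_inputs):
    """Convert a Keras model to a fully int8-quantised TFLite flatbuffer,
    suitable for operator-at-a-time runtimes such as TF Lite Micro."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        # Calibration samples used to choose quantisation ranges.
        for sample in sample_inputs:
            yield [sample[np.newaxis].astype(np.float32)]

    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()
```

The returned bytes can be written to a .tflite file and then post-processed with tflite-tools to recover the memory-optimal operator execution order before deployment.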
