
NAS-Benchmark

This repository includes the code used to evaluate NAS methods on five different datasets, as well as the code used to augment architectures with different protocols, as described in our ICLR 2020 paper (https://arxiv.org/abs/1912.12522). Example scripts are provided in each folder.

ICLR 2020 video poster presentation

The video from our ICLR 2020 poster presentation is available at https://iclr.cc/virtual_2020/poster_HygrdpVKvr.html.

Plots

All code used to generate the plots of the paper can be found in the "Plots" folder.

Randomly Sampled Architectures

You can find all sampled architectures and the corresponding training logs in Plots/data/modified_search_space.

Data

In the data folder, you will find the data splits for Sport-8, MIT-67 and Flowers-102 in .csv files.

You can download these datasets from the following websites:

Sport-8: http://vision.stanford.edu/lijiali/event_dataset/

MIT-67: http://web.mit.edu/torralba/www/indoor.html

Flowers-102: http://www.robots.ox.ac.uk/~vgg/data/flowers/102/

The data path has to be organized as follows: dataset/train/classes/images for the training set and dataset/test/classes/images for the test set.
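With this layout, the splits can be loaded with torchvision's generic ImageFolder dataset. Below is a minimal sketch (transforms and batch size are illustrative; the repository's own data-loading code may differ):

    # Load a dataset organized as dataset/{train,test}/<class>/<images>
    # with torchvision's ImageFolder.
    import torchvision.transforms as transforms
    from torch.utils.data import DataLoader
    from torchvision.datasets import ImageFolder

    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    train_set = ImageFolder("dataset/train", transform=transform)
    test_set = ImageFolder("dataset/test", transform=transform)

    train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)
    test_loader = DataLoader(test_set, batch_size=64, shuffle=False, num_workers=4)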

We used the following repositories:

DARTS

Paper: Liu, Hanxiao, Karen Simonyan, and Yiming Yang. "DARTS: Differentiable Architecture Search." arXiv preprint arXiv:1806.09055 (2018).

Unofficial updated implementation: https://github.com/khanrc/pt.darts

P-DARTS

Paper: Chen, Xin, et al. "Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation." ICCV, 2019.

Official implementation: https://github.com/chenxin061/pdarts

CNAS

Paper: Weng, Yu, et al. "Automatic Convolutional Neural Architecture Search for Image Classification Under Different Scenes." IEEE Access 7 (2019): 38495-38506.

Official implementation: https://github.com/tianbaochou/CNAS

StacNAS

Paper: Li, Guilin, et al. "StacNAS: Towards Stable and Consistent Differentiable Neural Architecture Search." arXiv preprint arXiv:1909.11926 (2019).

Implementation: provided by the authors

ENAS

Paper: Pham, Hieu, et al. "Efficient neural architecture search via parameter sharing." arXiv preprint arXiv:1802.03268 (2018).

Official TensorFlow implementation: https://github.com/melodyguan/enas

Unofficial PyTorch implementation: https://github.com/MengTianjian/enas-pytorch

MANAS

Paper: Carlucci, Fabio M., et al. "MANAS: Multi-Agent Neural Architecture Search." arXiv preprint arXiv:1909.01051 (2019).

Implementation: provided by the authors.

NSGA-NET

Paper: Lu, Zhichao, et al. "NSGA-NET: a multi-objective genetic algorithm for neural architecture search." arXiv preprint arXiv:1810.03522 (2018).

Official implementation: https://github.com/ianwhale/nsga-net

NAO

Paper: Luo, Renqian, et al. "Neural architecture optimization." Advances in neural information processing systems. 2018.

Official PyTorch implementation: https://github.com/renqianluo/NAO_pytorch

For the following two methods, we have not yet performed consistent experiments, so they are not included in the paper. Nonetheless, we provide runnable code that could yield insights similar to those reported for the other methods.

PC-DARTS

Paper: Xu, Yuhui, et al. "PC-DARTS: Partial Channel Connections for Memory-Efficient Differentiable Architecture Search." arXiv preprint arXiv:1907.05737 (2019).

Official implementation: https://github.com/yuhuixu1993/PC-DARTS

PRDARTS

Paper: Laube, Kevin Alexander, and Andreas Zell. "Prune and Replace NAS." arXiv preprint arXiv:1906.07528 (2019).

Official implementation: https://github.com/cogsys-tuebingen/prdarts

AutoAugment

Paper: Cubuk, Ekin D., et al. "AutoAugment: Learning augmentation policies from data." arXiv preprint arXiv:1805.09501 (2018).

Unofficial PyTorch implementation: https://github.com/DeepVoltaire/AutoAugment

Citation

If you find this work useful, consider citing us:

@inproceedings{yang2020nasefh,
  title={NAS evaluation is frustratingly hard},
  author={Antoine Yang and Pedro M. Esperança and Fabio M. Carlucci},
  booktitle={ICLR},
  year={2020}
}


Issues

There is a mistake in the DARTS code

Hi,

I just found a mistake in your released DARTS code.

In architect.py, the hessian is computed as follows:
hessian = [(p-n) / 2.*eps for p, n in zip(dalpha_pos, dalpha_neg)]

However, referring to the paper and the official release code, it should be computed as follows:
hessian = [(p-n) / (2.*eps) for p, n in zip(dalpha_pos, dalpha_neg)]
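The difference is operator precedence: Python evaluates (p-n) / 2.*eps left to right as ((p-n) / 2.) * eps, so the buggy version multiplies by eps instead of dividing, shrinking each entry by a factor of eps² relative to the intended finite difference. A quick standalone check:

    # Operator-precedence check: division and multiplication are
    # left-associative in Python, so "/ 2.*eps" multiplies by eps.
    p, n, eps = 1.0, 0.5, 1e-2
    buggy = (p - n) / 2. * eps      # ((p - n) / 2.) * eps -> 0.0025
    correct = (p - n) / (2. * eps)  # intended expression  -> 25.0
    print(buggy, correct)           # buggy == correct * eps**2

Since eps is small, the buggy expression effectively zeroes out the second-order correction, which would be consistent with the observation below that performance is unaffected.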

But it seems this mistake does not hurt the performance. LOL

Change in seed not working?

Hi @antoyang, thanks for releasing the benchmark. I am trying to search a cell using DARTS as described in the README. However, once I change the random seed to a different value, or sometimes even with the default seed (--seed 2), the search does not work: it becomes idle at the very beginning (training does not seem to progress) although it is using GPU memory. Am I missing something here? Can you please comment on this? Thanks.

[Screenshot attached.]
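For reference, a minimal sketch of how PyTorch training scripts typically fix the random seed (the helper name here is hypothetical; this repository's search scripts may seed things differently):

    import random

    import numpy as np
    import torch

    def set_seed(seed):
        # Seed every common source of randomness in a PyTorch run.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Many NAS scripts also enable cudnn.benchmark for speed; note that
        # this makes runs non-deterministic even with a fixed seed.
        torch.backends.cudnn.benchmark = True

    set_seed(2)  # e.g. the default --seed 2 mentioned above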

Randomly sampled architectures vs. neural architecture search algorithms, for more complex tasks?

Thanks for your great work and very compelling arguments!

It is indeed difficult to evaluate how good or effective a proposed neural architecture search algorithm really is. Random sampling in the search space is a powerful baseline for comparison. And not just for image classification: even for more complex tasks such as dense prediction (segmentation, pose estimation, ...), NAS evaluation is still frustratingly hard. When applying NAS to larger datasets like MPII and comparing randomly sampled architectures from the search space (micro and macro) against the first-order gradient-based search method proposed by DARTS, random sampling still performs surprisingly well for searching human pose estimation architectures.

I wonder if you would be willing to set up benchmarks for more tasks and datasets in the future?

About the dataset split

In the data folder, the "bocce" row of Sport8_test.csv has 22 columns, one of which contains the image name "test".

Is this a bug?
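A minimal sketch to locate the suspicious entry in the split file (the file path is an assumption based on the repository's data folder):

    # Scan the Sport-8 test split for the reported "test" image name.
    import csv

    with open("data/Sport8_test.csv") as f:
        for i, row in enumerate(csv.reader(f)):
            if "test" in row:
                print(f"row {i}: {len(row)} columns: {row}")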

Searching a micro cell for CIFAR-100

Hi, I wanted to know whether, when searching for a micro cell on CIFAR-100 using ENAS, the validation and test accuracy improve over time. I am getting extremely low accuracy even when I train for a longer period.

MANAS not found

Hello, thank you for the work. I am personally interested in MANAS, but I could not find its implementation. It would be great if you could add it.

About the augmentation experiment

Hi,

I am researching NAS for my master's degree.

I am confused about the "nb cells A other datasets" parameter in Table 2 of the paper, which is set to 8.

Does this follow the original NAS papers' setting on ImageNet, or did you set it yourselves?
