
ainet's Introduction

AINet: Association Implantation Network for Superpixel Segmentation

This is a PyTorch implementation of the superpixel segmentation network introduced in the ICCV 2021 paper:

Association Implantation Network for Superpixel Segmentation

Introduction

The Illustration of AINet:

[figure: AINet workflow]

The visual comparison of our AINet and the SOTA methods:

[figure: visual comparison with SOTA methods]

By merging superpixels, some object proposals could be generated:

[figure: object proposals generated by merging superpixels]

Prerequisites

The training code was mainly developed and tested with python 2.7, PyTorch 0.4.1, CUDA 9, and Ubuntu 16.04.

At test time, we make use of the component connection method from SSN to enforce connectivity in the superpixels. The code is included in /third_party/cython. To compile it:

cd third_party/cython/
python setup.py install --user
cd ../..

Demo

Quick taste! Specify the image path and use the pretrained model to generate superpixels for an image:

python run_demo.py --image=PATH_TO_AN_IMAGE --output=./demo 

The results will be generated in a new folder under /demo called spixel_viz.
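
To process a whole folder of images with the demo, a small driver like the one below may help. This is a minimal sketch that only relies on the --image and --output flags documented above; the input folder name and image extensions are placeholders.

# batch_demo.py -- run run_demo.py on every image in a folder (sketch)
import glob
import os
import subprocess

IMAGE_DIR = "./my_images"   # folder with your input images (placeholder)
OUTPUT_DIR = "./demo"

for ext in ("*.jpg", "*.png"):
    for image_path in sorted(glob.glob(os.path.join(IMAGE_DIR, ext))):
        # reuse the documented command-line interface of run_demo.py
        subprocess.check_call(["python", "run_demo.py", "--image=" + image_path, "--output=" + OUTPUT_DIR])

As described above, the visualizations should end up in OUTPUT_DIR/spixel_viz.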

Data preparation

To generate the training and test datasets, please first download the original BSDS500 dataset and extract it to <BSDS_DIR>. Then run

cd data_preprocessing
python pre_process_bsd500.py --dataset=<BSDS_DIR> --dump_root=<DUMP_DIR>
python pre_process_bsd500_ori_sz.py --dataset=<BSDS_DIR> --dump_root=<DUMP_DIR>
cd ..

The code will generate three folders under <DUMP_DIR>, named /train, /val, and /test, and three .txt files that record the absolute paths of the images, named train.txt, val.txt, and test.txt.
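
As a quick sanity check after preprocessing, the snippet below verifies that the generated file lists point to existing images. It is a minimal sketch; it assumes the three .txt files sit in <DUMP_DIR> and contain one absolute image path per line, so adjust the path if they are written elsewhere.

# check_split_lists.py -- sanity-check the generated BSDS500 file lists (sketch)
import os

DUMP_DIR = "/path/to/DUMP_DIR"  # replace with your <DUMP_DIR>

for split in ("train", "val", "test"):
    list_file = os.path.join(DUMP_DIR, split + ".txt")
    with open(list_file) as f:
        paths = [line.strip() for line in f if line.strip()]
    missing = [p for p in paths if not os.path.isfile(p)]
    print("{}: {} images listed, {} missing".format(split, len(paths), len(missing)))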

Training

Once the data is prepared, we should be able to train the model by running the following command:

python main.py --data=<DATA_DIR> --savepath=<PATH_TO_SAVE_CKPT> --workers 4 --input_img_height 208 --input_img_width 208 --print_freq 20 --gpu 0 --batch-size 16  --suffix '_myTrain' 

If you want to continue training from a ckpt, just add --pretrained=<PATH_TO_CKPT>. You can specify the training config in the 'train.sh' script.
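
For example, resuming the run above from a saved checkpoint would look like this (the checkpoint path is a placeholder):

python main.py --data=<DATA_DIR> --savepath=<PATH_TO_SAVE_CKPT> --pretrained=<PATH_TO_CKPT> --workers 4 --input_img_height 208 --input_img_width 208 --print_freq 20 --gpu 0 --batch-size 16 --suffix '_myTrain'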

The training log can be viewed from the tensorboard session by running

tensorboard --logdir=<CKPT_LOG_DIR> --port=8888

If everything is set up properly, reasonable segmentation should be observed after 10 epochs.

Testing

We provide test code to generate: 1) superpixel visualizations and 2) the .csv files for evaluation.

To test on BSDS500, run

python run_infer_bsds.py --data_dir=<DUMP_DIR> --output=<TEST_OUTPUT_DIR> --pretrained=<PATH_TO_THE_CKPT>

To test on NYUv2, please first extract our pre-processed dataset from /nyu_test_set/nyu_preprocess_tst.tar.gz to <NYU_TEST>, or follow the instructions in the superpixel benchmark to generate the test dataset, and then run

python run_infer_nyu.py --data_dir=<NYU_TEST> --output=<TEST_OUTPUT_DIR> --pretrained=<PATH_TO_THE_CKPT>
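
To inspect one of the generated label maps directly, the snippet below reads a .csv file and overlays the superpixel boundaries on the matching image. This is an illustrative sketch only: it assumes each .csv stores one integer label per pixel (the map_csv convention used by the superpixel benchmark), and the file names are placeholders.

# view_labels.py -- overlay superpixel boundaries from a .csv label map (sketch)
import numpy as np
from skimage import io
from skimage.segmentation import mark_boundaries

CSV_PATH = "<TEST_OUTPUT_DIR>/map_csv/100007.csv"   # a label map written by the test script (placeholder)
IMG_PATH = "<BSDS_DIR>/images/test/100007.jpg"      # the matching input image (placeholder)

labels = np.loadtxt(CSV_PATH, delimiter=",", dtype=np.int64)   # H x W integer label map
image = io.imread(IMG_PATH)

print("number of superpixels: {}".format(len(np.unique(labels))))
overlay = mark_boundaries(image, labels)                       # draw superpixel boundaries
io.imsave("overlay.png", (overlay * 255).astype(np.uint8))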

Evaluation

We use the code from the superpixel benchmark for superpixel evaluation. Detailed instructions are available in that repository; please

(1) download the code and build it accordingly;

(2) edit the variables SUPERPIXELS, IMG_PATH, and GT_PATH in /eval_spixel/my_eval.sh, for example:

IMG_PATH='/home/name/superpixel/AINet/BSDS500/test'
GT_PATH='/home/name/superpixel/AINet/BSDS500/test/map_csv'

../../bin_eval_summary_cli /home/name/superpixel/AINet/eval/test_multiscale_enforce_connect/SPixelNet_nSpixel_${SUPERPIXEL}/map_csv $IMG_PATH $GT_PATH

(3) run

cp ./eval_spixel/my_eval.sh <path/to/the/benchmark>/examples/bash/
cd <path/to/the/benchmark>/examples/
#the results will be saved to: /home/name/superpixel/AINet/eval/test_multiscale_enforce_connect/SPixelNet_nSpixel_54/map_csv/
bash my_eval.sh

Several files should be generated in the map_csv folders of the corresponding test outputs, including summary.txt, result.txt, etc.;

(4) cd AINet/eval_spixel

python plot_benchmark_curve.py --path '/home/name/superpixel/AINet/eval/test_multiscale_enforce_connect/' # this will generate curves similar to those in the paper

Citation

If you use our code, please cite our work:

@InProceedings{Wang_2021_ICCV,
    author    = {Wang, Yaxiong and Wei, Yunchao and Qian, Xueming and Zhu, Li and Yang, Yi},
    title     = {AINet: Association Implantation for Superpixel Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {7078-7087}
}

Acknowledgement

This code is built on top of SCN: https://github.com/fuy34/superpixel_fcn. We thank the authors for their contribution.


ainet's Issues

if no boundary perceiving loss is used

Hi, thanks for your work.
If no boundary-perceiving loss is used, will the result keep clear edges and show superpixel clustering in the semantic segmentation task?
Do you think this method would work well for the depth estimation task?

Get an error while training

Hi,

Thanks for sharing your code. To run it, I tried to follow your instructions:
After running the Prerequisites and Data preparation parts, I tried to run the train.sh script with the following arguments
python main.py --data='/home/adnan/superpixel/BSR/BSDS500/data/images/test/' --savepath='./ckpt/' --workers $worker --input_img_height 208 --input_img_width 208 --print_freq 20 --gpu $gpu --batch-size 16 --suffix '_myTrain'

I got a syntax error:

File "main.py", line 118
    save_path = os.path.abspath(args.savepath) + '/' + f'{args.dataset}{args.suffix}' #os.path.join(args.dataset, save_path + '_' + timestamp )

And the reason is {args.dataset}

I can see you import dataset in the module, but I couldn't find it in the directories.

Is this a Python and PyTorch version error?

In the README, I found the run environment is python=2.7, torch=0.4.1, but when I use 2.7 + 0.4.1,
it throws an error about f'' strings, so I changed Python to 3.6 and that error was solved.
After following the README, the Testing section passes,
but the Training section fails with a runtime error.
I guess it's a running environment problem.
What is the environment we should use?
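
For context: the f-string in main.py line 118 is Python 3.6+ syntax and raises a SyntaxError on Python 2.7. An equivalent line that runs on both versions can use str.format(), for example:

# Python 3.6+ only (the original line in main.py):
save_path = os.path.abspath(args.savepath) + '/' + f'{args.dataset}{args.suffix}'

# Python 2.7-compatible equivalent (str.format instead of an f-string):
save_path = os.path.abspath(args.savepath) + '/' + '{}{}'.format(args.dataset, args.suffix)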

ERROR 'map_csv: Is a directory' when evaluating the results with my_eval.sh

When I evaluated my output, I got the error below. I set things up following README.md, but I don't know the reason. Hoping for your response; thank you very much.

ruby@ruby-MS-7D36:~/code/superpixel-benchmark/examples$ bash bash/my_eval.sh 
54
bash/my_eval.sh: line 43: /home/ruby/code/AINET/output/test_multiscale_enforce_connect/SPixelNet_nSpixel_54/map_csv: Is a directory

The settings in my_eval.sh are as follows:

SUPERPIXELS=("54" "96" "150" "216" "294" "384" "486" "600" "726" "864" "1014" "1176" "1350" "1536" "1944") #

#SUPERPIXELS=("293" "1175") #
IMG_PATH=/home/ruby/code/AINET/test
GT_PATH=/home/ruby/code/AINET/test/map_csv

for SUPERPIXEL in "${SUPERPIXELS[@]}"
do
echo $SUPERPIXEL
/home/ruby/code/AINET/output/test_multiscale_enforce_connect/SPixelNet_nSpixel_${SUPERPIXEL}/map_csv $IMG_PATH $GT_PATH

done

train for rgb or lab?

Hi,
I find that the images may be trained in RGB rather than LAB. Is that right?
Could you tell me which I should use, RGB or LAB, for training?
