deep-spectral-segmentation's Introduction

Deep Spectral Methods for Unsupervised Localization and Segmentation (CVPR 2022 - Oral)


Description

This code accompanies the paper Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization.

Abstract

Unsupervised localization and segmentation are long-standing computer vision challenges that involve decomposing an image into semantically-meaningful segments without any labeled data. These tasks are particularly interesting in an unsupervised setting due to the difficulty and cost of obtaining dense image annotations, but existing unsupervised approaches struggle with complex scenes containing multiple objects. Differently from existing methods, which are purely based on deep learning, we take inspiration from traditional spectral segmentation methods by reframing image decomposition as a graph partitioning problem. Specifically, we examine the eigenvectors of the Laplacian of a feature affinity matrix from self-supervised networks. We find that these eigenvectors already decompose an image into meaningful segments, and can be readily used to localize objects in a scene. Furthermore, by clustering the features associated with these segments across a dataset, we can obtain well-delineated, nameable regions, i.e. semantic segmentations. Experiments on complex datasets (Pascal VOC, MS-COCO) demonstrate that our simple spectral method outperforms the state-of-the-art in unsupervised localization and segmentation by a significant margin. Furthermore, our method can be readily used for a variety of complex image editing tasks, such as background removal and compositing.

Demo

Please check out our interactive demo on Huggingface Spaces! The demo lets you upload an image and view the eigenvectors extracted by our method. It does not perform the downstream tasks from our paper (e.g. semantic segmentation), but it should give you some intuition for how you might utilize our method in your own research or use case.

Examples

(Example results figure; see the repository for the image.)

How to run

Dependencies

The minimal set of dependencies is listed in requirements.txt.

Data Preparation

The data preparation process simply consists of collecting your images into a single folder. Here, we describe the process for Pascal VOC 2012. Pascal VOC 2007 and MS-COCO are similar.

Download the images into a single folder. Then create a text file where each line contains the name of an image file. For example, here is our initial data layout:

data
└── VOC2012
    ├── images
    │   └── {image_id}.jpg
    └── lists
        └── images.txt
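
If you prefer to generate the image list programmatically, here is a minimal sketch (assuming the layout above; the paths and the .jpg extension are examples):

from pathlib import Path

# Collect all image filenames and write them, one per line, to lists/images.txt.
images_dir = Path("data/VOC2012/images")
lists_dir = Path("data/VOC2012/lists")
lists_dir.mkdir(parents=True, exist_ok=True)

filenames = sorted(p.name for p in images_dir.glob("*.jpg"))
(lists_dir / "images.txt").write_text("\n".join(filenames) + "\n")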

Extraction

We first extract features from images and store them in files. We then extract eigenvectors from these features. Once we have the eigenvectors, we can perform downstream tasks such as object segmentation and object localization.

The primary script for this extraction process is extract.py in the extract/ directory. All functions in extract.py have helpful docstrings with example usage.
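
For example, assuming the function names match the CLI commands below, you can print a function's docstring from a Python shell inside the extract/ directory:

# Inspect the docstring and signature of one of the extraction functions.
import extract
help(extract.extract_features)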

Step 1: Feature Extraction

First, we extract features from our images and save them to .pth files.

With regard to models, our repository currently only supports DINO, but other models are easy to add (see the get_model function in extract_utils.py). The DINO model is downloaded automatically using torch.hub.

Here is an example using dino_vits16:

python extract.py extract_features \
    --images_list "./data/VOC2012/lists/images.txt" \
    --images_root "./data/VOC2012/images" \
    --output_dir "./data/VOC2012/features/dino_vits16" \
    --model_name dino_vits16 \
    --batch_size 1
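
For intuition, the core of this step looks roughly like the following sketch (illustrative only; extract.py handles batching, resizing, and saving additional metadata, and the image path below is a placeholder):

import torch
from PIL import Image
from torchvision import transforms

# Load DINO ViT-S/16 from torch.hub (downloaded automatically on first use).
model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
model.eval()

transform = transforms.Compose([
    transforms.Resize((480, 480)),  # any size divisible by the patch size (16) works
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

image = transform(Image.open('data/VOC2012/images/example.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    # get_intermediate_layers returns the last transformer block's tokens;
    # dropping the [CLS] token leaves one feature vector per 16x16 patch.
    tokens = model.get_intermediate_layers(image, n=1)[0]
    features = tokens[:, 1:, :]  # shape: (1, num_patches, dim)
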
Step 2: Eigenvector Computation

Second, we extract eigenvectors from our features and save them to .pth files.

Here, we extract the first K=5 eigenvectors (those with the smallest eigenvalues) of the Laplacian matrix of our features:

python extract.py extract_eigs \
    --images_root "./data/VOC2012/images" \
    --features_dir "./data/VOC2012/features/dino_vits16" \
    --which_matrix "laplacian" \
    --output_dir "./data/VOC2012/eigs/laplacian" \
    --K 5
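
Conceptually, this step builds a patch-affinity graph from the features and computes the lowest eigenvectors of its Laplacian. A minimal sketch of the idea (illustrative only; extract_eigs also supports other matrix options and, in the paper, combines feature affinities with color affinities):

import numpy as np
import torch

def laplacian_eigenvectors(features, K=5):
    # features: (num_patches, dim) patch features for a single image
    features = torch.nn.functional.normalize(features, dim=-1)
    W = (features @ features.T).clamp(min=0).numpy()  # non-negative cosine-similarity affinity
    D = np.diag(W.sum(axis=1))
    L = D - W                                          # unnormalized graph Laplacian
    eigenvalues, eigenvectors = np.linalg.eigh(L)      # eigenvalues in ascending order
    # The K eigenvectors with the smallest eigenvalues give the coarsest partitions;
    # each eigenvector assigns one value per patch.
    return eigenvalues[:K], eigenvectors[:, :K]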

The final data structure after extracting eigenvectors looks like:

data
├── VOC2012
│   ├── eigs
│   │   └── {output_dir_name}
│   │       └── {image_id}.pth
│   ├── features
│   │   └── {model_name}
│   │       └── {image_id}.pth
│   ├── images
│   │   └── {image_id}.jpg
│   └── lists
│       └── images.txt
└── VOC2007
    └── ...

At this point, you are ready to use the eigenvectors for downstream tasks such as object localization, object segmentation, and semantic segmentation.
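
For example, you can inspect a saved eigenvector file roughly as follows (a sketch only: the 'eigenvectors' key and the patch-grid size below are assumptions, so print the loaded object for your own files first):

import torch
import matplotlib.pyplot as plt

# Load one saved eigenvector file and visualize an eigenvector as a patch-level map.
image_id = '...'  # one of your image ids
data = torch.load(f'data/VOC2012/eigs/laplacian/{image_id}.pth')
print(type(data), data.keys() if isinstance(data, dict) else None)

eigenvectors = data['eigenvectors']        # assumed key; assumed shape: (K, num_patches)
h_patches, w_patches = 30, 40              # replace with the patch grid of your image
plt.imshow(eigenvectors[1].reshape(h_patches, w_patches))  # second (Fiedler) eigenvector
plt.colorbar()
plt.show()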

Object Localization

First, clone the dino repo inside this project root (or symlink it).

git clone https://github.com/facebookresearch/dino

Run the steps above to save your eigenvectors inside a directory, which we will now call ${EIGS_DIR}. You can then move to the object-localization directory and evaluate object localization with:

python main.py \
    --eigenseg \
    --precomputed_eigs_dir ${EIGS_DIR} \
    --dataset VOC12 \
    --name "example_eigs"

Object Segmentation

To perform object segmentation (i.e. single-region segmentation), you first extract features and eigenvectors (as described above). You then extract coarse (i.e. patch-level) single-region segmentations from the eigenvectors (sketched below) and turn these into high-resolution segmentations using a CRF.
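
The coarse step essentially turns one eigenvector into a binary patch-level mask. A minimal sketch of this idea (illustrative only; extract_single_region_segmentations handles details such as the eigenvector sign ambiguity and resolution bookkeeping):

import numpy as np

def coarse_foreground_mask(fiedler_vector, h_patches, w_patches):
    # fiedler_vector: (num_patches,) second eigenvector of the Laplacian
    mask = (fiedler_vector.reshape(h_patches, w_patches) > 0).astype(np.uint8)
    # Eigenvector signs are arbitrary: assume the object covers the minority of patches.
    if mask.mean() > 0.5:
        mask = 1 - mask
    return mask  # patch-level binary mask; upsample and refine with a CRF afterwards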

Below, we give example commands for the CUB bird dataset (CUB_200_2011). To download this dataset, as well as the three other object segmentation datasets used in our paper, you can follow the instructions in unsupervised-image-segmentation. Then make sure to specify the data_root parameter in configs/eval.yaml.

For example:

# Example dataset
DATASET=CUB_200_2011

# Features
python extract.py extract_features \
    --images_list "./data/object-segmentation/${DATASET}/lists/images.txt" \
    --images_root "./data/object-segmentation/${DATASET}/images" \
    --output_dir "./data/object-segmentation/${DATASET}/features/dino_vits16" \
    --model_name dino_vits16 \
    --batch_size 1

# Eigenvectors
python extract.py extract_eigs \
    --images_root "./data/object-segmentation/${DATASET}/images" \
    --features_dir "./data/object-segmentation/${DATASET}/features/dino_vits16/" \
    --which_matrix "laplacian" \
    --output_dir "./data/object-segmentation/${DATASET}/eigs/laplacian_dino_vits16" \
    --K 2


# Extract single-region segmentations
python extract.py extract_single_region_segmentations \
    --features_dir "./data/object-segmentation/${DATASET}/features/dino_vits16" \
    --eigs_dir "./data/object-segmentation/${DATASET}/eigs/laplacian_dino_vits16" \
    --output_dir "./data/object-segmentation/${DATASET}/single_region_segmentation/patches/laplacian_dino_vits16"

# With CRF
# Optionally, you can also use `--multiprocessing 64` to speed up computation by running on 64 processes
python extract.py extract_crf_segmentations \
    --images_list "./data/object-segmentation/${DATASET}/lists/images.txt" \
    --images_root "./data/object-segmentation/${DATASET}/images" \
    --segmentations_dir "./data/object-segmentation/${DATASET}/single_region_segmentation/patches/laplacian_dino_vits16" \
    --output_dir "./data/object-segmentation/${DATASET}/single_region_segmentation/crf/laplacian_dino_vits16" \
    --downsample_factor 16 \
    --num_classes 2
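
For intuition, a generic dense-CRF refinement looks roughly like the sketch below, which uses the pydensecrf package (an assumption on our part: the exact CRF implementation and parameters in extract_crf_segmentations may differ):

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_labels

def crf_refine(image, coarse_mask, n_labels=2, n_iters=10):
    # image: (H, W, 3) uint8 RGB array; coarse_mask: (H, W) int array with values in [0, n_labels)
    h, w = image.shape[:2]
    unary = unary_from_labels(coarse_mask.astype(np.int32), n_labels, gt_prob=0.7, zero_unsure=False)
    d = dcrf.DenseCRF2D(w, h, n_labels)
    d.setUnaryEnergy(unary)
    d.addPairwiseGaussian(sxy=3, compat=3)  # spatial smoothness term
    d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=np.ascontiguousarray(image), compat=10)  # appearance term
    q = d.inference(n_iters)
    return np.argmax(q, axis=0).reshape(h, w).astype(np.uint8)  # refined per-pixel labels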

After this extraction process, you should have a directory of full-resolution segmentations. To evaluate object segmentation, move into the object-segmentation directory and run python main.py. For example:

python main.py predictions.root="./data/object-segmentation" predictions.run="single_region_segmentation/crf/laplacian_dino_vits16"

By default, this assumes that all four object segmentation datasets are available. To run on a custom dataset or only a subset of these datasets, simply edit configs/eval.yaml.

Also, if you want to visualize your segmentations, you should be able to use streamlit run extract.py vis_segmentations (after installing streamlit).

Semantic Segmentation

For semantic segmentation, we provide full instructions in the semantic-segmentation subfolder.

Acknowledgements

L. M. K. acknowledges the generous support of the Rhodes Trust. C. R. is supported by Innovate UK (project 71653) on behalf of UK Research and Innovation (UKRI) and by the European Research Council (ERC) IDIU-638009. I. L. and A. V. are supported by the VisualAI EPSRC programme grant (EP/T028572/1).

We would like to acknowledge LOST (paper and code), whose code we adapt for our object localization experiments. If you are interested in object localization, we suggest checking out their work!

Citation

@inproceedings{
    melaskyriazi2022deep,
    title={Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization},
    author={Luke Melas-Kyriazi and Christian Rupprecht and Iro Laina and Andrea Vedaldi},
    year={2022},
    booktitle={CVPR}
}


deep-spectral-segmentation's Issues

Eigenvector effect

Great work, thank you!

After running the code, my eigenvector results are much worse than those from the Hugging Face demo. May I ask whether the model used on Hugging Face was fine-tuned on other datasets? I'm not sure about the difference between the API provided in this project and the one on Hugging Face.

Size of segmap

I am getting very small segmap files (e.g. 189 bytes, 215 bytes, 139 bytes), and the segmaps for the VOC2012 images vary in size.

How can I increase the size of these segmaps?

Discussion: can the method differentiate the background classes?

Hi, thank you for providing this awesome work!

After reading the paper, I realized that the method relies heavily on color, spatial information, and the features extracted by DINO.
So if we train a DINO model on a dataset that doesn't include background classes like road and sidewalk, and these two classes share similar color and spatial features, can the eigenmaps still differentiate them?
Thanks again for the discussion!!


Kevin

Image matting

Hi, @lukemelas !

Thank you very much for providing your cool work!
I have a question about matting.

In the eigenvalue calculation, you do not distinguish between hard and soft decomposition in the branch elif which_matrix in ['matting_laplacian', 'laplacian']:.

How do I reproduce your results in Figure 6? Could you teach me?
Actually, the matting method is not implemented in https://github.com/lukemelas/deep-spectral-segmentation/blob/main/object-localization/object_discovery.py#L45 .


Details regarding baselines (Saliency-DINO-ViT-B and MaskContrast-DINO-ViT-B)

Hi @lukemelas, fascinating work, thank you for your contribution!

While looking at the semantic segmentation results, I got several questions regarding the baselines used.

Additionally, we give results for directly clustering DINO-pretrained features masked with Deep-USPS saliency maps

Can you explain how you obtained the features for clustering?

  1. How did you train Deep-USPS? Were you using a BasNet pretrained on Deep-USPS predictions (similar to MaskContrast)?
  2. Were you averaging the DINO features corresponding to the resized mask, or did you obtain [CLS] features from the crop corresponding to the mask?

we also train a version of MaskContrast based on a DINO-pretrained model

  1. Were you training Deep-USPS yourselves, or using the BasNet model provided by MaskContrast (pretrained with Deep-USPS supervision)?
  2. Do you have an intuition for why the DINO-pretrained MaskContrast model is worse than the original MaskContrast one (31.2 vs. 35)?

Class name of evaluation results

Hi, I have successfully obtained the semantic segmentation evaluation results on PASCAL VOC2012 by running eval.py. However, I would like to know the class names corresponding to the entries in the 'jaccards_all_categs' list.


Many thanks !!

CRF up-sampled images are blank

Hi,

I am following the steps for object segmentation exactly. I can see the segmentations in the "patches" folder. However, when I upsample them using the CRF, every output becomes a completely black picture. Do you know what might be wrong?

typo in semantic segmentation example - "dino_vitb16" instead of "dino_vits16"

Hi!
I am trying to run the semantic segmentation example, following the README, and it was failing to find the directory. After debugging a bit, it turned out the issue was in the MODEL variable at the very start of the example, where there was "dino_vitb16" instead of "dino_vits16" (b instead of s).

Just in case anyone had the same small issue ;)

Question about extract_eigs

Thank you for sharing great work!!

I have two questions about extracting eigenvectors.

  1. python extract.py extract_eigs generates the eigenvectors at the input image path (e.g., /home/naoki/deep-spectral-segmentation/testdata/images/014583.pth) and takes about 10 seconds per image. Is this normal?
  2. How can I get eigenvector maps like those in the demo?

Thank you in advance.

non-executable training code for semantic segmentation

Hi! I am trying to run the self-training part of semantic segmentation, but it cannot run successfully... May I ask whether this is a defect of the code itself, or whether there are any tricks to running it that are not mentioned in the README? Many thanks!

problem of dino on semantic-segmentation

Hi!
This is great work on unsupervised detection and segmentation. When I try to reproduce the DINO segmentation result in Table 4, I only get 19.55, which is far lower than the value reported in your paper (30.8 ± 2.7). Did I miss something? Looking forward to your reply.

Object-localization on VOC2007

Hi!
I used ViT-Base/16 pretrained with DINO to reproduce the 61.6 result in Table 2, but I only get 56.70.
I strictly followed the README instructions. Do you have any idea what might be wrong?

Looking forward to your reply.

Object Localization Problem

Hi,

Thank you for your beautiful research. I am facing an issue with the data loader for object localization. The path for the VOC12 dataset seems to be incorrect, and there is no guide for setting it up properly. The code expects the VOC dataset to exist in the "datasets" directory; however, in the previous sections, we set it up in "data". Even when I set the path to "data", it cannot load it. Would you please guide me on this?

Thanks.

semantic-segmentation produces black images

Hi, after running extract_features, extract_eigs, extract_single_region_segmentations, and extract_crf_segmentations, I get single-region segmentations that contain some segmentation, but extract_crf_segmentations (which I guess is a simple upscaling) produces black images.

error in voc.py when running train.py for semantic segmentation

Hi
Thanks for sharing your code.

  1. Running train.py, I receive an error: "__init__() got an unexpected keyword argument 'transform_tuple'" in voc.py, line 159. Can you please help resolve this bug?
  2. I ran the semantic segmentation code without the train.py step 5 times with different seeds, and the average mIoU is 23.8; the mIoU reported in the paper is 30.8. Can you please let me know if my result is in the range you expected? If not, can you give me some suggestions on how to improve it?

Many thanks

pymatting module missing in requirements.txt

Hi!
Thank you for your work!
I was testing your project on the VOC dataset when it broke at Step 2 because the "pymatting" module was missing. I added it to requirements.txt and installed it.
Very minor thing, but I wanted to let you know :)

CRF Segmentations are entirely black

I have been following the instructions for object segmentation, and the output is as expected until the CRF segmentation step, at which point the output images are entirely black. The masks produced in the previous step are correct, and upscaling the masks also works; however, the output from the denseCRF function is a completely black image.
