
Histoformer's Introduction

[ECCV 2024] Restoring Images in Adverse Weather Conditions via Histogram Transformer


Figures: cover figure; network structure diagram.

Restoring Images in Adverse Weather Conditions via Histogram Transformer
Shangquan Sun, Wenqi Ren, Xinwei Gao, Rui Wang, Xiaochun Cao
European Conference on Computer Vision

Abstract: Transformer-based image restoration methods in adverse weather have achieved significant progress. Most of them use self-attention along the channel dimension or within spatially fixed-range blocks to reduce computational load. However, such a compromise limits the capture of long-range spatial features. Inspired by the observation that weather-induced degradation factors mainly cause similar occlusion and brightness changes, we propose an efficient Histogram Transformer (Histoformer) for restoring images affected by adverse weather. It is powered by a mechanism dubbed histogram self-attention, which sorts and segments spatial features into intensity-based bins. Self-attention is then applied across bins or within each bin to selectively attend to spatial features of dynamic range and to process similarly degraded pixels over long ranges together. To boost histogram self-attention, we present a dynamic-range convolution that enables conventional convolution to operate over similar pixels rather than neighboring pixels. We also observe that common pixel-wise losses neglect the linear association and correlation between output and ground truth. We therefore propose the Pearson correlation coefficient as a loss function to encourage the recovered pixels to follow the same order as the ground truth. Extensive experiments demonstrate the efficacy and superiority of our proposed method.
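As a rough illustration of the correlation-loss idea described in the abstract, a Pearson-correlation-based loss can be sketched as follows. This is a generic sketch based on the abstract's description, not the repository's exact loss implementation; the function name and reduction are assumptions.

```python
import torch

def pearson_loss(pred, target, eps=1e-8):
    """1 minus the Pearson correlation between prediction and ground truth.

    Encourages the restored pixels to be linearly correlated with (and
    ordered like) the ground-truth pixels, complementing pixel-wise losses.
    """
    p = pred.flatten() - pred.mean()
    t = target.flatten() - target.mean()
    corr = (p * t).sum() / (p.norm() * t.norm() + eps)
    return 1.0 - corr

# A perfectly linearly correlated pair yields a loss near 0.
x = torch.linspace(0.0, 1.0, 100)
loss = pearson_loss(2.0 * x + 1.0, x)
```

Because the loss is invariant to affine rescaling of the prediction, it is meant to complement, not replace, pixel-wise terms such as L1.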

📣 Citation

If you use Histoformer, please consider citing:

@article{sun2024restoring,
  title={Restoring Images in Adverse Weather Conditions via Histogram Transformer},
  author={Sun, Shangquan and Ren, Wenqi and Gao, Xinwei and Wang, Rui and Cao, Xiaochun},
  journal={arXiv preprint arXiv:2407.10172},
  year={2024}
}

🚀 News

  • 2024.08.29: Update alternative [Download Links] for Snow100K masks, because the original link has expired.
  • 2024.08.29: Update alternative [Download Links] for Outdoor-Rain Test1, because the original link has expired.
  • 2024.07.23: Update alternative [Download Links] for Snow100K, because the original link has expired.
  • 2024.07.18: Code and pre-trained weights are released.
  • 2024.07.17: Visual results are released.
  • 2024.07.14: arXiv paper is released.
  • 2024.07.01: Histoformer is accepted by ECCV2024.
  • 2024.03.01: Histoformer is rejected by CVPR2024.

🧩 Datasets

According to [Issue#2] and [Issue#8], many related datasets are no longer available, so I have provided them below:

Snow100K Training: [Google Drive]
Snow100K Test Set: [Google Drive] · [BaiduYun Disk] (pin: yuia)
Snow100K Masks: [Google Drive] · [BaiduYun Disk] (pin: hstm)
Outdoor-Rain Test1: [Google Drive]

😄 Visual Results

All visual results are in Google Drive and Baidu Disk (pin: ps9q). You can also find each of them from the table below.

Examples:

RainDrop · Outdoor-Rain · Snow100K-L · RealSnow

⚙️ Installation

See INSTALL.md for the installation of dependencies required to run Histoformer.

🛠️ Training

  1. Download the full training set, or each dataset individually, i.e., Snow100K, Outdoor-Rain, and RainDrop.

Note: The original download link for Snow100K has expired, so you can refer to [Issue#2] for alternative download links.

  2. Modify the dataroot_gt and dataroot_lq settings for train, val_snow_s, val_snow_l, val_test1 and val_raindrop in Allweather/Options/Allweather_Histoformer.yml

  3. To train Histoformer with the default settings, run

cd Histoformer
./train.sh Allweather/Options/Allweather_Histoformer.yml 4321

Note: The above training script uses 4 GPUs by default. To use a different setup, for example 8 GPUs, modify CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 and --nproc_per_node=8 in Histoformer/train.sh, and num_gpu: 8 in Allweather/Options/Allweather_Histoformer.yml.

⚖️ Evaluation

  1. cd Allweather

  2. Download the pre-trained models and place them in ./pretrained_models/

  3. Download test datasets from each of them, i.e., Snow100K, Outdoor-Rain, and RainDrop.

  4. Run the test script, replacing [INPUT_FOLDER] with the path to your test images:

python test_histoformer.py --input_dir [INPUT_FOLDER] --result_dir result/ --weights pretrained_models/net_g_best.pth --yaml_file Options/Allweather_Histoformer.yml

# for realsnow
python test_histoformer.py --input_dir [INPUT_FOLDER] --result_dir result/ --weights pretrained_models/net_g_real.pth --yaml_file Options/Allweather_Histoformer.yml

  5. Compute PSNR and SSIM:

python compute_psnr.py --path1 [GT-PATH] --path2 [Restored-PATH]

Values may differ slightly because a) the images I upload are in JPG format to save space, while the reported values were computed on PNG images; and b) some values are taken from previous works such as WeatherDiff and may differ slightly from this reproduction.
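For reference, PSNR itself is simple to compute. The following is a minimal sketch (assuming float images in the [0, 255] range), not the repository's compute_psnr.py:

```python
import numpy as np

def psnr(gt, restored, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((gt.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# A uniform error of 1 intensity level at 8-bit range:
gt = np.zeros((8, 8))
value = psnr(gt, gt + 1.0)  # 10 * log10(255**2) ≈ 48.13 dB
```

Since PSNR is driven entirely by the MSE, even small perturbations such as JPG recompression shift the reported dB, which is consistent with the caveats above.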

⚖️ Demo

  1. cd Allweather

  2. Download the pre-trained models and place them in ./pretrained_models/

  3. Run the test script, replacing [INPUT_FOLDER] with the path to your input images:

# for realsnow
python test_histoformer.py --input_dir [INPUT_FOLDER] --result_dir result/ --weights pretrained_models/net_g_real.pth --yaml_file Options/Allweather_Histoformer.yml

📬 Contact

If you have any questions, please contact [email protected]

Acknowledgment: This code is based on WeatherDiff, Restormer, the BasicSR toolbox, and HINet.


Histoformer's Issues

Training time cost

Dear @sunshangquan ,

Thanks for your impressive work on adverse weather conditions.

May I ask how long each training costs on one V100 (32G) GPU?

I am looking forward to hearing from you soon!

Best regards,

"NaN or Inf found in input tensor" on customized training dataset

Hi @sunshangquan,

Thank you so much for the fantastic work!
I'm training your model using a customized dataset. However, during the train, I got "NaN or Inf found in input tensor" and the l_pix, l_pear become NaN. I suspect this is something in my data but it's very hard to detect, I did compared it with the training data that you used and at least, they are quite similar in the sizes. Also, I saw in your code you already implemented the check_inf_nan but not yet used it anywhere. I guess you knew this already.

def check_inf_nan(self, x):
    # Replace NaNs with 0 and infinities with a large finite value.
    x[x.isnan()] = 0
    x[x.isinf()] = 1e7
    return x

Since this is how you handle NaN and Inf, does it affect the training result? Could you give me some guidance on this? Thank you so much!
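As a generic debugging aid for issues like this (not part of the repository), one can fail fast when a tensor first contains non-finite values instead of silently clamping them. The helper below is hypothetical:

```python
import torch

def assert_finite(t, name="tensor"):
    """Raise immediately if a tensor contains NaN or Inf, naming it."""
    if not torch.isfinite(t).all():
        raise ValueError(f"NaN/Inf found in {name}")

# A clean batch passes silently; a corrupted one raises with the name.
assert_finite(torch.randn(2, 3, 8, 8), "lq batch")
```

Calling such a check on the inputs, targets, and each loss term at every iteration usually pinpoints whether the NaN originates in the data or in the model.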

RainDrop test-set PSNR

Hi, I downloaded the test_b split (249 image pairs) from the RainDrop link and restored it with net_g_best.pth. The computed PSNR is 26.866, which does not match the PSNR reported in the paper. I am not sure whether my test set is wrong or something else is the cause.
The other three datasets do not show this problem.
Looking forward to your reply.

About Snow100K dataset link

Dear @sunshangquan ,

Thanks for your impressive work on adverse weather conditions.

I found that the download link for the Snow100K dataset in the paper is no longer valid. Could you please provide an alternative download link?

I am looking forward to hearing from you soon!

Best regards,

Error when using direct tests

Thank you for your wonderful work. However, I am having problems with the direct testing process. When running the Demo direct test, it fails with the following error:

File "test_histoformer.py", line 15, in <module>
    from basicsr.models.archs.histoformer_arch import Histoformer
File "D:\code\Histoformer-main\basicsr\models\__init__.py", line 16, in <module>
    model_modules = [
File "D:\code\Histoformer-main\basicsr\models\__init__.py", line 17, in <module>
    importlib.import_module(f'models.{file_name}')
File "C:\Users\anaconda3\envs\dez\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'models'

Could you advise where the problem might be? Thanks.

Test set of Outdoor-Rain

Great work!
If convenient, could you provide the Outdoor-Rain test set? I downloaded the visual results you provide and evaluated Histoformer with your PSNR/SSIM evaluation code on a test set I had downloaded earlier; the result was 31.673093507886737 / 0.929211313578245, which differs considerably from the results in the paper, possibly because my test set is not the one you used. Could you share the test set you used, so that I can make a fair comparison in my paper? Thanks!

About the PSNR computation

Hello Shangquan Sun! Thank you for your excellent work!

I would like to confirm two things:
1. Is the PSNR score in the paper computed only on the Y channel?
2. Is PSNR computed with evaluate_PSNR_SSIM.m from Restormer, or in Python? The two give slightly different scores.

Looking forward to your reply!
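For context on the Y-channel question above, Y-channel PSNR first converts RGB to luma. A common BT.601 sketch is below; note that MATLAB's rgb2ycbcr (as used by evaluate_PSNR_SSIM.m-style scripts) additionally scales and offsets the result, which is one reason MATLAB and plain-Python scores can differ slightly. This snippet is illustrative, not the repository's code.

```python
import numpy as np

def rgb_to_y(img):
    """Luma (Y) via BT.601 weights; img is H x W x 3, in [0, 1] or [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# White pixels map to maximum luma, since 0.299 + 0.587 + 0.114 == 1.0
y = rgb_to_y(np.ones((4, 4, 3)))
```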

About the dataset

Hello, according to the related paper, Snow100K has 50,000 images for training and Snow100K-M has 16,588 for testing, but the link you gave above does not contain that many images. Would it be convenient to provide a link to the complete dataset? Thank you for your work.

On the code implementation of bin-wise and frequency-wise histogram reshaping in Dynamic-range Histogram Self-Attention

Dear Shangquan Sun, thank you for your excellent work! I am very interested in the proposed Dynamic-range Histogram Self-Attention and will cite this work in my upcoming research.
According to the code you provide and the description in the paper: "To extract both global and local information, we define two types of reshaping, i.e., bin-wise histogram reshaping (BHR) and frequency-wise histogram reshaping (FHR). The first is to assign the number of bins equal to B and each bin contains HW/B elements, while the second is to set the frequency of each bin equal to B and the number of bins is HW/B."
I would like to ask how the number of bins B is implemented in the code. The line hw = q.shape[-1] // self.factor suggests the bin value is set to self.factor, and self.factor = num_heads, which seems inconsistent with Bin = 36 in the ablation study.
In addition, how are "setting the number of bins to B" (BHR) and "setting the frequency of each bin to B" (FHR) reflected in the code? If convenient, could you point out the relevant lines or add a written description?
Looking forward to your reply. My apologies if I have misunderstood your code!
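For readers puzzling over the same passage, the two reshaping schemes as quoted from the paper can be sketched as follows. This is a minimal interpretation of that description, not the repository's implementation; the tensor layout and the sort-by-intensity step are assumptions.

```python
import torch

def histogram_reshape(x, B):
    """Sketch of bin-wise (BHR) vs. frequency-wise (FHR) histogram reshaping.

    x: flattened spatial features of shape (C, H*W). Elements are sorted
    by intensity, then grouped:
      BHR: B bins, each containing H*W // B elements.
      FHR: each bin has frequency B, giving H*W // B bins.
    """
    C, HW = x.shape
    assert HW % B == 0, "this sketch assumes H*W is divisible by B"
    sorted_x, idx = torch.sort(x, dim=-1)      # sort by intensity
    bhr = sorted_x.reshape(C, B, HW // B)      # (C, bins=B,    freq=HW/B)
    fhr = sorted_x.reshape(C, HW // B, B)      # (C, bins=HW/B, freq=B)
    return bhr, fhr, idx                       # idx can undo the sort later

bhr, fhr, idx = histogram_reshape(torch.randn(4, 64), B=4)
# bhr: shape (4, 4, 16); fhr: shape (4, 16, 4)
```

Attention within a BHR bin then mixes same-intensity pixels regardless of spatial distance, while FHR changes which axis plays the role of the bin.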
