cf-net's Introduction

Hi, I'm Zhang Yutong, nice to see you! 👋

I'm a postgraduate student in the MC^2 Lab at Beihang University, working on High Dynamic Range Imaging (HDR), Multi-modal Image Restoration (MIR), and other computer vision tasks.

Official repositories:

  • CF-Net : Achieving multi-exposure fusion and image super-resolution simultaneously by exploring the latent relationship between the two tasks. (TIP 2021)

Paper implementations:

  • U2Fusion: A unified network for multi-modal image fusion. (TPAMI 2021)
  • CU-Net : An interpretable network for multi-modal image restoration and fusion. (TPAMI 2020)
  • NHDRRNet : A non-local network for ghost-free HDR imaging. (TIP 2020)
  • AHDRNet : An attention network for ghost-free HDR imaging. (CVPR 2019)

Academic Collections:

  • Awesome-HDR: A collection of HDR imaging papers and code that I have read and selected myself.

zyt's Stats

😄 Nice and Passionate!

cf-net's People

Contributors

ytzhang99

cf-net's Issues

Models

Is there a trained model?

Supplied model does not match the code

Please check that the versions of model.py and option.py match the provided .pth model.
Numerous state_dict names and dimensions do not match!

-------------------------------------------- Error message ------------------------------------------------
RuntimeError: Error(s) in loading state_dict for CFNet:
Missing key(s) in state_dict: "srb_1.compress_in.0.weight", "srb_1.compress_in.0.bias", "srb_1.compress_in.1.weight", "srb_1.upBlocks.0.0.weight", "srb_1.upBlocks.0.0.bias", "srb_1.upBlocks.0.1.weight", "srb_1.upBlocks.1.0.weight", "srb_1.upBlocks.1.0.bias", "srb_1.upBlocks.1.1.weight", "srb_1.upBlocks.2.0.weight", "srb_1.upBlocks.2.0.bias", "srb_1.upBlocks.2.1.weight", "srb_1.upBlocks.3.0.weight", "srb_1.upBlocks.3.0.bias", "srb_1.upBlocks.3.1.weight", "srb_1.upBlocks.4.0.weight", "srb_1.upBlocks.4.0.bias", "srb_1.upBlocks.4.1.weight", "srb_1.upBlocks.5.0.weight", "srb_1.upBlocks.5.0.bias", "srb_1.upBlocks.5.1.weight", "srb_1.downBlocks.0.0.weight", "srb_1.downBlocks.0.0.bias", "srb_1.downBlocks.0.1.weight", "srb_1.downBlocks.1.0.weight", "srb_1.downBlocks.1.0.bias", "srb_1.downBlocks.1.1.weight", "srb_1.downBlocks.2.0.weight", "srb_1.downBlocks.2.0.bias", "srb_1.downBlocks.2.1.weight", "srb_1.downBlocks.3.0.weight", "srb_1.downBlocks.3.0.bias", "srb_1.downBlocks.3.1.weight", "srb_1.downBlocks.4.0.weight", "srb_1.downBlocks.4.0.bias", "srb_1.downBlocks.4.1.weight", "srb_1.downBlocks.5.0.weight", "srb_1.downBlocks.5.0.bias", "srb_1.downBlocks.5.1.weight", "srb_1.uptranBlocks.0.0.weight", "srb_1.uptranBlocks.0.0.bias", "srb_1.uptranBlocks.0.1.weight", "srb_1.uptranBlocks.1.0.weight", "srb_1.uptranBlocks.1.0.bias", "srb_1.uptranBlocks.1.1.weight", "srb_1.uptranBlocks.2.0.weight", "srb_1.uptranBlocks.2.0.bias", "srb_1.uptranBlocks.2.1.weight", "srb_1.uptranBlocks.3.0.weight", "srb_1.uptranBlocks.3.0.bias", "srb_1.uptranBlocks.3.1.weight", "srb_1.uptranBlocks.4.0.weight", "srb_1.uptranBlocks.4.0.bias", "srb_1.uptranBlocks.4.1.weight", "srb_1.downtranBlocks.0.0.weight", "srb_1.downtranBlocks.0.0.bias", "srb_1.downtranBlocks.0.1.weight", "srb_1.downtranBlocks.1.0.weight", "srb_1.downtranBlocks.1.0.bias", "srb_1.downtranBlocks.1.1.weight", "srb_1.downtranBlocks.2.0.weight", "srb_1.downtranBlocks.2.0.bias", "srb_1.downtranBlocks.2.1.weight", "srb_1.downtranBlocks.3.0.weight", "srb_1.downtranBlocks.3.0.bias", "srb_1.downtranBlocks.3.1.weight", "srb_1.downtranBlocks.4.0.weight", "srb_1.downtranBlocks.4.0.bias", "srb_1.downtranBlocks.4.1.weight", "srb_1.compress_out.0.weight", "srb_1.compress_out.0.bias", "srb_1.compress_out.1.weight", "srb_2.compress_in.0.weight", "srb_2.compress_in.0.bias", "srb_2.compress_in.1.weight", "srb_2.upBlocks.0.0.weight", "srb_2.upBlocks.0.0.bias", "srb_2.upBlocks.0.1.weight", "srb_2.upBlocks.1.0.weight", "srb_2.upBlocks.1.0.bias", "srb_2.upBlocks.1.1.weight", "srb_2.upBlocks.2.0.weight", "srb_2.upBlocks.2.0.bias", "srb_2.upBlocks.2.1.weight", "srb_2.upBlocks.3.0.weight", "srb_2.upBlocks.3.0.bias", "srb_2.upBlocks.3.1.weight", "srb_2.upBlocks.4.0.weight", "srb_2.upBlocks.4.0.bias", "srb_2.upBlocks.4.1.weight", "srb_2.upBlocks.5.0.weight", "srb_2.upBlocks.5.0.bias", "srb_2.upBlocks.5.1.weight", "srb_2.downBlocks.0.0.weight", "srb_2.downBlocks.0.0.bias", "srb_2.downBlocks.0.1.weight", "srb_2.downBlocks.1.0.weight", "srb_2.downBlocks.1.0.bias", "srb_2.downBlocks.1.1.weight", "srb_2.downBlocks.2.0.weight", "srb_2.downBlocks.2.0.bias", "srb_2.downBlocks.2.1.weight", "srb_2.downBlocks.3.0.weight", "srb_2.downBlocks.3.0.bias", "srb_2.downBlocks.3.1.weight", "srb_2.downBlocks.4.0.weight", "srb_2.downBlocks.4.0.bias", "srb_2.downBlocks.4.1.weight", "srb_2.downBlocks.5.0.weight", "srb_2.downBlocks.5.0.bias", "srb_2.downBlocks.5.1.weight", "srb_2.uptranBlocks.0.0.weight", "srb_2.uptranBlocks.0.0.bias", "srb_2.uptranBlocks.0.1.weight", "srb_2.uptranBlocks.1.0.weight", 
"srb_2.uptranBlocks.1.0.bias", "srb_2.uptranBlocks.1.1.weight", "srb_2.uptranBlocks.2.0.weight", "srb_2.uptranBlocks.2.0.bias", "srb_2.uptranBlocks.2.1.weight", "srb_2.uptranBlocks.3.0.weight", "srb_2.uptranBlocks.3.0.bias", "srb_2.uptranBlocks.3.1.weight", "srb_2.uptranBlocks.4.0.weight", "srb_2.uptranBlocks.4.0.bias", "srb_2.uptranBlocks.4.1.weight", "srb_2.downtranBlocks.0.0.weight", "srb_2.downtranBlocks.0.0.bias", "srb_2.downtranBlocks.0.1.weight", "srb_2.downtranBlocks.1.0.weight", "srb_2.downtranBlocks.1.0.bias", "srb_2.downtranBlocks.1.1.weight", "srb_2.downtranBlocks.2.0.weight", "srb_2.downtranBlocks.2.0.bias", "srb_2.downtranBlocks.2.1.weight", "srb_2.downtranBlocks.3.0.weight", "srb_2.downtranBlocks.3.0.bias", "srb_2.downtranBlocks.3.1.weight", "srb_2.downtranBlocks.4.0.weight", "srb_2.downtranBlocks.4.0.bias", "srb_2.downtranBlocks.4.1.weight", "srb_2.compress_out.0.weight", "srb_2.compress_out.0.bias", "srb_2.compress_out.1.weight", "out_1.0.0.weight", "out_1.0.0.bias", "out_1.0.1.weight", "out_1.1.0.weight", "out_1.1.0.bias", "out_1.1.1.weight", "out_1.2.0.weight", "out_1.2.0.bias", "out_1.2.1.weight", "conv_out_1.0.0.weight", "conv_out_1.0.0.bias", "conv_out_1.1.0.weight", "conv_out_1.1.0.bias", "conv_out_1.2.0.weight", "conv_out_1.2.0.bias", "out_2.0.0.weight", "out_2.0.0.bias", "out_2.0.1.weight", "out_2.1.0.weight", "out_2.1.0.bias", "out_2.1.1.weight", "out_2.2.0.weight", "out_2.2.0.bias", "out_2.2.1.weight", "conv_out_2.0.0.weight", "conv_out_2.0.0.bias", "conv_out_2.1.0.weight", "conv_out_2.1.0.bias", "conv_out_2.2.0.weight", "conv_out_2.2.0.bias", "block_over_0.re_guide.0.weight", "block_over_0.re_guide.0.bias", "block_over_0.re_guide.1.weight", "block_under_0.re_guide.0.weight", "block_under_0.re_guide.0.bias", "block_under_0.re_guide.1.weight", "block_over_1.re_guide.0.weight", "block_over_1.re_guide.0.bias", "block_over_1.re_guide.1.weight", "block_under_1.re_guide.0.weight", "block_under_1.re_guide.0.bias", "block_under_1.re_guide.1.weight", "block_over_2.re_guide.0.weight", "block_over_2.re_guide.0.bias", "block_over_2.re_guide.1.weight", "block_under_2.re_guide.0.weight", "block_under_2.re_guide.0.bias", "block_under_2.re_guide.1.weight".
Unexpected key(s) in state_dict: "conv_in_over_1.0.weight", "conv_in_over_1.0.bias", "conv_in_over_1.1.weight", "feat_in_over_1.0.weight", "feat_in_over_1.0.bias", "feat_in_over_1.1.weight", "out_over_1.0.weight", "out_over_1.0.bias", "out_over_1.1.weight", "conv_out_over_1.0.weight", "conv_out_over_1.0.bias", "conv_in_under_1.0.weight", "conv_in_under_1.0.bias", "conv_in_under_1.1.weight", "feat_in_under_1.0.weight", "feat_in_under_1.0.bias", "feat_in_under_1.1.weight", "out_under_1.0.weight", "out_under_1.0.bias", "out_under_1.1.weight", "conv_out_under_1.0.weight", "conv_out_under_1.0.bias", "conv_in_over_2.0.weight", "conv_in_over_2.0.bias", "conv_in_over_2.1.weight", "feat_in_over_2.0.weight", "feat_in_over_2.0.bias", "feat_in_over_2.1.weight", "out_over_2.0.weight", "out_over_2.0.bias", "out_over_2.1.weight", "conv_out_over_2.0.weight", "conv_out_over_2.0.bias", "conv_in_under_2.0.weight", "conv_in_under_2.0.bias", "conv_in_under_2.1.weight", "feat_in_under_2.0.weight", "feat_in_under_2.0.bias", "feat_in_under_2.1.weight", "out_under_2.0.weight", "out_under_2.0.bias", "out_under_2.1.weight", "conv_out_under_2.0.weight", "conv_out_under_2.0.bias", "conv_in_over_3.0.weight", "conv_in_over_3.0.bias", "conv_in_over_3.1.weight", "feat_in_over_3.0.weight", "feat_in_over_3.0.bias", "feat_in_over_3.1.weight", "block_over_3.compress_in.0.weight", "block_over_3.compress_in.0.bias", "block_over_3.compress_in.1.weight", "block_over_3.upBlocks.0.0.weight", "block_over_3.upBlocks.0.0.bias", "block_over_3.upBlocks.0.1.weight", "block_over_3.upBlocks.1.0.weight", "block_over_3.upBlocks.1.0.bias", "block_over_3.upBlocks.1.1.weight", "block_over_3.upBlocks.2.0.weight", "block_over_3.upBlocks.2.0.bias", "block_over_3.upBlocks.2.1.weight", "block_over_3.upBlocks.3.0.weight", "block_over_3.upBlocks.3.0.bias", "block_over_3.upBlocks.3.1.weight", "block_over_3.upBlocks.4.0.weight", "block_over_3.upBlocks.4.0.bias", "block_over_3.upBlocks.4.1.weight", "block_over_3.upBlocks.5.0.weight", "block_over_3.upBlocks.5.0.bias", "block_over_3.upBlocks.5.1.weight", "block_over_3.downBlocks.0.0.weight", "block_over_3.downBlocks.0.0.bias", "block_over_3.downBlocks.0.1.weight", "block_over_3.downBlocks.1.0.weight", "block_over_3.downBlocks.1.0.bias", "block_over_3.downBlocks.1.1.weight", "block_over_3.downBlocks.2.0.weight", "block_over_3.downBlocks.2.0.bias", "block_over_3.downBlocks.2.1.weight", "block_over_3.downBlocks.3.0.weight", "block_over_3.downBlocks.3.0.bias", "block_over_3.downBlocks.3.1.weight", "block_over_3.downBlocks.4.0.weight", "block_over_3.downBlocks.4.0.bias", "block_over_3.downBlocks.4.1.weight", "block_over_3.downBlocks.5.0.weight", "block_over_3.downBlocks.5.0.bias", "block_over_3.downBlocks.5.1.weight", "block_over_3.uptranBlocks.0.0.weight", "block_over_3.uptranBlocks.0.0.bias", "block_over_3.uptranBlocks.0.1.weight", "block_over_3.uptranBlocks.1.0.weight", "block_over_3.uptranBlocks.1.0.bias", "block_over_3.uptranBlocks.1.1.weight", "block_over_3.uptranBlocks.2.0.weight", "block_over_3.uptranBlocks.2.0.bias", "block_over_3.uptranBlocks.2.1.weight", "block_over_3.uptranBlocks.3.0.weight", "block_over_3.uptranBlocks.3.0.bias", "block_over_3.uptranBlocks.3.1.weight", "block_over_3.uptranBlocks.4.0.weight", "block_over_3.uptranBlocks.4.0.bias", "block_over_3.uptranBlocks.4.1.weight", "block_over_3.downtranBlocks.0.0.weight", "block_over_3.downtranBlocks.0.0.bias", "block_over_3.downtranBlocks.0.1.weight", "block_over_3.downtranBlocks.1.0.weight", "block_over_3.downtranBlocks.1.0.bias", 
"block_over_3.downtranBlocks.1.1.weight", "block_over_3.downtranBlocks.2.0.weight", "block_over_3.downtranBlocks.2.0.bias", "block_over_3.downtranBlocks.2.1.weight", "block_over_3.downtranBlocks.3.0.weight", "block_over_3.downtranBlocks.3.0.bias", "block_over_3.downtranBlocks.3.1.weight", "block_over_3.downtranBlocks.4.0.weight", "block_over_3.downtranBlocks.4.0.bias", "block_over_3.downtranBlocks.4.1.weight", "block_over_3.compress_out.0.weight", "block_over_3.compress_out.0.bias", "block_over_3.compress_out.1.weight", "out_over_3.0.weight", "out_over_3.0.bias", "out_over_3.1.weight", "conv_out_over_3.0.weight", "conv_out_over_3.0.bias", "conv_in_under_3.0.weight", "conv_in_under_3.0.bias", "conv_in_under_3.1.weight", "feat_in_under_3.0.weight", "feat_in_under_3.0.bias", "feat_in_under_3.1.weight", "block_under_3.compress_in.0.weight", "block_under_3.compress_in.0.bias", "block_under_3.compress_in.1.weight", "block_under_3.upBlocks.0.0.weight", "block_under_3.upBlocks.0.0.bias", "block_under_3.upBlocks.0.1.weight", "block_under_3.upBlocks.1.0.weight", "block_under_3.upBlocks.1.0.bias", "block_under_3.upBlocks.1.1.weight", "block_under_3.upBlocks.2.0.weight", "block_under_3.upBlocks.2.0.bias", "block_under_3.upBlocks.2.1.weight", "block_under_3.upBlocks.3.0.weight", "block_under_3.upBlocks.3.0.bias", "block_under_3.upBlocks.3.1.weight", "block_under_3.upBlocks.4.0.weight", "block_under_3.upBlocks.4.0.bias", "block_under_3.upBlocks.4.1.weight", "block_under_3.upBlocks.5.0.weight", "block_under_3.upBlocks.5.0.bias", "block_under_3.upBlocks.5.1.weight", "block_under_3.downBlocks.0.0.weight", "block_under_3.downBlocks.0.0.bias", "block_under_3.downBlocks.0.1.weight", "block_under_3.downBlocks.1.0.weight", "block_under_3.downBlocks.1.0.bias", "block_under_3.downBlocks.1.1.weight", "block_under_3.downBlocks.2.0.weight", "block_under_3.downBlocks.2.0.bias", "block_under_3.downBlocks.2.1.weight", "block_under_3.downBlocks.3.0.weight", "block_under_3.downBlocks.3.0.bias", "block_under_3.downBlocks.3.1.weight", "block_under_3.downBlocks.4.0.weight", "block_under_3.downBlocks.4.0.bias", "block_under_3.downBlocks.4.1.weight", "block_under_3.downBlocks.5.0.weight", "block_under_3.downBlocks.5.0.bias", "block_under_3.downBlocks.5.1.weight", "block_under_3.uptranBlocks.0.0.weight", "block_under_3.uptranBlocks.0.0.bias", "block_under_3.uptranBlocks.0.1.weight", "block_under_3.uptranBlocks.1.0.weight", "block_under_3.uptranBlocks.1.0.bias", "block_under_3.uptranBlocks.1.1.weight", "block_under_3.uptranBlocks.2.0.weight", "block_under_3.uptranBlocks.2.0.bias", "block_under_3.uptranBlocks.2.1.weight", "block_under_3.uptranBlocks.3.0.weight", "block_under_3.uptranBlocks.3.0.bias", "block_under_3.uptranBlocks.3.1.weight", "block_under_3.uptranBlocks.4.0.weight", "block_under_3.uptranBlocks.4.0.bias", "block_under_3.uptranBlocks.4.1.weight", "block_under_3.downtranBlocks.0.0.weight", "block_under_3.downtranBlocks.0.0.bias", "block_under_3.downtranBlocks.0.1.weight", "block_under_3.downtranBlocks.1.0.weight", "block_under_3.downtranBlocks.1.0.bias", "block_under_3.downtranBlocks.1.1.weight", "block_under_3.downtranBlocks.2.0.weight", "block_under_3.downtranBlocks.2.0.bias", "block_under_3.downtranBlocks.2.1.weight", "block_under_3.downtranBlocks.3.0.weight", "block_under_3.downtranBlocks.3.0.bias", "block_under_3.downtranBlocks.3.1.weight", "block_under_3.downtranBlocks.4.0.weight", "block_under_3.downtranBlocks.4.0.bias", "block_under_3.downtranBlocks.4.1.weight", "block_under_3.compress_out.0.weight", 
"block_under_3.compress_out.0.bias", "block_under_3.compress_out.1.weight", "out_under_3.0.weight", "out_under_3.0.bias", "out_under_3.1.weight", "conv_out_under_3.0.weight", "conv_out_under_3.0.bias", "conv_in_under_0.1.weight".
size mismatch for block_over_0.compress_in.0.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
size mismatch for block_under_0.compress_in.0.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
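
The long lists of missing and unexpected keys above are what load_state_dict prints when the checkpoint was exported from a different revision of model.py than the one being instantiated (srb_*/out_*/block_* layers on one side, conv_in_over_*/feat_in_over_* layers on the other). As a quick diagnostic before loading, you can compare the two key sets directly; this is only a sketch, and the CFNet constructor arguments, the option import, and the checkpoint filename are assumptions to adapt to the actual code:

    import torch
    from model import CFNet      # network definition shipped with this repository
    from option import args      # parsed options; the exact import name is an assumption

    model = CFNet(args)          # build the model the same way test.py does (constructor signature assumed)
    ckpt = torch.load("cfnet.pth", map_location="cpu")        # checkpoint path is illustrative
    state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

    model_keys = set(model.state_dict())
    ckpt_keys = set(state)
    print("in model but missing from checkpoint:", sorted(model_keys - ckpt_keys)[:10])
    print("in checkpoint but unexpected in model:", sorted(ckpt_keys - model_keys)[:10])

If the two key sets barely overlap, the .pth file and the code are from different versions and need to be matched up; loading with strict=False will not rescue a mismatch this large.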

mistakes

Please check that the versions of model.py and option.py match the provided .pth model.
Numerous state_dict names and dimensions do not match!

-------------------------------------------- Error message ------------------------------------------------
RuntimeError: Error(s) in loading state_dict for CFNet:
size mismatch for srb_1.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.upBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.upBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.upBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.upBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.upBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.downBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.downBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.downBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.downBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.downBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_1.downBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for out_over.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.upBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.upBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.upBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.upBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.upBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.downBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.downBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.downBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.downBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.downBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for srb_2.downBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for out_under.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for out_1.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for out_1.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for out_1.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for out_2.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for out_2.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for out_2.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.upBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.upBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.upBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.upBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.upBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.downBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.downBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.downBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.downBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.downBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over0.downBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.upBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.upBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.upBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.upBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.upBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.downBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.downBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.downBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.downBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.downBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under0.downBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.upBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.upBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.upBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.upBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.upBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.downBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.downBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.downBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.downBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.downBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over1.downBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.upBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.upBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.upBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.upBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.upBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.downBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.downBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.downBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.downBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.downBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under1.downBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.upBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.upBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.upBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.upBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.upBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.downBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.downBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.downBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.downBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.downBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_over2.downBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.upBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.upBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.upBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.upBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.upBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.downBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.downBlocks.1.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.downBlocks.2.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.downBlocks.3.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.downBlocks.4.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
size mismatch for cfb_under2.downBlocks.5.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
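
Every mismatch above is an 8x8 kernel in the checkpoint versus a 6x6 kernel in the freshly built model. In SRFBN/DBPN-style up- and down-projection blocks (which these upBlocks/downBlocks appear to be), the deconvolution kernel size is tied to the scale factor (kernel 6, stride 2 for 2x; kernel 8, stride 4 for 4x), so this pattern usually means the scale set in option.py differs from the one the checkpoint was trained with. A small sketch to list the disagreeing parameters, with the option import, constructor signature, and checkpoint path assumed:

    import torch
    from model import CFNet
    from option import args      # assumed options object; the scale setting lives here

    model = CFNet(args)                                  # constructor signature assumed
    ckpt = torch.load("cfnet.pth", map_location="cpu")   # checkpoint path is illustrative
    state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

    # List every parameter whose shape disagrees between the built model and the checkpoint.
    for name, param in model.state_dict().items():
        if name in state and tuple(param.shape) != tuple(state[name].shape):
            print(name, "model", tuple(param.shape), "checkpoint", tuple(state[name].shape))

If only the 6-vs-8 kernels are reported, rebuilding the model with the 4x setting (or using the checkpoint that matches your scale) should make the shapes line up.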

quantitative performance

Hi, Zhang!
Excellent work. But I wonder which function you used to calculate PSNR and SSIM. The result obtained by the function in test.py does not seem to match the score in Table V (e.g. for 4x, the re-evaluated result is about 20.5 dB).
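
For reference, below is one common way to re-evaluate the outputs with scikit-image. It is only a sketch that assumes the metrics are computed on the full RGB image in [0, 1]; whether Table V was computed on RGB or on the Y channel, and with what border cropping, is not stated here, and that choice alone can account for a gap of the size described.

    import numpy as np
    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Paths are illustrative: one super-resolved/fused output and its ground truth.
    sr = io.imread("result.png").astype(np.float64) / 255.0
    gt = io.imread("gt.png").astype(np.float64) / 255.0

    psnr = peak_signal_noise_ratio(gt, sr, data_range=1.0)
    ssim = structural_similarity(gt, sr, data_range=1.0, channel_axis=-1)  # use multichannel=True on older scikit-image
    print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")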

RuntimeError

RuntimeError: stack expects each tensor to be equal size, but got [3, 128, 128] at entry 0 and [3, 28, 128] at entry 1
I have processed the data as requested, so is there any other processing step that is needed?

File "D:\CF-Net\CF-Net-master\CF-Net-master\train.py", line 57, in train
for l_over, l_under, h_over, h_under, h in self.train_loader:
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data\dataloader.py", line 521, in next
data = self._next_data()
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data\dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data_utils\collate.py", line 84, in default_collate
return [default_collate(samples) for samples in transposed]
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data_utils\collate.py", line 84, in
return [default_collate(samples) for samples in transposed]
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data_utils\collate.py", line 56, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 128, 128] at entry 0 and [3, 28, 128] at entry 1
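
The [3, 28, 128] tensor at entry 1 means one of the loaded patches is only 28 pixels high, i.e. at least one prepared image (or its low-resolution counterpart) is smaller than the 128x128 crop the loader expects, so default_collate cannot stack the batch. A quick sketch to locate such files before training; the data directory and the 128 threshold are assumptions taken from the error above:

    import os
    from PIL import Image

    data_dir = "dataset/train"   # illustrative: point this at the prepared training folder
    min_size = 128               # expected crop size, taken from the error message above

    # Report every image that is smaller than the expected patch size in either dimension.
    for root, _, files in os.walk(data_dir):
        for name in files:
            if name.lower().endswith((".png", ".jpg", ".jpeg", ".bmp", ".tif")):
                w, h = Image.open(os.path.join(root, name)).size
                if w < min_size or h < min_size:
                    print("too small:", os.path.join(root, name), (w, h))

Resizing or removing those images, or reducing the patch size in option.py, restores equally sized tensors for torch.stack.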

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    ๐Ÿ–– Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. ๐Ÿ“Š๐Ÿ“ˆ๐ŸŽ‰

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google โค๏ธ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.