
enet-real-time-semantic-segmentation's Introduction

ENet - Real Time Semantic Segmentation

A neural network architecture for real-time semantic segmentation.
In this repository we have reproduced the ENet paper, an architecture light enough to be used on mobile devices for real-time semantic segmentation. The link to the paper can be found here: ENet

How to use?

  1. This repository comes with a handy notebook that you can use with Colab.
    You can find a link to the notebook here: ENet - Real Time Semantic Segmentation
    Open it in Colab: Open in Colab

  2. Clone the repository and cd into it
git clone https://github.com/iArunava/ENet-Real-Time-Semantic-Segmentation.git
cd ENet-Real-Time-Semantic-Segmentation/
  3. Use this command to train the model
python3 init.py --mode train -iptr path/to/train/input/set/ -lptr /path/to/label/set/
  4. Use this command to test the model
python3 init.py --mode test -m /path/to/the/pretrained/model.pth -i /path/to/image/to/infer.png
  5. Use --help to see all the available options
python3 init.py --help
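
If you would rather call the trained network from your own script than through init.py, the following is a minimal sketch. Based on the tracebacks in the issues below, it assumes that models/ENet.py exposes an ENet class taking the number of classes as its constructor argument and that checkpoints store the weights under a 'state_dict' key; the paths, image size and class count are placeholders.

import torch
import numpy as np
from PIL import Image

from models.ENet import ENet

num_classes = 12                                   # must match the value used at training time
enet = ENet(num_classes)

checkpoint = torch.load('datasets/CamVid/ckpt-enet.pth', map_location='cpu')
enet.load_state_dict(checkpoint['state_dict'])
enet.eval()

# load an image and turn it into a (1, 3, H, W) float tensor
img = Image.open('path/to/image.png').convert('RGB').resize((512, 512))
x = torch.from_numpy(np.array(img)).permute(2, 0, 1).unsqueeze(0).float() / 255.0

with torch.no_grad():
    logits = enet(x)                               # shape: (1, num_classes, H, W)
    pred = logits.argmax(dim=1).squeeze(0)         # per-pixel class indices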

Some results

(Sample inference results are shown as images in the original README.)

References

  1. A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016.

Citations

@inproceedings{ BrostowSFC:ECCV08,
  author    = {Gabriel J. Brostow and Jamie Shotton and Julien Fauqueur and Roberto Cipolla},
  title     = {Segmentation and Recognition Using Structure from Motion Point Clouds},
  booktitle = {ECCV (1)},
  year      = {2008},
  pages     = {44-57}
}

@article{ BrostowFC:PRL2008,
    author = "Gabriel J. Brostow and Julien Fauqueur and Roberto Cipolla",
    title = "Semantic Object Classes in Video: A High-Definition Ground Truth Database",
    journal = "Pattern Recognition Letters",
    volume = "xx",
    number = "x",   
    pages = "xx-xx",
    year = "2008"
}

License

The code in this repository is distributed under the BSD 3-Clause License.
Feel free to fork and enjoy :)

enet-real-time-semantic-segmentation's People

Contributors

avivsham, iarunava


enet-real-time-semantic-segmentation's Issues

@AvivSham Thank you for your reply! Not the test image. The image I am using is in JPG format and the image size is 266*200. I didn't train a new model, I used the model you provided. I modified your code.
Like this:
(screenshot of the modified code)

I just used images from the CamVid dataset; this is the result:
(screenshot of the segmentation result)
But judging from the colors, the classes are not right.

Originally posted by @zhouzhubin in #3 (comment)

getting unrecognised issue

python init.py --mode test -m /datasets/CamVid/ckpt-enet.pth -i img.jpg

Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "/home/vikram/OBJECT DETECTION & TRACKING/ENet-Real-Time-Semantic-Segmentation/test.py", line 12, in test
    if not FLAGS.model_path.endswith('.pth'):
AttributeError: 'Namespace' object has no attribute 'model_path'

More a question, than an issue

Hi Arunava,

I find your work very interesting, because I need to use segmentation on an NVIDIA Jetson, and popular models are too slow for me. I read your article and the original paper, but I can't find information about the number of epochs needed to train the model to good performance. I used your notebook to train pedestrian segmentation (2 classes: pedestrian and background). After 100 epochs the model couldn't segment anything at all; the masks were empty. Then I used your pretrained weights and fine-tuned the model for 2 classes, 100 epochs again. This time I got segmentation masks, but they were completely inaccurate. How long should I train ENet to get accurate results? Or should I already be getting reasonable results, which would mean I'm doing something wrong (perhaps with the data)?
Could you also give me a hint about the losses that are printed during training? What are sensible values? I'm getting very big numbers or sometimes negative values, and I feel like something is wrong.
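
For reference, the class weighting scheme used in the ENet paper (which the repository's get_class_weights is meant to implement) is w_class = 1 / ln(c + p_class) with c = 1.02. A small sketch, assuming labels is an integer array of per-pixel class ids; heavily imbalanced or wrongly encoded labels are a common cause of very large loss values.

import numpy as np

def enet_class_weights(labels, num_classes, c=1.02):
    # per-class pixel frequency over the whole label set
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(np.float64)
    p = counts / counts.sum()
    # rare classes receive larger weights; weights stay positive and bounded
    return 1.0 / np.log(c + p)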

size mismatch

this error occurred when I ran the test

C:\Users\vcvis\Desktop\ENet-Real-Time-Semantic-Segmentation-master>python init.py --mode test -m ckpt-enet-10-379.48720532655716.pth -i 0006R0_f01320.png
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "C:\Users\vcvis\Desktop\ENet-Real-Time-Semantic-Segmentation-master\test.py", line 24, in test
    enet.load_state_dict(checkpoint['state_dict'])
  File "D:\miniconda\lib\site-packages\torch\nn\modules\module.py", line 769, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ENet:
        size mismatch for fullconv.weight: copying a param with shape torch.Size([16, 102, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 12, 3, 3]).

guide me
thank you
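
This error means the number of output classes used to build the ENet instance at test time differs from the one the checkpoint was trained with: fullconv is the final transposed convolution, and the second dimension of its weight is the class count. A hedged way to check, with the checkpoint path as a placeholder:

import torch

state = torch.load('ckpt-enet-10-379.48720532655716.pth', map_location='cpu')['state_dict']
# second dimension of the transposed-convolution weight = number of classes at training time
print('checkpoint was trained with', state['fullconv.weight'].shape[1], 'classes')

Instantiating the model with that same class count (or retraining with the intended one) makes the shapes line up.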

Missing keys with vanilla repository

The pretrained weights shipped with the vanilla repository throw the following errors:

Missing key(s) in state_dict: "b10.batchnorm1.weight", "b10.batchnorm1.bias", "b10.batchnorm1.running_mean", "b10.batchnorm1.running_var", "b10.batchnorm3.weight", "b10.batchnorm3.bias", "b10.batchnorm3.running_mean", "b10.batchnorm3.running_var", "b11.batchnorm1.weight", "b11.batchnorm1.bias", "b11.batchnorm1.running_mean", "b11.batchnorm1.running_var", "b11.batchnorm3.weight", "b11.batchnorm3.bias", "b11.batchnorm3.running_mean", "b11.batchnorm3.running_var", "b12.batchnorm1.weight", "b12.batchnorm1.bias", "b12.batchnorm1.running_mean", "b12.batchnorm1.running_var", "b12.batchnorm3.weight", "b12.batchnorm3.bias", "b12.batchnorm3.running_mean", "b12.batchnorm3.running_var", "b13.batchnorm1.weight", "b13.batchnorm1.bias", "b13.batchnorm1.running_mean", "b13.batchnorm1.running_var", "b13.batchnorm3.weight", "b13.batchnorm3.bias", "b13.batchnorm3.running_mean", "b13.batchnorm3.running_var", "b14.batchnorm1.weight", "b14.batchnorm1.bias", "b14.batchnorm1.running_mean", "b14.batchnorm1.running_var", "b14.batchnorm3.weight", "b14.batchnorm3.bias", "b14.batchnorm3.running_mean", "b14.batchnorm3.running_var", "b20.batchnorm1.weight", "b20.batchnorm1.bias", "b20.batchnorm1.running_mean", "b20.batchnorm1.running_var", "b20.batchnorm3.weight", "b20.batchnorm3.bias", "b20.batchnorm3.running_mean", "b20.batchnorm3.running_var", "b21.batchnorm1.weight", "b21.batchnorm1.bias", "b21.batchnorm1.running_mean", "b21.batchnorm1.running_var", "b21.batchnorm3.weight", "b21.batchnorm3.bias", "b21.batchnorm3.running_mean", "b21.batchnorm3.running_var", "b22.batchnorm1.weight", "b22.batchnorm1.bias", "b22.batchnorm1.running_mean", "b22.batchnorm1.running_var", "b22.batchnorm3.weight", "b22.batchnorm3.bias", "b22.batchnorm3.running_mean", "b22.batchnorm3.running_var", "b23.batchnorm1.weight", "b23.batchnorm1.bias", "b23.batchnorm1.running_mean", "b23.batchnorm1.running_var", "b23.batchnorm3.weight", "b23.batchnorm3.bias", "b23.batchnorm3.running_mean", "b23.batchnorm3.running_var", "b24.batchnorm1.weight", "b24.batchnorm1.bias", "b24.batchnorm1.running_mean", "b24.batchnorm1.running_var", "b24.batchnorm3.weight", "b24.batchnorm3.bias", "b24.batchnorm3.running_mean", "b24.batchnorm3.running_var", "b25.batchnorm1.weight", "b25.batchnorm1.bias", "b25.batchnorm1.running_mean", "b25.batchnorm1.running_var", "b25.batchnorm3.weight", "b25.batchnorm3.bias", "b25.batchnorm3.running_mean", "b25.batchnorm3.running_var", "b26.batchnorm1.weight", "b26.batchnorm1.bias", "b26.batchnorm1.running_mean", "b26.batchnorm1.running_var", "b26.batchnorm3.weight", "b26.batchnorm3.bias", "b26.batchnorm3.running_mean", "b26.batchnorm3.running_var", "b27.batchnorm1.weight", "b27.batchnorm1.bias", "b27.batchnorm1.running_mean", "b27.batchnorm1.running_var", "b27.batchnorm3.weight", "b27.batchnorm3.bias", "b27.batchnorm3.running_mean", "b27.batchnorm3.running_var", "b28.batchnorm1.weight", "b28.batchnorm1.bias", "b28.batchnorm1.running_mean", "b28.batchnorm1.running_var", "b28.batchnorm3.weight", "b28.batchnorm3.bias", "b28.batchnorm3.running_mean",
"b28.batchnorm3.running_var", "b31.batchnorm1.weight", "b31.batchnorm1.bias", "b31.batchnorm1.running_mean", "b31.batchnorm1.running_var", "b31.batchnorm3.weight", "b31.batchnorm3.bias", "b31.batchnorm3.running_mean", "b31.batchnorm3.running_var", "b32.batchnorm1.weight", "b32.batchnorm1.bias", "b32.batchnorm1.running_mean", "b32.batchnorm1.running_var", "b32.batchnorm3.weight", "b32.batchnorm3.bias", "b32.batchnorm3.running_mean", "b32.batchnorm3.running_var", "b33.batchnorm1.weight", "b33.batchnorm1.bias", "b33.batchnorm1.running_mean", "b33.batchnorm1.running_var", "b33.batchnorm3.weight", "b33.batchnorm3.bias", "b33.batchnorm3.running_mean", "b33.batchnorm3.running_var", "b34.batchnorm1.weight", "b34.batchnorm1.bias", "b34.batchnorm1.running_mean", "b34.batchnorm1.running_var", "b34.batchnorm3.weight", "b34.batchnorm3.bias", "b34.batchnorm3.running_mean", "b34.batchnorm3.running_var", "b35.batchnorm1.weight", "b35.batchnorm1.bias", "b35.batchnorm1.running_mean", "b35.batchnorm1.running_var", "b35.batchnorm3.weight", "b35.batchnorm3.bias", "b35.batchnorm3.running_mean", "b35.batchnorm3.running_var", "b36.batchnorm1.weight", "b36.batchnorm1.bias", "b36.batchnorm1.running_mean", "b36.batchnorm1.running_var", "b36.batchnorm3.weight", "b36.batchnorm3.bias", "b36.batchnorm3.running_mean", "b36.batchnorm3.running_var", "b37.batchnorm1.weight", "b37.batchnorm1.bias", "b37.batchnorm1.running_mean", "b37.batchnorm1.running_var", "b37.batchnorm3.weight", "b37.batchnorm3.bias", "b37.batchnorm3.running_mean", "b37.batchnorm3.running_var", "b38.batchnorm1.weight", "b38.batchnorm1.bias", "b38.batchnorm1.running_mean", "b38.batchnorm1.running_var", "b38.batchnorm3.weight", "b38.batchnorm3.bias", "b38.batchnorm3.running_mean", "b38.batchnorm3.running_var", "b40.batchnorm1.weight", "b40.batchnorm1.bias", "b40.batchnorm1.running_mean", "b40.batchnorm1.running_var", "b40.batchnorm3.weight", "b40.batchnorm3.bias", "b40.batchnorm3.running_mean", "b40.batchnorm3.running_var", "b41.batchnorm1.weight", "b41.batchnorm1.bias",
"b41.batchnorm1.running_mean", "b41.batchnorm1.running_var", "b41.batchnorm3.weight", "b41.batchnorm3.bias", "b41.batchnorm3.running_mean", "b41.batchnorm3.running_var",
"b42.batchnorm1.weight", "b42.batchnorm1.bias", "b42.batchnorm1.running_mean", "b42.batchnorm1.running_var", "b42.batchnorm3.weight", "b42.batchnorm3.bias", "b42.batchnorm3.running_mean", "b42.batchnorm3.running_var", "b50.batchnorm1.weight", "b50.batchnorm1.bias", "b50.batchnorm1.running_mean", "b50.batchnorm1.running_var", "b50.batchnorm3.weight", "b50.batchnorm3.bias", "b50.batchnorm3.running_mean", "b50.batchnorm3.running_var", "b51.batchnorm1.weight", "b51.batchnorm1.bias", "b51.batchnorm1.running_mean", "b51.batchnorm1.running_var", "b51.batchnorm3.weight", "b51.batchnorm3.bias", "b51.batchnorm3.running_mean", "b51.batchnorm3.running_var".
Unexpected key(s) in state_dict: "b10.batchnorm.weight", "b10.batchnorm.bias", "b10.batchnorm.running_mean", "b10.batchnorm.running_var", "b10.batchnorm.num_batches_tracked", "b11.batchnorm.weight", "b11.batchnorm.bias", "b11.batchnorm.running_mean", "b11.batchnorm.running_var", "b11.batchnorm.num_batches_tracked", "b12.batchnorm.weight", "b12.batchnorm.bias", "b12.batchnorm.running_mean", "b12.batchnorm.running_var", "b12.batchnorm.num_batches_tracked", "b13.batchnorm.weight", "b13.batchnorm.bias", "b13.batchnorm.running_mean", "b13.batchnorm.running_var", "b13.batchnorm.num_batches_tracked", "b14.batchnorm.weight", "b14.batchnorm.bias", "b14.batchnorm.running_mean", "b14.batchnorm.running_var", "b14.batchnorm.num_batches_tracked", "b20.batchnorm.weight", "b20.batchnorm.bias", "b20.batchnorm.running_mean", "b20.batchnorm.running_var", "b20.batchnorm.num_batches_tracked", "b21.batchnorm.weight", "b21.batchnorm.bias", "b21.batchnorm.running_mean", "b21.batchnorm.running_var", "b21.batchnorm.num_batches_tracked", "b22.batchnorm.weight", "b22.batchnorm.bias", "b22.batchnorm.running_mean", "b22.batchnorm.running_var", "b22.batchnorm.num_batches_tracked", "b23.batchnorm.weight", "b23.batchnorm.bias", "b23.batchnorm.running_mean", "b23.batchnorm.running_var", "b23.batchnorm.num_batches_tracked", "b24.batchnorm.weight", "b24.batchnorm.bias", "b24.batchnorm.running_mean", "b24.batchnorm.running_var", "b24.batchnorm.num_batches_tracked", "b25.batchnorm.weight", "b25.batchnorm.bias", "b25.batchnorm.running_mean", "b25.batchnorm.running_var", "b25.batchnorm.num_batches_tracked", "b26.batchnorm.weight", "b26.batchnorm.bias", "b26.batchnorm.running_mean", "b26.batchnorm.running_var", "b26.batchnorm.num_batches_tracked", "b27.batchnorm.weight", "b27.batchnorm.bias", "b27.batchnorm.running_mean", "b27.batchnorm.running_var", "b27.batchnorm.num_batches_tracked", "b28.batchnorm.weight", "b28.batchnorm.bias", "b28.batchnorm.running_mean", "b28.batchnorm.running_var", "b28.batchnorm.num_batches_tracked", "b31.batchnorm.weight", "b31.batchnorm.bias", "b31.batchnorm.running_mean", "b31.batchnorm.running_var", "b31.batchnorm.num_batches_tracked", "b32.batchnorm.weight", "b32.batchnorm.bias", "b32.batchnorm.running_mean", "b32.batchnorm.running_var", "b32.batchnorm.num_batches_tracked", "b33.batchnorm.weight", "b33.batchnorm.bias", "b33.batchnorm.running_mean", "b33.batchnorm.running_var", "b33.batchnorm.num_batches_tracked", "b34.batchnorm.weight", "b34.batchnorm.bias", "b34.batchnorm.running_mean", "b34.batchnorm.running_var", "b34.batchnorm.num_batches_tracked", "b35.batchnorm.weight", "b35.batchnorm.bias", "b35.batchnorm.running_mean", "b35.batchnorm.running_var", "b35.batchnorm.num_batches_tracked", "b36.batchnorm.weight", "b36.batchnorm.bias", "b36.batchnorm.running_mean", "b36.batchnorm.running_var", "b36.batchnorm.num_batches_tracked", "b37.batchnorm.weight", "b37.batchnorm.bias", "b37.batchnorm.running_mean", "b37.batchnorm.running_var", "b37.batchnorm.num_batches_tracked", "b38.batchnorm.weight", "b38.batchnorm.bias", "b38.batchnorm.running_mean", "b38.batchnorm.running_var", "b38.batchnorm.num_batches_tracked", "b40.batchnorm.weight", "b40.batchnorm.bias", "b40.batchnorm.running_mean", "b40.batchnorm.running_var", "b40.batchnorm.num_batches_tracked", "b41.batchnorm.weight", "b41.batchnorm.bias", "b41.batchnorm.running_mean", "b41.batchnorm.running_var", "b41.batchnorm.num_batches_tracked", "b42.batchnorm.weight", "b42.batchnorm.bias", "b42.batchnorm.running_mean", "b42.batchnorm.running_var", 
"b42.batchnorm.num_batches_tracked", "b50.batchnorm.weight", "b50.batchnorm.bias", "b50.batchnorm.running_mean", "b50.batchnorm.running_var", "b50.batchnorm.num_batches_tracked", "b51.batchnorm.weight", "b51.batchnorm.bias", "b51.batchnorm.running_mean", "b51.batchnorm.running_var", "b51.batchnorm.num_batches_tracked".
size mismatch for b10.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b10.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b10.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b10.batchnorm2.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b11.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b11.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b11.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b11.batchnorm2.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b12.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b12.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b12.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b12.batchnorm2.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b13.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b13.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b13.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b13.batchnorm2.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b14.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b14.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b14.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b14.batchnorm2.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b20.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b20.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b20.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b20.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b21.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b21.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b21.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b21.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b22.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b22.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b22.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b22.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b23.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b23.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b23.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b23.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b24.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b24.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b24.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b24.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b25.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b25.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b25.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b25.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b26.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b26.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b26.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b26.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b27.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b27.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b27.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b27.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b28.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b28.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b28.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b28.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b31.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b31.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b31.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b31.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b32.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b32.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b32.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b32.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b33.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b33.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b33.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b33.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b34.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b34.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b34.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b34.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b35.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b35.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b35.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b35.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b36.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b36.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b36.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b36.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b37.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b37.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b37.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b37.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b38.batchnorm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b38.batchnorm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b38.batchnorm2.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b38.batchnorm2.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b40.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b40.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b40.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b40.batchnorm2.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for b41.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b41.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b41.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b41.batchnorm2.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b42.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b42.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b42.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b42.batchnorm2.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for b51.batchnorm2.weight: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b51.batchnorm2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b51.batchnorm2.running_mean: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b51.batchnorm2.running_var: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([4]).

Not sure what's going on.
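
The key names above suggest the checkpoint was saved with an older revision of models/ENet.py (a single batchnorm per bottleneck) than the one currently in the repository (batchnorm1/2/3), so the state dict no longer lines up. A hedged sketch that loads only the tensors whose names and shapes still match, mainly as a way to see how far the two definitions have diverged; for a fully working load you need the model code revision that produced the checkpoint:

import torch
from models.ENet import ENet

enet = ENet(12)                                   # class count is a placeholder
ckpt = torch.load('datasets/CamVid/ckpt-enet.pth', map_location='cpu')['state_dict']

model_state = enet.state_dict()
# keep only tensors whose name AND shape match the current model definition
compatible = {k: v for k, v in ckpt.items()
              if k in model_state and v.shape == model_state[k].shape}
print('%d of %d tensors match' % (len(compatible), len(model_state)))
model_state.update(compatible)
enet.load_state_dict(model_state)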

Size mismatch for fullconv.weight

Hi,

When testing a model I trained myself it gives me the following error:
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "/home/alpasfly/tfg/ENet-Real-Time-Semantic-Segmentation/test.py", line 24, in test
    enet.load_state_dict(checkpoint['state_dict'])
  File "/home/alpasfly/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ENet:
        size mismatch for fullconv.weight: copying a param with shape torch.Size([16, 102, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 12, 3, 3]).
I saw a closed issue with the same problem and it appeared to be solved, but apparently it isn't. Any suggestions?

Thanks in advance.

Doing transfer learning

I'm trying to do transfer learning with this network. I tried loading the already known parameters first and then freezing the shallowest layers' parameters like this:

# Get an instance of the model
enet = ENet(nc)
print ('[INFO]Model Instantiated!')

# Move the model to cuda if available
enet = enet.to(device)

# Transfer learnt weights
pretrained_dict = torch.load('./datasets/CamVid/ckpt-enet.pth')['state_dict']
model_dict = enet.state_dict()

# 1. filter out unnecessary keys
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# 2. overwrite entries in the existing state dict
model_dict.update(pretrained_dict) 
# 3. load the new state dict
enet.load_state_dict(model_dict)


# Choose frozen layers
count=0
for child in enet.children():
    if count<frozen_layers:
        for param in child.parameters():
            param.requires_grad=False
            count+=1

But I get an error in load_state_dict(model_dict) saying that parameters do not match, not even the ones from the very first layers:

[INFO]Defined all the hyperparameters successfully!
[INFO]Model Instantiated!
Traceback (most recent call last):
  File "Transfer_learning.py", line 153, in <module>
    train(FLAGS,27) #This 27 is the number of layers to freeze
  File "/home/javier/Documents/Segmentation/ENet/ENet-Real-Time-Semantic-Segmentation/new_train.py", line 45, in train
    enet.load_state_dict(model_dict)
  File "/home/javier/anaconda3/envs/ENet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 839, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ENet:
size mismatch for b10.batchnorm2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b10.batchnorm2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for b10.batchnorm2.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([4]).
...
size mismatch for b51.batchnorm2.running_var: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([4]).
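
A hedged tweak to the filtering step quoted above (same pretrained_dict / model_dict / enet names): also requiring matching shapes makes load_state_dict skip tensors whose sizes changed between the checkpoint and the current model definition, instead of crashing on them.

# keep only parameters whose name AND shape match the current model
pretrained_dict = {k: v for k, v in pretrained_dict.items()
                   if k in model_dict and v.shape == model_dict[k].shape}
model_dict.update(pretrained_dict)
enet.load_state_dict(model_dict)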

Error testing own model

Hey,

testing your model works without issues. If I want to apply my own trained model I get the following error:

bryan@bryan:~/Desktop/ENet-Real-Time-Semantic-Segmentation$ python3 init.py --mode test -m /home/bryan/Desktop/ENet-Real-Time-Semantic-Segmentation/ckpt-enet-90-23.889332741498947.pth -i /home/bryan/Desktop/ENet-Real-Time-Semantic-Segmentation/training/image_2/000000_10.png
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "/home/bryan/Desktop/ENet-Real-Time-Semantic-Segmentation/test.py", line 24, in test
    enet.load_state_dict(checkpoint['state_dict'])
  File "/home/bryan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 830, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ENet:
        size mismatch for fullconv.weight: copying a param with shape torch.Size([16, 102, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 12, 3, 3]).

This even happens if I test the model on the data it was trained on, so it can't be a format issue. Do you know what the problem is?

Best

getting Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

I am using google colab with GPU support (tesla K80).
I used this command

!python3 init.py --mode train -iptr /content/ENet-Real-Time-Semantic-Segmentation/datasets/dataset_enet/train/ -lptr /content/ENet-Real-Time-Semantic-Segmentation/datasets/dataset_enet/trainannot/ --cuda False

Here is my log:
Traceback (most recent call last):
  File "init.py", line 119, in <module>
    train(FLAGS)
  File "/content/ENet-Real-Time-Semantic-Segmentation/train.py", line 75, in train
    out = enet(X_batch.float())
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/ENet-Real-Time-Semantic-Segmentation/models/ENet.py", line 191, in forward
    x = self.init(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/ENet-Real-Time-Semantic-Segmentation/models/InitialBlock.py", line 37, in forward
    main = self.conv(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
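
The error says the batch is on the GPU while the model weights are still on the CPU. A minimal sketch of the usual fix, assuming nothing about this repository's flag handling: put the model and every batch on the same device before the forward pass. Note that passing "--cuda False" to argparse often yields the string "False", which is truthy, so an explicit boolean check is safer.

import torch
from models.ENet import ENet

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
enet = ENet(12).to(device)                  # class count is a placeholder

X_batch = torch.rand(2, 3, 360, 480)        # stands in for a batch from the data loader
out = enet(X_batch.to(device).float())      # input moved to the model's device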

RuntimeError: Error(s) in loading state_dict for ENet:

Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "/media/ayushman/Seagate Expansion Drive/DeepLearning - Repos/TESTING/ENet-Real-Time-Semantic-Segmentation/test.py", line 27, in test
    enet.load_state_dict(checkpoint['state_dict'])
  File "/home/ayushman/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ENet:
Missing key(s) in state_dict: "b10.batchnorm1.weight", "b10.batchnorm1.bias", "b10.batchnorm1.running_mean", "b10.batchnorm1.running_var", "b10.batchnorm3.weight", "b10.batchnorm3.bias", "b10.batchnorm3.running_mean", "b10.batchnorm3.running_var", "b11.batchnorm1.weight", "b11.batchnorm1.bias", "b11.batchnorm1.running_mean", "b11.batchnorm1.running_var", "b11.batchnorm3.weight", "b11.batchnorm3.bias", "b11.batchnorm3.running_mean", "b11.batchnorm3.running_var", "b12.batchnorm1.weight", "b12.batchnorm1.bias", "b12.batchnorm1.running_mean", "b12.batchnorm1.running_var", "b12.batchnorm3.weight", "b12.batchnorm3.bias", "b12.batchnorm3.running_mean", "b12.batchnorm3.running_var", "b13.batchnorm1.weight", "b13.batchnorm1.bias", "b13.batchnorm1.running_mean", "b13.batchnorm1.running_var", "b13.batchnorm3.weight", "b13.batchnorm3.bias", "b13.batchnorm3.running_mean", "b13.batchnorm3.running_var", "b14.batchnorm1.weight", "b14.batchnorm1.bias", "b14.batchnorm1.running_me....

"Expected CUDA backend but got backend CPU"

Hi,
I'm quite new to PyTorch, so please forgive me if I'm asking for explanations too quickly.
I tried to run your repo's inference on one image; here's what I got:
(pytorch) C:\Users\marisna\ENet-RT>py init.py --mode test -i "seq1.jpg" --cuda True
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "C:\Users\marisna\ENet-RT\test.py", line 32, in test
    out1 = enet(tmg.float()).squeeze(0)
  File "C:\Users\marisna\Envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\marisna\ENet-RT\models\ENet.py", line 194, in forward
    x, i1 = self.b10(x)
  File "C:\Users\marisna\Envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\marisna\ENet-RT\models\RDDNeck.py", line 110, in forward
    x_copy = torch.cat((x_copy, extras), dim = 1)
RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 0 in sequence argument at position #1 'tensors'

(I got torch 1.4.0+cu92)

Thank you in advance for any help !

Overfitting training data.

I was training the model on the Cityscapes dataset with 28 classes.
Training images - 2975
Testing - 500
Epochs - 300 (stopped at 62, as the numbers gave no reason to continue)

The mIoU for various epochs is as follows.

1st Epoch - 32.65
7th Epoch - 34.06
15th Epoch - 13.81
25th Epoch - 12.68
35th Epoch - 12.91
45th Epoch - 17.25
55th Epoch - 20.57
60th Epoch - 17.36
62nd Epoch - 20.57

When I ran inference with the model, I could clearly see that it was overfitting the training images: the structure of the Mercedes logo kept showing up in the predictions.

Colab Notebook missing

run init.py test have a problem

Thank you for sharing your code! When I test a picture I have a problem:

RuntimeError: Expected a Tensor of type torch.FloatTensor but found a type torch.cuda.FloatTensor for sequence element 1 in sequence argument at position #1 'tensors'

I don't know why this happens. PyTorch is 0.4.

undefined variable

python init.py --mode test -i img.jpg -m datasets/CamVid/ckpt-enet.pth

Throws

Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File ".../ENet-Real-Time-Semantic-Segmentation/test.py", line 32, in test
    out1 = enet(tmg.float()).squeeze(0)
  File ".../.conda/envs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/Prog/PycharmProjects/ENet-Real-Time-Semantic-Segmentation/models/ENet.py", line 194, in forward
    x, i1 = self.b10(x)
  File ".../.conda/envs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File ".../ENet-Real-Time-Semantic-Segmentation/models/RDDNeck.py", line 108, in forward
    extras = extras.to(device)
NameError: name 'device' is not defined
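
The traceback shows models/RDDNeck.py referring to a device variable that is not defined in that scope. A hedged patch idea, using the variable names from the traceback: derive the device from a tensor that is already in the forward pass instead of relying on a module-level name.

# inside RDDNeck.forward, just before the concatenation that fails
device = x_copy.device             # take the device from a tensor already in scope
extras = extras.to(device)
x_copy = torch.cat((x_copy, extras), dim=1)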

Validation and Training Loss swapped?

When plotting both train_loss and eval_loss, it's clear that the training loss is much higher than the evaluation loss. Shouldn't it be the other way around?

Using with other datasets

Hi, I'm trying to train your model with the Cityscapes dataset; however, the function get_class_weights returns an array of size 34 (so, 34 class weights), while the dataset only contains 19 classes. Are we supposed to change anything to make this work with other datasets?

Thanks in advance
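
For context, the raw Cityscapes *_labelIds.png annotations use 34 label ids, of which only 19 are usually trained and evaluated; most reproductions remap the raw ids to train ids 0..18 and send everything else to an ignore index. A sketch of that remapping, following the official cityscapesScripts labels table; this preprocessing is an assumption about how the data is prepared, not something the repository does for you.

import numpy as np

IGNORE = 255
# raw Cityscapes labelId -> trainId for the 19 evaluated classes
ID_TO_TRAINID = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7, 21: 8,
                 22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14, 28: 15,
                 31: 16, 32: 17, 33: 18}

def to_train_ids(label_img):
    """label_img: HxW integer array of raw Cityscapes labelIds."""
    out = np.full_like(label_img, IGNORE)
    for raw_id, train_id in ID_TO_TRAINID.items():
        out[label_img == raw_id] = train_id
    return out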

What about a pretrained model?

Hi,
Thanks a lot for your work!
I was just wondering: what about providing a pretrained model, so that one can run inference faster and get a preview of the performance?
Best wishes for your work !

CUDA out of memory

Hi, I'm trying to run this model with the CamVid dataset on a GTX 1060 with 6 GB of memory and it gives me this error (this is the whole code output):

[INFO]Defined all the hyperparameters successfully!
[INFO]Starting to define the class weights...
[INFO]Fetched all class weights successfully!
[INFO]Model Instantiated!
[INFO]Defined the loss function and the optimizer
[INFO]Staring Training...

--------------- Epoch 1 ---------------
here
  0%|                                                                       | 0/36 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "init.py", line 151, in <module>
    train(FLAGS)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\train.py", line 81, in train
    out = enet(X_batch.float())
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\models\ENet.py", line 231, in forward
    x = self.fullconv(x)
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\conv.py", line 776, in forward
    return F.conv_transpose2d(
RuntimeError: CUDA out of memory. Tried to allocate 1020.00 MiB (GPU 0; 6.00 GiB total capacity; 3.68 GiB already allocated; 932.14 MiB free; 3.69 GiB reserved in total by PyTorch)

I think I should have enough memory, since this is not an excessively large dataset. Is there anything I might be doing wrong?
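
A hedged note rather than a definitive fix: ENet itself is small, but the intermediate activations for full-resolution images at larger batch sizes can still exceed 6 GB during training. The usual remedies are a smaller batch size or a lower input resolution; the snippet below only illustrates downscaling an image batch (label maps would need mode='nearest' to stay valid class ids).

import torch
import torch.nn.functional as F

X_batch = torch.rand(10, 3, 720, 960)                 # placeholder batch of images
X_small = F.interpolate(X_batch, size=(360, 480),
                        mode='bilinear', align_corners=False)  # quarter of the pixels
print(X_small.shape)                                  # torch.Size([10, 3, 360, 480])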

Mobile Device

Hello Arunava,

Congrats for the work.

Can you please tell me how I can deploy this model for real-time segmentation on mobile?

Thank you
Regards
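
Not an officially supported path for this repository, but one common route is exporting the trained model to TorchScript and loading it with the PyTorch Mobile runtime (or converting onward to another mobile runtime). A sketch under those assumptions; the paths, input size and class count are placeholders, and the traced output should be checked against the eager model before deployment.

import torch
from models.ENet import ENet

enet = ENet(12)                                        # class count is a placeholder
ckpt = torch.load('datasets/CamVid/ckpt-enet.pth', map_location='cpu')
enet.load_state_dict(ckpt['state_dict'])
enet.eval()

example = torch.rand(1, 3, 360, 480)                   # example input for tracing
scripted = torch.jit.trace(enet, example)
scripted.save('enet_mobile.pt')                        # loadable from the PyTorch Mobile APIs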

Error while validating

[INFO]Defined all the hyperparameters successfully!
[INFO]Starting to define the class weights...
[INFO]Fetched all class weights successfully!
[INFO]Model Instantiated!
[INFO]Defined the loss function and the optimizer
[INFO]Staring Training...
--------------- Epoch 1 ---------------
100%|███████████████████████████████████████████| 36/36 [08:39<00:00, 12.16s/it]

Epoch 1/102... Loss 80.269591
0%| | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "init2.py", line 68, in <module>
    train(FLAGS)
  File "/home/vadmin/Documents/Semantic_Segmentation/prebuilt_model/ENet-Real-Time-Semantic-Segmentation/train.py", line 105, in train
    loss = criterion(out, labels.long())
  File "/home/vadmin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/vadmin/Documents/Semantic_Segmentation/prebuilt_model/ENet-Real-Time-Semantic-Segmentation/models/ENet.py", line 191, in forward
    x = self.init(x)
  File "/home/vadmin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/vadmin/Documents/Semantic_Segmentation/prebuilt_model/ENet-Real-Time-Semantic-Segmentation/models/InitialBlock.py", line 37, in forward
    main = self.conv(x)
  File "/home/vadmin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/vadmin/.local/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: _thnn_conv2d_forward is not implemented for type torch.ByteTensor
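
A hedged reading of this traceback: during the validation pass the image batch reaches the network while it is still a uint8 (Byte) tensor, so the first convolution rejects it. Casting the inputs to float (and the labels to long) before the forward pass, the same way the training loop's out = enet(X_batch.float()) does, avoids the error; the names below follow the traceback and are otherwise placeholders.

# inside the validation loop (sketch; X_batch and labels come from the validation loader)
out = enet(X_batch.float())              # cast uint8 images to float before the forward pass
loss = criterion(out, labels.long())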

Unable to test models

Hi, if I try to test a model I run into one of these two issues:

  1. If I try to test the model provided in the repository at datasets/CamVid/ckpt-enet.pth, regardless of whether or not I use CUDA, I get the following error message:
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\test.py", line 32, in test
    out1 = enet(tmg.float()).squeeze(0)
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\models\ENet.py", line 194, in forward
    x, i1 = self.b10(x)
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\models\RDDNeck.py", line 110, in forward
    x_copy = torch.cat((x_copy, extras), dim = 1)
RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 0 in sequence argument at position #1 'tensors'
  2. If I try to run the model I've trained, I get the following error message:
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\test.py", line 24, in test
    enet.load_state_dict(checkpoint['state_dict'])
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 829, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ENet:
        size mismatch for fullconv.weight: copying a param with shape torch.Size([16, 102, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 12, 3, 3]).
