Comments (9)
Tried the same code on a different machine and am getting a similar error:
ValueError: not enough values to unpack (expected 3, got 2)
from fast-neural-style-pytorch.
I have tested it and it's working on my end.
The error means that image_loader only returns 2 variables instead of 3.
Could you print and post here the values of these 2 returned variables?
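For context, that error can be reproduced in isolation: unpacking a 2-item sequence into 3 names always raises it. A minimal stand-alone sketch (the batch contents here are dummies for illustration, not the repo's real tensors):

```python
# A loader item with only 2 elements (dummy values for illustration)
batch = ([0.1, 0.2], ("frame1.jpg", "frame2.jpg"))

try:
    # The training loop tries to unpack 3 values from it
    content_batch, _, path = batch
except ValueError as err:
    print(err)  # not enough values to unpack (expected 3, got 2)
```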
Sorry, I'm new to programming. How do I do this?
Change this

    for content_batch, _, path in image_loader:

to

    for value1, value2 in image_loader:
        print(value1)
        print(value2)

Then post the console output here.
I got something like this:
[tensor([[[[146., 147., 149., ..., 2., 2., 2.],
          [146., 147., 149., ..., 2., 2., 2.],
          ...,
          [159., 159., 160., ..., 3., 3., 2.]],
         ...,
         [[147., 144., 141., ..., 2., 2., 2.],
          ...,
          [190., 187., 186., ..., 5., 1., 1.]]]]), tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])]
('frames/content_folder\frame6958.jpg', 'frames/content_folder\frame6959.jpg', ..., 'frames/content_folder\frame6975.jpg')
[tensor([[[[164., 162., 160., ..., 5., 5., 5.],
(output truncated; the same pattern repeats for each batch)
I'm getting the same error when I try to run the project on Google Colab too.
It's not clear which of the two is value1 and which is value2. Could you try printing a string between them? Like:

    for value1, value2 in image_loader:
        print(value1)
        print("=========================")
        print(value2)
        break
I think instead of

    for content_batch, _, path in image_loader:

you can do

    for content, path in image_loader:
        content_batch = content[0]
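Judging by the console output above, each item yielded by image_loader is a pair: a [image_batch, label_batch] list and a tuple of file paths. A tiny stand-in loader (dummy values; only the nesting mirrors the printed output, not the repo's real dataset) shows why the two-value unpack plus content[0] works:

```python
# Stand-in for image_loader: each item is ([images, labels], paths).
# All values are dummies; only the shape matches the printed output.
fake_loader = [
    ([["img0", "img1"], [0, 0]], ("frame6958.jpg", "frame6959.jpg")),
]

for content, path in fake_loader:
    content_batch = content[0]  # the image batch; content[1] holds labels
    print(content_batch)        # ['img0', 'img1']
    print(path)                 # ('frame6958.jpg', 'frame6959.jpg')
```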
Thank you so much.

    for content, path in image_loader:
        content_batch = content[0]

solved it.