rrmina / fast-neural-style-pytorch
Fast Neural Style Transfer implementation in PyTorch :art: :art: :art:
I'm on Ubuntu 18.04 and using "droidcam" so I can use my phone as a webcam.
Running "python webcam.py" produces this error:
Traceback (most recent call last):
File "webcam.py", line 4, in
import utils
File "/home/al/fast-neural-style-pytorch/utils.py", line 121
tuple_with_path = (*original_tuple, path)
^
SyntaxError: invalid syntax
However, I'm able to work around this by using "python3". What really has me confused is that I seem unable to adjust the window size/resolution. I've tried editing the two lines in webcam.py, and I've tried adding the command line options, but I always seem to wind up at 640x480.
Furthermore, I can change the style_transform_path in webcam.py and that change takes effect, which only makes this weirder.
Any guidance would be greatly appreciated.
I have no issues with stylize.py (with pretrained models). My problem is with train.py. I think I followed the instructions: downloaded vgg16-00b39a1b.pth, downloaded train2014, and installed pytorch, opencv, etc. (all via conda).
I have Debian 11 stable, Python 3.7.4, pytorch 1.4.0, torchvision 0.5.0, cudatoolkit 10.1.243.
$ python train.py
Traceback (most recent call last):
File "train.py", line 168, in <module>
train()
File "train.py", line 50, in train
VGG = vgg.VGG16().to(device)
File "/home/mrfabiolo/style_transfer/fast-neural-style-pytorch/vgg.py", line 33, in __init__
vgg16_features.load_state_dict(torch.load(vgg_path), strict=False)
File "/home/mrfabiolo/miniconda3/lib/python3.7/site-packages/torch/serialization.py", line 529, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/mrfabiolo/miniconda3/lib/python3.7/site-packages/torch/serialization.py", line 692, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.
Same problem with device = "cpu" or device = "cuda". I have a GeForce GTX 750 card.
Can you share the original style images that were used to generate each of the pretrained weights in the /transforms folder? Currently the repo only has the original images for a few of the styles. (Not super urgent, but it would be helpful to see where the styles come from and to have the source images in case we want to train our own networks.)
What is the difference between this implementation (regarding just static image style transfer, not videos) and the PyTorch team's implementation, PyTorch Examples: fast-neural-style?
Can you tell me how to train my style image with the COCO dataset? And how to adjust the params? Thank you.
Getting the following error when trying to train. Hope you can help resolve this.
Device: cuda
Traceback (most recent call last):
File "C:\Temp\fast-neural-style-pytorch-master\train.py", line 172, in <module>
train()
File "C:\Temp\fast-neural-style-pytorch-master\train.py", line 52, in train
TransformerNetwork = transformer.TransformerNetworkTanh.to(device)
File "C:\Users\eform\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 673, in to
return self._apply(convert)
AttributeError: 'str' object has no attribute '_apply'
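That traceback points to missing parentheses: `.to(device)` is being called on the class itself rather than on an instance, so Python passes the `"cuda"` string in as `self`, and `self._apply` then fails on a string. The likely fix is `transformer.TransformerNetworkTanh().to(device)`. A minimal reproduction with a dummy class (the class is illustrative, not the repo's or PyTorch's actual code):

```python
class DummyModule:
    # Mimics the relevant shape of torch.nn.Module.to, which takes
    # *args and eventually calls self._apply(...)
    def to(self, *args):
        return self._apply(args)

    def _apply(self, fn):
        return self

# Wrong: calling .to on the class makes the device string become `self`
try:
    DummyModule.to("cuda")
except AttributeError as e:
    print(e)  # 'str' object has no attribute '_apply'

# Right: instantiate first, i.e. TransformerNetworkTanh().to(device)
net = DummyModule().to("cuda")
```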
Hello, very good work. I was able to make it work without problems.
I want to try creating a model using a different style, some other painting or artist.
How can I create my own model / vgg16-00b39a1b.pth?
If you have any guide for experimenting with training models, it would be a great help. I will keep investigating. Thanks!
This still needs a lot of changes to make it a one-click-run-all interface.
Some of the changes requested are:
show_iter
This notebook can be used as a reference for the first 4 bullets. Can you do it @p-rit?
Got a FORBIDDEN response for the AWS file, so I switched to the weights file that is now included with torchvision:
# !wget https://s3-us-west-2.amazonaws.com/jcjohns-models/vgg16-00b39a1b.pth
!wget https://download.pytorch.org/models/vgg16-397923af.pth
!cp vgg16-397923af.pth vgg16-00b39a1b.pth
print("Elapsed Time: {}".format(time.time()-starttime))
Thanks,
Forgive me, I'm a noob at Python, but when I try to run the video.py script I get an IndentationError: unexpected indent:
File "/home/harish/fast-neural-style-pytorch/stylize.py", line 117
torch.cuda.empty_cache()
^
IndentationError: unexpected indent.
When I tried removing that line, I got the same error at the next line:
File "/content/stylize.py", line 119
generated_tensor = net(content_batch.to(device)).detach()
^
IndentationError: unexpected indent
Can you help me with this, please?
This error (in Colaboratory) is fixed by upgrading the Pillow package; avoid Pillow==4.1.1:
/usr/local/lib/python3.6/dist-packages/PIL/JpegImagePlugin.py in SOF(self, marker)
144 n = i16(self.fp.read(2))-2
145 s = ImageFile._safe_read(self.fp, n)
--> 146 self.size = i16(s[3:]), i16(s[1:])
147
148 self.bits = i8(s[0])
I don't know where to put the address (--content_dir).
Hello :)
I was wondering if I could use this for commercial use? So far I have been using it for demo purposes and would like to work on it more, but I wanted to check with you first. What is the license for this project, before I commit more time to it?
Hi,
Thanks for this project.
I don't completely understand why the whitening is happening on the generated image, but I don't think it is anything to do with the VideoCapture.
I don't have a real fix, but I think this is slightly better than writing and reading the file:
# old code
#utils.saveimg(generated_image, str(count+1) + ".png")
#img2 = cv2.imread(str(count+1) + ".png", cv2.IMREAD_UNCHANGED)
# don't really understand why this is required
img2 = cv2.imdecode(cv2.imencode(".png", generated_image)[1], cv2.IMREAD_UNCHANGED)
While trying to run video.py, I get the following error. What am I doing wrong?
Traceback (most recent call last):
File "video.py", line 84, in
video_transfer(VIDEO_NAME, STYLE_PATH)
File "video.py", line 34, in video_transfer
stylize_folder(style_path, FRAME_SAVE_PATH, STYLE_FRAME_SAVE_PATH, batch_size=BATCH_SIZE)
File "/home/user/fast-neural-style-pytorch/stylize.py", line 115, in stylize_folder
for content_batch, _, path in image_loader:
ValueError: need more than 2 values to unpack
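The loop in stylize_folder unpacks three values per batch, which only works if the dataset yields (image, label, path) triples, as the `tuple_with_path = (*original_tuple, path)` wrapper in utils.py suggests it should; a stock ImageFolder/DataLoader yields plain (image, label) pairs and fails exactly like this. (Incidentally, "need more than 2 values to unpack" is Python 2's wording of this error; Python 3 says "not enough values to unpack", so the script may again have been run with `python` instead of `python3`.) A dependency-free reproduction:

```python
# 2-tuples, like a stock ImageFolder/DataLoader batch: unpacking 3 fails
plain_batches = [("img0", 0), ("img1", 1)]
# 3-tuples, like a dataset that appends the file path: unpacks cleanly
batches_with_paths = [("img0", 0, "a.jpg"), ("img1", 1, "b.jpg")]

try:
    for content_batch, _, path in plain_batches:
        pass
except ValueError as e:
    print(e)  # not enough values to unpack (expected 3, got 2)

for content_batch, _, path in batches_with_paths:
    pass  # works
```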
Hi,
Thanks for this project.
In your code, when you compute the content loss, you have:
content_loss = CONTENT_WEIGHT * MSELoss(content_features['relu2_2'], generated_features['relu2_2'])
I think this might be wrong, it should be:
content_loss = CONTENT_WEIGHT * MSELoss(generated_features['relu2_2'], content_features['relu2_2'])
Please correct me if I am wrong.
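Mean squared error is symmetric in its two arguments, so swapping them changes neither the loss value nor the gradient with respect to the generated image; the order only matters for PyTorch's input/target convention, where the second argument is conventionally the (detached) target. A plain-Python check of the symmetry:

```python
def mse(a, b):
    # Mean squared error over two equal-length sequences
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

generated = [0.2, 0.5, 0.9]
content = [0.1, 0.4, 1.0]
assert mse(generated, content) == mse(content, generated)
```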
How can I use that function?
Hello
Thank you for sharing your source codes.
I want to use my own style image when training the network.
Could you tell me how i can do this?