
stylized-neural-painting's Issues

Just a little suggestion

Is it technically possible to make every stroke filled with only a single color?
I think that would be excellent for humans to imitate and learn from, which would make this project even more valuable.

Since I have not read the code yet, please forgive my whimsical idea if it's totally impracticable.
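In case it is useful: a renderer traceback further down this page lists the oil-paint stroke parameters as x0, y0, w, h, theta, R0, G0, B0, R2, G2, B2, i.e. two endpoint colors per stroke. If that layout is right (it is only a guess taken from that traceback), single-color strokes could be approximated by copying one endpoint color over the other before rendering:

    # Hypothetical post-processing of one stroke parameter vector,
    # assuming the layout [x0, y0, w, h, theta, R0, G0, B0, R2, G2, B2].
    import numpy as np

    def make_single_color(stroke):
        stroke = np.asarray(stroke, dtype=np.float32).copy()
        stroke[8:11] = stroke[5:8]  # overwrite (R2, G2, B2) with (R0, G0, B0)
        return stroke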

How to get video?

I've been through the documentation and I'm having a hard time finding out how to get a video of the model painting live.

Is there any Google Colab script that can help me with this?
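For what it's worth, a traceback in a later issue on this page shows the painter's render call as pt._render(PARAMS, save_jpgs=False, save_video=False). Flipping those flags when rendering the final strokes may be what writes out the frames and the timelapse video, though this is only a guess from that call signature:

    # Guess based on the _render() call visible in another issue's traceback;
    # pt is the Painter/ProgressivePainter and PARAMS the optimized stroke array.
    final_canvas = pt._render(PARAMS, save_jpgs=True, save_video=True)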

About the training

Nice job! I wonder what your training dataset is? I can't find it in your paper or code.

package version

Can you please give the exact version of each package in your environment?

RuntimeError: Error(s) in loading state_dict for ZouFCNFusionLight:

python=3.6.2
torch=1.2.0
nvidia 2080ti 11G
cuda=10.0
cudnn=7.6.4.38
python demo_prog.py --img_path ./test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush
initialize network with normal
loading renderer from pre-trained checkpoint...
Traceback (most recent call last):
File "demo_prog.py", line 113, in
optimize_x(pt)
File "demo_prog.py", line 49, in optimize_x
pt._load_checkpoint()
File "/home/banana/GAN/stylized-neural-painting-main12/painter.py", line 71, in _load_checkpoint
self.net_G.load_state_dict(checkpoint['model_G_state_dict'])
File "/home/banana/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ZouFCNFusionLight:
Unexpected key(s) in state_dict: "huangnet.fc4.weight", "huangnet.fc4.bias", "huangnet.conv3.weight", "huangnet.conv3.bias", "huangnet.conv4.weight", "huangnet.conv4.bias", "huangnet.conv5.weight", "huangnet.conv5.bias", "huangnet.conv6.weight", "huangnet.conv6.bias", "dcgan.main.10.weight", "dcgan.main.10.bias", "dcgan.main.10.running_mean", "dcgan.main.10.running_var", "dcgan.main.10.num_batches_tracked", "dcgan.main.12.weight", "dcgan.main.13.weight", "dcgan.main.13.bias", "dcgan.main.13.running_mean", "dcgan.main.13.running_var", "dcgan.main.13.num_batches_tracked", "dcgan.main.15.weight".
size mismatch for huangnet.conv1.weight: copying a param with shape torch.Size([32, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 8, 3, 3]).
size mismatch for huangnet.conv1.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for huangnet.conv2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([12, 64, 3, 3]).
size mismatch for huangnet.conv2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([12]).
size mismatch for dcgan.main.3.weight: copying a param with shape torch.Size([512, 512, 4, 4]) from checkpoint, the shape in current model is torch.Size([512, 256, 4, 4]).
size mismatch for dcgan.main.4.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.4.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.4.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.4.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.6.weight: copying a param with shape torch.Size([512, 256, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 128, 4, 4]).
size mismatch for dcgan.main.7.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for dcgan.main.7.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for dcgan.main.7.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for dcgan.main.7.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for dcgan.main.9.weight: copying a param with shape torch.Size([256, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([128, 6, 4, 4]).
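The unexpected keys suggest the network built in memory is the light variant (ZouFCNFusionLight) while the checkpoint on disk is the full-size one. Keeping the --net_G flag and the checkpoint directory consistent should avoid this; for example, one of the following pairings (flag values are taken from other commands on this page, so treat them as a best guess rather than verified advice):

python demo_prog.py --img_path ./test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net

python demo_prog.py --img_path ./test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush_light --net_G zou-fusion-net-light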

Error with Notebook

It throws the following error:

initialize network with normal
pre-trained renderer does not exist...
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-11-a13d7e15e5e1> in <module>()
      1 pt = Painter(args=args)
----> 2 optimize_x(pt)

1 frames
/content/stylized-neural-painting/painter.py in _load_checkpoint(self)
     74         else:
     75             print('pre-trained renderer does not exist...')
---> 76             exit()
     77 
     78 

NameError: name 'exit' is not defined
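The root cause is that the checkpoint was not found, after which painter.py calls the bare builtin exit(), which is not always defined inside a notebook kernel. A defensive version of that branch might look like the sketch below (an illustration, not the repository's actual code; checkpoint_path and the filename are hypothetical):

    import os
    import sys

    checkpoint_path = os.path.join('checkpoints_G_oilpaintbrush', 'renderer.pth')  # hypothetical filename
    if not os.path.exists(checkpoint_path):
        print('pre-trained renderer does not exist...')
        sys.exit(1)  # sys.exit() works in notebooks, whereas a bare exit() may not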

Some problem with pre-trained render file

Hi, I am new here. I have a problem with the pre-trained renderer file. I downloaded it from Google Drive and cloned the repo, but when I put the file into the repo and run demo_prog.py, it says the pre-trained renderer doesn't exist.
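One quick thing to check is that the directory passed via --renderer_checkpoint_dir actually exists relative to where demo_prog.py is run, and that it contains the unzipped checkpoint rather than the zip file or a nested folder. A small sanity check (the directory name is an assumption; use whatever you pass on the command line):

    import os

    ckpt_dir = 'checkpoints_G_oilpaintbrush'
    print(os.path.isdir(ckpt_dir))
    print(os.listdir(ckpt_dir) if os.path.isdir(ckpt_dir) else 'directory not found')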

Error:CUDA out of memory

This project is so cool and I wanted to run it on my PC. I downloaded the repo, configured it, and then got a CUDA out-of-memory error. If I change the max_m_strokes and max_divide parameters it runs, but the output doesn't meet my needs. I know nothing about the PyTorch framework; can you help me solve this problem, please?
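If the full-size renderer does not fit in GPU memory, two knobs that may help without touching max_m_strokes or max_divide are a smaller --canvas_size and the lightweight renderer/checkpoint pair; both flags appear in other commands on this page, so this is a plausible but unverified starting point:

python demo_prog.py --img_path ./test_images/apple.jpg --canvas_color 'white' --canvas_size 512 --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush_light --net_G zou-fusion-net-light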

Advice on combining CLIP with the neural renderer

Hi, this is very interesting and impressive work in using AI for art. Now that CLIP is producing more and more astounding paintings from text, I would like to ask for advice, if possible, on combining CLIP with your work, which could be very interesting. Thanks!
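One conceivable starting point (purely a sketch of how the two projects could be glued together, not code from either repository) is to score the rendered canvas with CLIP against a text prompt and add that similarity as an extra loss term during stroke optimization:

    import torch
    import torch.nn.functional as F
    import clip  # https://github.com/openai/CLIP

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model, _ = clip.load('ViT-B/32', device=device)
    model = model.float()  # keep everything in fp32 to avoid dtype mismatches

    text = clip.tokenize(['an oil painting of a sunflower']).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(text)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    def clip_loss(canvas):
        # canvas: 1x3xHxW tensor in [0, 1]; resize and normalize with CLIP's statistics
        x = F.interpolate(canvas, size=(224, 224), mode='bilinear', align_corners=False)
        mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=canvas.device).view(1, 3, 1, 1)
        std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=canvas.device).view(1, 3, 1, 1)
        img_feat = model.encode_image((x - mean) / std)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        return 1.0 - (img_feat * text_feat).sum(dim=-1).mean()

Such a term would then be added, with some weight, to the pixel/transport losses the painter already optimizes.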

update render.py to use lightweight version

Currently the lightweight models are not supported:
RuntimeError: Error(s) in loading state_dict for ZouFCNFusion:
Missing key(s) in state_dict: "huangnet.fc4.weight", "huangnet.fc4.bias", "huangnet.conv3.weight", "huangnet.conv3.bias", "huangnet.conv4.weight", "huangnet.conv4.bias", "huangnet.conv5.weight", "huangnet.conv5.bias", "huangnet.conv6.weight", "huangnet.conv6.bias", "dcgan.main.10.weight", "dcgan.main.10.bias", "dcgan.main.10.running_mean", "dcgan.main.10.running_var", "dcgan.main.12.weight", "dcgan.main.13.weight", "dcgan.main.13.bias", "dcgan.main.13.running_mean", "dcgan.main.13.running_var", "dcgan.main.15.weight".
size mismatch for huangnet.conv1.weight: copying a param with shape torch.Size([64, 8, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
size mismatch for huangnet.conv1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for huangnet.conv2.weight: copying a param with shape torch.Size([12, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for huangnet.conv2.bias: copying a param with shape torch.Size([12]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for dcgan.main.3.weight: copying a param with shape torch.Size([512, 256, 4, 4]) from checkpoint, the shape in current model is torch.Size([512, 512, 4, 4]).
size mismatch for dcgan.main.4.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for dcgan.main.4.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for dcgan.main.4.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for dcgan.main.4.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for dcgan.main.6.weight: copying a param with shape torch.Size([256, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([512, 256, 4, 4]).
size mismatch for dcgan.main.7.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.7.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.7.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.7.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.9.weight: copying a param with shape torch.Size([128, 6, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 128, 4, 4]).
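A quick way to tell which variant a downloaded checkpoint actually is (before wiring it into render.py) is to inspect its keys; the 'model_G_state_dict' key and the 'huangnet.fc4.*' keys come straight from the error messages on this page, but the checkpoint filename below is a placeholder:

    import torch

    ckpt = torch.load('checkpoints_G_oilpaintbrush_light/renderer.pth', map_location='cpu')  # placeholder filename
    state = ckpt['model_G_state_dict']
    # Per the missing/unexpected-key messages above, the light network has no
    # huangnet.fc4 layer, so its presence marks a full-size checkpoint.
    print('full-size checkpoint' if 'huangnet.fc4.weight' in state else 'lightweight checkpoint')
    print(state['huangnet.conv1.weight'].shape)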

intersection brushstroke question

Hello. I am trying to physically recreate the "sketch" that the model produced (markerpen). I chose the marker pen because it is convenient to decompose into vectors/points (pic 1). I am going to paint with acrylic/oil, but that type of paint cannot physically replicate this effect (pic 2; I call it the intersection-brushstroke effect). I cannot ignore it, because it affects the final result of the drawing (I mean that the picture is formed by these intersections).

Can I teach the model to rule it out, perhaps by switching to a different kind of stroke? Or have I missed something and there is a ready-made solution? Thanks!
[Image]

[Image: Pic 1]

[Image: Pic 2]

MacBook

AssertionError: Torch not compiled with CUDA enabled

Assertion failed error

Hi @jiupinjia

I am using Google Colab to run your light versions (specifically the oil paintbrush light model) and everything ran fine until today, when I got an error. Disregard the report below, though; it was my own mistake, I had not copied the source file into the correct directory.

Input:
!python3 demo_prog.py --img_path ./test_images/papa2.jpg --canvas_color 'black' --max_m_strokes 1280 --max_divide 8 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush_light --net_G zou-fusion-net-light

Edit: This was originally an error report because I got an OpenCV assertion failure. It turns out it was because I had not copied the source file properly into the Colab folder. Just leaving this here for transparency.

Color block drifts

After a color block is drawn, it always drifts around within a small range.
Is there any way to fix the drawn color block in place so that it never moves?

Style Transfer Colab Notebook error

The first part of image-to-painting-nst.ipynb works fine, since it creates a painting through brush strokes and then a video.

The second part is the style transfer code, and I am getting an error here:
[Screenshot of the error]

Error while rendering

Hi,

Your project really looks stunning and I want to try it out on Google Colab. This is the code I run:

!python /content/stylized-neural-painting/demo_prog.py
--img_path /content/stylized-neural-painting/test_images/apple.jpg
--canvas_color 'white'
--canvas_size 512
--max_m_strokes 500
--max_divide 5
--with_ot_loss
--renderer "oilpaintbrush"
--renderer_checkpoint_dir "checkpoints_G_oilpaintbrush_light"
--net_G "zou-fusion-net-light"
--disable_preview

This is the error I get back. I don't know enough programming to fix it myself.

rendering canvas...
Traceback (most recent call last):
File "/content/stylized-neural-painting/demo_prog.py", line 113, in
optimize_x(pt)
File "/content/stylized-neural-painting/demo_prog.py", line 102, in optimize_x
CANVAS_tmp = pt._render(PARAMS, save_jpgs=False, save_video=False)
File "/content/stylized-neural-painting/painter.py", line 137, in _render
self.rderr.draw_stroke()
File "/content/stylized-neural-painting/renderer.py", line 148, in draw_stroke
return self._draw_oilpaintbrush()
File "/content/stylized-neural-painting/renderer.py", line 326, in _draw_oilpaintbrush
x0, y0, w, h, theta, R0, G0, B0, R2, G2, B2)
File "/content/stylized-neural-painting/utils.py", line 286, in create_transformed_brush
brush_alpha = (brush_alpha > 0).astype(np.float32)
TypeError: '>' not supported between instances of 'NoneType' and 'int'

Thanks for your help!
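A frequent cause of brush_alpha being None is an image read that silently failed; cv2.imread, for example, returns None when a file path is wrong, so the brush texture assets may simply be missing from the clone. A quick check along these lines (the brushes directory name is a guess; point it at wherever the repo keeps its brush textures):

    import os
    import cv2

    brush_dir = 'brushes'  # hypothetical asset directory
    for name in (os.listdir(brush_dir) if os.path.isdir(brush_dir) else []):
        img = cv2.imread(os.path.join(brush_dir, name), cv2.IMREAD_UNCHANGED)
        print(name, 'OK' if img is not None else 'FAILED TO LOAD')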

About training new painting styles

Hello, your work is very creative, but I noticed that you didn't release a training dataset, so how did you train the strokes for the different painting styles? If I want to train a Chinese-style painting stroke, what should I do? Hoping for your reply. Thanks so much!

Rendering Vector To Image

I've been digging around the code a bit trying to figure out how to take the npz files and render them out. Any ideas / solutions?
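In case it helps, a first step is simply inspecting what the saved .npz contains and then feeding the stroke array back through the painter's renderer, mirroring the pt._render(PARAMS, ...) call that appears in a traceback elsewhere on this page (the filename below is a placeholder):

    import numpy as np

    data = np.load('output/apple_strokes.npz')  # placeholder filename
    print(list(data.keys()))
    print([data[k].shape for k in data.keys()])
    # If one of these arrays is the stroke-parameter tensor, it can presumably be
    # rendered again with something like:
    #   canvas = pt._render(PARAMS, save_jpgs=True, save_video=False)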

pre-trained renderer does not exist...

This is thrown when I run the command:
python demo_prog.py --img_path ./test_images/diamond.jpg --canvas_color 'black' --max_m_strokes 500 --max_divide 5 --renderer markerpen --renderer_checkpoint_dir checkpoints_G_markerpen --net_G zou-fusion-net --disable_preview

`AttributeError: 'ProgressivePainter' object has no attribute 'x'`

Hello Zhengxia,

Thank you for creating this project, very interesting!

I spent some time playing around with the Colab runtime. I tried to increase the default hyper-parameters by multiplying them by 2, and so far so good.

However,

# settings
args.canvas_size = 1024 # size of the canvas for stroke rendering'
args.max_m_strokes = 1000 # max number of strokes
args.max_divide = 15 # divide an image up-to max_divide x max_divide patches
...

I updated args.canvas_size and args.max_m_strokes by multiplying them by 2, and changed args.max_divide from 10 to 15. Then, in the "drawing" phase:

pt = ProgressivePainter(args=args)
final_rendered_image = optimize_x(pt)

It throws an error like this:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-30-8f14c24057d5> in <module>()
      1 pt = ProgressivePainter(args=args)
----> 2 final_rendered_image = optimize_x(pt)

<ipython-input-23-25a3da86e7fa> in optimize_x(pt)
     51                 pt.step_id += 1
     52 
---> 53         v = pt._normalize_strokes(pt.x)
     54         v = pt._shuffle_strokes_and_reshape(v)
     55         PARAMS = np.concatenate([PARAMS, v], axis=1)

AttributeError: 'ProgressivePainter' object has no attribute 'x'

Do you know why this error is thrown? Personally, I suspect there is some "math formula" relating these three (or more) parameters (see the quick arithmetic check below).

What do you think?

Thanks!
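One plausible explanation, and purely a guess because the per-patch formula below is not the repository's actual code: the stroke budget is spread over all progressive patches, and with max_divide = 15 the grid has 1^2 + 2^2 + ... + 15^2 = 1240 patches, which already exceeds max_m_strokes = 1000. An integer per-patch budget would then round down to zero, the drawing loop would never run, and pt.x would never be set.

    # Back-of-the-envelope check; the per-patch split is an assumed formula.
    max_m_strokes = 1000
    max_divide = 15
    total_patches = sum(k * k for k in range(1, max_divide + 1))
    print(total_patches, max_m_strokes // total_patches)  # 1240 0

If that is the cause, increasing max_m_strokes along with max_divide (or lowering max_divide) should make the error go away.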

ImportError: cannot import name 'PILLOW_VERSION'

Trying to run the following line:

python demo_prog.py --img_path [MY_PICTURE] --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net

Receiving this error:

Traceback (most recent call last):
  File "demo_prog.py", line 5, in <module>
    from painter import *
  File "C:\Users\Tony\Documents\Projects\Personal Projects\stylized-neural-painting\painter.py", line 5, in <module>
    import utils
  File "C:\Users\Tony\Documents\Projects\Personal Projects\stylized-neural-painting\utils.py", line 12, in <module>
    import torchvision.transforms.functional as TF
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\__init__.py", line 2, in <module>
    from torchvision import datasets
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\datasets\__init__.py", line 9, in <module>
    from .fakedata import FakeData
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\datasets\fakedata.py", line 3, in <module>
    from .. import transforms
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\transforms\__init__.py", line 1, in <module>
    from .transforms import *
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\transforms\transforms.py", line 17, in <module>
    from . import functional as F
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\transforms\functional.py", line 5, in <module>
    from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
ImportError: cannot import name 'PILLOW_VERSION'
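This usually comes from a Pillow/torchvision version mismatch: PILLOW_VERSION was removed in Pillow 7.0, while older torchvision releases still import it. Pinning Pillow below 7 (or upgrading torch/torchvision to versions that no longer import PILLOW_VERSION) normally resolves it:

pip install "Pillow<7.0"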

Cannot connect to X window error on Google Colab

Great project!!!

I'm trying to train my own strokes on Google Colab. However, when I try to run the demo:

!python demo.py --img_path ./test_images/sunflowers.jpg --canvas_color 'white' --max_m_strokes 500 --m_grid 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --output_dir ./output

I get this error:
initialize network with normal
loading renderer from pre-trained checkpoint...
begin to draw...
iteration step 0, G_loss: 0.00000, step_psnr: 4.52816, strokes: 25 / 500
: cannot connect to X server

I'm guessing it is some image-show function somewhere...
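Other commands on this page pass a --disable_preview flag to demo_prog.py, which presumably suppresses the on-screen preview window. If demo.py accepts the same flag (unverified), appending it should avoid the X server call on Colab:

!python demo.py --img_path ./test_images/sunflowers.jpg --canvas_color 'white' --max_m_strokes 500 --m_grid 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --output_dir ./output --disable_preview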

Using CPU instead of GPU?

Hi, when I run the program it uses 80 percent of my CPU and almost no GPU, and renders take forever. Is there a way I can check whether it's using my 2080 Ti? Can I force it to use my GPU? Thanks.
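A quick first check is whether PyTorch can see the GPU at all from the environment the script runs in (and watching nvidia-smi during a run tells you whether memory is actually allocated on the card):

    import torch

    print(torch.__version__)
    print(torch.cuda.is_available())
    print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no CUDA device visible')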

Application for cooperation

Dear developer, I am a student at the University of the Chinese Academy of Sciences. My school is holding a public-welfare competition on the theme of integrating science, technology, and art. May I use works drawn by this project to participate? I want to make a comparison between AI painting and human painting. I won't enter your AI project itself into the competition; I just want to use the works it creates, to show the current high level of AI painting.

If you want to know more about the competition, you can visit this link:

https://kexie.ucas.ac.cn/index.php/zh/tgg/122-2021-07-21-02-10-49

If you have more information, please contact me as soon as possible; otherwise we will miss the registration deadline.

Sharing my interesting results with rectangle renderer

Hi, thanks for open-sourcing this great work; I just wanted to share my own results.
(I think it would be great if there were a place for users to post their results, a kind of "open gallery".)

The input was my profile picture (drawn with LuaTeX+TikZ before, which imitated the JOI logo - https://git.io/vQwmS)

[Image: input picture]

The progressive output was interesting; the network preferred to draw a tilted rectangle, rather than using an axis-aligned rectangle (can you think of a reason behind this?):
[Image: progressive output]

The final result looked like this:
[Image: final rendered result]

ask for advice

Hello, what an interesting display!
Can the final rendered image be written out as SVG?
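There is no built-in SVG export that I know of, but since each stroke boils down to a small parameter vector (a traceback elsewhere on this page lists x0, y0, w, h, theta and two RGB colors), a rough vector export can be faked by dumping each stroke as a rotated, solid-colored rectangle. This loses the textured brush appearance entirely and assumes parameters normalized to [0, 1], so treat it as a toy sketch:

    # Toy sketch: write strokes as rotated SVG rectangles (geometry/color only).
    def strokes_to_svg(strokes, size=512, path='out.svg'):
        parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
        for x0, y0, w, h, theta, r, g, b in strokes:
            cx, cy = x0 * size, y0 * size
            parts.append(
                f'<rect x="{cx - w * size / 2}" y="{cy - h * size / 2}" '
                f'width="{w * size}" height="{h * size}" '
                f'fill="rgb({int(r * 255)},{int(g * 255)},{int(b * 255)})" '
                f'transform="rotate({theta * 360} {cx} {cy})"/>'
            )
        parts.append('</svg>')
        with open(path, 'w') as f:
            f.write('\n'.join(parts))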

brush_alpha error

Hi, the program I ran returned an error message:

File "utils.py" line 286, in create_transformed_brush
brush_alpha = (brush_alpha > 0).astype(np.float32)
TypeError: '>' not supported between instances of 'NoneType' and 'int'

command:
python ../stylized-neural-painting/demo_prog.py --img_path ../stylized-neural-painting/test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir ../stylized-neural-painting/checkpoints_G_oilpaintbrush --net_G zou-fusion-net

How can I run this program on the GPU?

My GPU is a GTX 1070.
Can I run this program on the GPU?
When I run it, the CPU is at 100% but the GPU is only at 5%, and rendering is very slow.
Is that expected?

Brush stroke suggestion

Instead of a fully opaque brush stroke, would you consider a semi-transparent one, basically obtained by grayscaling a real brush stroke? Similar to what is done here:

http://3dstereophoto.blogspot.com/2018/07/non-photorealistic-rendering-software.html

This is a classic stroke-based renderer that simulates oil/digital painting. Since it is not AI-based, it probably creates too many strokes, but the use of a semi-transparent brush stroke makes the rendered image more realistic and natural.

I cannot use the GPU

Hello,
I tested this project in both Windows 10 PowerShell and WSL, but it always uses my poor CPU, and I don't know how to find the cause of the problem.
I tried to execute the following statements in Python:
"import torch"
"print(torch.cuda.is_available())"
It returns True in PowerShell and False in WSL.
I did manage to run the example, but it took quite a while.

So I need some help, thank you.
Please forgive my terrible English.

How do I solve this problem?

D:\BaiduNetdiskDownload\Stylized Neural Painting Main\Stylized Neural Painting>python demo.py --img_path ./test_images/sunflowers.jpg --canvas_color 'white' --max_m_strokes 500 --m_grid 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --output_dir ./output
Traceback (most recent call last):
  File "demo.py", line 4, in <module>
    torch.cuda.current_device()
  File "C:\Users\12100\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\cuda\__init__.py", line 366, in current_device
    _lazy_init()
  File "C:\Users\12100\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Help running on CPU

(SNP2) C:\Users\Administrator\Nova pasta\SNP>python demo_prog.py --img_path ./test_images/1.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net --disable_preview
Traceback (most recent call last):
File "demo_prog.py", line 4, in
torch.cuda.current_device()
File "C:\ProgramData\Anaconda3\envs\SNP2\lib\site-packages\torch\cuda_init_.py", line 366, in current_device
lazy_init()
File "C:\ProgramData\Anaconda3\envs\SNP2\lib\site-packages\torch\cuda_init
.py", line 166, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
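The tracebacks here and in the previous issue show that demo.py / demo_prog.py call torch.cuda.current_device() unconditionally near the top of the script, which fails on a CPU-only PyTorch build. A minimal workaround sketch is to guard that call; whether the rest of the pipeline then runs acceptably on CPU depends on how the painter picks its device, so treat this only as a starting point:

    # Sketch of a guard around the unconditional CUDA call at the top of the demo scripts.
    import torch

    if torch.cuda.is_available():
        torch.cuda.current_device()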

Something about CUDA

Hi,

when I execute demo.py

I run into the bug below:

RuntimeError: CUDA out of memory. Tried to allocate 626.00 MiB (GPU 0; 8.00 GiB total capacity; 3.71 GiB already allocated; 469.62 MiB free; 5.58 GiB reserved in total by PyTorch)

The model I chose is the bigger one: ./checkpoints_G_oilpaintbrush

The net_G I chose is zou-fusion-net

Hoping for your answer.

Thank you.

About the input photo resolution?

Can I use a photo resolution like 1280 * 720?
When I use --canvas_size 720,
the photo size is forced to 720 * 720.
How can I change the output photo size to 1280 * 720?
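If there is no built-in flag for non-square outputs, one workaround outside the project itself is to letterbox the 1280 * 720 input to a square before painting and crop the band back out afterwards. The sketch below assumes OpenCV and uses placeholder filenames:

    import cv2

    img = cv2.imread('input_1280x720.jpg')  # placeholder filename
    pad = (1280 - 720) // 2
    square = cv2.copyMakeBorder(img, pad, pad, 0, 0, cv2.BORDER_CONSTANT, value=(255, 255, 255))
    cv2.imwrite('input_square.jpg', square)

    # ... run demo_prog.py on input_square.jpg ...

    out = cv2.imread('output_square.jpg')  # placeholder for the painted square result
    h = out.shape[0]
    band = int(round(h * 720 / 1280))
    crop = out[(h - band) // 2:(h - band) // 2 + band, :]
    cv2.imwrite('output_1280x720.jpg', cv2.resize(crop, (1280, 720)))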

Error running demo script

When I tried to run the demo script from the GitHub code, I got an error saying there is no module named torch. I tried installing torch with pip install torch, but got "ERROR: Command errored out with exit status 1".

Error for running Demo Script:
python demo_prog.py --img_path ./test_images/images.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net
Traceback (most recent call last):
  File "C:\Users\20ben\stylized-neural-painting\demo_prog.py", line 3, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'

Error for installing torch:
ERROR: Command errored out with exit status 1:
 command: 'c:\users\20ben\python\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-d_ot0_7g\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-d_ot0_7g\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-record-eydivo47\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\20ben\python\Include\torch'
     cwd: C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-d_ot0_7g\torch\
Complete output (23 lines):
running install
running build_deps
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-d_ot0_7g\torch\setup.py", line 225, in <module>
    setup(name="torch", version="0.1.2.post2",
  File "c:\users\20ben\python\lib\site-packages\setuptools\__init__.py", line 165, in setup
    return distutils.core.setup(**attrs)
  File "c:\users\20ben\python\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "c:\users\20ben\python\lib\distutils\dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "c:\users\20ben\python\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-d_ot0_7g\torch\setup.py", line 99, in run
    self.run_command('build_deps')
  File "c:\users\20ben\python\lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "c:\users\20ben\python\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-d_ot0_7g\torch\setup.py", line 51, in run
    from tools.nnwrap import generate_wrappers as generate_nn_wrappers
ModuleNotFoundError: No module named 'tools.nnwrap'
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\20ben\python\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-d_ot0_7g\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-d_ot0_7g\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-record-eydivo47\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\20ben\python\Include\torch' Check the logs for full command output.
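The tell-tale line is setup(name="torch", version="0.1.2.post2"): pip could not find a prebuilt torch wheel for this Python interpreter and fell back to an ancient source package that cannot build. The usual fix is to install a wheel that matches your Python version and platform using the install command generated by the selector on pytorch.org, along the lines of (exact versions and flags depend on your setup, so take this as a hedged example):

pip install torch torchvision -f https://download.pytorch.org/whl/torch_stable.html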

style transfer in demo_nst.py

When I run demo_nst.py for style transfer as described in the README.md, I get incorrect results. What should I do? Thanks!
[Image: sunflowers_style_transfer_fire]

No GPU

Hi,

Thanks indeed for your work.
I wanted to run a quick test on one of our virtual machines and got stuck as there is no GPU on the VM.
Is there a way I can run it without a GPU?

Many thanks indeed!
DF

Unable to run the demo on a remote machine without a desktop environment

Hi @jiupinjia, good job! The results of your repo look fantastic!
I tried to run your demo on my remote GPU server, which doesn't have a desktop environment (no X11 installed), and got the following error:

initialize network with normal
loading renderer from pre-trained checkpoint...
begin to draw...
iteration step 0, G_loss: 0.00000, step_psnr: 3.38946, strokes: 25 / 500
qt.qpa.xcb: could not connect to display 
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/shir/miniconda3/envs/tf1.15/lib/python3.6/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Aborted (core dumped)

Is there any way to solve this problem?
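Two things that commonly help on headless servers: pass the --disable_preview flag that other commands on this page use, so nothing tries to open a preview window, or replace OpenCV with its headless build, which ships without the Qt GUI plugin (a hedged suggestion, not verified against this repo):

pip uninstall opencv-python
pip install opencv-python-headless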
