
stylized-neural-painting's Introduction

Stylized Neural Painting

Open in RunwayML

Preprint | Project Page | Colab Runtime 1 | Colab Runtime 2 | Demo and Docker image on Replicate

Official PyTorch implementation of the paper "Stylized Neural Painting", accepted to CVPR 2021.

We propose an image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles. Different from previous image-to-image translation methods that formulate the translation as pixel-wise prediction, we deal with such an artistic creation process in a vectorized environment and produce a sequence of physically meaningful stroke parameters that can be further used for rendering. Since a typical vector renderer is not differentiable, we design a novel neural renderer that imitates the behavior of the vector renderer, and then frame the stroke prediction as a parameter-search process that maximizes the similarity between the input and the rendering output. Experiments show that the paintings generated by our method have a high degree of fidelity in both global appearance and local textures. Our method can also be jointly optimized with neural style transfer, which further transfers visual style from other images.
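
To make the search concrete, here is a minimal sketch of that parameter-search loop in PyTorch, with hypothetical names: G stands in for a pretrained neural renderer, and the loss is simplified to pixel MSE (the actual pipeline lives in painter.py and also uses an optimal-transport loss):

import torch

# Minimal sketch of the stroke-parameter search (hypothetical names, not the
# repo's actual API). A frozen neural renderer G maps stroke parameters to a
# canvas, and we optimize the parameters so the canvas matches the target.
def search_strokes(G, target, n_strokes=100, n_params=12, steps=500, lr=0.01):
    x = torch.randn(n_strokes, n_params, requires_grad=True)  # raw logits
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        canvas = G(torch.sigmoid(x))            # differentiable rendering, params in [0, 1]
        loss = ((canvas - target) ** 2).mean()  # pixel similarity only, for brevity
        loss.backward()                         # gradients flow through G into x
        optimizer.step()
    return torch.sigmoid(x).detach()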

In this repository, we implement the complete training/inference pipeline of our paper based on PyTorch and provide several demos that can be used to reproduce the results reported in our paper. With the code, you can also try it on your own data by following the instructions below.

The implementation of the Sinkhorn loss in our code is partially adapted from the project SinkhornAutoDiff.
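
For background, the building block of that loss is the entropic-regularized optimal-transport distance, computable with plain Sinkhorn iterations. Below is a generic textbook sketch in PyTorch, not the adapted SinkhornAutoDiff code:

import torch

def sinkhorn_loss(a, b, C, eps=0.01, n_iters=100):
    # Entropic-regularized OT cost between histograms a (n,) and b (m,)
    # under cost matrix C (n, m), via standard Sinkhorn scaling updates.
    K = torch.exp(-C / eps)                   # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u + 1e-8)            # column scaling
        u = a / (K @ v + 1e-8)                # row scaling
    P = u.unsqueeze(1) * K * v.unsqueeze(0)   # approximate transport plan
    return torch.sum(P * C)                   # transport cost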

License

Creative Commons License Stylized Neural Painting by Zhengxia Zou is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

One-min video result


Updates on CPU mode (Nov 29, 2020)

PyTorch CPU mode is now supported! You can now try it out on your local machine without any GPU card.

Updates on lightweight renderers (Nov 26, 2020)

We have provided some lightweight renderers with which users can easily generate high-resolution paintings with much finer stroke detail. With the lightweight renderers, the rendering speed also improves significantly (about 3x faster). This update also solves the out-of-memory problem when running our demo on a GPU card with limited memory (e.g., 4 GB).

Please check out the following for more details.

Requirements

See Requirements.txt.

Setup

  1. Clone this repo:
git clone https://github.com/jiupinjia/stylized-neural-painting.git 
cd stylized-neural-painting
  2. Download one of the pretrained neural renderers from Google Drive (1. oil-paint brush, 2. watercolor ink, 3. marker pen, 4. color tapes), and unzip them to the repo directory.
unzip checkpoints_G_oilpaintbrush.zip
unzip checkpoints_G_rectangle.zip
unzip checkpoints_G_markerpen.zip
unzip checkpoints_G_watercolor.zip
  3. We have also provided some lightweight renderers with which users can generate high-resolution paintings on their local machine with limited GPU memory. Please feel free to download and unzip them to your repo directory (1. oil-paint brush (lightweight), 2. watercolor ink (lightweight), 3. marker pen (lightweight), 4. color tapes (lightweight)). A quick checkpoint sanity check follows after this list.
unzip checkpoints_G_oilpaintbrush_light.zip
unzip checkpoints_G_rectangle_light.zip
unzip checkpoints_G_markerpen_light.zip
unzip checkpoints_G_watercolor_light.zip
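
Before running the demos, you can sanity-check that a renderer unzipped correctly. This snippet assumes the checkpoint file inside each directory is named last_ckpt.pt (adjust if your unzipped contents differ); the model_G_state_dict key is what the loader in painter.py reads:

import os
import torch

# Sanity check: confirm the unzipped renderer checkpoint loads on CPU.
# The file name 'last_ckpt.pt' is an assumption; adjust to match your unzip.
ckpt_dir = 'checkpoints_G_oilpaintbrush'
ckpt_path = os.path.join(ckpt_dir, 'last_ckpt.pt')
assert os.path.exists(ckpt_path), ckpt_path + ' not found; unzip into the repo root'
checkpoint = torch.load(ckpt_path, map_location='cpu')
print(list(checkpoint.keys()))  # expect the renderer weights, e.g. 'model_G_state_dict'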

To produce our results

Photo to oil painting

  • Progressive rendering
python demo_prog.py --img_path ./test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net
  • Progressive rendering with lightweight renderer (with lower GPU memory consumption and faster speed)
python demo_prog.py --img_path ./test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush_light --net_G zou-fusion-net-light
  • Rendering directly from m x m image grids
python demo.py --img_path ./test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --m_grid 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net

Photo to marker-pen painting

  • Progressive rendering
python demo_prog.py --img_path ./test_images/diamond.jpg --canvas_color 'black' --max_m_strokes 500 --max_divide 5 --renderer markerpen --renderer_checkpoint_dir checkpoints_G_markerpen --net_G zou-fusion-net
  • Progressive rendering with lightweight renderer (with lower GPU memory consumption and faster speed)
python demo_prog.py --img_path ./test_images/diamond.jpg --canvas_color 'black' --max_m_strokes 500 --max_divide 5 --renderer markerpen --renderer_checkpoint_dir checkpoints_G_markerpen_light --net_G zou-fusion-net-light
  • Rendering directly from m x m image grids
python demo.py --img_path ./test_images/diamond.jpg --canvas_color 'black' --max_m_strokes 500 --m_grid 5 --renderer markerpen --renderer_checkpoint_dir checkpoints_G_markerpen --net_G zou-fusion-net

Style transfer

  • First, generate a painting and save the stroke parameters to the output directory:
python demo.py --img_path ./test_images/sunflowers.jpg --canvas_color 'white' --max_m_strokes 500 --m_grid 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net --output_dir ./output
  • Then, choose a style image and run style transfer on the generated stroke parameters:
python demo_nst.py --renderer oilpaintbrush --vector_file ./output/sunflowers_strokes.npz --style_img_path ./style_images/fire.jpg --content_img_path ./test_images/sunflowers.jpg --canvas_color 'white' --net_G zou-fusion-net --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --transfer_mode 1

You may also specify --transfer_mode (0: transfer color only; 1: transfer both color and texture).

Also, please note that in the current version, style transfer is not supported in the progressive rendering mode. We will be working on this feature in the near future.

Generate 8-bit graphic artworks

python demo_8bitart.py --img_path ./test_images/monalisa.jpg --canvas_color 'black' --max_m_strokes 300 --max_divide 4

Running through SSH

If you would like to run remotely through SSH and do not have an X display available, you will need --disable_preview to turn off cv2.imshow at runtime.

python demo_prog.py --disable_preview
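
For context, the flag simply suppresses the GUI preview. Conceptually the guard looks like this (illustrative only; the real handling is inside the demo scripts):

import cv2

# cv2.imshow needs a display server, so skip it when running headless.
def show_preview(canvas, disable_preview=True):
    if disable_preview:
        return  # no X display over SSH; skip the live preview window
    cv2.imshow('painting in progress', canvas)
    cv2.waitKey(1)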

Google Colab

Here we also provide minimal working examples of the inference runtime of our method. Check out the following runtimes and see your results on Colab.

Colab Runtime 1 : Image to painting translation (progressive rendering)

Colab Runtime 2 : Image to painting translation with image style transfer

To retrain your neural renderer

You can also choose a brush type and train the stroke renderer from scratch. The only thing you need to do is run the following command. During training, the ground-truth strokes are generated on the fly, so you don't need to download any external dataset.

python train_imitator.py --renderer oilpaintbrush --net_G zou-fusion-net --checkpoint_dir ./checkpoints_G --vis_dir val_out --max_num_epochs 400 --lr 2e-4 --batch_size 64
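
Conceptually, each training step samples random stroke parameters, rasterizes them with the ground-truth (non-differentiable) vector renderer, and regresses the neural renderer onto the result, which is why no external dataset is needed. A schematic sketch with hypothetical helper names (see train_imitator.py for the real loop):

import torch

def train_step(net_G, gt_renderer, optimizer, batch_size=64, n_params=12):
    # Schematic imitator step (hypothetical helpers, not the repo API):
    # random strokes are rasterized by the ground-truth vector renderer,
    # and the neural renderer net_G learns to reproduce the rasterization.
    x = torch.rand(batch_size, n_params)              # strokes sampled on the fly
    with torch.no_grad():
        target = gt_renderer(x)                       # non-differentiable rasterizer
    optimizer.zero_grad()
    pred = net_G(x)                                   # neural renderer's imitation
    loss = torch.nn.functional.l1_loss(pred, target)  # pixel regression loss
    loss.backward()
    optimizer.step()
    return loss.item()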

Citation

If you use our code for your research, please cite the following paper:

@inproceedings{zou2020stylized,
    title={Stylized Neural Painting},
    author={Zhengxia Zou and Tianyang Shi and Shuang Qiu and Yi Yuan and Zhenwei Shi},
    year={2020},
    eprint={2011.08114},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

stylized-neural-painting's People

Contributors

ak9250, bfirsh, jiupinjia, slipknottn


stylized-neural-painting's Issues

About the training

Nice job! I wonder what your training dataset is; I can't find it in your paper or code.

Error with Notebook

It throws the following error:

initialize network with normal
pre-trained renderer does not exist...
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-11-a13d7e15e5e1> in <module>()
      1 pt = Painter(args=args)
----> 2 optimize_x(pt)

1 frames
/content/stylized-neural-painting/painter.py in _load_checkpoint(self)
     74         else:
     75             print('pre-trained renderer does not exist...')
---> 76             exit()
     77 
     78 

NameError: name 'exit' is not defined
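
For reference, the built-in exit() is injected by the site module and can be absent in embedded interpreters such as Colab; a portable fix at that spot in painter.py would be sys.exit() (a suggestion, not the shipped code):

import sys

# exit() may be undefined outside interactive sessions; sys.exit() always works.
print('pre-trained renderer does not exist...')
sys.exit()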

Error:CUDA out of memory

This project is so cool and I want to run it on my PC. I downloaded the repo and configured it, and a CUDA out-of-memory error occurred. I changed the max_m_strokes and max_divide parameters and it works correctly, but the output does not meet my needs. I know nothing about the torch framework; can you help me solve this problem, please?

Some problem with pre-trained render file

Hi, I am new here. I have a few problems with the pre-trained renderer file. I downloaded it from Google Drive and cloned the repo. When I put the file into the repo and run the demo_prog script, it says the pre-trained renderer file doesn't exist.

package version

Can you please give the exact version of each package in your environment?

how can i run this program on GPU?

My GPU is a GTX 1070. Can I run this program on the GPU? When I run it, the CPU is at 100% but the GPU is only at 5%, and rendering is also very slow. Is that right?

update render.py to use lightweight version

Currently the lightweight-version models are not supported:
RuntimeError: Error(s) in loading state_dict for ZouFCNFusion:
Missing key(s) in state_dict: "huangnet.fc4.weight", "huangnet.fc4.bias", "huangnet.conv3.weight", "huangnet.conv3.bias", "huangnet.conv4.weight", "huangnet.conv4.bias", "huangnet.conv5.weight", "huangnet.conv5.bias", "huangnet.conv6.weight", "huangnet.conv6.bias", "dcgan.main.10.weight", "dcgan.main.10.bias", "dcgan.main.10.running_mean", "dcgan.main.10.running_var", "dcgan.main.12.weight", "dcgan.main.13.weight", "dcgan.main.13.bias", "dcgan.main.13.running_mean", "dcgan.main.13.running_var", "dcgan.main.15.weight".
size mismatch for huangnet.conv1.weight: copying a param with shape torch.Size([64, 8, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
size mismatch for huangnet.conv1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for huangnet.conv2.weight: copying a param with shape torch.Size([12, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for huangnet.conv2.bias: copying a param with shape torch.Size([12]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for dcgan.main.3.weight: copying a param with shape torch.Size([512, 256, 4, 4]) from checkpoint, the shape in current model is torch.Size([512, 512, 4, 4]).
size mismatch for dcgan.main.4.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for dcgan.main.4.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for dcgan.main.4.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for dcgan.main.4.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for dcgan.main.6.weight: copying a param with shape torch.Size([256, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([512, 256, 4, 4]).
size mismatch for dcgan.main.7.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.7.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.7.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.7.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.9.weight: copying a param with shape torch.Size([128, 6, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 128, 4, 4]).

Color block drifts

After a piece of color block is drawn, it always drifts within a small range.
Is there any way to fix the drawn color block in place so it never moves?

`AttributeError: 'ProgressivePainter' object has no attribute 'x'`

Hello Zhengxia,

Thank you for creating this project, very interesting!

I spent some time playing around with Colab runtime 1; I tried to increase the default hyper-parameters by multiplying them by 2, and so far so good.

However,

# settings
args.canvas_size = 1024 # size of the canvas for stroke rendering
args.max_m_strokes = 1000 # max number of strokes
args.max_divide = 15 # divide an image up-to max_divide x max_divide patches
...

I updated args.canvas_size and args.max_m_strokes by multiplying them by 2, and changed args.max_divide from 10 to 15; then, in the "drawing" phase:

pt = ProgressivePainter(args=args)
final_rendered_image = optimize_x(pt)

It throws error like this:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-30-8f14c24057d5> in <module>()
      1 pt = ProgressivePainter(args=args)
----> 2 final_rendered_image = optimize_x(pt)

<ipython-input-23-25a3da86e7fa> in optimize_x(pt)
     51                 pt.step_id += 1
     52 
---> 53         v = pt._normalize_strokes(pt.x)
     54         v = pt._shuffle_strokes_and_reshape(v)
     55         PARAMS = np.concatenate([PARAMS, v], axis=1)

AttributeError: 'ProgressivePainter' object has no attribute 'x'

Do you know why this error is thrown? Personally, I think maybe it is because there should be some "math formula" relating these three (or more) parameters?

What do you think?

Thanks!

MacBook

AssertionError: Torch not compiled with CUDA enabled

Rendering Vector To Image

I've been digging around the code a bit, trying to figure out how to take the .npz files and render them out. Any ideas or solutions?
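
As a starting point, the saved file is a standard NumPy archive, so you can inspect the stored stroke parameters before worrying about rendering. List the keys rather than assuming their names:

import numpy as np

# Inspect a saved stroke-parameter archive; key names depend on what the
# painter saved, so print them instead of guessing.
data = np.load('./output/sunflowers_strokes.npz')
for key in data.files:
    print(key, data[key].shape)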

cannot to connect to x window error on google colab

Great project!!!

I'm trying to train my own strokes on Google Colab. However, when I try to run the demo:

!python demo.py --img_path ./test_images/sunflowers.jpg --canvas_color 'white' --max_m_strokes 500 --m_grid 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --output_dir ./output

I get this error:
initialize network with normal
loading renderer from pre-trained checkpoint...
begin to draw...
iteration step 0, G_loss: 0.00000, step_psnr: 4.52816, strokes: 25 / 500
: cannot connect to X server

Guessing that it is some kind of image show function(s) somewhere...

Unable run demo on remote machine without desktop environment

Hi @jiupinjia, good job! The results of your repo seem fantastic!
I tried to run your demo on my remote GPU server, which doesn't have a desktop environment (no X11 installed), and got the following error:

initialize network with normal
loading renderer from pre-trained checkpoint...
begin to draw...
iteration step 0, G_loss: 0.00000, step_psnr: 3.38946, strokes: 25 / 500
qt.qpa.xcb: could not connect to display 
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/shir/miniconda3/envs/tf1.15/lib/python3.6/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Aborted (core dumped)

Is there any way to solve this problem?

Brush stroke suggestion

Instead of having a fully opaque brush stroke, would you consider a semi-transparent one, basically obtained by grayscaling a real brush stroke? Similar to what's being done here:

http://3dstereophoto.blogspot.com/2018/07/non-photorealistic-rendering-software.html

This is a classic stroke-based renderer that simulates oil/digital painting. Since it is not AI-based, it probably creates too many strokes, but the use of a semi-transparent brush stroke makes the rendered image more realistic and natural.

How to get video?

I've been through the documentation and I'm having a hard time finding out how to get the video of the model doing the live painting, like this.

Is there any google colab script that can help me with this?

Error while rendering

Hi,

Your project really looks stunning and I want to try it out on Google Colab. This is the code I run:

!python /content/stylized-neural-painting/demo_prog.py \
  --img_path /content/stylized-neural-painting/test_images/apple.jpg \
  --canvas_color 'white' \
  --canvas_size 512 \
  --max_m_strokes 500 \
  --max_divide 5 \
  --with_ot_loss \
  --renderer "oilpaintbrush" \
  --renderer_checkpoint_dir "checkpoints_G_oilpaintbrush_light" \
  --net_G "zou-fusion-net-light" \
  --disable_preview

This is the error I get back. I do not know enough programming to fix it myself.

rendering canvas...
Traceback (most recent call last):
File "/content/stylized-neural-painting/demo_prog.py", line 113, in
optimize_x(pt)
File "/content/stylized-neural-painting/demo_prog.py", line 102, in optimize_x
CANVAS_tmp = pt._render(PARAMS, save_jpgs=False, save_video=False)
File "/content/stylized-neural-painting/painter.py", line 137, in _render
self.rderr.draw_stroke()
File "/content/stylized-neural-painting/renderer.py", line 148, in draw_stroke
return self._draw_oilpaintbrush()
File "/content/stylized-neural-painting/renderer.py", line 326, in _draw_oilpaintbrush
x0, y0, w, h, theta, R0, G0, B0, R2, G2, B2)
File "/content/stylized-neural-painting/utils.py", line 286, in create_transformed_brush
brush_alpha = (brush_alpha > 0).astype(np.float32)
TypeError: '>' not supported between instances of 'NoneType' and 'int'

Thanks for your help!

how to solve this problem

D:\BaiduNetdiskDownload\Stylized Neural Painting Main\Stylized Neural Painting>python demo.py --img_path ./test_images/sunflowers.jpg --canvas_color 'white' --max_m_strokes 500 --m_grid 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --output_dir ./output
Traceback (most recent call last):
  File "demo.py", line 4, in <module>
    torch.cuda.current_device()
  File "C:\Users\12100\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\cuda\__init__.py", line 366, in current_device
    _lazy_init()
  File "C:\Users\12100\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

ask for advice

Hello, what an interesting display!
Can the final rendered image be written out as an SVG?

Advice on combining CLIP with the neural renderer

Hi, this is a very interesting and impressive work in using AI for art. Now that CLIP has created more and more astounding paintings from text, I just want to ask for some advice, if possible, on combining CLIP and your work together, which could be very interesting. Thanks!

Error running demo script

So when I tried to run the demo script from the GitHub code, I got an error saying there is no module named torch. I tried installing torch using the command pip install torch, but got "ERROR: Command errored out with exit status 1".

Error for running Demo Script:
python demo_prog.py --img_path ./test_images/images.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net

Traceback (most recent call last):
  File "C:\Users\20ben\stylized-neural-painting\demo_prog.py", line 3, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'

Error for installing torch:
ERROR: Command errored out with exit status 1:
 command: 'c:\users\20ben\python\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-d_ot0_7g\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-d_ot0_7g\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-record-eydivo47\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\20ben\python\Include\torch'
 cwd: C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-d_ot0_7g\torch\
 Complete output (23 lines):
 running install
 running build_deps
 Traceback (most recent call last):
   File "<string>", line 1, in <module>
   File "C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-d_ot0_7g\torch\setup.py", line 225, in <module>
     setup(name="torch", version="0.1.2.post2",
   File "c:\users\20ben\python\lib\site-packages\setuptools\__init__.py", line 165, in setup
     return distutils.core.setup(**attrs)
   File "c:\users\20ben\python\lib\distutils\core.py", line 148, in setup
     dist.run_commands()
   File "c:\users\20ben\python\lib\distutils\dist.py", line 966, in run_commands
     self.run_command(cmd)
   File "c:\users\20ben\python\lib\distutils\dist.py", line 985, in run_command
     cmd_obj.run()
   File "C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-d_ot0_7g\torch\setup.py", line 99, in run
     self.run_command('build_deps')
   File "c:\users\20ben\python\lib\distutils\cmd.py", line 313, in run_command
     self.distribution.run_command(command)
   File "c:\users\20ben\python\lib\distutils\dist.py", line 985, in run_command
     cmd_obj.run()
   File "C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-d_ot0_7g\torch\setup.py", line 51, in run
     from tools.nnwrap import generate_wrappers as generate_nn_wrappers
 ModuleNotFoundError: No module named 'tools.nnwrap'
 ----------------------------------------
ERROR: Command errored out with exit status 1. Check the logs for full command output.

i can not use GPU

Hello,
I tested this project on both Win10 PowerShell and WSL, but it always uses my poor CPU, and I don't know how to track down the cause of this problem.
I tried to execute the following statements in Python:
"import torch"
"print(torch.cuda.is_available())"
It returns True in PowerShell and False in WSL.
Of course, I managed to execute the example, but it took quite a while.

So I need some help, thank you.
Please forgive my terrible English.
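
As a general PyTorch diagnostic (not specific to this repo), the snippet below reports which device will actually be used. A False result on WSL often just means a CPU-only torch build, and older WSL versions had no CUDA passthrough at all:

import torch

# Report whether this interpreter can see a CUDA device.
if torch.cuda.is_available():
    print('CUDA available:', torch.cuda.get_device_name(0))
else:
    print('CUDA not available; running on CPU')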

brush_alpha error

Hi, the program I ran returned an error message:

File "utils.py" line 286, in create_transformed_brush
brush_alpha = (brush_alpha > 0).astype(np.float32)
TypeError: '>' not supported between instances of 'NoneType' and 'int'

command:
python ../stylized-neural-painting/demo_prog.py --img_path ../stylized-neural-painting/test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir ../stylized-neural-painting/checkpoints_G_oilpaintbrush --net_G zou-fusion-net
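
A NoneType in that comparison usually means a brush texture failed to load, since cv2.imread returns None instead of raising when a file is missing or unreadable. A quick check, assuming the brush textures live in the repo's ./brushes directory (adjust the path if yours differs):

import os
import cv2

# cv2.imread silently returns None for missing/unreadable files, which later
# surfaces as "'>' not supported between 'NoneType' and 'int'".
brush_dir = './brushes'  # assumed location of the brush texture images
for name in sorted(os.listdir(brush_dir)):
    img = cv2.imread(os.path.join(brush_dir, name), cv2.IMREAD_GRAYSCALE)
    print(name, 'OK' if img is not None else 'FAILED TO LOAD')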

intersection brushstroke question

Hello, sir. I am trying to physically recreate the "sketch" that the model produced (markerpen). I chose the marker pen because in the process it will be convenient to decompose it into vectors/points (Pic 1). I am going to paint with acrylic/oil, but this type of paint cannot physically replicate it (Pic 2; I called this effect "intersecting brushstrokes"). I cannot ignore it because it will affect the final result of the drawing (I mean, a picture is formed because of these intersections).

Can I teach the model to rule it out? Perhaps by changing to a new kind of stroke? Perhaps I missed something and there is a ready-made solution? Thanks!

[Pic 1: marker-pen result decomposed into vectors/points]

[Pic 2: the intersecting-brushstrokes effect]

pre-trained renderer does not exist...

This is thrown when I run the command:
python demo_prog.py --img_path ./test_images/diamond.jpg --canvas_color 'black' --max_m_strokes 500 --max_divide 5 --renderer markerpen --renderer_checkpoint_dir checkpoints_G_markerpen --net_G zou-fusion-net --disable_preview

Using Cpu instead of GPU?

Hi, when I run the program it uses 80 percent of my CPU and almost no GPU. Renders take forever too. Is there a way to check whether it's using my 2080 Ti? Can I force it to use my GPU? Thanks.

Something about CUDA

Hi:

When I execute demo.py, I hit the bug below:

RuntimeError: CUDA out of memory. Tried to allocate 626.00 MiB (GPU 0; 8.00 GiB total capacity; 3.71 GiB already allocated; 469.62 MiB free; 5.58 GiB reserved in total by PyTorch)

The model I chose is the bigger one: ./checkpoints_G_oilpaintbrush

The net_G I chose is zou-fusion-net

Hoping for your answer.

Thank you.

Sharing my interesting results with rectangle renderer

Hi, thanks for open-sourcing the great work; I just wanted to share my own results. (I think it would be great if there were a place for users to post their own results, some kind of "open gallery".)

The input was my profile picture (drawn with LuaTeX+TikZ before, which imitated the JOI logo - https://git.io/vQwmS):

[input image: veydpz_input]

The progressive output was interesting; the network preferred to draw tilted rectangles rather than axis-aligned ones (can you think of a reason behind this?):

[intermediate progressive output]

The final result looked like this:

[final render: veydpz_rendered_stroke_0495]

No GPU

Hi,

Thanks indeed for your work.
I wanted to run a quick test on one of our virtual machines and got stuck as there is no GPU on the VM.
Is there a way I can run it without a GPU?

Many thanks indeed!
DF

ImportError: cannot import name 'PILLOW_VERSION'

Trying to run the following line

python demo_prog.py --img_path [MY_PICTURE] --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net

Receiving this error:

Traceback (most recent call last):
  File "demo_prog.py", line 5, in <module>
    from painter import *
  File "C:\Users\Tony\Documents\Projects\Personal Projects\stylized-neural-painting\painter.py", line 5, in <module>
    import utils
  File "C:\Users\Tony\Documents\Projects\Personal Projects\stylized-neural-painting\utils.py", line 12, in <module>
    import torchvision.transforms.functional as TF
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\__init__.py", line 2, in <module>
    from torchvision import datasets
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\datasets\__init__.py", line 9, in <module>
    from .fakedata import FakeData
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\datasets\fakedata.py", line 3, in <module>
    from .. import transforms
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\transforms\__init__.py", line 1, in <module>
    from .transforms import *
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\transforms\transforms.py", line 17, in <module>
    from . import functional as F
  File "C:\Users\Tony\AppData\Local\Programs\Python\Python36\lib\site-packages\torchvision\transforms\functional.py", line 5, in <module>
    from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
ImportError: cannot import name 'PILLOW_VERSION'

About input photo size?

Can I use a photo size like 1280 x 720?
When I use --canvas_size 720, the photo is forced to 720 x 720.
How can I change the output photo size to 1280 x 720?
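
One possible workaround, assuming the pipeline really does force a square canvas: letterbox the input to a square before painting, then crop the rendered output back to 16:9. A sketch with placeholder filenames:

from PIL import Image

# Letterbox a 1280x720 photo onto a 1280x1280 white square for painting,
# then crop the rendered square back to 16:9. Filenames are placeholders.
img = Image.open('input_1280x720.jpg')
square = Image.new('RGB', (1280, 1280), 'white')
square.paste(img, (0, 280))                      # (1280 - 720) // 2 = 280
square.save('input_square.jpg')                  # feed this to the demo

out = Image.open('rendered_square.jpg').resize((1280, 1280))
out.crop((0, 280, 1280, 1000)).save('rendered_1280x720.jpg')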

Help running on CPU

(SNP2) C:\Users\Administrator\Nova pasta\SNP>python demo_prog.py --img_path ./test_images/1.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush --net_G zou-fusion-net --disable_preview
Traceback (most recent call last):
  File "demo_prog.py", line 4, in <module>
    torch.cuda.current_device()
  File "C:\ProgramData\Anaconda3\envs\SNP2\lib\site-packages\torch\cuda\__init__.py", line 366, in current_device
    _lazy_init()
  File "C:\ProgramData\Anaconda3\envs\SNP2\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
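
Since the Nov 29, 2020 update says CPU mode is supported, the stumbling block here is the unconditional torch.cuda.current_device() call at the top of the demo script. A hedged patch sketch (a suggestion, not the shipped code):

import torch

# Only touch the CUDA runtime when a GPU is actually available, so a
# CPU-only torch install doesn't crash at import time.
if torch.cuda.is_available():
    torch.cuda.current_device()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('running on', device)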

About training new painting styles

Hello, your work is very creative, but I found that you didn't publish a training dataset, so how did you train the strokes of the different painting styles? If I want to train a Chinese-style painting stroke, what should I do? Hoping for your reply. Thanks so much!

Style Transfer Colab Notebook error

The first part of the image-to-painting-nst.ipynb works fine, since this is creating a painting through brush strokes and then a video.

The second part is the style-transfer code, and I am getting an error here:

[screenshot of the error, taken Dec 12, 2021]

style transfer in demo_nst.py

When I run demo_nst.py for style transfer as described in the readme.md, I get incorrect results; what should I do? Thanks.

[result image: sunflowers_style_transfer_fire]

Assertion failed error

Hi @jiupinjia

I am using Google Colab to run your light versions (specifically oilpaintbrush light) and everything ran fine till today; now I get an error like so. (Disregard this thing about the error; I'm an idiot, it was because I had not copied the source file into the correct directory.)

Input:
!python3 demo_prog.py --img_path ./test_images/papa2.jpg --canvas_color 'black' --max_m_strokes 1280 --max_divide 8 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush_light --net_G zou-fusion-net-light

Edit: This was originally an error report because I got something about an OpenCV assertion failure. Turns out it was because I had not copied the source file properly into the Colab folder. Just putting this in here for transparency.

Just a little suggestion

Is it technically possible to make every stroke filled with only one single color?
I think it would be excellent for humans to imitate and learn from, which would make this project more valuable.

Since I have not read the code yet, please forgive my whimsical dumb idea if it's totally impracticable.

Application for cooperation

Dear developer, I am a student at the University of the Chinese Academy of Sciences. My school is holding a public-welfare competition on the theme of the integration of science, technology, and art. Can I use the works drawn by this project to participate? I want to make a comparison between AI painting and human painting. I won't enter your AI project itself in the competition; I just want to use the works it creates, to show the current high level of AI painting.

If you want to know more about the competition, you can visit this link:

https://kexie.ucas.ac.cn/index.php/zh/tgg/122-2021-07-21-02-10-49

If you have more information, please contact me as soon as possible; otherwise we will miss the registration deadline.

RuntimeError: Error(s) in loading state_dict for ZouFCNFusionLight:

python=3.6.2
torch=1.2.0
nvidia 2080ti 11G
cuda=10.0
cudnn=7.6.4.38
python demo_prog.py --img_path ./test_images/apple.jpg --canvas_color 'white' --max_m_strokes 500 --max_divide 5 --renderer oilpaintbrush --renderer_checkpoint_dir checkpoints_G_oilpaintbrush
initialize network with normal
loading renderer from pre-trained checkpoint...
Traceback (most recent call last):
  File "demo_prog.py", line 113, in <module>
    optimize_x(pt)
  File "demo_prog.py", line 49, in optimize_x
    pt._load_checkpoint()
  File "/home/banana/GAN/stylized-neural-painting-main12/painter.py", line 71, in _load_checkpoint
    self.net_G.load_state_dict(checkpoint['model_G_state_dict'])
  File "/home/banana/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ZouFCNFusionLight:
Unexpected key(s) in state_dict: "huangnet.fc4.weight", "huangnet.fc4.bias", "huangnet.conv3.weight", "huangnet.conv3.bias", "huangnet.conv4.weight", "huangnet.conv4.bias", "huangnet.conv5.weight", "huangnet.conv5.bias", "huangnet.conv6.weight", "huangnet.conv6.bias", "dcgan.main.10.weight", "dcgan.main.10.bias", "dcgan.main.10.running_mean", "dcgan.main.10.running_var", "dcgan.main.10.num_batches_tracked", "dcgan.main.12.weight", "dcgan.main.13.weight", "dcgan.main.13.bias", "dcgan.main.13.running_mean", "dcgan.main.13.running_var", "dcgan.main.13.num_batches_tracked", "dcgan.main.15.weight".
size mismatch for huangnet.conv1.weight: copying a param with shape torch.Size([32, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 8, 3, 3]).
size mismatch for huangnet.conv1.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for huangnet.conv2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([12, 64, 3, 3]).
size mismatch for huangnet.conv2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([12]).
size mismatch for dcgan.main.3.weight: copying a param with shape torch.Size([512, 512, 4, 4]) from checkpoint, the shape in current model is torch.Size([512, 256, 4, 4]).
size mismatch for dcgan.main.4.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.4.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.4.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.4.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for dcgan.main.6.weight: copying a param with shape torch.Size([512, 256, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 128, 4, 4]).
size mismatch for dcgan.main.7.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for dcgan.main.7.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for dcgan.main.7.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for dcgan.main.7.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for dcgan.main.9.weight: copying a param with shape torch.Size([256, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([128, 6, 4, 4]).
