waifu2x-chainer's Issues

cannot downscale when using GPU for '-S' or '-L' command

Downscaling to the target image size is normally not a problem when using the CPU. But when using the GPU, the enlargement does not reach the target size.

Example:
Enlarge an image whose smallest side is 900px, targeting 3000px.

  • When using the CPU, the process goes 900 x2 = 1800, x2 = 3600px, which is then downscaled to 3000px.
  • When using the GPU, the process stops at 900 x2 = 1800px.

Is there something wrong with it?
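A note on the behaviour described above: based only on this report, the CPU path seems to repeat 2x passes until the smallest side reaches the target and then downscales once, while the GPU path stops after the first pass. Below is a minimal sketch of that CPU-style logic using Pillow for the final downscale; the function name and the upscale_2x callable are placeholders, not the project's code.

from PIL import Image

def upscale_to_target(img, target_smallest_side, upscale_2x):
    # upscale_2x: any callable that returns the image enlarged 2x (e.g. the model)
    while min(img.size) < target_smallest_side:
        img = upscale_2x(img)                      # 900 -> 1800 -> 3600
    if min(img.size) > target_smallest_side:
        ratio = target_smallest_side / min(img.size)
        new_size = (round(img.size[0] * ratio), round(img.size[1] * ratio))
        img = img.resize(new_size, Image.LANCZOS)  # final downscale, e.g. to 3000
    return img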

Support for the photo model

Are there plans to support the photo model?
I would like to try the photo model with ResNet10.

[Bug] cannot write file as WebP. Wait, what?

Hi, I found this bug.

Traceback (most recent call last):
  File "waifu2x.py", line 224, in <module>
    icc_profile=icc_profile)
  File "/home/ucok66/.local/lib/python2.7/site-packages/PIL/Image.py", line 2007, in save
    save_handler(self, fp, filename)
  File "/home/ucok66/.local/lib/python2.7/site-packages/PIL/WebPImagePlugin.py", line 342, in _save
    raise IOError("cannot write file as WebP (encoder returned None)")
IOError: cannot write file as WebP (encoder returned None)
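For reference, "encoder returned None" from Pillow's WebP plugin often (though not necessarily here) means the Pillow build has no working libwebp encoder. A hedged workaround sketch, not part of waifu2x.py, that checks WebP support before saving and falls back to PNG:

from PIL import features

def save_image(img, path, **kwargs):
    # features.check('webp') is False when Pillow was built without libwebp
    if path.lower().endswith('.webp') and not features.check('webp'):
        path = path.rsplit('.', 1)[0] + '.png'  # fall back to PNG
    img.save(path, **kwargs)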

training custom model, is this accuracy reasonable?

!python /content/waifu2x-chainer/train.py --gpu 0 --dataset_dir ./jpg --patches 32 --epoch 10 --model_name reference_scale_rgb --downsampling_filters box lanczos --lr_decay_interval 3 --arch UpConv7
/usr/local/lib/python3.6/dist-packages/chainer/_environment_check.py:91: UserWarning:


Multiple installations of CuPy package has been detected.
You should select only one package from from ['cupy-cuda102', 'cupy-cuda101', 'cupy-cuda100', 'cupy-cuda92', 'cupy-cuda91', 'cupy-cuda90', 'cupy-cuda80', 'cupy'].
Follow these steps to resolve this issue:

  1. pip list to list CuPy packages installed
  2. pip uninstall <package name> to uninstall all CuPy packages
  3. pip install <package name> to install the proper one

'''.format(name=name, pkgs=pkgs))

  • loading filelist... done
  • setup model... done
  • check forward path... done
  • starting processes of dataset sampler... done

epoch: 0

inner epoch: 0

* best loss on training dataset: 0.002381
* best score on validation dataset: PSNR 22.072546 dB
* elapsed time: 881.055754 sec

inner epoch: 1

* best loss on training dataset: 0.001941
* best score on validation dataset: PSNR 22.579106 dB
* elapsed time: 103.186740 sec

inner epoch: 2

* best loss on training dataset: 0.001810
* best score on validation dataset: PSNR 22.679653 dB
* elapsed time: 97.545877 sec

inner epoch: 3

* best loss on training dataset: 0.001732
* best score on validation dataset: PSNR 23.013222 dB
* elapsed time: 91.364418 sec

epoch: 1

inner epoch: 0

* best score on validation dataset: PSNR 23.169959 dB
* elapsed time: 106.030671 sec

inner epoch: 1

* best loss on training dataset: 0.001652
* best score on validation dataset: PSNR 23.181128 dB
* elapsed time: 102.384226 sec

inner epoch: 2

* best loss on training dataset: 0.001593
* best score on validation dataset: PSNR 23.337347 dB
* elapsed time: 99.998622 sec

inner epoch: 3

* best loss on training dataset: 0.001545
* elapsed time: 91.267800 sec

epoch: 2

inner epoch: 0

* best score on validation dataset: PSNR 23.427455 dB
* elapsed time: 106.444332 sec

inner epoch: 1

* elapsed time: 103.891084 sec

inner epoch: 2

* best loss on training dataset: 0.001530
* elapsed time: 98.481055 sec

inner epoch: 3

* best loss on training dataset: 0.001489
* best score on validation dataset: PSNR 23.479888 dB
* elapsed time: 91.023230 sec

epoch: 3

inner epoch: 0

* best score on validation dataset: PSNR 23.528935 dB
* elapsed time: 106.155661 sec

inner epoch: 1

* elapsed time: 103.248413 sec

inner epoch: 2

* best loss on training dataset: 0.001455
* elapsed time: 98.362952 sec

inner epoch: 3

* best loss on training dataset: 0.001421
* learning rate decay: 0.000225
* elapsed time: 90.741462 sec

epoch: 4

inner epoch: 0

* best score on validation dataset: PSNR 23.533943 dB
* elapsed time: 105.805502 sec

inner epoch: 1

* best score on validation dataset: PSNR 23.573726 dB
* elapsed time: 103.978917 sec

inner epoch: 2

* best score on validation dataset: PSNR 23.623575 dB
* elapsed time: 99.085195 sec

inner epoch: 3

* best loss on training dataset: 0.001405
* best score on validation dataset: PSNR 23.658209 dB
* elapsed time: 91.353107 sec

epoch: 5

inner epoch: 0

* best score on validation dataset: PSNR 23.710338 dB
* elapsed time: 106.360211 sec

inner epoch: 1

* best score on validation dataset: PSNR 23.714681 dB
* elapsed time: 104.392316 sec

inner epoch: 2

* elapsed time: 99.168070 sec

inner epoch: 3

* best loss on training dataset: 0.001383
* elapsed time: 91.488422 sec

epoch: 6

inner epoch: 0

* best score on validation dataset: PSNR 23.732231 dB
* elapsed time: 107.400784 sec

inner epoch: 1

* best score on validation dataset: PSNR 23.761270 dB
* elapsed time: 105.255023 sec

inner epoch: 2

* elapsed time: 99.553206 sec

inner epoch: 3

* elapsed time: 91.538299 sec

epoch: 7

inner epoch: 0

* best score on validation dataset: PSNR 23.764517 dB
* elapsed time: 106.978499 sec

inner epoch: 1

* elapsed time: 104.257074 sec

inner epoch: 2

* elapsed time: 99.090099 sec

inner epoch: 3

* best loss on training dataset: 0.001361
* learning rate decay: 0.000203
* elapsed time: 91.054147 sec

epoch: 8

inner epoch: 0

* best score on validation dataset: PSNR 23.816657 dB
* elapsed time: 105.482888 sec

inner epoch: 1

* best score on validation dataset: PSNR 23.883949 dB
* elapsed time: 104.894173 sec

inner epoch: 2

* elapsed time: 99.114435 sec

inner epoch: 3

* elapsed time: 90.749769 sec

epoch: 9

inner epoch: 0

* learning rate decay: 0.000182
* elapsed time: 92.891063 sec

inner epoch: 1

* elapsed time: 90.760490 sec

inner epoch: 2

* elapsed time: 90.826175 sec

inner epoch: 3

* best loss on training dataset: 0.001345
* learning rate decay: 0.000164
* elapsed time: 90.922554 sec

I read that the PSNR for the pretrained model is 30+; is 23 dB too low? Maybe my training set is bad? It consists mostly of black-and-white images, specifically Japanese comic book pages, if that matters.
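For reference, PSNR over 8-bit images is 10·log10(255² / MSE), so a 7 dB gap (23 vs 30 dB) corresponds to roughly five times the mean squared error; also note that PSNR values are not directly comparable across different validation sets, and black-and-white manga pages are a very different dataset from anime-style artwork. A minimal sketch of the standard definition (assumption: the project's validation score follows it, which I have not verified against the repo):

import numpy as np

def psnr(a, b, max_value=255.0):
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)  # in dB; higher is better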

Behavior when an image file path is specified as the output destination

I have a request regarding the behavior when an image file path is specified as the output destination.
When a command like the following is run,

python waifu2x.py -m scale -s 2 -i "C:\waifu2x\001.png" -a 2 -o "C:\waifu2x\001_hoge.png"

the image actually ends up being written to the following path:
C:\waifu2x\001_hoge.png\001_(scale2.0x)(resnet10_rgb).png

This may be the intended behavior, but it would be helpful if the output could be written to the exact location specified with -o.
Could you please consider this?

RuntimeError: CUDA environment is not correctly set up

I am on macOS, and when I execute:

python3 waifu2x.py --method noise --noise_level 1 --input imgsd2.jpg --arch VGG7 --gpu 0

I get this error:

Traceback (most recent call last):
  File "waifu2x.py", line 160, in <module>
    models = load_models(args)
  File "waifu2x.py", line 120, in load_models
    chainer.backends.cuda.check_cuda_available()
  File "/Users/pauperales/anaconda3/lib/python3.7/site-packages/chainer/backends/cuda.py", line 93, in check_cuda_available
    raise RuntimeError(msg)
RuntimeError: CUDA environment is not correctly set up
(see https://github.com/chainer/chainer#installation).cannot import name 'cuda' from 'cupy' (unknown location)

argument '-e' is not functional

Honestly, this might be a bug I actually like, because the output format ends up being the same as the input format; but it means the '-e' argument has no effect. Once this bug is fixed, could you add an option so that the output format matches the input format?

Output file overwrites input file

In previous versions, the program's output filename was based on the input filename and the options you chose. Now, however, no matter which parameters you use, the output filename is the same as the input's, and the input file gets overwritten.

I do not think this is intended behavior looking at the code.

[Question] About reproducing same performance of Upconv7

Hi,

Thanks for sharing the code with us. I'm new to this model, Waifu2x.
I tried to reproduce the quality (or maybe the PSNR) of the output result with UpConv7.
I used the same pre-training & fine-tuning procedure from the appendix with the DIV2K dataset (.png).

  1. python train.py --gpu 0 --dataset_dir png_dataset --patches 32 --epoch 10 --model_name reference_scale_rgb --downsampling_filters box lanczos --lr_decay_interval 3 --arch UpConv7
  2. python train.py --gpu 0 --dataset_dir png_dataset --finetune reference_scale_rgb.npz --downsampling_filters box lanczos --arch UpConv7

But the PSNR is only 29.xxx and the output image quality is not as good as your model (anime_style_scale_rgb.npz in upconv7).

Would you give me some advice on reproducing the result? Which image dataset should I use? Or maybe where did I go wrong?

Thanks a lot~~~

MemoryError

Level 3 denoising... Traceback (most recent call last):
  File "waifu2x.py", line 133, in <module>
    dst = denoise_image(dst, models['noise'], args)
  File "waifu2x.py", line 21, in denoise_image
    dst = reconstruct.image(src, model, cfg.block_size, cfg.batch_size)
  File "/home/lib/reconstruct.py", line 137, in image
    dst = blockwise(src, model, block_size, batch_size)
  File "/home/lib/reconstruct.py", line 53, in blockwise
    batch_y = model(batch_x)
  File "/home/lib/srcnn.py", line 26, in __call__
    h = F.leaky_relu(self.conv3(h), 0.1)
  File "/usr/local/lib/python3.5/dist-packages/chainer/links/connection/convolution_2d.py", line 154, in __call__
    x, self.W, self.b, self.stride, self.pad)
  File "/usr/local/lib/python3.5/dist-packages/chainer/functions/connection/convolution_2d.py", line 439, in convolution_2d
    return func(x, W, b)
  File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 200, in __call__
    outputs = self.forward(in_data)
  File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 329, in forward
    return self.forward_cpu(inputs)
  File "/usr/local/lib/python3.5/dist-packages/chainer/functions/connection/convolution_2d.py", line 87, in forward_cpu
    cover_all=self.cover_all)
  File "/usr/local/lib/python3.5/dist-packages/chainer/utils/conv.py", line 33, in im2col_cpu
    col = numpy.ndarray((n, c, kh, kw, out_h, out_w), dtype=img.dtype)
MemoryError
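The allocation that fails is the im2col buffer of shape (n, c, kh, kw, out_h, out_w); its size grows quickly with the block size, so lowering the block size and/or batch size (the -l and -b flags used in another issue on this page) is the usual way to avoid this on CPU. An illustrative estimate with made-up numbers:

def im2col_bytes(n, c, kh, kw, out_h, out_w, itemsize=4):
    # float32 buffer allocated by im2col_cpu in the traceback above
    return n * c * kh * kw * out_h * out_w * itemsize

# hypothetical example: 16 blocks, 256 channels, 3x3 kernel, 128x128 output
print(im2col_bytes(16, 256, 3, 3, 128, 128) / 2**30, 'GiB')  # ~2.25 GiB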

Is there any way to increase upscale to more than 2x?

[edit] I just noticed the --scale / -s option. I will close the topic.

The project already works perfectly fine with the following command:
python waifu2x.py -m noise_scale -n 2 -i /content/frames -a 1 -g 0

But I would like to increase the resolution by 4x; is this possible with the code as it stands?

I use the project with Google Colab because I can access files from Google Drive and from my computer. In the latter case, simply run the script twice, because the first run generates an error. Notebook link:
https://colab.research.google.com/drive/1yIHdA4kUmB9RFy7-DDKXayatk0M0-ADU#forceEdit=true&sandboxMode=true

[Question] Use without gpu

First, congratulations on the code!
Is it possible to run it without a graphics card? How can I do this?
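For what it's worth: other issues on this page note that the script runs on the CPU by default, and the GPU is only used when a device id is passed with --gpu / -g. So running without a graphics card should just be a matter of omitting that flag, for example (input/output paths below are placeholders):

python waifu2x.py -m noise_scale -n 1 -i images -o output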

How to make noise reduction result smooth ?

I'm wondering how this code makes results look so smooth, with almost no noise.
Below is the output of my image-to-image network, without any pre- or post-processing; the noise generation is the same as yours, but my result has more noise. The noise level is 2. Please tell me what else I need to do. Thanks.
The first image is mine, the second is produced by yours.
[image: 300x300_4x_gen_300_300_v6_sub_init]

[image: 300x300_waifu2x_photo_noise3_tta_1]

training problems

Hello m8, when I try to train a custom model, this error appears and I do not know how to solve it.

  • loading filelist... done
  • setup model... done
  • check forward path... done
  • starting processes of dataset sampler... done

epoch: 0

inner epoch: 0

Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/content/waifu2x-chainer/lib/dataset_sampler.py", line 85, in _worker
    xc_batch, yc_batch = pairwise_transform(img, cfg)
  File "/content/waifu2x-chainer/lib/pairwise_transform.py", line 148, in pairwise_transform
    cfg.nr_rate, cfg.chroma_subsampling_rate, cfg.noise_level)
  File "/content/waifu2x-chainer/lib/pairwise_transform.py", line 66, in noise_scale
    with iproc.array_to_wand(src) as tmp:
  File "/content/waifu2x-chainer/lib/iproc.py", line 75, in array_to_wand
    dst = wand.image.Image(blob=buf.getvalue())
NameError: name 'wand' is not defined
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/content/waifu2x-chainer/lib/dataset_sampler.py", line 85, in _worker
    xc_batch, yc_batch = pairwise_transform(img, cfg)
  File "/content/waifu2x-chainer/lib/pairwise_transform.py", line 148, in pairwise_transform
    cfg.nr_rate, cfg.chroma_subsampling_rate, cfg.noise_level)
  File "/content/waifu2x-chainer/lib/pairwise_transform.py", line 66, in noise_scale
    with iproc.array_to_wand(src) as tmp:
  File "/content/waifu2x-chainer/lib/iproc.py", line 75, in array_to_wand
    dst = wand.image.Image(blob=buf.getvalue())
NameError: name 'wand' is not defined

Wand is the cause of the problem.
I use Ubuntu 18.04 and CUDA 10.0.

AttributeError: 'NoneType' object has no attribute 'image'

In iproc.py I changed the import to try: import wand.image / except ImportError: wand = None, because PyCharm warned that 'wand', imported in a try block with 'except ImportError', should also be defined in the except block. Now it raises AttributeError: 'NoneType' object has no attribute 'image' instead. How can I solve this?
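The underlying cause in both of the last two issues is that the optional Wand dependency is not installed (pip install Wand, which also requires the ImageMagick library on the system). A minimal sketch, not the author's iproc.py, of a guarded optional import that fails with a clear message instead of NameError/AttributeError:

try:
    import wand.image
except ImportError:
    wand = None  # Wand/ImageMagick not installed

def array_to_wand(buf):
    # illustrative wrapper; the real function lives in lib/iproc.py
    if wand is None:
        raise RuntimeError(
            'Wand/ImageMagick is required for this code path; '
            'install it with: pip install Wand')
    return wand.image.Image(blob=buf)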

Train a model with 2 inputs.

Hi m8, I have prepared an extensive dataset split into 2 folders: folder "A" with the low-resolution pictures and folder "B" with the clean high-resolution ones.
I want to train my own model for a super-resolution task.

Is it possible to do that?
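For what it's worth, the tracebacks elsewhere on this page suggest that train.py builds its own (input, target) patch pairs on the fly via lib/pairwise_transform.py from a single folder of clean images, so a pre-split A/B layout would likely need changes to the dataset sampler. A purely conceptual sketch of what such on-the-fly pairing does (illustrative only, not the project's code):

from PIL import Image

def make_pair(hr_img, scale=2):
    # degrade the clean image to produce the network input;
    # the clean image itself is the training target
    w, h = hr_img.size
    lr_img = hr_img.resize((w // scale, h // scale), Image.BOX)
    return lr_img, hr_img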

Error occurs partway through training with ResNet10

Command that produced the error:

C:\Users\Kenin\waifu2x-chainer>python train.py --dataset_dir (dir) --method noise_scale --noise_level 1 --finetune test.npz --arch 2

Result:

* loading filelist... done
* loading model... done
* starting processes of dataset sampler... done
### epoch: 0 ###
  # inner epoch: 0
Process Process-1:
Traceback (most recent call last):
  File "C:\Users\Kenin\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 258, in _bootstrap
    self.run()
  File "C:\Users\Kenin\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Kenin\waifu2x-chainer\lib\dataset_sampler.py", line 85, in _worker
    xc_batch, yc_batch = pairwise_transform(img, cfg)
  File "C:\Users\Kenin\waifu2x-chainer\lib\pairwise_transform.py", line 151, in pairwise_transform
    raise ValueError('inner_scale must be > 1')
ValueError: inner_scale must be > 1
Process Process-2:
Traceback (most recent call last):
  File "C:\Users\Kenin\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 258, in _bootstrap
    self.run()
  File "C:\Users\Kenin\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Kenin\waifu2x-chainer\lib\dataset_sampler.py", line 85, in _worker
    xc_batch, yc_batch = pairwise_transform(img, cfg)
  File "C:\Users\Kenin\waifu2x-chainer\lib\pairwise_transform.py", line 151, in pairwise_transform
    raise ValueError('inner_scale must be > 1')
ValueError: inner_scale must be > 1

When I used --method noise instead of --method noise_scale, the error did not occur.

an error message in the middle of processing

When I tried to enlarge my 132 images with this command:
python waifu2x.py -a 3 -e png -q 8 -b 8 -l 128 -m noise_scale -s 4 -n 3 -i /home/ucok66/material/4 -o /home/ucok66/waifu
I got this message:

Traceback (most recent call last):
  File "waifu2x.py", line 175, in <module>
    src = Image.open(path)
  File "/usr/lib/python3.7/site-packages/PIL/Image.py", line 2687, in open
    % (filename if filename else fp))
OSError: cannot identify image file '/home/ucok66/material/4/.directory'

Did I do something wrong?
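The failing file here is '.directory' (a KDE folder-metadata file), not an image, so the command itself looks fine. A hedged sketch, not the project's code, of skipping non-image files before processing a folder:

import os
from PIL import Image

def iter_images(src_dir):
    for name in sorted(os.listdir(src_dir)):
        path = os.path.join(src_dir, name)
        try:
            with Image.open(path) as img:
                img.verify()  # raises if the file is not a readable image
        except (OSError, SyntaxError):
            continue  # skip .directory, thumbnails, etc.
        yield path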

Release to pypi.org

Hi, it'd be great to see this on pypi.org if you have the time to upload it.

Thanks for the project! <3

Behavior when using the --model_dir option

Sorry to bother you again.
When I specify a directory other than the RGB models with --model_dir, I get the following error.
The working folder is C:\Users***\Desktop\動作テスト.

python "C:\Users*\waifu2x-chainer\waifu2x.py" -i "C:\Users*\Desktop\動作テスト\001.png" -m scale -s 2 -d "C:\Users***\waifu2x-chainer\models\resnet10" -l 64

Traceback (most recent call last):
  File "C:\Users*\waifu2x-chainer\waifu2x.py", line 159, in <module>
    models = load_models(args)
  File "C:\Users*\waifu2x-chainer\waifu2x.py", line 108, in load_models
    chainer.serializers.load_npz(model_path, models['scale'])
  File "C:\Users*\Anaconda3\lib\site-packages\chainer\serializers\npz.py", line 153, in load_npz
    d.load(obj)
  File "C:\Users*\Anaconda3\lib\site-packages\chainer\serializer.py", line 83, in load
    obj.serialize(self)
  File "C:\Users*\Anaconda3\lib\site-packages\chainer\link.py", line 795, in serialize
    d[name].serialize(serializer[name])
  File "C:\Users*\Anaconda3\lib\site-packages\chainer\link.py", line 551, in serialize
    data = serializer(name, param.data)
  File "C:\Users*\Anaconda3\lib\site-packages\chainer\serializers\npz.py", line 116, in __call__
    dataset = self.npz[key]
  File "C:\Users*\Anaconda3\lib\site-packages\numpy\lib\npyio.py", line 237, in __getitem__
    raise KeyError("%s is not a file in the archive" % key)
KeyError: 'conv4/b is not a file in the archive'

Also, when --model_dir and --arch are specified together, the conversion itself seems to succeed, but regardless of which model is specified, the output filenames all end up as model_rgb.

The upresnet10 model is not exported as a caffe model

I trained the upresnet10 model and I want to export it as a Caffe model so that the waifu2x-caffe GUI can use it. I tried to run py .\convert_models.py, which produced the following export error:

C.check_model(protobuf_string)
onnx.onnx_cpp2py_export.checker.ValidationError: Node () has input size 1 not in range [min=3, max=5].

==> Context: Bad node spec for node. Name: OpType: Slice

Need cunet model

I would appreciate it if the cunet model could be added to this project,
or teach me how to convert the original .t7 model to .npz files, and I will do it myself.

[Feature Request] CUnet model?

waifu2x-caffe recently updated with the CUnet model. It's really good. Unfortunately, waifu2x-caffe still handles large images as poorly as ever, which leads me to want a chainer implementation of it. Is there one available anywhere, and if not, would you make one or convert the caffe one for the project? Thanks.

How can it run on GPU?

When I use the pretrained model, the result is really good, but it takes 45 minutes to upscale a 3450x2230 image to 2x. So should I try it with the GPU, since it runs on the CPU by default? Thanks a lot!
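For what it's worth: based on the CuPy package names in the training log earlier on this page, GPU support in Chainer comes from installing the CuPy wheel that matches your CUDA version (e.g. pip install cupy-cuda100) and then passing a device id via --gpu / -g, for example:

python waifu2x.py -m scale -s 2 -i image.png -g 0

(Flag names are taken from the other issues here; I have not verified the full option list.)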

[Question] how to build Waifu2x-chainer on Gcloud VM instance correctly?

Before I ask, thank you to Tsurumeso, who has solved my previous issues.
I tried to build Waifu2x-chainer on Gcloud, but it does not work. Has anyone ever tried to set it up on Gcloud?

here my attempt:

  • VM instance configuration:
    • Machine type: 8vcpu, 20GB memory
    • Boot disk: Intel® optimized Deep Learning Image: Base m22 (with Intel® MKL and CUDA 10.0) with 50 GB SSD persistent disk
    • the rest I left at the defaults (I didn't use a GPU)
  • on the terminal:
ucok66@ucok66 Linux 5.0.7-1-MANJARO x86_64 18.0.4 Illyria
~ >>> gcloud compute --project "vigilant-guru-237109" ssh --zone "us-east1-b" "instance-1"                                                                        [130]
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
This tool needs to create the directory [/home/ucok66/.ssh] before 
being able to generate SSH keys.

Do you want to continue (Y/n)?  y

Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ucok66/.ssh/google_compute_engine.
Your public key has been saved in /home/ucok66/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:q2WW2wutVquuYd++yOkP7b4+dkapywu7LdRo+R8I1ho ucok66@ucok66
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|                 |
|         .       |
|        E+.  .   |
|       .=Oo.o    |
|      oo@o++.    |
|     . Oo/* o.   |
|      o=&X^@.    |
+----[SHA256]-----+
Updating project ssh metadata...⠛Updated [https://www.googleapis.com/compute/v1/projects/vigilant-guru-237109].                                                        
Updating project ssh metadata...done.                                                                                                                                  
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.4483393792698769101' (ECDSA) to the list of known hosts.
Enter passphrase for key '/home/ucok66/.ssh/google_compute_engine': 
Enter passphrase for key '/home/ucok66/.ssh/google_compute_engine': 
======================================
Welcome to the Google Deep Learning VM
======================================

Version: m22
Based on: Debian GNU/Linux 9.8 (stretch) (GNU/Linux 4.9.0-8-amd64 x86_64\n)

Resources:
 * Google Deep Learning Platform StackOverflow: https://stackoverflow.com/questions/tagged/google-dl-platform
 * Google Cloud Documentation: https://cloud.google.com/deep-learning-vm
 * Google Group: https://groups.google.com/forum/#!forum/google-dl-platform

To reinstall Nvidia driver (if needed) run:
sudo /opt/deeplearning/install-driver.sh
Linux instance-1 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
ucok66@instance-1:~$ pws
-bash: pws: command not found
ucok66@instance-1:~$ pwd
/home/ucok66
ucok66@instance-1:~$ pip install chainer
Collecting chainer
  Downloading https://files.pythonhosted.org/packages/12/ed/8b923bc28345c5b3e53358ba7e5e09b02142fc612378fd90986cf40073ef/chainer-5.4.0.tar.gz (525kB)
    100% |████████████████████████████████| 532kB 2.3MB/s 
Collecting filelock (from chainer)
  Downloading https://files.pythonhosted.org/packages/2a/bd/6a87635dba4906ae56377b22f64805b2f00d8cafb26e411caaf3559a5475/filelock-3.0.10.tar.gz
Requirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python2.7/site-packages (from chainer)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python2.7/dist-packages (from chainer)
Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python2.7/dist-packages (from chainer)
Requirement already satisfied: mkl-random in /usr/local/lib/python2.7/dist-packages (from numpy>=1.9.0->chainer)
Requirement already satisfied: mkl-fft in /usr/local/lib/python2.7/dist-packages (from numpy>=1.9.0->chainer)
Requirement already satisfied: mkl in /usr/local/lib/python2.7/dist-packages (from numpy>=1.9.0->chainer)
Requirement already satisfied: tbb4py in /usr/local/lib/python2.7/dist-packages (from numpy>=1.9.0->chainer)
Requirement already satisfied: icc-rt in /usr/local/lib/python2.7/dist-packages (from numpy>=1.9.0->chainer)
Requirement already satisfied: setuptools in /usr/local/lib/python2.7/dist-packages (from protobuf>=3.0.0->chainer)
Requirement already satisfied: intel-numpy in /usr/local/lib/python2.7/dist-packages (from mkl-random->numpy>=1.9.0->chainer)
Requirement already satisfied: intel-openmp in /usr/local/lib/python2.7/dist-packages (from mkl->numpy>=1.9.0->chainer)
Requirement already satisfied: tbb==2019.* in /usr/local/lib/python2.7/dist-packages (from tbb4py->numpy>=1.9.0->chainer)
Building wheels for collected packages: chainer, filelock
  Running setup.py bdist_wheel for chainer ... done
  Stored in directory: /home/ucok66/.cache/pip/wheels/eb/18/d2/5e85cbd7f32026e5e72cc466a5a17fd1939e99ffeeaaea267b
  Running setup.py bdist_wheel for filelock ... done
  Stored in directory: /home/ucok66/.cache/pip/wheels/46/b3/26/8803692ec1f1729fcf201583b5de74f112da1f1488f36e47b0
Successfully built chainer filelock
Installing collected packages: filelock, chainer
Successfully installed chainer-5.4.0 filelock-3.0.10
ucok66@instance-1:~$ pip install pillow
Collecting pillow
  Downloading https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl (2.0MB)
    100% |████████████████████████████████| 2.0MB 599kB/s 
Installing collected packages: pillow
Successfully installed pillow-6.0.0
ucok66@instance-1:~$ git clone https://github.com/tsurumeso/waifu2x-chainer.git
Cloning into 'waifu2x-chainer'...
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 1416 (delta 2), reused 3 (delta 1), pack-reused 1410
Receiving objects: 100% (1416/1416), 183.44 MiB | 37.27 MiB/s, done.
Resolving deltas: 100% (795/795), done.
ucok66@instance-1:~$ mkdir material1
ucok66@instance-1:~$ mkdir waifu1
ucok66@instance-1:~$ ls
material1  waifu1  waifu2x-chainer
ucok66@instance-1:~$ pwd
/home/ucok66
ucok66@instance-1:~$ ls
material1  waifu1  waifu2x-chainer
ucok66@instance-1:~$ pwd
/home/ucok66
ucok66@instance-1:~$ cd waifu2x-chainer
ucok66@instance-1:~/waifu2x-chainer$ python waifu2x.py -a 3 -e webp -t -T 4 -m noise_scale -s 4 -n 2 -i /home/ucok66/material1 -o /home/ucok66/waifu1
ucok66@instance-1:~/waifu2x-chainer$ python waifu2x.py -a 3 -e webp -t -T 4 -m noise_scale -s 4 -n 2 -i ~/material1 -o ~/waifu1

and this is how I upload the images:

~ >>> gcloud compute scp --recurse /home/ucok66/material/1/ ucok66@instance-1:/home/ucok66/material1                                     [1]
No zone specified. Using zone [us-east1-b] for instance: [instance-1].
Enter passphrase for key '/home/ucok66/.ssh/google_compute_engine': 
1128308.jpg                                                                                                                       100% 2165KB 358.4KB/s   00:06    
1482332048.png                                                                                                                100%  248KB 266.5KB/s   00:00    
1038214.jpg                                                                                                                       100% 1030KB 340.8KB/s   00:03    
1038220.jpg                                                                                                                       100% 1058KB 348.4KB/s   00:03        
1038219.jpg                                                                                                                       100%  546KB 335.8KB/s   00:01    

See, nothing happens when I enter the command. Has anyone been successful in building Waifu2x-chainer on Gcloud?
