yuval-alaluf / sam

Official Implementation for "Only a Matter of Style: Age Transformation Using a Style-Based Regression Model" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02754

Home Page: https://yuval-alaluf.github.io/SAM/

License: MIT License

Python 73.63% C++ 1.14% Cuda 9.21% Jupyter Notebook 16.02%
age-transformation aging generative-adversarial-networks stylegan

sam's People

Contributors

amrzv, bfirsh, chenxwh, yuval-alaluf


sam's Issues

Batch align

It took me a while to realize that I have to use pre-aligned images.
So how do I use align_all_parallel.py to batch-align a set of images before running inference?
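For a single image, the repo's alignment helper can be called directly; a minimal sketch (it assumes dlib's shape_predictor_68_face_landmarks.dat has been downloaded, and that align_face has the same signature as in the pSp version of the script):

import dlib
from scripts.align_all_parallel import align_face

# load dlib's 68-landmark model and align one face the way the repo expects
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
aligned = align_face(filepath="raw/face.jpg", predictor=predictor)  # returns a PIL image
aligned.save("aligned/face.jpg")

For a whole folder, running the script itself should work, assuming its flags match the pSp version:

python scripts/align_all_parallel.py --num_threads=4 --root_path=/path/to/raw_images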

How are the loss lambdas set?

Hi @yuval-alaluf, thanks for the amazing work!
As a research beginner, I am trying to implement a network structure similar to this one. I am struggling with training: I can't even get the training loss to go down. I have tried different hyperparameters and suspect that the problem is with my loss weights. May I ask how you searched for the loss weights that train the model successfully?

How to load my latent vectors?

I have latent vectors obtained from another image inverter;
my latent codes have shape (18, 512). How do I load those vectors instead of images?
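A minimal sketch of decoding a saved W+ code directly with the pretrained decoder, bypassing the encoder (net is assumed to be a loaded SAM/pSp model; the file path is hypothetical). Note that SAM applies the aging transformation inside its encoder, so decoding an external latent this way reconstructs the face but does not age it:

import torch

latent = torch.load("my_latent.pt").unsqueeze(0).cuda()  # (18, 512) -> (1, 18, 512)
with torch.no_grad():
    # the rosinality StyleGAN2 generator accepts W+ codes when input_is_latent=True
    img, _ = net.decoder([latent], input_is_latent=True,
                         randomize_noise=False, return_latents=True)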

Cuda error on training the model

After upgrading my GPU from a GTX-1080-8GB to an RTX-3090-24GB, I am getting the following error:

get_cuda_arch_flags
raise ValueError("Unknown CUDA arch ({}) or GPU not supported".format(arch))
ValueError: Unknown CUDA arch (8.6) or GPU not supported

Currently, I have CUDA version 10.1.
Do I need to upgrade it as well?
If yes, which CUDA version would be best for running the training commands?
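For what it's worth, the RTX 3090's compute capability is 8.6 (sm_86), which PyTorch builds only support from CUDA 11.1 onward, so a CUDA 10.1 toolchain cannot compile for it. A quick check using standard PyTorch APIs:

import torch

print(torch.version.cuda)                   # CUDA version this torch build was compiled with
print(torch.cuda.get_device_capability(0))  # prints (8, 6) for an RTX 3090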

No module named 'models.fused_act'

Traceback (most recent call last):
File "scripts/inference.py", line 19, in <module>
from models.psp import pSp
File ".\models\__init__.py", line 1, in <module>
from .fused_act import FusedLeakyReLU, fused_leaky_relu

Question about the faces generated at the early steps of training

Thank you very much for open-sourcing such a cool project, I'm very interested in it! 🆒

Then I used the FFHQ 512×512 dataset to run the training again (using your pretrained SAM model to initialize the network weights), with the Adam optimizer and the learning rate set to 0.0001.

However, the faces generated during the first 15,000 steps all look like the same female, regardless of whether the input face is male or female, round or square. In later steps, the generated faces start to look more like the inputs.

In the early steps: [images]

Later: [image]

What could be the reason? At first I even thought there was a problem with how the code reads the input images. Is it due to the StyleGAN2 decoder or the pSp encoder?

Thanks.

Inference results not produced during training

I have successfully run the training command. The issue is that inference results on the test images are not produced while training the model; that is, I am getting training outputs but not test outputs.

I used the following paths:
'ffhq': '/images1024x1024',
'celeba_test': '/to'

I placed a user's image in the 'celeba_test' folder to be tested during training, but nothing comes out on the inference side.

and ran the following command:
python3 scripts/train.py
--dataset_type=ffhq_aging
--exp_dir=/to/experiment
--workers=6
--batch_size=6
--test_batch_size=6
--test_workers=6
--val_interval=2000
--save_interval=10000
--start_from_encoded_w_plus
--id_lambda=0.1
--lpips_lambda=0.1
--lpips_lambda_aging=0.1
--lpips_lambda_crop=0.6
--l2_lambda=0.25
--l2_lambda_aging=0.25
--l2_lambda_crop=1
--w_norm_lambda=0.005
--aging_lambda=5
--cycle_lambda=1
--input_nc=4
--max_steps=30000
--output_size=1024
--target_age=uniform_random
--use_weighted_id_loss
--checkpoint_path=trained/sam_ffhq_aging.pt

[code] About coach.py

Hi, thanks for your great work! There are two loss.backward() calls in coach.py; is there any need for loss.backward(retain_graph=True)?
Because by default, the computation graph is freed after backward().
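This is generic PyTorch behavior rather than anything SAM-specific: retain_graph=True is only needed when two backward() calls traverse the same graph; separate forward passes build separate graphs, so each can be freed after its own backward(). A minimal sketch:

import torch

x = torch.randn(4, requires_grad=True)
loss1 = (x ** 2).sum()
loss1.backward()        # the graph for loss1 is freed here
loss2 = (x ** 3).sum()  # a new forward pass builds a fresh graph
loss2.backward()        # fine without retain_graph=True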

Inference cannot be run

Hello, thank you for sharing.
I downloaded the pretrained SAM model and installed the environment through Anaconda.
When I execute

python scripts/inference.py
--exp_dir=path/to/experiment
--checkpoint_path=pretrained_models/sam_ffhq_aging.pt
--data_path=path/to/test_data
--test_batch_size=4
--test_workers=4
--target_age=0,10,20,30,40,50,60,70,80

no image is output and no error is reported.

What is the difference between source and target?

Hi @yuval-alaluf, thank you for the amazing work! It inspired me a lot.
When reading your code, I found that source and target point to the same directory, and thus the same data. Why do we need both source and target if they are the same?
For the ID loss, why do we even need diff_view, since it ought to be zero?

train.py script showing me an error

I am trying to execute the following command:

python3 scripts/train.py
--dataset_type=ffhq_aging
--exp_dir=/path/to/experiment
--workers=2
--batch_size=2
--test_batch_size=2
--test_workers=2
--val_interval=2500
--save_interval=10000
--start_from_encoded_w_plus
--id_lambda=0.1
--lpips_lambda=0.1
--lpips_lambda_aging=0.1
--lpips_lambda_crop=0.6
--l2_lambda=0.25
--l2_lambda_aging=0.25
--l2_lambda_crop=1
--w_norm_lambda=0.005
--aging_lambda=5
--cycle_lambda=1
--input_nc=4
--target_age=uniform_random
--use_weighted_id_loss
--checkpoint_path=/usr/local/SAM/trained/sam_ffhq_aging.pt

and I get the following error:


Loading SAM from checkpoint: /usr/local/SAM/trained/sam_ffhq_aging.pt
Loading ResNet ArcFace
Loading dataset for ffhq_aging
Number of training samples: 11
Number of test samples: 1
2021-07-14 11:50:06.332677: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/root/anaconda3/envs/sam_envs/lib/python3.6/site-packages/torch/nn/functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
/usr/local/SAM/criteria/aging_loss.py:24: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
predict_age_pb = F.softmax(age_pb)
Traceback (most recent call last):
File "scripts/train.py", line 30, in <module>
main()
File "scripts/train.py", line 26, in main
coach.train()
File "/usr/local/SAM/training/coach_aging.py", line 120, in train
y_recovered, latent_cycle = self.perform_forward_pass(y_hat_inverse)
File "/usr/local/SAM/training/coach_aging.py", line 78, in perform_forward_pass
y_hat, latent = self.net.forward(x, return_latents=True)
File "/usr/local/SAM/models/psp.py", line 90, in forward
return_latents=return_latents)
File "/root/anaconda3/envs/sam_envs/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/SAM/models/stylegan2/model.py", line 529, in forward
out = conv1(out, latent[:, i], noise=noise1)
File "/root/anaconda3/envs/sam_envs/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/SAM/models/stylegan2/model.py", line 332, in forward
out = self.conv(input, style)
File "/root/anaconda3/envs/sam_envs/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/SAM/models/stylegan2/model.py", line 257, in forward
out = self.blur(out)
File "/root/anaconda3/envs/sam_envs/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/SAM/models/stylegan2/model.py", line 85, in forward
out = upfirdn2d(input, self.kernel, pad=self.pad)
File "/usr/local/SAM/models/stylegan2/op/upfirdn2d.py", line 144, in upfirdn2d
input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])
File "/usr/local/SAM/models/stylegan2/op/upfirdn2d.py", line 116, in forward
input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 7.93 GiB total capacity; 6.95 GiB already allocated; 192.44 MiB free; 7.13 GiB reserved in total by PyTorch)


About input images

Thanks for your great work. I would like to ask some questions.

For training, do input facial images need to be aligned and cropped before they are fed into the network? For example, the FFHQ dataset comes in an in-the-wild images version (955 GB) and an aligned-and-cropped 1024x1024 version (89.1 GB). Which version did you download and resize to the 256x256 input images used in your paper?

As for testing and your demo, is the alignment and cropping pre-processing also indispensable? Would skipping it affect the final performance?

Thanks.

[Error] [Win] INFO: Could not find files for the given pattern(s).

I ran scripts/inference.py with --exp_dir=output/ --checkpoint_path=saved_model/best_model.pt --data_path=input/ --test_batch_size=1 --test_workers=1 --target_age=0,10,20,30,40,50,60,70,80 arguments and got an error. Can you please help me resolve it?

Dependency versions I'm using on Windows:
cmake==3.22.4
dlib==19.24.0
ninja==1.10.2.3
numpy==1.21.6
scipy==1.7.3
torch==1.9.0+cu111
torchaudio==0.9.0
torchvision==0.10.0+cu111

C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py:305: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
INFO: Could not find files for the given pattern(s).
Traceback (most recent call last):
File "C:/Work/SAM/scripts/inference.py", line 20, in <module>
from models.psp import pSp
File "C:\Work\SAM\models\psp.py", line 12, in <module>
from models.encoders import psp_encoders
File "C:\Work\SAM\models\encoders\psp_encoders.py", line 8, in <module>
from models.stylegan2.model import EqualLinear
File "C:\Work\SAM\models\stylegan2\model.py", line 7, in <module>
from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
File "C:\Work\SAM\models\stylegan2\op\__init__.py", line 1, in <module>
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "C:\Work\SAM\models\stylegan2\op\fused_act.py", line 13, in <module>
os.path.join(module_path, 'fused_bias_act_kernel.cu'),
File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1092, in load
keep_intermediates=keep_intermediates)
File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1303, in _jit_compile
is_standalone=is_standalone)
File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1401, in _write_ninja_file_and_build_library
is_standalone=is_standalone)
File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1834, in _write_ninja_file_to_build_library
with_cuda=with_cuda)
File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1950, in _write_ninja_file
'cl']).decode().split('\r\n')
File "C:\Users\affan\AppData\Local\Programs\Python\Python37\lib\subprocess.py", line 411, in check_output
**kwargs).stdout
File "C:\Users\affan\AppData\Local\Programs\Python\Python37\lib\subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1.

Gender in inference

Hi sir,

I have a question about the inference procedure.
The model seems to be very good at age transformation. However, in terms of gender, there are some problems in my case.

I filtered FFHQ for all the Asian women (I call this my sub-data). Then I used all your pretrained models (pSp encoder, SAM age, ...) for inference on this sub-data. The age transformation is very good, but the gender of the original image is not preserved. For example:

[image: 40_09289]

The original is female, but it is transformed into a male. This issue mostly affects target_ages = 30, 40, 50. So my idea to fix this is:

  • First, train my own pSp encoder using the pSp repo on my sub-data only (not full FFHQ, only Asian women).
  • Then, use that pretrained pSp encoder in SAM and train SAM again on the same sub-data (also only Asian women).

Could you give me your perspectives and feedback for this idea to preserve gender? Thank you, sir.

Details about latent vectors

There are 17 latent masks in the code. Could you please point me to any document where I can read about them in detail?

Could you please share more details about PCA?

Hi @yuval-alaluf, thank you for the amazing work!
Regarding the PCA performed in the paper, could you please share more details? For the 5 different images in Fig. 9, did you use their corresponding latent codes as one dataset (with shape [50, 512] or [50, 18, 512]) and perform PCA on that dataset, or did you perform PCA on each image individually (5 datasets with shape [10, 512] or [10, 18, 512]), draw the individual paths, and then put them in the same figure? Please bear with me if I have this wrong.

GPU parallel

Excuse me, can you tell me how to set up multiple GPUs in parallel?

Purpose of parameters for inference.py script

Could you please explain the purpose of the following two parameters? I checked the README but could not find anything relevant.

--test_batch_size=4
--test_workers=4

Also, how can we improve the quality of the old-face effects?
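For context, these flags typically mirror batch_size and workers but for the test/validation DataLoader: test_batch_size is how many images are processed per forward pass, and test_workers is how many parallel data-loading processes feed them. A sketch of the standard PyTorch pattern (the exact lines in this repo may differ, and test_dataset/opts here are hypothetical):

from torch.utils.data import DataLoader

test_dataloader = DataLoader(test_dataset,
                             batch_size=opts.test_batch_size,     # images per inference batch
                             num_workers=int(opts.test_workers),  # parallel loader processes
                             shuffle=False)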

Cuda out of memory

I am getting the following error when I run train.py.

RuntimeError: CUDA out of memory. Tried to allocate 54.00 MiB (GPU 0; 7.93 GiB total capacity; 6.86 GiB already allocated; 18.44 MiB free; 7.31 GiB reserved in total by PyTorch)

Also, if I run the following commands, I get output:
import torch
print(torch.rand(1, device="cuda"))

output: tensor([0.4547], device='cuda:0')

Could you please help me fix it?

inconsistent results

Hi Yuval, thanks for sharing this cool implementation.
I've played with it and noticed I'm getting bad results when inputting non-square, non-cropped face images;
see the example below.

When comparing your project to "StyleGAN2 distillation", I noticed they handle such images better.
Any idea how I can improve this to deal with real-world images better?

Reference image below:

[image: margi-test]

Getting an error while compiling the code

I am trying to execute the following command:

python scripts/inference.py
--exp_dir=/to
--checkpoint_path=/trained/sam2.pt
--data_path=/from
--test_batch_size=4
--test_workers=4
--couple_outputs
--target_age=0,10,20,30,40,50,60,70,80

and I got the following error:


Traceback (most recent call last):
File "scripts/inference.py", line 19, in <module>
from models.psp import pSp
File "./models/psp.py", line 12, in <module>
from models.encoders import psp_encoders
File "./models/encoders/psp_encoders.py", line 8, in <module>
from models.stylegan2.model import EqualLinear
File "./models/stylegan2/model.py", line 7, in <module>
from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
File "./models/stylegan2/op/__init__.py", line 1, in <module>
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "./models/stylegan2/op/fused_act.py", line 13, in <module>
os.path.join(module_path, 'fused_bias_act_kernel.cu'),
File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 974, in load
keep_intermediates=keep_intermediates)
File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1179, in _jit_compile
with_cuda=with_cuda)
File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1251, in _write_ninja_file_and_build_library
check_compiler_abi_compatibility(compiler)
File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 248, in check_compiler_abi_compatibility
if not check_compiler_ok_for_platform(compiler):
File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 208, in check_compiler_ok_for_platform
which = subprocess.check_output(['which', compiler], stderr=subprocess.STDOUT)
File "/root/anaconda3/envs/sam_env/lib/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/root/anaconda3/envs/sam_env/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['which', 'c++']' returned non-zero exit status 1.


Could you help me fix this error?

Tflite

Can I convert the model to run in a mobile demo using TFLite?

Eye color is changing

We trained the model using FFHQ images. Now, when applying SAM age effects to an image, the eye (iris) color changes. For clarity, please see the images below:

[images: 1a, 4a]

Could you please guide me on how to fix this issue, i.e. keep the eye color the same after applying the SAM age filters?

Also, the part outside the face, i.e. the background, changes as well. How can we keep the area outside the face fixed after applying the SAM age filters?

Getting more details

Hi, I want to know more details about the following parameters:

--id_lambda=0.5
--lpips_lambda=0.5
--lpips_lambda_aging=0.5
--lpips_lambda_crop=0.6
--l2_lambda=0.25
--l2_lambda_aging=0.25
--l2_lambda_crop=1
--w_norm_lambda=0.005
--aging_lambda=5
--cycle_lambda=1

I looked at train_options.py but the details there are not enough.
Could you elaborate on the above parameters, or point me to where I can read more about them?

Latent vector

I am trying to execute style_mixing.py but I get no response. This is the command:

python scripts/style_mixing.py
--exp_dir=to
--checkpoint_path=trained/sam_ffhq_aging.pt
--data_path=from
--test_batch_size=4
--test_workers=4
--latent_mask=8,9
--target_age=50

What does --latent_mask=8,9 mean?
What are 8 and 9 in the latent mask, given that there are no files with these names in the directory?
Where can I download the latent-mask files from?
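For context, in pSp-style style mixing the latent mask is not a set of files: it is a list of style-layer indices (0-17 for an 18-layer W+ code) whose entries are replaced by the code of a randomly sampled latent. A hedged sketch of the idea (the tensor names here are hypothetical):

latent_mask = [8, 9]
for i in latent_mask:
    # swap only the masked style layers; all other layers keep the original code
    codes[:, i] = inject_latent[:, i]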

Cuda and cudnn

What are the minimum versions of CUDA and cuDNN required to set up SAM on a Linux system?

The result when using encoder_type "BackboneEncoderUsingLastLayerIntoWPlus" is almost identical to the result when using "GradualStyleEncoder"

Hi.
I tried training with encoder_type "BackboneEncoderUsingLastLayerIntoWPlus", keeping the other parameters the same as in the old settings. As a result, the loss on the validation set is almost equal, and testing on some real images also gives almost identical results.
So is it necessary to use encoder_type "GradualStyleEncoder", given that it requires significantly more computation?
Thank you!

ninja error when testing on Windows 10

Hi, I'm trying to run inference.py on Windows 10, but I hit an error with build.ninja:

Traceback (most recent call last):
File "C:\Users\v-sunzhe\Anaconda3\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1515, in _run_ninja_build
env=env)
File "C:\Users\v-sunzhe\Anaconda3\envs\sam_env\lib\subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "scripts/inference.py", line 19, in <module>
from models.psp import pSp
File ".\models\psp.py", line 12, in <module>
from models.encoders import psp_encoders
File ".\models\encoders\psp_encoders.py", line 8, in <module>
from models.stylegan2.model import EqualLinear
File ".\models\stylegan2\model.py", line 7, in <module>
from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
File ".\models\stylegan2\op\__init__.py", line 1, in <module>
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File ".\models\stylegan2\op\fused_act.py", line 13, in <module>
os.path.join(module_path, 'fused_bias_act_kernel.cu'),
File "C:\Users\v-sunzhe\Anaconda3\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 974, in load
keep_intermediates=keep_intermediates)
File "C:\Users\v-sunzhe\Anaconda3\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1179, in _jit_compile
with_cuda=with_cuda)
File "C:\Users\v-sunzhe\Anaconda3\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1279, in _write_ninja_file_and_build_library
error_prefix="Error building extension '{}'".format(name))
File "C:\Users\v-sunzhe\Anaconda3\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1529, in _run_ninja_build
raise RuntimeError(message)
RuntimeError: Error building extension 'fused': ninja: error: build.ninja:3: lexing error

I used the following command to start inference.py:
python scripts/inference.py
--exp_dir=./experiment
--checkpoint_path=./experiment/checkpoints/sam_ffhq_aging.pt
--data_path=./test_data
--test_batch_size=1
--test_workers=1
--target_age=40,50,60,70,80

I followed the steps in the README to set up the conda environment. However, because of Win10, I couldn't create the environment directly from the yaml file with "conda env create". As an alternative, I manually used conda to install the packages one by one. Here's my conda list:

Name Version Build Channel

_libgcc_mutex 0.1 main
absl-py 0.13.0 py36haa95532_0
aiohttp 3.7.4 py36h2bbff1b_1
async-timeout 3.0.1 py36haa95532_0
attrs 21.2.0 pyhd3eb1b0_0
blas 1.0 mkl
blinker 1.4 py36haa95532_0
brotlipy 0.7.0 py36h2bbff1b_1003
ca-certificates 2020.10.14 0 anaconda
cachetools 4.2.2 pyhd3eb1b0_0
certifi 2020.6.20 py36_0 anaconda
cffi 1.14.6 py36h2bbff1b_0
chardet 3.0.4 py36haa95532_1003
click 8.0.1 pyhd3eb1b0_0
coverage 5.5 py36h2bbff1b_2
cryptography 3.4.7 py36h71e12ea_0
cudatoolkit 10.1.243 h74a9793_0 anaconda
cudnn 7.6.5 cuda10.1_0 anaconda
cycler 0.10.0 py36haa95532_0
cython 0.29.24 py36hd77b12b_0
freetype 2.10.4 hd328e21_0
google-auth 1.33.0 pyhd3eb1b0_0
google-auth-oauthlib 0.4.4 pyhd3eb1b0_0
grpcio 1.36.1 py36hc60d5dd_1
icc_rt 2019.0.0 h0cc432a_1
icu 64.2 he025d50_1 conda-forge
idna 2.10 pyhd3eb1b0_0
idna_ssl 1.1.0 py36haa95532_0
importlib-metadata 3.10.0 py36haa95532_0
intel-openmp 2021.3.0 haa95532_3372
jpeg 9d h8ffe710_0 conda-forge
kiwisolver 1.3.1 py36hd77b12b_0
libblas 3.8.0 14_mkl conda-forge
libcblas 3.8.0 14_mkl conda-forge
libclang 9.0.1 default_hf44288c_0
libffi 3.2.1 ha925a31_1007 conda-forge
liblapack 3.8.0 14_mkl conda-forge
liblapacke 3.8.0 14_mkl conda-forge
libopencv 4.2.0 py36_7 conda-forge
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.17.2 h23ce68f_1
libtiff 4.2.0 hd0e1b90_0
libwebp 1.2.0 h2bbff1b_0
libwebp-base 1.2.0 h2bbff1b_0
lz4-c 1.9.3 h2bbff1b_0
m2w64-gcc-libgfortran 5.3.0 6 conda-forge
m2w64-gcc-libs 5.3.0 7 conda-forge
m2w64-gcc-libs-core 5.3.0 7 conda-forge
m2w64-gmp 6.1.0 2 conda-forge
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge
markdown 3.3.4 py36haa95532_0
matplotlib 3.2.1 0 conda-forge
matplotlib-base 3.2.1 py36h64f37c6_0
mkl 2019.4 245
mkl-service 2.3.0 py36h196d8e1_0
mkl_fft 1.3.0 py36h46781fe_0
mkl_random 1.1.0 py36h675688f_0
msys2-conda-epoch 20160418 1 conda-forge
multidict 5.1.0 py36h2bbff1b_2
ninja 1.10.0 h1ad3211_0 conda-forge
numpy 1.18.5 py36h6530119_0 anaconda
numpy-base 1.18.5 py36hc3f5095_0
oauthlib 3.1.1 pyhd3eb1b0_0
olefile 0.46 py36_0
opencv 4.2.0 py36_7 conda-forge
openssl 1.1.1k h2bbff1b_0
pillow 7.1.2 py36hcc1f983_0 anaconda
pip 20.0.2 py36_3
protobuf 3.17.2 py36hd77b12b_0
py-opencv 4.2.0 py36h95af2a2_7 conda-forge
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.8 py_0
pycparser 2.20 py_2
pyjwt 2.1.0 py36haa95532_0
pyopenssl 20.0.1 pyhd3eb1b0_1
pyparsing 2.4.7 pyhd3eb1b0_0
pyqt 5.12.3 py36h6538335_1 conda-forge
pyqt5-sip 4.19.18 pypi_0 pypi
pyqtwebengine 5.12.1 pypi_0 pypi
pysocks 1.7.1 py36haa95532_0
python 3.6.7 he025d50_1008_cpython conda-forge
python-dateutil 2.8.2 pyhd3eb1b0_0
python_abi 3.6 1_cp36m conda-forge
pytorch 1.6.0 py3.6_cuda101_cudnn7_0 pytorch
qt 5.12.5 h7ef1ec2_0 conda-forge
requests 2.25.1 pyhd3eb1b0_0
requests-oauthlib 1.3.0 py_0
rsa 4.7.2 pyhd3eb1b0_1
scipy 1.4.1 py36h9439919_0 anaconda
setuptools 46.4.0 py36_0
six 1.16.0 pyhd3eb1b0_0
sqlite 3.31.1 h2a8f88b_1
tensorboard 2.2.1 pyh532a8cf_0
tensorboard-plugin-wit 1.6.0 py_0
tk 8.6.8 hfa6e2cd_1000 conda-forge
torchvision 0.7.0 py36_cu101 pytorch
tornado 6.1 py36h2bbff1b_0
tqdm 4.46.0 pyh9f0ad1d_0 conda-forge
typing-extensions 3.10.0.0 hd3eb1b0_0
typing_extensions 3.10.0.0 pyh06a4308_0
urllib3 1.26.6 pyhd3eb1b0_1
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
werkzeug 1.0.1 pyhd3eb1b0_0
wheel 0.34.2 py36_0 conda-forge
win_inet_pton 1.1.0 py36haa95532_0
wincertstore 0.2 py36h7fe50ca_0
xz 5.2.5 h62dcd97_1 conda-forge
yarl 1.6.3 py36h2bbff1b_0
zipp 3.5.0 pyhd3eb1b0_0
zlib 1.2.11 h62dcd97_1010 conda-forge
zstd 1.4.9 h19a0ad4_0

P.S.: I have installed pytorch 1.6, cudatoolkit 10.1, and cudnn 7.6 with conda, and have one GPU on this device.

Downloading ffhq-dataset

I want to train the model on the freely available FFHQ dataset, which is around 2.5 TB in size.
I tried to download it from the browser but it gives me an error.
Is there any way to download it from the terminal?
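For reference (this concerns tooling outside this repo): the official NVlabs/ffhq-dataset repository provides a download_ffhq.py script for downloading from the terminal; assuming its documented flags, something like:

python download_ffhq.py --json --images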

[ERROR] c++: error: c10_cuda.lib: No such file or directory

I'm getting this error while using docker-compose.

Docker Compose:

version: '3.7'
services:
  test:
    build:
      context: .
      dockerfile: Dockerfile
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [ gpu ]
    volumes:
      - ./inputs:/input
      - ./outputs:/output
    ports:
      - "5005:5005"

Dockerfile:

FROM nvidia/cuda:10.2-cudnn7-devel
CMD nvidia-smi

RUN rm /etc/apt/sources.list.d/cuda.list
RUN rm /etc/apt/sources.list.d/nvidia-ml.list

RUN apt-get update && apt-get install --no-install-recommends --no-install-suggests -y curl
RUN apt-get install unzip
RUN apt-get -y install python3
RUN apt-get -y install python3-pip
RUN apt-get install make
RUN apt-get install zlib1g-dev -y
RUN apt-get install libjpeg-dev -y

WORKDIR /
COPY ./requirements.txt ./

RUN pip3 install --no-cache-dir pillow
RUN pip3 install --no-cache-dir cmake
RUN pip3 install --no-cache-dir --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt
RUN pip3 install --no-cache-dir torch==1.8.1+cu102 torchvision==0.9.1+cu102 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html

RUN rm requirements.txt

COPY . /
WORKDIR /

ENTRYPOINT [ "python3" ]

CMD ["infer.py" ]

(venv) C:\Work\SAM>docker compose up
[+] Running 2/2

  • Network sam_default Created 0.6s
  • Container sam-test-1 Created 0.1s
    Attaching to sam-test-1
    sam-test-1 | Traceback (most recent call last):
    sam-test-1 | File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1673, in _run_ninja_build
    sam-test-1 | env=env)
    sam-test-1 | File "/usr/lib/python3.6/subprocess.py", line 438, in run
    sam-test-1 | output=stdout, stderr=stderr)
    sam-test-1 | subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
    sam-test-1 |
    sam-test-1 | The above exception was the direct cause of the following exception:
    sam-test-1 |
    sam-test-1 | Traceback (most recent call last):
    sam-test-1 | File "infer.py", line 9, in <module>
    sam-test-1 | from models.psp import pSp
    sam-test-1 | File "/models/psp.py", line 12, in <module>
    sam-test-1 | from models.encoders import psp_encoders
    sam-test-1 | File "/models/encoders/psp_encoders.py", line 8, in <module>
    sam-test-1 | from models.stylegan2.model import EqualLinear
    sam-test-1 | File "/models/stylegan2/model.py", line 7, in <module>
    sam-test-1 | from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
    sam-test-1 | File "/models/stylegan2/op/__init__.py", line 1, in <module>
    sam-test-1 | from .fused_act import FusedLeakyReLU, fused_leaky_relu
    sam-test-1 | File "/models/stylegan2/op/fused_act.py", line 15, in <module>
    sam-test-1 | extra_ldflags=['c10_cuda.lib']
    sam-test-1 | File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1091, in load
    sam-test-1 | keep_intermediates=keep_intermediates)
    sam-test-1 | File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1302, in _jit_compile
    sam-test-1 | is_standalone=is_standalone)
    sam-test-1 | File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1407, in _write_ninja_file_and_build_library
    sam-test-1 | error_prefix=f"Error building extension '{name}'")
    sam-test-1 | File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
    sam-test-1 | raise RuntimeError(message) from e
    sam-test-1 | RuntimeError: Error building extension 'fused': [1/3] c++ -MMD -MF fused_bias_act.o.d -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /models/stylegan2/op/fused_bias_act.cpp -o fused_bias_act.o
    sam-test-1 | In file included from /usr/local/lib/python3.6/dist-packages/torch/include/c10/core/DeviceType.h:8:0,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/c10/core/Device.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/c10/core/Allocator.h:6,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:7,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
    sam-test-1 | from /models/stylegan2/op/fused_bias_act.cpp:1:
    sam-test-1 | /models/stylegan2/op/fused_bias_act.cpp: In function 'at::Tensor fused_bias_act(const at::Tensor&, const at::Tensor&, const at::Tensor&, int, int, float, float)':
    sam-test-1 | /models/stylegan2/op/fused_bias_act.cpp:7:42: warning: 'at::DeprecatedTypeProperties& at::Tensor::type() const' is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
    sam-test-1 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
    sam-test-1 | ^
    sam-test-1 | /models/stylegan2/op/fused_bias_act.cpp:13:5: note: in expansion of macro 'CHECK_CUDA'
    sam-test-1 | CHECK_CUDA(input);
    sam-test-1 | ^~~~~~~~~~
    sam-test-1 | In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Tensor.h:3:0,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Context.h:4,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:9,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
    sam-test-1 | from /models/stylegan2/op/fused_bias_act.cpp:1:
    sam-test-1 | /usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h:303:30: note: declared here
    sam-test-1 | DeprecatedTypeProperties & type() const {
    sam-test-1 | ^~~~
    sam-test-1 | In file included from /usr/local/lib/python3.6/dist-packages/torch/include/c10/core/DeviceType.h:8:0,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/c10/core/Device.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/c10/core/Allocator.h:6,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:7,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
    sam-test-1 | from /models/stylegan2/op/fused_bias_act.cpp:1:
    sam-test-1 | /models/stylegan2/op/fused_bias_act.cpp:7:42: warning: 'at::DeprecatedTypeProperties& at::Tensor::type() const' is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
    sam-test-1 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
    sam-test-1 | ^
    sam-test-1 | /models/stylegan2/op/fused_bias_act.cpp:14:5: note: in expansion of macro 'CHECK_CUDA'
    sam-test-1 | CHECK_CUDA(bias);
    sam-test-1 | ^~~~~~~~~~
    sam-test-1 | In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Tensor.h:3:0,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Context.h:4,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:9,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
    sam-test-1 | from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
    sam-test-1 | from /models/stylegan2/op/fused_bias_act.cpp:1:
    sam-test-1 | /usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h:303:30: note: declared here
    sam-test-1 | DeprecatedTypeProperties & type() const {
    sam-test-1 | ^~~~
    sam-test-1 | [2/3] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output fused_bias_act_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /usr/local/lib/python3.6/dist-packages/torch/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.6/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.6/dist-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=compute_70 -gencode=arch=compute_70,code=sm_70 --compiler-options '-fPIC' -std=c++14 -c /models/stylegan2/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
    sam-test-1 | [3/3] c++ fused_bias_act.o fused_bias_act_kernel.cuda.o -shared c10_cuda.lib -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o fused.so
    sam-test-1 | FAILED: fused.so
    sam-test-1 | c++ fused_bias_act.o fused_bias_act_kernel.cuda.o -shared c10_cuda.lib -L/usr/local/lib/python3.6/dist-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o fused.so
    sam-test-1 | c++: error: c10_cuda.lib: No such file or directory
    sam-test-1 | ninja: build stopped: subcommand failed.
    sam-test-1 |
    sam-test-1 exited with code 1

Run SAM on CPU

Hi, what changes do I need to make to run this on the CPU? Thank you very much!
I've tried using 'cpu' instead of 'cuda', but I got this error while importing pSp:
ImportError: /root/.cache/torch_extensions/py37_cu113/fused/fused.so: cannot open shared object file: No such file or directory
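A possible workaround (my assumption, not a documented path in this repo): the ImportError comes from the JIT-compiled CUDA extension, so one option is to replace the compiled ops with plain PyTorch equivalents that run anywhere. For the fused activation, a minimal sketch for 4D NCHW tensors:

import torch
from torch.nn import functional as F

def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
    # same semantics as the fused CUDA op: add per-channel bias, leaky ReLU, rescale
    return F.leaky_relu(input + bias.view(1, -1, 1, 1), negative_slope=negative_slope) * scale

For the up/down-sampling op, models/stylegan2/op/upfirdn2d.py should already contain a pure-PyTorch upfirdn2d_native that can be called instead of the compiled kernel.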

Model does not work well for Asian faces

As the title says, I used the pretrained model on Asian faces; after the transformation, the skin color of the faces becomes white. I guess this may be caused by data imbalance. Do you have any suggestions if I want to retrain the model on a dataset composed mostly of Asian faces?

Should I also retrain the pSp encoder and the FFHQ StyleGAN?
Thanks

Using another generator's pretrained weights

Hi sir!

I have a .pkl checkpoint file, which I converted into a .pt with rosinality's converter. I have some questions:

  1. Could I use this model for inference with SAM? I understand that during training you saved stylegan-ffhq-config-f.pt into the sam.pt checkpoint.

  2. Could I load the sam.pt model and swap the stylegan-ffhq-config-f weights in its state_dict for my StyleGAN model's weights (see the sketch after this list)? This idea comes from the fact that you freeze the generator during training, so the generator weights should not affect the rest of the SAM weights used for inference.

  3. Or should I retrain the SAM model with my own generator model?

  4. Or should I retrain the StyleGAN encoder in the pSp repo?

Thank you so much sir.
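A hedged sketch for idea (2): overwrite the decoder weights inside the SAM checkpoint with the converted generator. The key names here ('state_dict', the 'decoder.' prefix, 'g_ema') are assumptions based on common pSp/rosinality conventions; verify them against your actual checkpoints:

import torch

sam_ckpt = torch.load("pretrained_models/sam_ffhq_aging.pt", map_location="cpu")
gen_ckpt = torch.load("my_stylegan2.pt", map_location="cpu")  # converted rosinality checkpoint

# copy every generator weight over the corresponding 'decoder.' entry in SAM
for k, v in gen_ckpt["g_ema"].items():
    sam_ckpt["state_dict"]["decoder." + k] = v

torch.save(sam_ckpt, "pretrained_models/sam_custom_decoder.pt")

Keep in mind the encoder was trained against the original generator's latent space, so results with a swapped decoder may degrade unless the two generators are closely related.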

Questions about target_ages.

First of all I want to thank you for the great project. The results were very impressive!
However, when I read the code I felt that some things did not match what was described in the paper. I hope you can explain them to me. Thank you!

  1. In the following line of code:

    weights = train_utils.compute_cosine_weights(x=target_ages) if self.opts.use_weighted_id_loss else None

    You calculate the cosine weight based on target_ages. But I think the cosine weight should be calculated based on abs(source_ages - target_ages). Is that correct?

  2. In the following line of code:

    y_hat_inverse = self.__set_target_to_source(y_hat_clone)

    y_hat_inverse is generated by concatenating y_hat_clone with the age that the age predictor predicts for y_hat_clone. However, according to the paper, y_hat_inverse should be made by concatenating y_hat_clone with the source age (of the original image). Is that correct?

Process only one image

The inference.py script processes, in one go, all the images in the "From" directory. I want to process only one selected image at a time from among all the images in that directory. How can I do that?
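One simple workaround (not a built-in flag, as far as I can tell): copy the selected image into an otherwise empty folder and point --data_path at that folder, e.g.

mkdir single_image
cp From/selected.jpg single_image/
python scripts/inference.py --data_path=single_image (keeping the other flags unchanged)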

Way of execution

I am using the following command to apply the age effects:

python3.6 scripts/inference.py
--exp_dir=to/experiment
--checkpoint_path=trainedmodels/sam_ffhq_aging.pt
--data_path=to/test_data
--test_batch_size=4
--test_workers=4
--couple_outputs
--target_age=0,10,20,30,40,50,60,70,80

exp_dir: the output directory
data_path: contains an image of the user's face that I want to convert into an old face
checkpoint_path: the pretrained model downloaded by clicking the "SAM" trained-model link

Is there any way to confirm that I have configured all the necessary dependencies to execute the above command?
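A quick sanity check using standard PyTorch APIs (this verifies the GPU stack, though not every repo dependency):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-enabled build sees a usable GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))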

Questions about image size

Thank you for your great work.

I noticed you set the FFHQ image size to 256 in paths_config.py.
However, the StyleGAN output size is 1024 in your training options.

I didn't find where the StyleGAN output is resized when calculating the losses. Did I miss it?

If I want to train a model whose input size is 256 and output size is 1024,
should I set FFHQ to 256 in paths_config.py and the StyleGAN output size to 1024?
If so, how did you handle the image-size mismatch in the loss calculation?

Looking forward to your reply

Time for training

Hello, I am training the network, but my output at 5,800 steps is far from satisfactory. Here is the output:
[image]
Could you tell me how long the training will take? My training command is the same as the one in the README.

ImportError: No module named 'fused'

Hi, I am trying to set up this repo on my own local machine but I am getting this error. I searched the internet but couldn't find a solution. Any help would be appreciated. Thanks

ImportError: No module named 'fused'

Applying old face effects

I want to apply old-face effects to my selfie, but the current configuration changes my face into someone else's old face. How can I make my face into a realistic old version of itself, just like the "FaceApp" Android app does?

Do I need to train my own model?
If yes, where can I get free face images on which to train a comprehensive model for men, women, and children?

Further clarity

Could you explain more about the packages, and their required versions, that are prerequisites for the Anaconda installation?
