
swapnet's Introduction

STATUS AS OF SEPTEMBER 4 2020

Hi all, thank you for your interest in my replication of this baseline! Since this repository seems to be generating a lot of traffic under issues, I just wanted to comment that I'm currently not able to maintain this repository. I'm working on a separate virtual try-on publication+codebase which I hope to release by next month (October). I welcome and will merge good PRs, but I won't be able to resolve issues for the time being. Thanks for understanding!

SwapNet

Community PyTorch reproduction of SwapNet.

[Image: SwapNet example] [Image: SwapNet diagram]

For more than a year, I've put all my efforts into reproducing SwapNet (Raj et al. 2018). Since an official codebase has not been released, by making my implementation public, I hope to contribute to transparency and openness in the Deep Learning community.

Contributing

I'd welcome help to improve the DevOps of this project. Unfortunately I have other life priorities right now and don't have much time to resolve these particular issues. If you'd like to contribute, please look for the help-wanted label in the Issues. Please feel free to email me for questions as well.

Installation

Option 1: Install with Docker

Many thanks to Urwa Muaz for getting this started.

You can install and run this code using Docker (specifically community edition, Docker 19.03 or higher) and the provided Docker image. Docker enables sharing the same environment across different computers and operating systems. This could save you some setup headache; however, there is some developer overhead because you have to interact through Docker. If you prefer to build without Docker, skip to Option 2: Conda Install. Otherwise, follow the instructions below.

  1. Clone this repo to your computer.

    git clone https://github.com/andrewjong/SwapNet.git
    cd SwapNet
  2. If you have a GPU, make sure you have the NVIDIA Container Toolkit installed to connect your GPU with Docker. Follow their install instructions.

  3. Pull the image from Docker Hub. The image is 9.29GB, so at least this much space must be available on your hard drive. Pulling will take a while.

    docker pull andrewjong/swapnet
  4. Start a container that launches the Visdom server.

    docker run -d --name swapnet_env -v ${PWD}:/app/SwapNet -p 8097:8097 \
       --shm-size 8G --gpus all andrewjong/swapnet \
       bash -c "source activate swapnet && python -m visdom.server"

    Command explanation (just for reference, don't run this):

    docker                  # docker program
    run                     # start a new container
    -d                      # detach (leaves the process running)
    --name swapnet_env      # name the launched container as "swapnet_env"
    -v ${PWD}:/app/SwapNet  # mount our code (assumed from the current working directory) into Docker
    -p 8097:8097            # link ports for Visdom
    --shm-size 8G           # expand Docker shared memory size for PyTorch dataloaders
    --gpus all              # let Docker use the GPUs 
    andrewjong/swapnet      # the Docker image to run
    bash -c \               # start the visdom server
       "source activate swapnet \
       && python -m visdom.server"  
  5. Start an interactive shell in the Docker container we created (swapnet_env). All the commands for training and inference can be run in this container.

    docker exec -it swapnet_env bash
  6. Obtain the training data from the Dataset section. Note the data should be extracted to your host machine (outside Docker) under ${SWAPNET_REPO}/data. It will automatically appear inside the Docker container because of the volume mount from step 4.

To run the environment in the future, just repeat steps 4 and 5.

Option 2: Conda Install

I have only tested this build with Linux! If anyone wants to contribute instructions for Windows/MacOS, be my guest :)

This repository is built with PyTorch. I recommend installing dependencies via conda.

With conda installed, run:

git clone https://github.com/andrewjong/SwapNet.git
cd SwapNet/
conda env create  # creates the conda environment from provided environment.yml
conda activate swapnet

Dataset

Each dataset in this repository must start with the following:

  • texture/ folder containing the original images. Images may be directly under this folder or in subdirectories.

The following must then be added from preprocessing (see the Preprocessing section below):

  • body/ folder containing preprocessed body segmentations
  • cloth/ folder containing preprocessed cloth segmentations
  • rois.csv, which contains the regions of interest for texture pooling
  • norm_stats.json, which contains mean and standard deviation statistics for normalization

Deep Fashion

The dataset cited in the original paper is DeepFashion: In-shop Clothes Retrieval. I've preprocessed the Deep Fashion image dataset already. The full preprocessed dataset can be downloaded from Google Drive. Extract the data to ${SWAPNET_REPO}/data/deep_fashion.

Next, create a file ${SWAPNET_REPO}/data/deep_fashion/norm_stats.json and paste the following contents:

{"path": "body", "means": [0.06484050184440379, 0.06718090599394404, 0.07127327572275131], "stds": [0.2088075459038679, 0.20012519201951368, 0.23498672043315685]}
{"path": "texture", "means": [0.8319639705048438, 0.8105952930426163, 0.8038053056173073], "stds": [0.22878186598352074, 0.245635337367858, 0.2517315913036158]}

If you don't plan to preprocess images yourself, jump ahead to the Training section.

Alternatively, if you plan to preprocess images yourself, download the original DeepFashion image data and move the files to ${SWAPNET_REPO}/data/deep_fashion/texture. Then follow the instructions below.

Preprocess Your Own Dataset (Optional)

If you'd like to prepare your own images, move the data into ${SWAPNET_REPO}/data/YOUR_DATASET/texture.

The images must be preprocessed into BODY and CLOTH segmentation representations. These will be input for training and inference.

Body Preprocessing

The original paper cited Unite the People (UP) to obtain body segmentations; however, I ran into trouble installing Caffe to make UP work (probably due to its age). Therefore, I instead use Neural Body Fitting (NBF). My fork of NBF modifies the code to output body segmentations and ROIs in the format that SwapNet requires.

  1. Follow the instructions in my fork. You must follow the instructions under "Setup" and "How to run for SwapNet". Note NBF uses TensorFlow; I suggest using a separate conda environment for NBF's dependencies.

  2. Move the output under ${SWAPNET_REPO}/data/deep_fashion/body/, and the generated rois.csv file to data/deep_fashion/rois.csv.

Caveats: Neural Body Fitting appears to struggle with images that do not show the full body. In addition, the provided model seems to have been trained on only one body type. I'm open to finding better alternatives.

Cloth Preprocessing

The original paper used LIP_SSL. I instead use the implementation from the follow-up paper, LIP_JPPNet. Again, my fork of LIP_JPPNet outputs cloth segmentations in the format required for SwapNet.

  1. Follow the installation instructions in the repository. Then follow the instructions under the "For SwapNet" section.

  2. Move the output under ${SWAPNET_REPO}/data/deep_fashion/cloth/

Calculate Normalization Statistics

This calculates normalization statistics for the preprocessed body segmentations (under body/) and the original images (under texture/). The cloth segmentations do not need processing because they're read as 1-hot encoded labels at runtime (see the sketch below).
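
For reference, a minimal sketch of that one-hot expansion (not the project's exact code; 19 channels matches the cloth_channels default that appears elsewhere in this repository):

import torch
import torch.nn.functional as F

# Minimal sketch: expand an (H, W) integer cloth-label map into a
# one-hot (C, H, W) float tensor at load time.
def labels_to_onehot(label_map, num_labels=19):
    onehot = F.one_hot(label_map.long(), num_classes=num_labels)  # (H, W, C)
    return onehot.permute(2, 0, 1).float()                        # (C, H, W)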

Run the following:

python util/calculate_imagedir_stats.py data/deep_fashion/body/ data/deep_fashion/texture/

The output should show up in data/deep_fashion/norm_stats.json.
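
Conceptually, the script accumulates per-channel means and standard deviations over every image in each directory. A rough sketch of the idea (the actual script may differ):

from pathlib import Path

import numpy as np
from PIL import Image

# Rough sketch: per-channel mean/std over all images in a directory tree.
def channel_stats(image_dir, pattern="*.jpg"):
    total, total_sq, n = np.zeros(3), np.zeros(3), 0
    for path in Path(image_dir).rglob(pattern):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
        pixels = img.reshape(-1, 3)
        total += pixels.sum(axis=0)
        total_sq += (pixels ** 2).sum(axis=0)
        n += len(pixels)
    means = total / n
    stds = np.sqrt(total_sq / n - means ** 2)
    return means.tolist(), stds.tolist()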

Training

Train progress can be viewed by opening localhost:8097 in your web browser. If you chose to install with Docker, run these commands in the Docker container.

  1. Train warp stage
python train.py --name deep_fashion/warp --model warp --dataroot data/deep_fashion

Sample visualization of warp stage:

[Image: warp stage example]

  2. Train texture stage
python train.py --name deep_fashion/texture --model texture --dataroot data/deep_fashion

Below is an example of training progress visualization in Visdom. The texture stage draws the input texture with ROI boundaries (leftmost), the input cloth segmentation (second from left), the generated output (third), and the target texture (rightmost).

[Images: texture stage examples]

Inference

To download pretrained models, download the checkpoints/ folder from here and extract it under the project root. Please note that these models are not yet perfect, requiring a fuller exploration of loss hyperparameters and GAN objectives.

Inference will run the warp stage and texture stage in series.

To run inference on deep fashion, run this command:

python inference.py --checkpoint checkpoints/deep_fashion \
  --dataroot data/deep_fashion \
  --shuffle_data True

--shuffle_data True ensures that bodies are matched with different clothing for the transfer. By default, only 50 images are run for inference; this can be increased by setting --max_dataset_size.

Alternatively, to translate clothes from a specific source to a specific target:

python inference.py --checkpoint checkpoints/deep_fashion \
  --cloth_dir [SOURCE] --texture_dir [SOURCE] --body_dir [TARGET]

Where SOURCE contains the clothing you want to transfer, and TARGET contains the person to place clothing on.

Comparisons to Original SwapNet

Similarities

  • Warp Stage
    • Per-channel random affine augmentation for cloth inputs
    • RGB images for body segmentations
    • Dual U-Net warp architecture
    • Warp loss (cross-entropy plus small adversarial loss)
  • Texture Stage
    • ROI pooling (see the sketch after this list)
    • Texture module architecture
    • Mostly everything else is the same
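
For reference, a small sketch of the ROI pooling operation using torchvision's roi_align (an illustration of the op, not the repository's exact module):

import torch
from torchvision.ops import roi_align

features = torch.randn(1, 256, 32, 32)         # (N, C, H, W) feature map
rois = torch.tensor([[0., 4., 4., 28., 28.]])  # (batch_idx, x1, y1, x2, y2)
pooled = roi_align(features, rois, output_size=(7, 7), spatial_scale=1.0)
# pooled.shape == (1, 256, 7, 7): a fixed-size feature per region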

Differences

  • Warp Stage
    • Body segmentation: Neural Body Fitting instead of Unite the People (note NBF doesn't work well on cropped bodies)
    • I store cloth segmentations as a flat 2D map of numeric labels, then expand this into 1-hot encoded tensors at runtime. The original SwapNet used probability maps, but these took up too much storage space (dozens of GB) on my computer.
    • Option to train on video data. For video data, the different frames provide additional "augmentation" for input cloth in the warp stage. Use --data_mode video to enable this.
  • Texture Stage
    • Cloth segmentation: LIP_JPPNet instead of LIP_SSL
    • Currently VGG feature loss prevents convergence, need to debug!
  • Overall
    • Hyperparameters, most likely; the original paper did not list its hyperparameters, so I had to experiment with these values.
    • Implemented random label smoothing for better GAN stability (see the sketch below)
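
For reference, a minimal sketch of random label smoothing for the GAN discriminator targets; the bands below are illustrative, not the repository's exact values:

import torch

# Sketch: instead of hard 0/1 targets, draw each target uniformly
# from a band near 0 (fake) or near 1 (real).
def smooth_target(prediction, target_is_real):
    low, high = (0.7, 1.0) if target_is_real else (0.0, 0.3)
    return torch.rand_like(prediction) * (high - low) + low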

TODO:

  • Copy face data from target to generated output during inference ("we copy the face and hair pixels from B into the result")
  • Match the texture quality produced in the original paper (likely blocked by the feature loss issue)
  • Test DRAGAN penalty and other advanced GAN losses

What's Next?

I plan to keep improving virtual try-on in my own research project (I've already made some progress, scheduled to appear in the upcoming HPCS 2019 proceedings, and I aim to contribute more). Stay tuned.

Credits

  • The layout of this repository is strongly influenced by Jun-Yan Zhu's pytorch-CycleGAN-and-pix2pix repository, though I've implemented significant changes. Many thanks to their team for open sourcing their code.
  • Many thanks to Amit Raj, the main author of SwapNet, for patiently responding to my questions throughout the year.
  • Many thanks to Khiem Pham for his helpful experiments on the warp stage and contribution to this repository.
  • Thank you Dr. Teng-Sheng Moh for advising this project.
  • Thanks Urwa Muaz for starting the Docker setup.

swapnet's People

Contributors

andrewjong, matheushent


swapnet's Issues

How does the inference command work for clothing transfer, exactly?

Hi, I've been trying to get this transfer code to work:

python inference.py --checkpoint checkpoints/deep_fashion \
  --cloth_dir [SOURCE] --texture_dir [SOURCE] --body_dir [TARGET]

For each of [SOURCE] and [TARGET], I'm assuming they have their set of cloth/texture/body/stats within them (I saw it was looking for the stats json file in each directory). However, when I tried to do a sanity check with the following command:

python inference.py --checkpoint checkpoints/deep_fashion \
  --cloth_dir ./data/deep_fashion/ --texture_dir ./data/deep_fashion/ --body_dir ./data/deep_fashion/

It had an error: KeyError: Caught KeyError in DataLoader worker process 0.

I haven't gotten through the data preprocessing since that's a lot of setup, but my question is: when the transfer happens, does it match every style in the SOURCE directory with every body in TARGET? So if I have n elements in SOURCE and m in TARGET, would the results be an n × m list of all the styles in SOURCE applied to everyone in TARGET? I'm very new to the code, so I appreciate any help!

Pretrained weights

Thank you for the awesome project. It would be great if you could also share the pretrained weights (checkpoints).
Would that be possible?

How to test?

Hello, can you please guide me on how to test? What type of files exactly must be placed?
I am facing the error below when I run this command:

python inference.py --checkpoint checkpoints/deep_fashion \
  --cloth_dir cloth/images1.jpg --texture_dir txture/images.jpg --body_dir body/images2.jpg

inference.py: error: unrecognized arguments: --cloth_dir cloth/images1.jpg

I used images. Please help me out; I am a beginner.

KeyError: Caught KeyError in DataLoader worker process 0.

Hi, when I run the inference code, I get the error below. I need your help.

D:\python\files\SwapNet-master>python inference.py --checkpoint checkpoints/deep_fashion --cloth_dir ./data/deep_fashion/ --texture_dir ./data/deep_fashion/ --body_dir ./data/deep_fashion/
model None
dataset None
=====OPTIONS======
config_file : None
comments :
verbose : False
display_winsize : 256
checkpoints_dir : ./checkpoints
load_epoch : latest
dataset : None
dataset_mode : image
cloth_representation : labels
body_representation : rgb
cloth_channels : 19
body_channels : 12
texture_channels : 3
pad : False
load_size : 128
crop_size : 128
crop_bounds : None
max_dataset_size : 50
batch_size : 8
shuffle_data : False
num_workers : 0
gpu_id : 0
no_confirm : False
interval : 1
warp_checkpoint : None
texture_checkpoint : None
checkpoint : checkpoints/deep_fashion
body_dir : ./data/deep_fashion/
cloth_dir : ./data/deep_fashion/
texture_dir : ./data/deep_fashion/
results_dir : results
skip_intermediates : False
dataroot : None
model : None
name :
is_train : False

The experiment directory 'results' already exists.
Here are its contents:
['args.json', 'texture', 'warp']

Existing data will be overwritten!
Are you sure you want to continue? (y/N): y

Set warp_checkpoint to checkpoints/deep_fashion\warp\latest_net_generator.pth
Set texture_checkpoint to checkpoints/deep_fashion\texture\latest_net_generator.pth
Running warp inference...
Rebuilding warp from checkpoints/deep_fashion\warp\latest_net_generator.pth
Not overriding: {'body_dir', 'cloth_dir', 'texture_dir', 'checkpoint'}
initialize network with kaiming
model [WarpModel] was created
loading the model generator from checkpoints/deep_fashion\warp\latest_net_generator.pth
---------- Networks initialized -------------
[Network generator] Total number of parameters : 137.584 M

Creating dataset warp... cloth dir ./data/deep_fashion/
Extensions: ['.npz']
body dir ./data/deep_fashion/
dataset [WarpDataset] was created
Warping cloth to match body segmentations in ./data/deep_fashion/...
0%| | 0/50 [00:00<?, ?img/s]
D:\python\files\SwapNet-master\env\lib\site-packages\torch\nn\functional.py:2506: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
(the warning above is repeated once per dataloader worker)
100%|████████████████████████| 50/50 [00:06<00:00, 7.49img/s]
Warp results stored in results\warp
Running texture inference...
Rebuilding texture from checkpoints/deep_fashion\texture\latest_net_generator.pth
Not overriding: {'body_dir', 'cloth_dir', 'texture_dir', 'checkpoint'}
initialize network with kaiming
model [TextureModel] was created
loading the model generator from checkpoints/deep_fashion\texture\latest_net_generator.pth
---------- Networks initialized -------------
[Network generator] Total number of parameters : 41.900 M

Creating dataset texture... dataset [TextureDataset] was created
Texturing cloth segmentations in results\warp...
0%| | 0/50 [00:00<?, ?img/s]
Traceback (most recent call last):
File "inference.py", line 222, in <module>
_run_texture()
File "inference.py", line 184, in _run_texture
_run_test_loop(texture_model, texture_dataset, webpage)
File "inference.py", line 109, in _run_test_loop
for i, data in enumerate(dataset):
File "D:\python\files\SwapNet-master\datasets\__init__.py", line 82, in __iter__
for i, data in enumerate(self.dataloader):
File "D:\python\files\SwapNet-master\env\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
data = self._next_data()
File "D:\python\files\SwapNet-master\env\lib\site-packages\torch\utils\data\dataloader.py", line 856, in _next_data
return self._process_data(data)
File "D:\python\files\SwapNet-master\env\lib\site-packages\torch\utils\data\dataloader.py", line 881, in _process_data
data.reraise()
File "D:\python\files\SwapNet-master\env\lib\site-packages\torch\_utils.py", line 394, in reraise
raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "D:\python\files\SwapNet-master\env\lib\site-packages\pandas\core\indexes\base.py", line 2657, in get_loc
return self._engine.get_loc(key)
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 127, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 147, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: './data/deep_fashion/body\MEN\Denim\id_00000080\01_1_front'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\python\files\SwapNet-master\env\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "D:\python\files\SwapNet-master\env\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\python\files\SwapNet-master\env\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\python\files\SwapNet-master\datasets\texture_dataset.py", line 118, in __getitem__
rois = np.rint(self.rois_df.loc[file_id].values * scale)
File "D:\python\files\SwapNet-master\env\lib\site-packages\pandas\core\indexing.py", line 1500, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "D:\python\files\SwapNet-master\env\lib\site-packages\pandas\core\indexing.py", line 1913, in _getitem_axis
return self._get_label(key, axis=axis)
File "D:\python\files\SwapNet-master\env\lib\site-packages\pandas\core\indexing.py", line 141, in _get_label
return self.obj._xs(label, axis=axis)
File "D:\python\files\SwapNet-master\env\lib\site-packages\pandas\core\generic.py", line 3585, in xs
loc = self.index.get_loc(key)
File "D:\python\files\SwapNet-master\env\lib\site-packages\pandas\core\indexes\base.py", line 2659, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 127, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 147, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: './data/deep_fashion/body\MEN\Denim\id_00000080\01_1_front'

Add to inference: Copy over face data

SwapNet says they copy face data

Add this to the inference script: an option --face_dir that points to the root dir for the cloth and texture of the original person. It will copy pixels wherever the cloth segmentation says "face" or "hair".
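
A rough sketch of that copy step (the FACE and HAIR label ids are assumptions; verify them against decode_labels.py, especially given the off-by-one issue reported below):

import numpy as np

FACE, HAIR = 13, 2  # assumed label ids, not verified

# Sketch: wherever the target's cloth segmentation says face or hair,
# copy the target's original pixels into the generated image.
def copy_face_hair(generated, target_texture, target_labels):
    out = generated.copy()
    mask = np.isin(target_labels, (FACE, HAIR))
    out[mask] = target_texture[mask]
    return out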

Create a docker image

People seem to have had trouble with dependencies and setup. Let's create a docker image to provide a common environment.

Environment Build Issues on a Mac Machine

The environment.yml build fails on a Mac machine. Should one export the environment differently?

(base) USCS-Mac198:SwapNet-master admin$ conda env create
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • mkl_random==1.0.2=py37hd81dba3_0
  • libgcc-ng==8.2.0=hdf63c60_1
  • libedit==3.1.20181209=hc058e9b_0
  • pyqt==5.9.2=py37h05f1152_2
  • numpy==1.16.2=py37h7e9f1db_0
  • zeromq==4.3.1=he6710b0_3
  • pyrsistent==0.14.11=py37h7b6447c_0
  • xz==5.2.4=h14c3975_4
  • libtiff==4.0.10=h2733197_2
  • freetype==2.9.1=h8a8886c_1
  • gst-plugins-base==1.14.0=hbbd80ab_1
  • pcre==8.43=he6710b0_0
  • libxml2==2.9.9=he19cac6_0
  • expat==2.2.6=he6710b0_0
  • gstreamer==1.14.0=hb453b48_1
  • readline==7.0=h7b6447c_5
  • icu==58.2=h9c2bf20_1....

ModuleNotFoundError: No module named 'model'

Hi,

I am running your model on Colab. I have already run make.sh successfully, but when I run python train.py it gets stuck on the following error:

model warp
Traceback (most recent call last):
  File "train.py", line 32, in <module>
    opt = TrainOptions().parse(store_options=True)  # get training options
  File "/content/drive/My Drive/Angelium/fashion_recommandation/SwapNet/SwapNet/options/base_options.py", line 212, in parse
    opt = self.gather_options()
  File "/content/drive/My Drive/Angelium/fashion_recommandation/SwapNet/SwapNet/options/base_options.py", line 181, in gather_options
    options_modifier = import_source.get_options_modifier(name)
  File "/content/drive/My Drive/Angelium/fashion_recommandation/SwapNet/SwapNet/models/__init__.py", line 29, in get_options_modifier
    model_class = find_model_using_name(model_name)
  File "/content/drive/My Drive/Angelium/fashion_recommandation/SwapNet/SwapNet/models/__init__.py", line 12, in find_model_using_name
    modellib = importlib.import_module(model_filename)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/content/drive/My Drive/Angelium/fashion_recommandation/SwapNet/SwapNet/models/warp_model.py", line 10, in <module>
    from modules.swapnet_modules import WarpModule
  File "/content/drive/My Drive/Angelium/fashion_recommandation/SwapNet/SwapNet/modules/swapnet_modules.py", line 14, in <module>
    from model.roi_layers import ROIAlign
ModuleNotFoundError: No module named 'model'

Kindly help to solve this issue.
Thanks

How to test it ?

I am sorry, I am still a beginner in DL, and I am not sure how to test it.
By testing I mean:

  1. I have a first original image of someone wearing clothes,
  2. I have an image with the isolated cloth I want to swap,
  3. I have a destination image where someone is wearing another cloth.

(1) should correspond to the cloth folder, (2) to the texture folder, and (3) to the body folder.

But with this, what command should I run?
And where do I find the result image (the cloth in 3 should be swapped with 2)?
Also, are 1, 2, and 3 JPEG images with the original picture, not segmented ones?

Thank you for your splendid work!

Colab notebook

Could someone please share a Colab notebook to run this project?

seaborn dependency not included in environment.yml

Hi, thanks for providing this project. When running the inference.py script I noticed that the seaborn dependency could not be found after installing via conda. I see in the Dockerfile that seaborn is explicitly installed after creating the environment, is there any reason why this shouldn't be included in the main environment.yml list of dependencies? I'm happy to provide a PR if so.

Is it different from MGN? If so, how?

Hi Andrew!

Seems like you're working on something kind of similar to [MGN](https://github.com/bharat-b7/MultiGarmentNetwork). Does it use SMPL?

Cool to see this,
Nathan

Help wanted !! KeyError: Caught KeyError in DataLoader worker process 0.

I'm facing this error when running the test. Kindly help me.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Khizer\anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\Khizer\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\Khizer\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\Ayesha\Upwork\Projects\FCGAN Clothing style transfer\swapnet\SwapNet-master\SwapNet-master\datasets\texture_dataset.py", line 118, in __getitem__
rois = np.rint(self.rois_df.loc[file_id].values * scale)
File "C:\Users\Khizer\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1767, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "C:\Users\Khizer\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1964, in _getitem_axis
return self._get_label(key, axis=axis)
File "C:\Users\Khizer\anaconda3\lib\site-packages\pandas\core\indexing.py", line 624, in _get_label
return self.obj._xs(label, axis=axis)
File "C:\Users\Khizer\anaconda3\lib\site-packages\pandas\core\generic.py", line 3537, in xs
loc = self.index.get_loc(key)
File "C:\Users\Khizer\anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 2648, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 111, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 133, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 157, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: 'data\deep_fashion\deep_fashion_all\texture\WOMEN\Blouses_Shirts\id_00000001\02_1_front'

AssertionError: Torch not compiled with CUDA enabled

@andrewjong

(raghu) dioxe@dioxe-Inspiron-3542:~/project/SwapNet-master$ python inference.py --checkpoint checkpoints/deep_fashion \
  --dataroot data/deep_fashion \
  --shuffle_data True
model None
dataset None
=====OPTIONS======
config_file : None
comments :
verbose : False
display_winsize : 256
checkpoints_dir : ./checkpoints
load_epoch : latest
dataset : None
dataset_mode : image
cloth_representation : labels
body_representation : rgb
cloth_channels : 19
body_channels : 12
texture_channels : 3
pad : False
load_size : 128
crop_size : 128
crop_bounds : None
max_dataset_size : 50
batch_size : 8
shuffle_data : True
num_workers : 4
gpu_id : 0
no_confirm : False
interval : 1
warp_checkpoint : None
texture_checkpoint : None
checkpoint : checkpoints/deep_fashion
body_dir : None
cloth_dir : None
texture_dir : None
results_dir : results
skip_intermediates : False
dataroot : data/deep_fashion
model : None
name :
is_train : False
==================
The experiment directory 'results' already exists.
Here are its contents:
['warp', 'args.json']

Existing data will be overwritten!
Are you sure you want to continue? (y/N): y

Set warp_checkpoint to checkpoints/deep_fashion/warp/latest_net_generator.pth
Set texture_checkpoint to checkpoints/deep_fashion/texture/latest_net_generator.pth
Running warp inference...
Rebuilding warp from checkpoints/deep_fashion/warp/latest_net_generator.pth
Not overriding: {'dataroot', 'checkpoint', 'shuffle_data'}
Traceback (most recent call last):
File "inference.py", line 217, in <module>
_run_warp()
File "inference.py", line 137, in _run_warp
opt.warp_checkpoint, cloth_dir=opt.cloth_dir, body_dir=opt.body_dir
File "inference.py", line 72, in _rebuild_from_checkpoint
model = create_model(loaded_opt)
File "/home/dioxe/project/SwapNet-master/models/__init__.py", line 42, in create_model
instance = model(opt)
File "/home/dioxe/project/SwapNet-master/models/warp_model.py", line 57, in __init__
BaseGAN.__init__(self, opt)
File "/home/dioxe/project/SwapNet-master/models/base_gan.py", line 140, in __init__
self.net_generator = self.define_G().to(self.device)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 425, in to
return self._apply(convert)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 201, in _apply
module._apply(fn)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 201, in _apply
module._apply(fn)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 201, in _apply
module._apply(fn)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 223, in _apply
param_applied = fn(param)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 423, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/torch/cuda/__init__.py", line 196, in _lazy_init
_check_driver()
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/torch/cuda/__init__.py", line 94, in _check_driver
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
@andrewjong

Labels are off by 1, hats are missing

It seems that most of the labels (except hair) are off by 1. They deviate from the cloth labels in decode_labels.py. For example, face is said to be 13, according to the labels, but in the cloth segmentation numpy array, the face is represented by 12.

Additionally, hats are said to be 1 in the labels, and the off-by-one causes hats to be treated as background.

[Image: 02_1_front.jpg]
[Image: 02_1_front.npz]
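
A hypothetical remap sketch, assuming the stored ids are exactly one less than decode_labels.py's for everything except hair (hats cannot be recovered, since they were merged into the background):

import numpy as np

HAIR = 2  # assumed hair id; verify against decode_labels.py

def shift_labels(stored):
    fixed = stored.copy()
    mask = (stored != 0) & (stored != HAIR)  # leave background and hair alone
    fixed[mask] += 1
    return fixed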

cannot deepcopy this pattern object

I tried to run the project on Google Colab. I have imported and downloaded everything, I believe, but when I run the inference script it gives the following error:

y = copier(memo)
TypeError: cannot deepcopy this pattern object

Could anyone please help?

Inference with the CPU

When running this command:

python inference.py --checkpoint checkpoints/deep_fashion \
  --cloth_dir model1.jpeg --texture_dir model1.jpeg --body_dir model2.jpeg

an error is given stating that CUDA has not been found. Is there any reason why CPU inference is impossible?

Cool project by the way

Edit: My laptop does not have CUDA, but I still received that error. My fix was to edit line 36 of base_model.py to this:

self.device = torch.device("cpu")
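
A less invasive alternative (a sketch, untested against this codebase, assuming the options object exposes gpu_id as in the option dumps above) is to fall back to CPU only when CUDA is unavailable:

# Pick the GPU when available, otherwise the CPU, instead of hardcoding "cpu".
self.device = torch.device(
    f"cuda:{opt.gpu_id}" if torch.cuda.is_available() else "cpu"
)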

Debug perceptual loss

Perceptual loss (called feature loss in SwapNet) is key to generating the texture details. Without perceptual loss, the generator can only create clothing shapes without details.

I've currently set the perceptual loss weight to 0 (the --lambda_feat option), because any other value currently causes a lot of noise throughout the entire image. Here's an example.
[Image: noisy output caused by the perceptual loss]

I believe my perceptual loss implementation is buggy. It could be an issue of which layers we compare for perceptual loss.

Add args.json to provided checkpoints download

When I try to inference using the given command, I get this error:

model None
dataset None
=====OPTIONS======
config_file : None
comments : 
verbose : False
display_winsize : 256
checkpoints_dir : ./checkpoints
load_epoch : latest
dataset : None
dataset_mode : image
cloth_representation : labels
body_representation : rgb
cloth_channels : 19
body_channels : 12
texture_channels : 3
pad : False
load_size : 128
crop_size : 128
crop_bounds : None
max_dataset_size : 50
batch_size : 8
shuffle_data : True
num_workers : 4
gpu_id : 0
no_confirm : False
interval : 1
warp_checkpoint : None
texture_checkpoint : None
checkpoint : checkpoints/deep_fashion
body_dir : None
cloth_dir : None
texture_dir : None
results_dir : results
skip_intermediates : False
dataroot : data/deep_fashion
model : None
name : 
is_train : False
==================
The experiment directory 'results' already exists.
 Here are its contents:
	 ['warp', 'args.json']

 Existing data will be overwritten!
 Are you sure you want to continue? (y/N): y

Set warp_checkpoint to checkpoints/deep_fashion/warp/latest_net_generator.pth
Set texture_checkpoint to checkpoints/deep_fashion/texture/latest_net_generator.pth
Running warp inference...
Rebuilding warp from checkpoints/deep_fashion/warp/latest_net_generator.pth
Not overriding: {'shuffle_data', 'dataroot', 'checkpoint'}
Traceback (most recent call last):
  File "inference.py", line 217, in <module>
    _run_warp()
  File "inference.py", line 137, in _run_warp
    opt.warp_checkpoint, cloth_dir=opt.cloth_dir, body_dir=opt.body_dir
  File "inference.py", line 72, in _rebuild_from_checkpoint
    model = create_model(loaded_opt)
  File "/content/SwapNet/models/__init__.py", line 41, in create_model
    model = find_model_using_name(opt.model)
  File "/content/SwapNet/models/__init__.py", line 11, in find_model_using_name
    model_filename = "models." + model_name + "_model"
TypeError: must be str, not NoneType

with this command:

!python inference.py --checkpoint checkpoints/deep_fashion \
  --dataroot data/deep_fashion \
  --shuffle_data True

Might be related to #17, as your bugfix seems to have worked, but now I have another problem.

Cloth segmentation issues.

Hi, thanks for your excellent work!
But i found that most of the cloth segmentation results is incomplete, like some of them only have the torso part, which is not very acceptable for me. And i think these will affect the final results.
Those segmentation results that shown in the original paper is pretty good, though.
So what do you think is the main reason that cause this?
Is that because of the compress operation or because of the pixels of image is too low?

[Image: inference result]

How to combine the warp and texture outputs to generate a new image?
inference.py doesn’t produce a final result.


Without training trying to test

Set warp_checkpoint to checkpoints/deep_fashion/warp/latest_net_generator.pth
Set texture_checkpoint to checkpoints/deep_fashion/texture/latest_net_generator.pth

I have tried testing using the provided checkpoints without training, but it's asking for generator.pth. Can you please guide me on what to do?

Error(s) in loading state_dict for WarpModule

Hi, Andrew! This is an amazing project.
I am trying to use the pretrained checkpoints and I am getting the following error:
[Image: state_dict loading error]
I would be grateful if you could help me navigate this issue. Thank you!
By the way, I renamed 18_net_generator.pth to latest_net_generator.pth to solve an earlier issue.

Change PerceptualLoss to specific layers instead of "last n layers"

From M2E-Net section 3.5:

We extract 7 different features Φi from relu1_2, relu2_2, relu3_3, relu4_3, relu5_3, fc6 and fc7. Then we use the l2 distance to penalize the differences.

From "Perceptual Losses for Real-Time Style Transfer" section 4.1 Training Details:

For all style transfer experiments we compute feature reconstruction loss at layer relu3_3 and style reconstruction loss at layers relu1_2, relu2_2, relu3_3, and relu4_3 of the VGG-16 loss network φ
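
For reference, a minimal sketch of a perceptual loss pinned to specific layers, using torchvision's VGG-16 (feature indices 3, 8, 15, and 22 correspond to relu1_2, relu2_2, relu3_3, and relu4_3):

import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """Sketch: l2 distance between VGG-16 features at specific relu layers."""

    LAYER_IDS = (3, 8, 15, 22)  # relu1_2, relu2_2, relu3_3, relu4_3

    def __init__(self):
        super().__init__()
        self.vgg = models.vgg16(pretrained=True).features[:23].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False  # frozen loss network
        self.criterion = nn.MSELoss()

    def forward(self, generated, target):
        loss, x, y = 0.0, generated, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.LAYER_IDS:
                loss = loss + self.criterion(x, y)
        return loss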

Testing error

(swapnet) C:\Users\Rohit\SwapNet>python inference.py --checkpoint checkpoints/deep_fashion --dataroot data/deep_fashion --shuffle_data True
model None
dataset None
=====OPTIONS======
config_file : None
comments :
verbose : False
display_winsize : 256
checkpoints_dir : ./checkpoints
load_epoch : latest
dataset : None
dataset_mode : image
cloth_representation : labels
body_representation : rgb
cloth_channels : 19
body_channels : 12
texture_channels : 3
pad : False
load_size : 128
crop_size : 128
crop_bounds : None
max_dataset_size : 50
batch_size : 8
shuffle_data : True
num_workers : 4
gpu_id : 0
no_confirm : False
interval : 1
warp_checkpoint : None
texture_checkpoint : None
checkpoint : checkpoints/deep_fashion
body_dir : None
cloth_dir : None
texture_dir : None
results_dir : results
skip_intermediates : False
dataroot : data/deep_fashion
model : None
name :
is_train : False

The experiment directory 'results' already exists.
Here are its contents:
['args.json', 'warp']

Existing data will be overwritten!
Are you sure you want to continue? (y/N): y

Set warp_checkpoint to checkpoints/deep_fashion\warp\latest_net_generator.pth
Set texture_checkpoint to checkpoints/deep_fashion\texture\latest_net_generator.pth
Running warp inference...
Rebuilding warp from checkpoints/deep_fashion\warp\latest_net_generator.pth
Not overriding: {'checkpoint', 'dataroot', 'shuffle_data'}
initialize network with kaiming
model [WarpModel] was created
loading the model generator from checkpoints/deep_fashion\warp\latest_net_generator.pth
Traceback (most recent call last):
File "inference.py", line 217, in <module>
_run_warp()
File "inference.py", line 136, in _run_warp
warp_model, warp_dataset = _rebuild_from_checkpoint(
File "inference.py", line 74, in _rebuild_from_checkpoint
model.load_model_weights("generator", checkpoint_file).eval()
File "C:\Users\Rohit\SwapNet\models\base_model.py", line 184, in load_model_weights
state_dict = torch.load(weights_file, map_location=self.device)
File "C:\Users\Rohit\anaconda3\envs\swapnet\lib\site-packages\torch\serialization.py", line 571, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\Users\Rohit\anaconda3\envs\swapnet\lib\site-packages\torch\serialization.py", line 229, in _open_file_like
return _open_file(name_or_buffer, mode)
File "C:\Users\Rohit\anaconda3\envs\swapnet\lib\site-packages\torch\serialization.py", line 210, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/deep_fashion\warp\latest_net_generator.pth'

Make preprocessing easier

Preprocessing requires setting up 2 separate repositories. It's crazy. We should improve this.

Ideas

  1. Add the two preprocessing repositories as submodules so they're in a central location
  2. Create a single, simple script that calls both preprocessing libraries. Note these libraries have different (and possibly conflicting) dependencies, so that's a bit tricky.

FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/deep_fashion/warp/args.json'

@andrewjong python inference.py --checkpoint checkpoints/deep_fashion --dataroot data/deep_fashion --shuffle_data True
model None
dataset None
=====OPTIONS======
config_file : None
comments :
verbose : False
display_winsize : 256
checkpoints_dir : ./checkpoints
load_epoch : latest
dataset : None
dataset_mode : image
cloth_representation : labels
body_representation : rgb
cloth_channels : 19
body_channels : 12
texture_channels : 3
pad : False
load_size : 128
crop_size : 128
crop_bounds : None
max_dataset_size : 50
batch_size : 8
shuffle_data : True
num_workers : 4
gpu_id : 0
no_confirm : False
interval : 1
warp_checkpoint : None
texture_checkpoint : None
checkpoint : checkpoints/deep_fashion
body_dir : None
cloth_dir : None
texture_dir : None
results_dir : results
skip_intermediates : False
dataroot : data/deep_fashion
model : None
name :
is_train : False

Set warp_checkpoint to checkpoints/deep_fashion/warp/latest_net_generator.pth
Set texture_checkpoint to checkpoints/deep_fashion/texture/latest_net_generator.pth
Running warp inference...
Rebuilding warp from checkpoints/deep_fashion/warp/latest_net_generator.pth
Traceback (most recent call last):
File "inference.py", line 217, in <module>
_run_warp()
File "inference.py", line 137, in _run_warp
opt.warp_checkpoint, cloth_dir=opt.cloth_dir, body_dir=opt.body_dir
File "inference.py", line 62, in _rebuild_from_checkpoint
loaded_opt = load(copy.deepcopy(opt), os.path.join(checkpoint_dir, "args.json"))
File "/home/dioxe/project/SwapNet-master/options/base_options.py", line 271, in load
with open(json_file, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/deep_fashion/warp/args.json'

RuntimeError when training

I just ran the training code and got this traceback:

Traceback (most recent call last):
  File "train.py", line 64, in <module>
    model.optimize_parameters()
  File "C:\Users\mathe\projects\SwapNet\models\warp_model.py", line 177, in optimize_parameters
    super().optimize_parameters()
  File "C:\Users\mathe\projects\SwapNet\models\base_gan.py", line 198, in optimize_parameters
    self.backward_D()
  File "C:\Users\mathe\projects\SwapNet\models\warp_model.py", line 117, in backward_D
    self.loss_D_fake = self.criterion_GAN(pred_fake, False)
  File "C:\Users\mathe\projects\SwapNet\modules\loss.py", line 119, in __call__
    target_tensor = self.get_target_tensor(prediction, target_is_real)
  File "C:\Users\mathe\projects\SwapNet\modules\loss.py", line 101, in get_target_tensor
    target_tensor = GANLoss.rand_between(low, high).to(
  File "C:\Users\mathe\projects\SwapNet\modules\loss.py", line 75, in rand_between
    return rand_func(1) * (high - low) + low
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

It seems some parameters are not allocated on the GPU. Any idea how to solve it?
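
One plausible fix (a sketch, not verified against this repo) is to create the random target tensor directly on the prediction's device in modules/loss.py:

# In GANLoss.get_target_tensor: move the random target to the same
# device as the discriminator prediction.
target_tensor = GANLoss.rand_between(low, high).to(prediction.device)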

Rehaul the code to use PyTorch Lightning

I feel the code in its current state is quite cumbersome. Training is also cumbersome, and Visdom isn't as good as TensorBoard. I'm thinking of rehauling the codebase to be neat and modular using PyTorch Lightning on another branch.

Training Warp stage stops at epoch 3

Hi,

I ran train.py for the warp stage twice (python train.py --name deep_fashion/warp --model warp --dataroot data/deep_fashion), but the training does not proceed beyond epoch 3. Could you help me with this issue?
I have attached screenshots for reference.

[Images: training screenshots]

TypeError: cannot deepcopy this pattern object

@andrewjong @Chinmay-Vadgama

(raghu) dioxe@dioxe-Inspiron-3542:~/project/SwapNet-master$ python inference.py --checkpoint checkpoints/deep_fashion \
  --dataroot data/deep_fashion \
  --shuffle_data True
model None
dataset None
=====OPTIONS======
config_file : None
comments :
verbose : False
display_winsize : 256
checkpoints_dir : ./checkpoints
load_epoch : latest
dataset : None
dataset_mode : image
cloth_representation : labels
body_representation : rgb
cloth_channels : 19
body_channels : 12
texture_channels : 3
pad : False
load_size : 128
crop_size : 128
crop_bounds : None
max_dataset_size : 50
batch_size : 8
shuffle_data : True
num_workers : 4
gpu_id : 0
no_confirm : False
interval : 1
warp_checkpoint : None
texture_checkpoint : None
checkpoint : checkpoints/deep_fashion
body_dir : None
cloth_dir : None
texture_dir : None
results_dir : results
skip_intermediates : False
dataroot : data/deep_fashion
model : None
name :
is_train : False
==================
The experiment directory 'results' already exists.
Here are its contents:
['warp', 'args.json']

Existing data will be overwritten!
Are you sure you want to continue? (y/N): y

Set warp_checkpoint to checkpoints/deep_fashion/warp/latest_net_generator.pth
Set texture_checkpoint to checkpoints/deep_fashion/texture/latest_net_generator.pth
Running warp inference...
Rebuilding warp from checkpoints/deep_fashion/warp/latest_net_generator.pth
Traceback (most recent call last):
File "inference.py", line 227, in <module>
_run_warp()
File "inference.py", line 147, in _run_warp
opt.warp_checkpoint, cloth_dir=opt.cloth_dir, body_dir=opt.body_dir
File "inference.py", line 58, in _rebuild_from_checkpoint
loaded_opt = _copy_and_load_config(checkpoint_dir).opt
File "inference.py", line 89, in _copy_and_load_config
return config.copy().load(os.path.join(directory, "args.json"))
File "/home/dioxe/project/SwapNet-master/options/base_options.py", line 264, in copy
return copy.deepcopy(self)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 215, in _deepcopy_list
append(deepcopy(a, memo))
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/copy.py", line 161, in deepcopy
y = copier(memo)
TypeError: cannot deepcopy this pattern object
