dot's Introduction

the Deepfake Offensive Toolkit

dot (aka Deepfake Offensive Toolkit) makes real-time, controllable deepfakes ready for virtual camera injection. dot was created for performing penetration tests against, e.g., identity verification and video conferencing systems, and is intended for use by security analysts, Red Team members, and biometrics researchers.

If you want to learn more about how dot is used for penetration tests with deepfakes in the industry, read these articles by The Verge and Biometric Update.

dot is developed for research and demonstration purposes. As an end user, you are responsible for complying with all applicable laws when using this program. The authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.

How it works

In a nutshell, dot works like this:

flowchart LR;
    A(your webcam feed) --> B(suite of realtime deepfakes);
    B(suite of realtime deepfakes) --> C(virtual camera injection);

None of the deepfakes supported by dot require additional training. They can be used in real time, on the fly, on a single photo of the person to be impersonated. Supported methods:

  • face swap (via SimSwap), at resolutions 224 and 512
    • with the option of face superresolution (via GPen) at resolutions 256 and 512
  • lower quality face swap (via OpenCV)
  • FOMM, First Order Motion Model for image animation
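
For intuition, the sketch below shows what the webcam-to-virtual-camera loop from the flowchart above can look like in Python, using OpenCV and the pyvirtualcam package. This is an illustrative simplification, not dot's actual implementation; the face-swap step is left as a placeholder.

    import cv2
    import pyvirtualcam  # requires a virtual camera backend (e.g. OBS Virtual Camera or v4l2loopback)

    WIDTH, HEIGHT, FPS = 640, 480, 30

    def apply_deepfake(frame_rgb):
        # Placeholder for the real-time face swap (SimSwap, FOMM, ...).
        return frame_rgb

    cap = cv2.VideoCapture(0)  # the physical webcam feed (the "target")
    with pyvirtualcam.Camera(width=WIDTH, height=HEIGHT, fps=FPS) as cam:
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            frame_bgr = cv2.resize(frame_bgr, (WIDTH, HEIGHT))
            frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
            cam.send(apply_deepfake(frame_rgb))  # inject the processed frame into the virtual camera
            cam.sleep_until_next_frame()         # pace the loop at the camera FPS
    cap.release()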

Running dot

Graphical interface

GUI Installation

Download and run the dot executable for your OS:

  • Windows (Tested on Windows 10 and 11):

    • Download dot.zip from here, unzip it and then run dot.exe
  • Ubuntu:

    • ToDo
  • Mac (Tested on Apple M2 Sonoma 14.0):

    • Download dot-m2.zip from here and unzip it
    • Open terminal and run xattr -cr dot-executable.app to remove any extended attributes
    • In case of camera reading error:
      • Right click and choose Show Package Contents
      • Execute dot-executable from Contents/MacOS folder

GUI Usage

Usage example:

  1. Specify the source image in the field source.
  2. Specify the camera id number in the field target. In most cases, 0 is the correct camera id.
  3. Specify the config file in the field config_file. Select a default configuration from the dropdown list or use a custom file.
  4. (Optional) Check the field use_gpu to use the GPU.
  5. Click on the RUN button to start the deepfake.

For more information about each field, click on the menu Help/Usage.

Watch the following demo video for a better understanding of the interface:

Command Line

CLI Installation

Install Pre-requisites
  • Linux

    sudo apt install ffmpeg cmake
  • MacOS

    brew install ffmpeg cmake
  • Windows

    1. Download and install Visual Studio Community from here
    2. Install Desktop development with C++ from the Visual Studio Installer
Create Conda Environment

The instructions assume that you have Miniconda installed on your machine. If you don't, refer to this link for installation instructions.

With GPU Support
conda env create -f envs/environment-gpu.yaml
conda activate dot

Install the torch and torchvision dependencies based on the CUDA version installed on your machine:

  • Install CUDA 11.8 from link

  • Install cudatoolkit from conda: conda install cudatoolkit=<cuda_version_no> (replace <cuda_version_no> with the version on your machine)

  • Install torch and torchvision dependencies: pip install torch==2.0.1+<cuda_tag> torchvision==0.15.2+<cuda_tag> torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118, where <cuda_tag> is the CUDA tag defined by Pytorch. For example, pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118 for CUDA 11.8.

    Note: torch 1.9.0+cu111 can also be used.

To check that torch and torchvision are installed correctly, run the following command: python -c "import torch; print(torch.cuda.is_available())". If the output is True, the dependencies are installed with CUDA support.
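
If you want more detail than a True/False answer, the following short snippet (standard PyTorch API, not a dot command) prints the installed versions and the detected GPU:

    import torch

    print(torch.__version__)       # e.g. 2.0.1+cu118
    print(torch.version.cuda)      # CUDA version the wheel was built against
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # detected GPU model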

With MPS Support (Apple Silicon)
conda env create -f envs/environment-apple-m2.yaml
conda activate dot

To check that torch and torchvision are installed correctly, run the following command: python -c "import torch; print(torch.backends.mps.is_available())". If the output is True, the dependencies are installed with Metal programming framework support.
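
As an additional sanity check (plain PyTorch, not a dot command), you can try allocating a tensor on the MPS device:

    import torch

    x = torch.ones(2, 2, device="mps")  # raises an error if the MPS backend is unavailable
    print(x.device)                     # expected: mps:0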

With CPU Support (slow, not recommended)
conda env create -f envs/environment-cpu.yaml
conda activate dot
Install dot
pip install -e .
Download Models
  • Download dot model checkpoints from here
  • Unzip the downloaded file in the root of this project

CLI Usage

Run dot --help to get a full list of available options.

  1. Simswap

    dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu
  2. SimSwapHQ

    dot -c ./configs/simswaphq.yaml --target 0 --source "./data" --use_gpu
  3. FOMM

    dot -c ./configs/fomm.yaml --target 0 --source "./data" --use_gpu
  4. FaceSwap CV2

    dot -c ./configs/faceswap_cv2.yaml --target 0 --source "./data" --use_gpu
    

Note: To enable face superresolution, use the flag --gpen_type gpen_256 or --gpen_type gpen_512. To use dot on CPU (not recommended), do not pass the --use_gpu flag.
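
For example, a SimSwap run with face superresolution and an on-screen FPS counter can be launched by combining the flags shown above with --show_fps (see dot --help for the full list of options; adjust flags to your setup):

    dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu --gpen_type gpen_256 --show_fps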

Controlling dot with CLI

Disclaimer: We use the SimSwap technique for the following demonstration

Running dot via any of the above methods generates a real-time deepfake on the input video feed, using source images from the data/ folder.

When dot is running, a list of available control options appears in the terminal window, as shown above. You can toggle through and select different source images by pressing the associated control key.
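
As a rough illustration (not dot's actual code), this kind of keypress-driven source switching can be sketched with OpenCV, assuming the source images live in ./data:

    import glob
    import cv2

    sources = sorted(glob.glob("./data/*"))    # candidate source images
    current = cv2.imread(sources[0])

    while True:
        cv2.imshow("preview", current)         # key presses are read from this window
        key = cv2.waitKey(1) & 0xFF
        if ord("1") <= key <= ord("9"):
            idx = key - ord("1")
            if idx < len(sources):
                current = cv2.imread(sources[idx])  # switch to the selected source face
        elif key == ord("q"):                  # quit
            break
    cv2.destroyAllWindows()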

Watch the following demo video for a better understanding of the control options:

Docker

Setting up docker

  • Build the container

    docker-compose up --build -d
    
  • Access the container

    docker-compose exec dot "/bin/bash"
    

Connect docker to the webcam

Ubuntu

  1. Build the container

    docker build -t dot -f Dockerfile .
    
  2. Run the container

    xhost +
    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY \
    -v .:/dot \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    dot
    

Windows

  1. Follow the instructions here under Windows to set up the webcam with docker.

  2. Build the container

    docker build -t dot -f Dockerfile .
    
  3. Run the container

    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY=192.168.99.1:0 \
    -v .:/dot \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    dot
    

macOS

  1. Follow the instructions here to set up the webcam with docker.

  2. Build the container

    docker build -t dot -f Dockerfile .
    
  3. Run the container

    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY=$IP:0 \
    -v .:/dot \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    dot
    

Virtual Camera Injection

Instructions vary depending on your operating system.

Windows

When installing the OBS VirtualCam plugin, choose Install and register only 1 virtual camera.

  • Run OBS Studio.

  • In the Sources section, press the Add button ("+" sign), select Windows Capture and press OK. In the window that appears, choose "[python.exe]: fomm" in the Window drop-down menu and press OK. Then select Edit -> Transform -> Fit to screen.

  • In OBS Studio, go to Tools -> VirtualCam. Check AutoStart, set Buffered Frames to 0 and press Start.

  • The OBS-Camera camera should now be available in Zoom (or other videoconferencing software).

Ubuntu

sudo apt update
sudo apt install v4l-utils v4l2loopback-dkms v4l2loopback-utils
sudo modprobe v4l2loopback devices=1 card_label="OBS Cam" exclusive_caps=1
v4l2-ctl --list-devices
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt install obs-studio

Open OBS Studio and check whether tools --> v4l2sink exists. If it doesn't, follow these instructions:

mkdir -p ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/
ln -s /usr/lib/obs-plugins/v4l2sink.so ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/

Use the virtual camera with OBS Studio:

  • Open OBS Studio
  • Go to tools --> v4l2sink
  • Select /dev/video2 and YUV420
  • Click on start
  • Join a meeting and select OBS Cam

MacOS

  • Download and install OBS Studio for MacOS from here
  • Open OBS and follow the first-time setup (you might be required to enable certain permissions in System Preferences)
  • Run dot with --use_cam flag to enable camera feed
  • Click the "+" button in the sources section → select "Windows Capture", create a new source and enter "OK" → select window with "python" included in the name and enter OK
  • Click "Start Virtual Camera" button in the controls section
  • Select "OBS Cam" as default camera in the video settings of the application target of the injection

Run dot with an Android emulator

If you are performing a test against a mobile app, virtual cameras are much harder to inject. An alternative is to use mobile emulators and still resort to virtual camera injection.

  • Run dot. Check running dot for more information.

  • Run OBS Studio and set up the virtual camera. Check virtual-camera-injection for more information.

  • Download and Install Genymotion.

  • Open Genymotion and set up the Android emulator.

  • Set up dot with the Android emulator:

    • Open the Android emulator.
    • Click on camera and select OBS-Camera as front and back cameras. A preview of the dot window should appear. If there is no preview, restart OBS and the emulator and try again. If that doesn't work, use different virtual camera software such as e2eSoft VCam or ManyCam.
    • dot deepfake output should be now the emulator's phone camera.

Speed

With GPU

Tested on an AMD Ryzen 5 2600 Six-Core Processor with one NVIDIA GeForce RTX 2070

Simswap: FPS 13.0
Simswap + gpen 256: FPS 7.0
SimswapHQ: FPS 11.0
FOMM: FPS 31.0

With Apple Silicon

Tested on a MacBook Air M2 2022, 16GB

Simswap: FPS 3.2
Simswap + gpen 256: FPS 1.8
SimswapHQ: FPS 2.7
FOMM: FPS 2.0

License

This is not a commercial Sensity product, and it is distributed freely with no warranties.

The software is distributed under the BSD 3-Clause license. dot utilizes several open source libraries; if you use dot, make sure you agree with their licenses too. In particular, this codebase is built on top of the research projects behind the supported methods listed above.

Contributing

If you have ideas for improving dot, feel free to open relevant Issues and PRs. Please read CONTRIBUTING.md before contributing to the repository.

Maintainers

Contributors

Run dot on pre-recorded image and video files

FAQ

  • dot is very slow and I can't run it in real time

Make sure that you are running it on a GPU by using the --use_gpu flag; CPU is not recommended. If you still find it too slow, it may be because you are running it on an older GPU with less than 8GB of memory.

  • Does dot only work with a webcam feed or also with a pre-recorded video?

You can use dot on a pre-recorded video file with these scripts, or try it directly on Colab.

dot's People

Contributors

ajndkr · dependabot[bot] · ghassen-chaabouni · giorgiop · imgbot[bot] · vassilispapadop


dot's Issues

Did you include a watermark in DOT?

Does DOT have a watermark that can be used to determine that a stream uses DOT?

Description:

For DOT to work reliably, independent of whether or not an authenticator uses a watermark detector, it is important that DOT is indistinguishable from an arbitrary real stream. Hence, I was wondering whether you included a watermark or not. Additionally, I would like to kindly ask: would you perhaps be able to enlighten us on whether there are data characteristics/signatures that hint at DOT being used instead of an arbitrary video?

How to enable super resolution

dot
-c ./configs/simswap.yaml
--target 0
--source "./data"
--show_fps
--gpen_type gpen_256

I used this command, but my GPU doesn't seem to be used and the video stutters a lot. I use a 2060 12G graphics card.

run dot

When I try to start dot, it gives an error; what do I have to do?

(dot) C:\Users\1\dot-main>dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Traceback (most recent call last):
  File "C:\Users\1\miniconda3\envs\dot\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\1\miniconda3\envs\dot\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\1\miniconda3\envs\dot\Scripts\dot.exe\__main__.py", line 7, in <module>
  File "C:\Users\1\miniconda3\envs\dot\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\1\miniconda3\envs\dot\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "C:\Users\1\miniconda3\envs\dot\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\1\miniconda3\envs\dot\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\1\dot-main\src\dot\__main__.py", line 206, in main
    run(
  File "C:\Users\1\dot-main\src\dot\__main__.py", line 67, in run
    _dot.generate(
  File "C:\Users\1\dot-main\src\dot\dot.py", line 131, in generate
    option.generate_from_camera(
  File "C:\Users\1\dot-main\src\dot\commons\model_option.py", line 184, in generate_from_camera
    self.create_model(opt_crop_size=opt_crop_size, **kwargs)
  File "C:\Users\1\dot-main\src\dot\simswap\option.py", line 80, in create_model
    self.spNorm = SpecificNorm(use_gpu=self.use_gpu)
  File "C:\Users\1\dot-main\src\dot\simswap\util\norm.py", line 17, in __init__
    self.mean = torch.from_numpy(self.mean).float().cuda()
  File "C:\Users\1\miniconda3\envs\dot\lib\site-packages\torch\cuda\__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

(dot) C:\Users\1\dot-main>dot
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Usage: dot [OPTIONS]
Try 'dot --help' for help.

Error: Missing option '--source'.

(Degree of) Anonymity of the Deepfake Offensive Toolkit?

Would you be able to elaborate on the degree of anonymity when using your Deepfake Offensive Toolkit?

Description:

Thank you for your work, and for sharing it with the world! For some applications, a red-teamer may want to demonstrate the red-teamer was able to perform certain actions without being identified. This leads me to wonder what the degree of anonymity is when using the DOT. To be specific;

  1. Does the DOT feed allow for reverse engineering Face Feature Points for face detection of the original user?
  2. Would you be able to provide the average number of identifying bits that are shared per millisecond using DOT?
  3. In extension to 2, would you perhaps be able to provide a graph with time in milliseconds on the x-axis and the (average*) possible matches based on facial movement patterns on the y-axis? (Analogous to the number of identifying bits of information demonstrated by Panopticlick 3.0.)
  • (*) I assume some people's facial mannerisms are more easily identifiable/unique than others', yet taking this difference into account makes the analysis significantly more difficult.

Add Python 3.10 support

✨ Feature Request

Description:

As a code owner, I want to add Python 3.10 support for dot.

Acceptance Criteria:

  • update setup.cfg
  • update GA workflows + run unit tests

During the dot installation phase, an error appears

I get an error when I follow the instructions; please tell me what's wrong.

Install dot

pip install -e .

(dot) C:\Users\Александр>pip install -e .

Obtaining file:///C:/Users/%D0%90%D0%BB%D0%B5%D0%BA%D1%81%D0%B0%D0%BD%D0%B4%D1%80
ERROR: file:///C:/Users/%D0%90%D0%BB%D0%B5%D0%BA%D1%81%D0%B0%D0%BD%D0%B4%D1%80 does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.

Torch and torchvision check issue

❓ Ask a Question:

Description:

The last step was installing torch and torchvision:

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

after checking with:

python -c "import torch; print(torch.cuda.is_available())

I get error:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\user1\miniconda3\envs\dot\lib\site-packages\torch\__init__.py", line 122, in <module>
    raise err
OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\user1\miniconda3\envs\dot\lib\site-packages\torch\lib\caffe2_detectron_ops.dll" or one of its dependencies.

Wrong torch or cudatoolkit version?

I downloaded pytorch cuda 11.7 and cudatoolkit 11.3.1 (conda install -c anaconda cudatoolkit).

Need help

There is no "[python.exe]: fomm" in Windows dropdown

Following the guide here (https://github.com/sensity-ai/dot#virtual-camera-injection), I can't find "[python.exe]: fomm". I am using OBS 27.0.1 as you describe, since the current version of OBS has a built-in virtual camera, but there is nothing like "[python.exe]: fomm" as described ("In the appeared window, choose "[python.exe]: fomm" in Window drop-down menu and press OK."). There is also no "In OBS Studio, go to Tools -> VirtualCam. Check AutoStart" section in OBS. What am I missing? What version of OBS does dot work with?

ERROR: No matching distribution found for onnxruntime-gpu==1.9.0

❓ Ask a Question:

I am trying to install DOT, but I have an issue while creating the conda environment.

Description:

I have followed the steps to install DOT, but I'm facing this issue:

(base) ➜  dot git:(main) ✗ conda env create -f envs/environment-gpu.yaml
Collecting package metadata (repodata.json): done
Solving environment: done

Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Installing pip dependencies: | Ran pip subprocess with arguments:
['/Users/lfabbro/.miniconda/envs/dot/bin/python', '-m', 'pip', 'install', '-U', '-r', '/Users/lfabbro/Documents/code/dot/envs/condaenv.ywrul920.requirements.txt', '--exists-action=b']
Pip subprocess output:

Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement onnxruntime-gpu==1.9.0 (from versions: none)
ERROR: No matching distribution found for onnxruntime-gpu==1.9.0

failed

CondaEnvException: Pip failed

Additional info:

(base) ➜  dot git:(main) ✗ conda --version
conda 23.3.1
(base) ➜  dot git:(main) ✗ python --version
Python 3.10.9
(base) ➜  dot git:(main) ✗ uname -a
Darwin xwing.local 22.3.0 Darwin Kernel Version 22.3.0: Mon Jan 30 20:38:37 PST 2023; root:xnu-8792.81.3~2/RELEASE_ARM64_T6000 arm64
(base) ➜  dot git:(main) ✗ 

How to set up DOT on Windows?

I can't figure out the installation; there are no instructions for installing DOT itself on Windows in the documentation, only Windows instructions for OBS Studio. Please give advice.

GFPGAN

I believe that incorporating GFPGAN v3 instead of GPEN could give a significant image quality improvement.

An issue with windows installation

I just downloaded Conda from https://www.anaconda.com/products/distribution.
After that, I checked my CUDA version and got 12.0.
I tried to create the environment from the .yaml file but didn't succeed:

PS E:\DeepFace\dot-1.1.0\dot-1.1.0> conda env create -f envs/environment-gpu.yaml

EnvironmentFileNotFound: 'E:\DeepFace\dot-1.1.0\dot-1.1.0\envs\environment-gpu.yaml' file not found

Tried to install cudatoolkit but didn't succeed (11.1, 12.0, and 11.8.0):

PS E:\DeepFace\dot-1.1.0\dot-1.1.0\envs> conda install cudatoolkit=11.1
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:

  - cudatoolkit=11.1

Current channels:

  - https://repo.anaconda.com/pkgs/main/win-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/win-64
  - https://repo.anaconda.com/pkgs/r/noarch
  - https://repo.anaconda.com/pkgs/msys2/win-64
  - https://repo.anaconda.com/pkgs/msys2/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.

this is my nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 528.24       Driver Version: 528.24       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ... WDDM  | 00000000:2D:00.0  On |                  N/A |
|  0%   43C    P8    33W / 340W |   1504MiB / 10240MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

any ideas?

Thanks

Error no Face detected!

When I run on CPU I get the following errors, even though I followed everything correctly:
'strict' is an invalid keyword argument for load()
=== Control keys ===
1-9: Change avatar
1: .\data\ronaldo.png
2: .\data\schwarzenegger.png
3: .\data\Brad Pitt.jpg
4: .\data\David Beckham.jpg
5: .\data\einstein.jpg
6: .\data\eminem.jpg
7: .\data\jobs.jpg
8: .\data\Joe Biden.jpg
9: .\data\Leonardo Dicaprio.jpg
10: .\data\Markiplier.jpg
11: .\data\mona.jpg
12: .\data\obama.jpg
13: .\data\Pewdiepie.jpg
14: .\data\potter.jpg
15: .\data\Tom Cruise.jpg
C:\Users\Pro\miniconda3\envs\dot\lib\site-packages\torch\nn\functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at ..\c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
C:\Users\Pro\miniconda3\envs\dot\lib\site-packages\torch\nn\functional.py:3609: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
warnings.warn(
ERROR: No face detected!
ERROR: No face detected!
ERROR: No face detected!
ERROR: No face detected!

Aborted!

Warnings when running dot

🐛 Bug Report

Description:

Getting warnings when running dot. This issue was mentioned in #59.

PS E:\DeepFace\dot-1.1.0\dot-1.1.0> dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.parallel.data_parallel.DataParallel' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'models.ArcMarginModel' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'models.ResNet' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.batchnorm.BatchNorm2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.activation.PReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.pooling.MaxPool2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.container.Sequential' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'models.SEBlock' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.pooling.AdaptiveAvgPool2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Sigmoid' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.dropout.Dropout' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Python\lib\site-packages\torch\serialization.py:671: SourceChangeWarning: source code of class 'torch.nn.modules.batchnorm.BatchNorm1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
'strict' is an invalid keyword argument for load()
=== Control keys ===
1-9: Change avatar
1: .\data\ronaldo.png
2: .\data\schwarzenegger.png
3: .\data\Brad Pitt.jpg
4: .\data\David Beckham.jpg
5: .\data\einstein.jpg
6: .\data\eminem.jpg
7: .\data\jobs.jpg
8: .\data\Joe Biden.jpg
9: .\data\Leonardo Dicaprio.jpg
10: .\data\Markiplier.jpg
11: .\data\mona.jpg
12: .\data\obama.jpg
13: .\data\Pewdiepie.jpg
14: .\data\potter.jpg
15: .\data\Tom Cruise.jpg
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
C:\Python\lib\site-packages\torch\nn\functional.py:3609: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn(

Fix `--save_folder` CLI Option

🐛 Bug Report

Description:

Actual Behavior:

See #12 (comment) for details.

Expected Behavior:

Files saved using --save_folder should go into the chosen folder, instead of the root directory.

Need info

Hello, I'm new to programming and I had great difficulty installing dot successfully, but I did it.
I just want to ask: do I only need to know Python to use dot successfully?

Issue with Colab Demo

Hello,

When running the Colab demo, I encounter this issue when attempting to run the final cell (the one that performs the swap on a video):

python: can't open file 'scripts/video_swap.py': [Errno 2] No such file or directory

If I correct the path to /content/dot/scripts/video_swap.py I get this error:

Loading config: ./dot/simswap/configs/config.yaml
Traceback (most recent call last):
  File "scripts/video_swap.py", line 77, in <module>
    main()
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1126, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1051, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1393, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 752, in invoke
    return __callback(*args, **kwargs)
  File "scripts/video_swap.py", line 53, in main
    with open(config) as f:
FileNotFoundError: [Errno 2] No such file or directory: './dot/simswap/configs/config.yaml'

Finally, if I move the "dot" folder in "src" to the main "dot" folder (following the error above), I receive this error:

Traceback (most recent call last):
  File "/content/dot/scripts/video_swap.py", line 10, in <module>
    import dot
ModuleNotFoundError: No module named 'dot'

After that I am stumped, though I am sure there is something quite obvious that I am missing.

GPU tesla_a100

I have difficulty running dot with the GPU tesla_a100; any suggestions?

Detect relatively small/far-away faces for swap

I noticed that the face is not always detected in target images, and this happens when the face is relatively small in the target image. I have been digging into it a bit, and as far as I understand this has something to do with the integration of the Google Mediapipe Face Mesh into Simswap: Face Mesh is not able to detect such small faces. When using Simswap from the original repo this is not an issue and the swap always works fine. I also read that Mediapipe has no solution yet for smaller faces in combination with Face Mesh, though it has one for Face Detection.

Is there a way to bypass the Face Mesh pipeline and make the swap happen using the original Simswap? Or is there maybe another way to make swapping of relatively small faces possible, or to implement something for this?

SimSwap & SimSwap HQ are not working

🐛 Bug Report

SimSwap and SimSwapHQ are not working.
FOMM and face swapping are.

As I try to run SimSwap I get the error code below:

(dot) C:\Users\alber\Desktop\dot-main>dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.parallel.data_parallel.DataParallel' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'models.ArcMarginModel' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'models.ResNet' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.batchnorm.BatchNorm2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.activation.PReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.pooling.MaxPool2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.container.Sequential' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'models.SEBlock' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.pooling.AdaptiveAvgPool2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Sigmoid' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.dropout.Dropout' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py:888: SourceChangeWarning: source code of class 'torch.nn.modules.batchnorm.BatchNorm1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
Traceback (most recent call last):
  File "C:\Users\alber\miniconda3\envs\dot\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\alber\miniconda3\envs\dot\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\alber\miniconda3\envs\dot\Scripts\dot.exe\__main__.py", line 7, in <module>
  File "C:\Users\alber\miniconda3\envs\dot\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\alber\miniconda3\envs\dot\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "C:\Users\alber\miniconda3\envs\dot\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\alber\miniconda3\envs\dot\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\alber\Desktop\dot-main\src\dot\__main__.py", line 206, in main
    run(
  File "C:\Users\alber\Desktop\dot-main\src\dot\__main__.py", line 67, in run
    _dot.generate(
  File "C:\Users\alber\Desktop\dot-main\src\dot\dot.py", line 131, in generate
    option.generate_from_camera(
  File "C:\Users\alber\Desktop\dot-main\src\dot\commons\model_option.py", line 184, in generate_from_camera
    self.create_model(opt_crop_size=opt_crop_size, **kwargs)
  File "C:\Users\alber\Desktop\dot-main\src\dot\simswap\option.py", line 100, in create_model
    self.model = create_model(
  File "C:\Users\alber\Desktop\dot-main\src\dot\simswap\models\models.py", line 31, in create_model
    model.initialize(
  File "C:\Users\alber\Desktop\dot-main\src\dot\simswap\fs_model.py", line 76, in initialize
    netArc_checkpoint = torch.load(arcface_model_path)
  File "C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py", line 815, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\serialization.py", line 1043, in _legacy_load
    result = unpickler.load()
  File "C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\optim\sgd.py", line 30, in __setstate__
    super().__setstate__(state)
  File "C:\Users\alber\miniconda3\envs\dot\lib\site-packages\torch\optim\optimizer.py", line 214, in __setstate__
    self.defaults.setdefault('differentiable', False)
AttributeError: 'SGD' object has no attribute 'defaults'
[ WARN:[email protected]] global D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (539) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback

Best regards

windows installation

Can you provide UPDATED step-by-step detailed instructions for Windows? At the moment, the steps you have provided do not result in a successful installation of dot on Windows.
Please provide the exact supported versions of CUDA, conda and Windows, and what I should type in the console to get a working PyTorch.
I am constantly getting different errors with the various installation methods.
I even re-installed Windows, and I've tried many versions of conda and different command lines to get torch, etc.
The worst thing is that if you did any of the steps wrong, it only becomes clear at the end, so you have to retry all the NEW steps (using Google) half at random, including the command lines in the terminal. And at the end you get a new error and have to do it all over again. At this point it's more like a lottery than an installation. Help me, please.
It would be great if we had a step-by-step tutorial with screenshots and the exact versions of each component and the command lines to get them.

How can I convert it into exe?

Hi,
how can I convert dot, fully with all of its libraries, into an exe?
I tried a few things but it didn't work out for me.
Could anyone please guide me on which tools to use?

Thank You

How to run dot?

Ask a Question:

Did everything as described, but Zoom/OBS is not showing anything, only a black OBS screen and my mouse movements.

How do I set it up to use the facecam?

