
crfill

Usage | Web App | Paper | Supplementary Material | More results

Code for the paper "Image Inpainting with Contextual Reconstruction Loss". This repo (including code and models) is for research purposes only.

Usage

Basic usage

  1. Download the code and model:
git clone --single-branch https://github.com/zengxianyu/crfill

Download the model files and put them in the ./files/ directory.

  2. Install dependencies:
conda env create -f environment.yml

or manually install these packages in a Python 3.6 environment:

pytorch=1.3.1, opencv=3.4.2, tqdm

  3. Use the code:

with GPU:

python test.py --image path/to/images --mask path/to/hole/masks --output path/to/save/results

without GPU:

python test.py --image path/to/images --mask path/to/hole/masks --output path/to/save/results --nogpu

path/to/images is the path to the folder of input images; path/to/hole/masks is the path to the folder of hole masks; path/to/save/results is where the results will be saved.

Hole masks are grayscale images in which a pixel value > 0 indicates that the pixel at the corresponding position is missing and will be replaced with newly generated content.

📣 📣 The white area of a hole mask should fully cover all pixels in the missing regions. 📣 📣
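To illustrate the expected mask format, here is a minimal sketch (not part of the repo) that builds a rectangular hole mask with NumPy; `make_hole_mask` and its `box` parameter are names invented for this example.

```python
import numpy as np

def make_hole_mask(height, width, box):
    """Return a grayscale hole mask: 255 inside the hole `box`
    (top, left, bottom, right), 0 elsewhere. Any value > 0 marks
    a pixel to be filled, per the mask format described above."""
    mask = np.zeros((height, width), dtype=np.uint8)
    top, left, bottom, right = box
    mask[top:bottom, left:right] = 255
    return mask

# A 256x256 mask with a centered 128x128 hole
mask = make_hole_mask(256, 256, (64, 64, 192, 192))
```

The mask can then be saved next to its image, e.g. with `cv2.imwrite("path/to/hole/masks/0001.png", mask)` (OpenCV is already a listed dependency).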

Web App

For convenience of visualization and evaluation, I provide an inpainting web app that lets you interact with the inpainting model in a browser: open a photo and draw the area to remove. To use the web app, these additional packages are required:

flask, requests, pillow

Then execute the following:

With GPU:

cd app
python hello.py

Without GPU:

cd app
python hello.py --nogpu

After that, open http://localhost:2334 in the browser
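The app can also be driven programmatically. The README does not document the app's HTTP interface, so the field names below (`"image"`, `"mask"`) and the idea of a JSON payload are purely assumptions; this sketch only shows how such a payload could be assembled before sending it with `requests`.

```python
import base64
import json

def build_inpaint_payload(image_bytes, mask_bytes):
    """Sketch only: pack an image and its hole mask into a JSON payload.
    The actual endpoint and field names of the app are not documented
    here, so treat "image"/"mask" as placeholder assumptions."""
    return json.dumps({
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "mask": base64.b64encode(mask_bytes).decode("ascii"),
    })

payload = build_inpaint_payload(b"fake-image-bytes", b"fake-mask-bytes")
```

Inspect the app's JavaScript (or `app/hello.py` itself) to find the real route and payload shape before using anything like this.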

The adjusted model for high-res inpainting

To use the adjusted model for high-res inpainting (specify the option --nogpu to run on CPU):

python test.py --opt nearestx2 --load ./files/model_near512.pth \
--image path/to/images \
--mask path/to/hole/masks \
--output path/to/save/results

By default, the Web App selects between the two models based on the image size: the adjusted model is used if the short side is >= 512. To manually specify the model used in the Web App:

cd app
python hello.py --opt nearestx2 --load ./files/model_near512.pth

or

cd app
python hello.py --opt convnet --load ./files/model_256.pth
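The default selection rule described above can be sketched as follows (a sketch for illustration; the actual logic lives in the app code, and `pick_model` is a name invented here):

```python
def pick_model(height, width):
    """Mirror the README's rule: use the high-res "nearestx2" model
    when the image's short side is >= 512, else the base 256 model."""
    if min(height, width) >= 512:
        return ("nearestx2", "./files/model_near512.pth")
    return ("convnet", "./files/model_256.pth")
```

For example, a 512x800 photo gets the adjusted model, while a 256x1024 strip still uses the base model because its short side is only 256.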

Auxiliary network

The auxiliary network (used during training) is defined in networks/auxiliary.py. The full training code will be released after the paper is accepted.

