
Inpaint Anything (Inpainting with Segment Anything)

Inpaint Anything performs Stable Diffusion inpainting in a browser UI, using any mask selected from the output of Segment Anything.

Using Segment Anything enables users to specify masks by simply pointing to the desired areas, instead of manually filling them in. This can increase the efficiency and accuracy of the mask creation process, leading to potentially higher-quality inpainting results while saving time and effort.

An extension version is also available for AUTOMATIC1111's Web UI.

Explanation image

Installation

Please follow these steps to install the software:

  • Create a new conda environment:
conda create -n inpaint python=3.10
conda activate inpaint
  • Clone the software repository:
git clone https://github.com/Uminosachi/inpaint-anything.git
cd inpaint-anything
  • For the CUDA environment, install the following packages:
pip install -r requirements.txt
  • If you are using macOS, please install the package from the following file instead:
pip install -r requirements_mac.txt
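
Optionally, you can verify that PyTorch detects your GPU before launching the app. This is just a quick sanity check, not part of the official installation steps:

python -c "import torch; print(torch.cuda.is_available())"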

Running the application

python iasam_app.py
  • Open http://127.0.0.1:7860/ in your browser.
  • Note: If you have a privacy protection extension enabled in your web browser, such as DuckDuckGo, you may not be able to retrieve the mask from your sketch.

Options

  • --save-seg: Save the segmentation image generated by SAM.
  • --offline: Execute inpainting without a network connection (requires the models to be cached locally).
  • --sam-cpu: Perform the Segment Anything operation on CPU.
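
For example, to save the segmentation image and run Segment Anything on the CPU, the options can be combined in a single launch command (an illustrative invocation):

python iasam_app.py --save-seg --sam-cpu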

Downloading the Model

Usage

  • Drag and drop your image onto the input image area.
    • Outpainting can be achieved with the Padding options: configure the scale and balance, then click the Run Padding button.
    • The Anime Style checkbox enhances segmentation mask detection, particularly in anime style images, at the expense of a slight reduction in mask quality.
  • Click on the Run Segment Anything button.
  • Use the sketch tool to point at the area you want to inpaint. You can undo strokes and adjust the pen size.
    • Hover over either the SAM image or the mask image and press the S key for Fullscreen mode, or the R key to Reset zoom.
  • Click on the Create mask button. The mask will appear in the selected mask image area.

Mask Adjustment

  • Expand mask region button: Use this to slightly expand the area of the mask for broader coverage (see the sketch after this list).
  • Trim mask by sketch button: Clicking this will exclude the sketched area from the mask.
  • Add mask by sketch button: Clicking this will add the sketched area to the mask.
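
Conceptually, expanding the mask region amounts to a morphological dilation of the mask. The sketch below illustrates the technique with OpenCV; it is only an illustration of the idea, not necessarily the app's exact implementation, and the kernel size and file names are arbitrary:

import cv2
import numpy as np

# mask: 255 inside the masked region, 0 outside
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((5, 5), np.uint8)  # structuring element; larger kernels expand further
expanded = cv2.dilate(mask, kernel, iterations=1)
cv2.imwrite("mask_expanded.png", expanded)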

Inpainting Tab

  • Enter your desired Prompt and Negative Prompt, then choose the Inpainting Model ID.
  • Click on the Run Inpainting button (Please note that it may take some time to download the model for the first time).
    • In the Advanced options, you can adjust the Sampler, Sampling Steps, Guidance Scale, and Seed.
    • If you enable the Mask area Only option, modifications will be confined to the designated mask area.
  • Adjust the iteration slider to perform inpainting multiple times with different seeds.
  • The inpainting process is powered by diffusers.
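
Under the hood this corresponds to a diffusers inpainting pipeline. The following is a minimal sketch of that flow, assuming a CUDA device and example file names; the app exposes the same parameters (Prompt, Sampling Steps, Guidance Scale, Seed) through the UI, though its actual code may differ in detail:

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load an inpainting model (downloaded to the Hugging Face cache on first use)
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Uminosachi/dreamshaper_5-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB")
mask_image = Image.open("mask.png").convert("L")  # white = area to inpaint

result = pipe(
    prompt="a wooden bench in a park",
    negative_prompt="low quality",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
result.save("inpainted.png")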

Tips

  • You can directly drag and drop the inpainted image into the input image field on the Web UI. (useful with Chrome and Edge browsers)

Model Cache

  • Inpainting models that are saved in the Hugging Face cache and include inpaint (case-insensitive) in their repo_id will also be added to the Inpainting Model ID dropdown list.
    • If there's a specific model you'd like to use, you can cache it in advance using the following Python commands:
from diffusers import StableDiffusionInpaintPipeline
pipe = StableDiffusionInpaintPipeline.from_pretrained("Uminosachi/dreamshaper_5-inpainting")
exit()
  • Models downloaded by diffusers are typically stored in your home directory: /home/username/.cache/huggingface/hub on Linux and macOS, or C:\Users\username\.cache\huggingface\hub on Windows.
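
To check which cached repos would qualify for the dropdown, you can inspect the cache with huggingface_hub. This is a minimal sketch of the idea; the app's own listing logic may differ in detail:

from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()
inpaint_repos = [
    repo.repo_id
    for repo in cache_info.repos
    if repo.repo_type == "model" and "inpaint" in repo.repo_id.lower()
]
print(inpaint_repos)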

Cleaner Tab

  • Choose the Cleaner Model ID.
  • Click on the Run Cleaner button (Please note that it may take some time to download the model for the first time).
  • The cleaning process is performed using Lama Cleaner.

Mask only Tab

  • Lets you save the mask on its own, without any other processing, so it can then be used in other graphics applications.
  • Get mask as alpha of image button: Save the mask as an RGBA image, with the mask placed in the alpha channel of the input image.
  • Get mask button: Save the mask as an RGB image.
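
The alpha-channel output is easy to reproduce or post-process with Pillow. A minimal sketch, assuming the input image and the saved mask are available as files (the file names are examples):

from PIL import Image

image = Image.open("input.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # white = masked region

rgba = image.copy()
rgba.putalpha(mask)  # the mask becomes the alpha channel
rgba.save("input_with_mask_alpha.png")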

UI image

Auto-saving images

  • The inpainted image is automatically saved to a folder named after the current date inside the outputs directory.

Development

With the Inpaint Anything library, you can perform segmentation and create masks using sketches from other applications.
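
For example, mask generation from a point prompt can be scripted with the underlying segment-anything package. This sketch uses Meta's SAM API directly rather than Inpaint Anything's own wrapper functions; the checkpoint and image file names are examples:

import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (use whichever model file you downloaded)
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("input.png").convert("RGB"))
predictor.set_image(image)

# A single foreground point roughly where the user sketched
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean H x W array
Image.fromarray((best_mask * 255).astype(np.uint8)).save("mask.png")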

License

The source code is licensed under the Apache 2.0 license.

inpaint-anything's Issues

Already ran pip install, but it still looks like this

╰─ pip install diffusers
Requirement already satisfied: diffusers in c:\users\administrator\miniconda3\lib\site-packages (0.16.1)
Requirement already satisfied: Pillow in c:\users\administrator\miniconda3\lib\site-packages (from diffusers) (9.5.0)
Requirement already satisfied: filelock in c:\users\administrator\miniconda3\lib\site-packages (from diffusers) (3.12.4)
Requirement already satisfied: huggingface-hub>=0.13.2 in c:\users\administrator\miniconda3\lib\site-packages (from diffusers) (0.17.3)
Requirement already satisfied: importlib-metadata in c:\users\administrator\miniconda3\lib\site-packages (from diffusers) (6.8.0)
Requirement already satisfied: numpy in c:\users\administrator\miniconda3\lib\site-packages (from diffusers) (1.26.0)
Requirement already satisfied: regex!=2019.12.17 in c:\users\administrator\miniconda3\lib\site-packages (from diffusers) (2023.8.8)
Requirement already satisfied: requests in c:\users\administrator\miniconda3\lib\site-packages (from diffusers) (2.29.0)
Requirement already satisfied: fsspec in c:\users\administrator\miniconda3\lib\site-packages (from huggingface-hub>=0.13.2->diffusers) (2023.9.2)
Requirement already satisfied: tqdm>=4.42.1 in c:\users\administrator\miniconda3\lib\site-packages (from huggingface-hub>=0.13.2->diffusers) (4.66.1)
Requirement already satisfied: pyyaml>=5.1 in c:\users\administrator\miniconda3\lib\site-packages (from huggingface-hub>=0.13.2->diffusers) (6.0.1)
Requirement already satisfied: typing-extensions>=3.7.4.3 in c:\users\administrator\miniconda3\lib\site-packages (from huggingface-hub>=0.13.2->diffusers) (4.8.0)
Requirement already satisfied: packaging>=20.9 in c:\users\administrator\miniconda3\lib\site-packages (from huggingface-hub>=0.13.2->diffusers) (23.0)
Requirement already satisfied: zipp>=0.5 in c:\users\administrator\miniconda3\lib\site-packages (from importlib-metadata->diffusers) (3.17.0)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\administrator\miniconda3\lib\site-packages (from requests->diffusers) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in c:\users\administrator\miniconda3\lib\site-packages (from requests->diffusers) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\administrator\miniconda3\lib\site-packages (from requests->diffusers) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\administrator\miniconda3\lib\site-packages (from requests->diffusers) (2023.5.7)
Requirement already satisfied: colorama in c:\users\administrator\miniconda3\lib\site-packages (from tqdm>=4.42.1->huggingface-hub>=0.13.2->diffusers) (0.4.6)

Inpaint Anything runtime error

Running Inpaint Anything always fails with an out-of-memory error and has never succeeded once. My system is an R5 3600, an RTX 3060 Ti, and 16 GB of RAM. Is this card simply unable to run this plugin?
2024-02-03 20:39:53,872 - Inpaint Anything - ERROR - Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 3.26 GiB
Requested : 2.64 GiB
Device limit : 8.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

How to create custom huggingface model?

Hey, thanks for your project!
I want to create an inpainting model from absolutereality_v181INPAINTING.safetensors, but I don't know how to create custom models on Hugging Face. You mentioned it here, but I am new to this. Is there a tutorial for creating custom models like yours, or can you guide me on how to do it and on which platform? Also, I don't want to use the SD WebUI extension; I want this in the standalone version of Inpaint Anything!

Failed in Run inpainting (ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU)

CPU: i9 13900k
GPU:4090
I followed the GitHub tutorial, but on the last step, Run Inpainting, it shows an error message after a few minutes of loading. Please, someone help; thank you so much.

error message below.

Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
input_image: (4000, 6000, 3) uint8
SamAutomaticMaskGenerator sam_vit_b_01ec64.pth
SAM processing time: 124.52666926383972
Uminosachi/dreamshaper_6Inpainting
Downloading (…)ain/model_index.json: 100%|█████████████████████████████████████████████████████████| 579/579 [00:00<00:00, 1.15MB/s]
text_encoder\model.safetensors not found
Downloading (…)cheduler_config.json: 100%|█████████████████████████████████████████████████████████████████| 460/460 [00:00<?, ?B/s]
Downloading (…)rocessor_config.json: 100%|█████████████████████████████████████████████████████████████████| 520/520 [00:00<?, ?B/s]
Downloading (…)_checker/config.json: 100%|█████████████████████████████████████████████████████| 4.58k/4.58k [00:00<00:00, 4.59MB/s]
Downloading (…)_encoder/config.json: 100%|█████████████████████████████████████████████████████████████████| 612/612 [00:00<?, ?B/s]
Downloading (…)cial_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████| 472/472 [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 100%|█████████████████████████████████████████████████████████| 737/737 [00:00<00:00, 1.47MB/s]
Downloading (…)6c9/unet/config.json: 100%|█████████████████████████████████████████████████████| 1.55k/1.55k [00:00<00:00, 2.60MB/s]
Downloading (…)tokenizer/merges.txt: 100%|████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 931kB/s]
Downloading (…)e6c9/vae/config.json: 100%|█████████████████████████████████████████████████████████████████| 577/577 [00:00<?, ?B/s]
Downloading (…)tokenizer/vocab.json: 100%|█████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 1.34MB/s]
Downloading pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████| 246M/246M [00:20<00:00, 11.9MB/s]
Downloading (…)on_pytorch_model.bin: 100%|███████████████████████████████████████████████████████| 167M/167M [00:29<00:00, 5.67MB/s]
Downloading pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████| 608M/608M [00:46<00:00, 13.1MB/s]
Downloading (…)on_pytorch_model.bin: 100%|█████████████████████████████████████████████████████| 1.72G/1.72G [01:55<00:00, 14.9MB/s]
Fetching 15 files: 100%|████████████████████████████████████████████████████████████████████████████| 15/15 [01:57<00:00, 7.83s/it]
C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
The config attributes {'addition_embed_type': None, 'addition_embed_type_num_heads': 64, 'class_embeddings_concat': False, 'cross_attention_norm': None, 'encoder_hid_dim': None, 'mid_block_only_cross_attention': None, 'resnet_out_scale_factor': 1.0, 'resnet_skip_time_act': False, 'time_embedding_act_fn': None, 'time_embedding_dim': None} were passed to UNet2DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Using sampler DDIM
Traceback (most recent call last):
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\gradio\routes.py", line 427, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Researchteam\Inpainting anything\inpaint-anything\iasam_app.py", line 465, in run_inpaint
pipe.enable_xformers_memory_efficient_attention()
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1080, in enable_xformers_memory_efficient_attention
self.set_use_memory_efficient_attention_xformers(True, attention_op)
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1105, in set_use_memory_efficient_attention_xformers
fn_recursive_set_mem_eff(module)
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1096, in fn_recursive_set_mem_eff
module.set_use_memory_efficient_attention_xformers(valid, attention_op)
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\models\modeling_utils.py", line 219, in set_use_memory_efficient_attention_xformers
fn_recursive_set_mem_eff(module)
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\models\modeling_utils.py", line 215, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\models\modeling_utils.py", line 215, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\models\modeling_utils.py", line 215, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\models\modeling_utils.py", line 212, in fn_recursive_set_mem_eff
module.set_use_memory_efficient_attention_xformers(valid, attention_op)
File "C:\Users\user\anaconda3\envs\inpaint\lib\site-packages\diffusers\models\attention.py", line 104, in set_use_memory_efficient_attention_xformers
raise ValueError(
ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU

CUDA out of memory with 24gb VRAM

I'm getting a CUDA out of memory error with continual inpainting and segmenting. It seems to build up VRAM usage and not release it. Here it is stuck at 1/20 and continually eating up more RAM, over 50 GB.

Is there a way to import custom models into the program?

For example, I downloaded juggernautxlinpaint from civitai and would like to experiment with that and others.

I tried placing it in the models directory, but it didn't do anything. I then tried placing it in the Hugging Face cache, but it also didn't show up in the program's dropdown menu. I found the file ia_ui_items.py and ran it, but that didn't seem to change anything.

I then modified it trying to force it to rescan the cache using the following code:

from huggingface_hub import scan_cache_dir

def get_inp_model_ids():
    # model_ids is assumed to be defined earlier in ia_ui_items.py
    hf_cache_info = scan_cache_dir()
    inpaint_repos = []
    for repo in hf_cache_info.repos:
        if repo.repo_type == "model" and "inpaint" in repo.repo_id.lower() and repo.repo_id not in model_ids:
            inpaint_repos.append(repo.repo_id)
    inp_list_from_cache = sorted(inpaint_repos, reverse=True, key=lambda x: x.split("/")[-1])
    model_ids.extend(inp_list_from_cache)
    return model_ids

It seemed to run without errors, but I refreshed/relaunched and nothing changed.

How do I add custom models to the program?
Is there a directory in the program where I need to add my own folder, and what should it be named?

Inpaint doesn't take prompt

Hello, I found this extension through YouTube and I'm interested in trying it out. I've downloaded the model and inserted the image, but when I fill in the prompt in Inpaint and click Run Inpainting, it only gives the masked area as output.

  1. Model (screenshot)

  2. Inpaint prompt + Mask selection (screenshots)

  3. Result + CMD (screenshots)

Please help, am I missing something?

In the Inpaint Anything plugin, the realisticVisionV51_v51VAE-inpainting model has been downloaded offline and placed in the corresponding folder, but the file cannot be loaded

WARNING:huggingface_hub.utils._http:'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /Uminosachi/realisticVisionV51_v51VAE-inpainting/resolve/main/model_index.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002219323F6A0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/Uminosachi/realisticVisionV51_v51VAE-inpainting/resolve/main/model_index.json
2024-01-17 10:37:39,215 - Inpaint Anything - ERROR - We have no connection or you passed local_files_only, so force_download is not an accepted option.

Support Torch 2.1.0

The requirements.txt file specifies torch==2.0.1, but it may be better to just specify torch without locking it to a specific version number.
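
For example, the pin could be relaxed to a minimum version, or dropped entirely (an illustration of the suggestion, not the project's current requirements.txt):

torch>=2.0.1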

Save masked image as transparent PNG

This tool would be great for selecting parts of an image and then saving them as a PNG with the unmasked parts treated as transparent. Would it be possible to add this transparency feature?
