continue-revolution / sd-webui-segment-anything
Segment Anything for Stable Diffusion WebUI
I'd like to write a feature to expose this model through the WebUI API. This would be fairly straightforward to accomplish, but ideally the endpoint would accept text as the prompt rather than dots. When do you expect the integration with GroundingDINO to be completed? It's something I've worked on before, so if you haven't started yet I could probably knock it out today.
How can I set GroundingDINO to run on the CPU? When running on the GPU, the error "NameError: name '_C' is not defined" appears.
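Not an official switch for this extension, but one generic workaround is to hide all CUDA devices before torch initializes, so every model falls back to the CPU. Note the `_C` error itself usually means the GroundingDINO C++ extension failed to build, so it may persist regardless. A sketch, under those assumptions:

```python
import os

# Hide all CUDA devices *before* torch is imported, so that
# torch.cuda.is_available() reports False and models load on the CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

def pick_device() -> str:
    """Return the device string the rest of the code should use."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # torch not installed at all: CPU is the only option anyway
        return "cpu"

print(pick_device())
```

This only helps if set before the WebUI process imports torch, e.g. via the launcher's environment.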
Several extensions are all using the SAM and DINO models... a single SAM model is 2.5 GB, and downloading several copies in a row is a real hassle.
Right now I use symlinks to place them inside each extension; I still hope they can eventually be migrated under the models folder, the way ControlNet does it.
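Until the extension supports a shared models directory, the symlink approach described above can be scripted. A minimal sketch (the paths and the helper name are made up for illustration):

```python
import os

def link_shared_model(shared_path: str, extension_models_dir: str) -> str:
    """Create a symlink inside an extension's model folder that points at a
    single shared copy of a large checkpoint, instead of duplicating it."""
    os.makedirs(extension_models_dir, exist_ok=True)
    link_path = os.path.join(extension_models_dir, os.path.basename(shared_path))
    if not os.path.lexists(link_path):
        os.symlink(shared_path, link_path)
    return link_path
```

On Windows, creating symlinks may require Developer Mode or administrator rights; hard links or junctions are the usual fallback there.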
Directory migration
D:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\torch\include\c10/util/Optional.h(554): note: see reference to alias template instantiation 'c10::OptionalBase<T>' being compiled
with
[
T=std::vector<at::Tensor,std::allocator<at::Tensor>>
]
D:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\torch\include\torch\csrc\api\include\torch/optim/lbfgs.h(50): note: see reference to class template instantiation 'c10::optional<std::vector<at::Tensor,std::allocator<at::Tensor>>>' being compiled
D:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\torch\include\c10/util/Optional.h(446): warning C4624: 'c10::trivially_copyable_optimization_optional_base<T>': destructor was implicitly defined as deleted
with
[
T=std::vector<at::Tensor,std::allocator<at::Tensor>>
]
D:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\torch\include\torch/csrc/python_headers.h(12): fatal error C1083: Cannot open include file: 'Python.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for groundingdino
error: subprocess-exited-with-error
× Building wheel for pycocotools (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\pycocotools
copying pycocotools\coco.py -> build\lib.win-amd64-cpython-310\pycocotools
copying pycocotools\cocoeval.py -> build\lib.win-amd64-cpython-310\pycocotools
copying pycocotools\mask.py -> build\lib.win-amd64-cpython-310\pycocotools
copying pycocotools\__init__.py -> build\lib.win-amd64-cpython-310\pycocotools
running build_ext
building 'pycocotools._mask' extension
creating build\temp.win-amd64-cpython-310
creating build\temp.win-amd64-cpython-310\Release
creating build\temp.win-amd64-cpython-310\Release\common
creating build\temp.win-amd64-cpython-310\Release\pycocotools
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\numpy\core\include" -I./common "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\include" "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tc./common/maskApi.c /Fobuild\temp.win-amd64-cpython-310\Release\./common/maskApi.obj
maskApi.c
./common/maskApi.c(151): warning C4101: 'xp': unreferenced local variable
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\numpy\core\include" -I./common "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\include" "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcpycocotools/_mask.c /Fobuild\temp.win-amd64-cpython-310\Release\pycocotools/_mask.obj
GroundingDINO install failed. Please submit an issue to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues.
_mask.c
c1: fatal error C1083: Cannot open source file: 'pycocotools/_mask.c': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycocotools
ERROR: Could not build wheels for pycocotools, which is required to install pyproject.toml-based projects
Help! Does anyone know what went wrong here and how to fix it?
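For what it's worth, the C1083 failure above usually means the interpreter's C headers are missing from the build's include path. A quick, launcher-agnostic way to check where this Python expects its headers (a diagnostic sketch, not a fix):

```python
import os
import sysconfig

# Where this interpreter's C headers (Python.h) should live.
include_dir = sysconfig.get_paths()["include"]
print(include_dir)

# If this prints False, compiling C extensions such as pycocotools or
# groundingdino will fail with C1083: Cannot open include file: 'Python.h'.
print(os.path.isfile(os.path.join(include_dir, "Python.h")))
```

Embedded/portable Python distributions often ship without the `include` directory; copying the headers from a matching full Python install is the usual remedy.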
Please add a button to unload the model.
I tried the GroundingDINO API.
Here is the test script:
import base64
import requests
from PIL import Image
from io import BytesIO

url = "http://127.0.0.1:7860/sam-webui/image-mask"

def image_to_base64(img_path: str) -> str:
    with open(img_path, "rb") as img_file:
        return base64.b64encode(img_file.read()).decode()

payload = {
    "image": image_to_base64("out1.png"),
    "prompt": "body",
    "box_threshold": 0.3,
}

res = requests.post(url, json=payload)
print(res)
for dct in res.json():
    image_data = base64.b64decode(dct["image"])
    image = Image.open(BytesIO(image_data))
    image.show()
The execution output is:
C:\Users\Jyce\Desktop>stable-diffusion-webui\venv\Scripts\python.exe sagd.py
<Response [500]>
Traceback (most recent call last):
File "C:\Users\Jyce\Desktop\sagd.py", line 23, in <module>
image_data = base64.b64decode(dct['image'])
TypeError: string indices must be integers
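The `TypeError` arises because the server answered 500 with an error object (a dict); iterating over a dict yields its string keys, so `dct['image']` ends up indexing into a string. A more defensive client would check the status and payload shape first (a sketch; the field names follow the script above):

```python
import base64

def decode_masks(res):
    """Return decoded mask bytes from a /sam-webui/image-mask response,
    raising instead of silently iterating over an error payload."""
    if res.status_code != 200:
        raise RuntimeError(f"API error {res.status_code}: {res.text}")
    payload = res.json()
    if isinstance(payload, dict):  # error shape, not a list of masks
        raise RuntimeError(f"API returned an error object: {payload}")
    return [base64.b64decode(item["image"]) for item in payload]
```

With this helper, the 500 above would surface as a clear `RuntimeError` instead of a confusing `TypeError`.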
How do I generate the mask images?
webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: a5c000f
Launching Web UI with arguments: --xformers --api --gradio-img2img-tool color-sketch
Start SAM Processing
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
final text_encoder_type: bert-base-uncased
C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py:768: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Initializing SAM
Running SAM Inference (512, 512, 3)
SAM inference with 2 boxes, point prompts discarded
Creating output image
API error: POST: http://127.0.0.1:7860/sam-webui/image-mask {'error': 'AttributeError', 'detail': '', 'body': '', 'errors': "'list' object has no attribute 'save'"}
Traceback (most recent call last):
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
return self.receive_nowait()
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 78, in call_next
message = await recv_stream.receive()
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\modules\api\api.py", line 145, in exception_handling
return await call_next(request)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
raise app_exc
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 108, in __call__
response = await self.dispatch_func(request, call_next)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\modules\api\api.py", line 110, in log_and_time
res: Response = await call_next(req)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
raise app_exc
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
await responder(scope, receive, send)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
return await dependant.call(**values)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 53, in process_image
response = [{"image": pil_image_to_base64(mask)} for mask in masks]
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 53, in <listcomp>
response = [{"image": pil_image_to_base64(mask)} for mask in masks]
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 28, in pil_image_to_base64
img.save(buffered, format="JPEG")
AttributeError: 'list' object has no attribute 'save'
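The server-side `AttributeError` suggests `masks` contains nested lists rather than PIL images, so a list reaches `img.save(...)`. One possible fix is to flatten before encoding (a sketch against the `pil_image_to_base64` helper named in the traceback; the flattener below is a generic illustration, not the extension's code):

```python
def iter_images(masks):
    """Yield leaf items from an arbitrarily nested list of masks, so each
    element handed to the encoder is an image, never a list."""
    for item in masks:
        if isinstance(item, list):
            yield from iter_images(item)
        else:
            yield item

# Inside the endpoint, the response could then be built as:
# response = [{"image": pil_image_to_base64(m)} for m in iter_images(masks)]
```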
Extra bonus: Is it possible to add an option to the API for setting the "expand mask" value?
Note: including file: D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\include\ATen/ops/special_airy_ai.h
with
[
T=c10::SymInt
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: see reference to alias template instantiation 'c10::OptionalBase<c10::SymInt>' being compiled
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/core/TensorImpl.h(1602): note: see reference to class template instantiation 'c10::optional<c10::SymInt>' being compiled
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: 'c10::trivially_copyable_optimization_optional_base': destructor was implicitly defined as deleted
with
[
T=c10::SymInt
]
[The same warning C4624 / note pattern from c10/util/Optional.h repeats for T = std::basic_string<char>, c10::QualifiedName, at::TensorBase, at::Tensor, at::Generator, c10::Scalar, std::shared_ptr<torch::jit::CompilationUnit>, std::weak_ptr<torch::jit::CompilationUnit>, std::vector<c10::ShapeSymbol>, std::vector<bool>, std::vector<c10::optional<c10::Stride>>, and std::vector<c10::optional<__int64>>; the log is cut off mid-warning.]
with
[
T=std::vector<c10::optional<__int64>,std::allocator<c10::optional<__int64>>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xb1\xf0\xc3\xfb ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::OptionalBase<std::vector<c10::optional<__int64>,std::allocator<c10::optional<__int64>>>>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\ATen/core/jit_type.h(569): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::optional<std::vector<c10::optional<__int64>,std::allocator<c10::optional<__int64>>>>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\ATen/core/jit_type.h(615): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::VaryingShape<__int64>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: \xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=std::vector<c10::optional<__int64>,std::allocator<c10::optional<__int64>>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(212): warning C4624: \xa1\xb0c10::constexpr_storage_t\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=std::vector<__int64,std::allocator<__int64>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(411): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::constexpr_storage_t\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=std::vector<__int64,std::allocator<__int64>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=std::vector<__int64,std::allocator<__int64>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xb1\xf0\xc3\xfb ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::OptionalBase<std::vector<__int64,std::allocator<__int64>>>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\ATen/core/jit_type.h(728): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::optional<std::vector<__int64,std::allocator<__int64>>>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: \xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=std::vector<__int64,std::allocator<__int64>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(212): warning C4624: \xa1\xb0c10::constexpr_storage_t\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=c10::impl::InlineDeviceGuard<c10::impl::VirtualGuardImpl>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(411): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::constexpr_storage_t\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::impl::InlineDeviceGuard<c10::impl::VirtualGuardImpl>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::impl::InlineDeviceGuard<c10::impl::VirtualGuardImpl>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xb1\xf0\xc3\xfb ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::OptionalBase<c10::impl::InlineDeviceGuardc10::impl::VirtualGuardImpl>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/core/impl/InlineDeviceGuard.h(427): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::optional<c10::impl::InlineDeviceGuardc10::impl::VirtualGuardImpl>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/core/DeviceGuard.h(178): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::impl::InlineOptionalDeviceGuardc10::impl::VirtualGuardImpl\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: \xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=c10::impl::InlineDeviceGuard<c10::impl::VirtualGuardImpl>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(212): warning C4624: \xa1\xb0c10::constexpr_storage_t\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=c10::impl::InlineStreamGuard<c10::impl::VirtualGuardImpl>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(411): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::constexpr_storage_t\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::impl::InlineStreamGuard<c10::impl::VirtualGuardImpl>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::impl::InlineStreamGuard<c10::impl::VirtualGuardImpl>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xb1\xf0\xc3\xfb ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::OptionalBase<c10::impl::InlineStreamGuardc10::impl::VirtualGuardImpl>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/core/impl/InlineStreamGuard.h(197): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::optional<c10::impl::InlineStreamGuardc10::impl::VirtualGuardImpl>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/core/StreamGuard.h(139): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::impl::InlineOptionalStreamGuardc10::impl::VirtualGuardImpl\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: \xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=c10::impl::InlineStreamGuard<c10::impl::VirtualGuardImpl>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(212): warning C4624: \xa1\xb0c10::constexpr_storage_t\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=c10::impl::VirtualGuardImpl
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(411): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::constexpr_storage_t\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::impl::VirtualGuardImpl
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::impl::VirtualGuardImpl
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xb1\xf0\xc3\xfb ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::OptionalBasec10::impl::VirtualGuardImpl\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/core/impl/InlineStreamGuard.h(232): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::optional\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::impl::VirtualGuardImpl
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/core/StreamGuard.h(162): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::impl::InlineMultiStreamGuardc10::impl::VirtualGuardImpl\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: \xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=c10::impl::VirtualGuardImpl
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(212): warning C4624: \xa1\xb0c10::constexpr_storage_t\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=std::vector<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>,std::allocator<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(411): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::constexpr_storage_t\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=std::vector<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>,std::allocator<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=std::vector<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>,std::allocator<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xb1\xf0\xc3\xfb ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::OptionalBase<std::vector<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type>,std::allocator<c10::weak_intrusive_ptr<TTarget,c10::detail::intrusive_target_default_null_type>>>>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
TTarget=c10::StorageImpl
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\ATen/core/ivalue_inl.h(884): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::optional<std::vector<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type>,std::allocator<c10::weak_intrusive_ptr<TTarget,c10::detail::intrusive_target_default_null_type>>>>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
TTarget=c10::StorageImpl
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: \xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=std::vector<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>,std::allocator<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>>>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(212): warning C4624: \xa1\xb0c10::constexpr_storage_t\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=c10::SmallVector<__int64,5>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(411): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::constexpr_storage_t\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::SmallVector<__int64,5>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
with
[
T=c10::SmallVector<__int64,5>
]
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xb1\xf0\xc3\xfb ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::OptionalBase<c10::SmallVector<__int64,5>>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\ATen/TensorIterator.h(918): note: \xb2鿴\xb6\xd4\xd5\xfd\xd4ڱ\xe0\xd2\xeb\xb5\xc4 \xc0\xe0 ģ\xb0\xe5 ʵ\xc0\xfd\xbb\xaf\xa1\xb0c10::optional<c10::SmallVector<__int64,5>>\xa1\xb1\xb5\xc4\xd2\xfd\xd3\xc3
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: \xa1\xb0c10::trivially_copyable_optimization_optional_base\xa1\xb1: \xd2ѽ\xab\xce\xf6\xb9\xb9\xba\xaf\xca\xfd\xd2\xfeʽ\xb6\xa8\xd2\xe5Ϊ\xa1\xb0\xd2\xd1ɾ\xb3\xfd\xa1\xb1
with
[
T=c10::SmallVector<__int64,5>
]
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 1893, in _run_ninja_build
subprocess.run(
File "subprocess.py", line 526, in run
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\80450\AppData\Local\Temp\pip-req-build-rp9ihets\setup.py", line 192, in <module>
setup(
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
return run_commands(dist)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\dist.py", line 1217, in run_command
super().run_command(command)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\install.py", line 68, in run
return orig.install.run(self)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\install.py", line 698, in run
self.run_command('build')
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\dist.py", line 1217, in run_command
super().run_command(command)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build.py", line 132, in run
self.run_command(cmd_name)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\dist.py", line 1217, in run_command
super().run_command(command)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
_build_ext.run(self)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\Cython\Distutils\old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run
self.build_extensions()
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 843, in build_extensions
build_ext.build_extensions(self)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\Cython\Distutils\old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 468, in build_extensions
self._build_extensions_serial()
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 494, in _build_extensions_serial
self.build_extension(ext)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 549, in build_extension
objects = self.compiler.compile(
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 815, in win_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 1574, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 1909, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
Hint: the Python runtime threw an exception. Please check the troubleshooting page.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> groundingdino
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
None
GroundingDINO install failed. Please submit an issue to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues.
I am using the latest version of the latest aki integrated package, gradio=3.23.0, WebUI=22bcc7be.
C:\Users\80450>python
Python 3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from torch.utils.cpp_extension import CUDA_HOME
>>> print(CUDA_HOME)
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
The CUDA version is also correct, yet every time I try to install GroundingDINO this error occurs.
#58 mentions Inpaint Anything, but Inpaint Anything has in fact been supported since the earliest version of this extension. Given that the ControlNet inpainting model is already connected to this extension, you should expect far better performance if you use this extension + the ControlNet extension + a good base model, without needing to download a huge, annoying, general-purpose inpainting model.
For Remove Anything and Fill Anything, they are just mask + inpainting. Go to img2img, use point prompts and/or text prompts to get your mask, check "Copy to Inpaint" and "Copy to ControlNet Inpaint", select the appropriate ControlNet panel index associated with inpainting, write your prompt and click Generate.
For Replace Anything, it is just mask + inpaint not masked. The only extra step is to check "inpaint not masked" in the img2img panel; everything else remains the same.
That's it! Simple and easy, not nearly as mysterious as their big fantasy names sound!
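To make the "mask + inpaint not masked" idea above concrete: Replace Anything uses the same mask, just with the inpainted region flipped. A minimal NumPy sketch (the helper name is mine, not part of the extension):

```python
import numpy as np

def invert_mask(mask: np.ndarray) -> np.ndarray:
    """Flip a 0/255 mask so 'inpaint masked' becomes 'inpaint not masked'."""
    return 255 - mask

# A 4x4 toy mask: the object to keep occupies the center.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255
inverted = invert_mask(mask)  # now everything *except* the object gets inpainted
```

Checking "inpaint not masked" in the img2img panel performs this inversion for you; the sketch only shows what the option means.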
My plan is to support all interesting applications that connect SAM to Stable Diffusion. When #57 is merged into the master branch, you should be able to try almost all of the interesting applications that combine SAM and Stable Diffusion. If you find another interesting application but have no idea how to use it in Stable Diffusion, submit an issue like #58 and I will see whether I should update once more, or write a tutorial like this one to guide you through it.
I'm using SAM to mask the face + inpaint upload with loopback to fix faces, as people suggested. I'm assuming that Switch to Inpaint Upload is working, because it only changes the part that was masked (like the pic below).
There's no indicator of whether the inpaint is actually using your mask, though. Could you populate the Mask field instead of leaving it blank, or failing that, add some other form of indication?
Thanks for the hard work you've put in.
(I clicked Switch to Inpaint Upload multiple times.)
pycocotools_mask.c(6): fatal error C1083: Cannot open include file: "Python.h": No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycocotools
ERROR: Could not build wheels for groundingdino, pycocotools, which is required to install pyproject.toml-based projects
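The C1083 error above means the build cannot find the Python development headers in the packaged interpreter. A standard-library-only sketch (my own helper, not part of any fix) to check whether Python.h is actually present before retrying the build:

```python
import os
import sysconfig

def find_python_header():
    """Return the path to Python.h if the development headers are installed, else None."""
    include_dir = sysconfig.get_paths()["include"]
    header = os.path.join(include_dir, "Python.h")
    return header if os.path.exists(header) else None

print("Python.h found at:", find_python_header())
```

If this prints None, the interpreter bundle ships without headers, and no amount of fiddling with CUDA or MSVC will make the extension compile until they are added.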
ControlNet's seg mode is very powerful and can be used to control composition and image content, but the color-to-semantics mapping of the seg model ControlNet uses seems to differ from segment-anything's. segment-anything, however, has far stronger recognition and segmentation ability. Would it be possible to use segment-anything to segment and recognize an image, then output a map that ControlNet's seg mode can consume?
Note: this question may sound very amateur; it is just an intuition-driven doubt.
Thanks a lot!
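One possible sketch of what is being asked: paint each SAM boolean mask with the palette color that ControlNet's seg model expects for that label. The color below is illustrative only; the real label-to-color mapping would have to come from ControlNet's segmentation annotator palette.

```python
import numpy as np

# Hypothetical palette entry; the actual color for each class must be taken
# from the ADE20K-style palette used by ControlNet's seg annotator.
PALETTE = {"person": (150, 5, 61)}

def masks_to_seg_map(masks, labels, height, width):
    """Paint each SAM boolean mask with the palette color of its label."""
    seg = np.zeros((height, width, 3), dtype=np.uint8)
    for mask, label in zip(masks, labels):
        seg[mask] = PALETTE[label]
    return seg

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
seg = masks_to_seg_map([mask], ["person"], 8, 8)
```

The missing piece in practice is mapping SAM/GroundingDINO labels onto the fixed class list the seg ControlNet model was trained with; arbitrary text prompts have no guaranteed palette entry.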
After a fresh install of the extension, it seems that I am unable to open the gradio accordion. The whole interface freezes when I click on it to open.
Tested both soft ui reload and hard restart, same result in both cases.
The interface should not freeze when interacting with it.
webui: 22bcc7be
sam: e93d178
Mozilla Firefox
--api --api-log --allow-code
none
The UI does not freeze in Chrome; if anyone else has the same issue, try changing browsers. I have no extensions other than Adblock (which is disabled on localhost in my case).
How do I run this?
Me
None
webui:
extension:
No response
None
None
None
Hint: the Python runtime threw an exception. Please check the troubleshooting page.
Dilation Amount: 18
Traceback (most recent call last):
File "C:\novelai-webui-aki\py310\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "C:\novelai-webui-aki\py310\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "C:\novelai-webui-aki\py310\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\novelai-webui-aki\py310\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\novelai-webui-aki\py310\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\novelai-webui-aki\py310\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\novelai-webui-aki\extensions\sd-webui-segment-anything\scripts\sam.py", line 63, in update_mask
binary_img = np.array(mask_image.convert('1'))
AttributeError: 'list' object has no attribute 'convert'
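The traceback shows update_mask receiving a list where a single PIL image was expected (gradio galleries hand back lists). A defensive unwrap along these lines (a hypothetical helper, not the actual fix shipped in sam.py) would avoid the AttributeError:

```python
def unwrap_gradio_image(mask_image):
    """Gradio gallery components can return a list of images; unwrap the first
    element before calling PIL methods such as .convert() on it."""
    if isinstance(mask_image, (list, tuple)):
        if not mask_image:
            raise ValueError("received an empty image list from gradio")
        mask_image = mask_image[0]
    return mask_image
```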
Traceback (most recent call last):
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\extensions\sd-webui-segment-anything\scripts\sam.py", line 187, in sam_predict
masks, _, _ = predictor.predict(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\segment_anything\predictor.py", line 154, in predict
masks, iou_predictions, low_res_masks = self.predict_torch(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\segment_anything\predictor.py", line 222, in predict_torch
sparse_embeddings, dense_embeddings = self.model.prompt_encoder(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\segment_anything\modeling\prompt_encoder.py", line 155, in forward
point_embeddings = self._embed_points(coords, labels, pad=(boxes is None))
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\segment_anything\modeling\prompt_encoder.py", line 85, in _embed_points
labels = torch.cat([labels, padding_label], dim=1)
RuntimeError
When following the instructions for ControlNet Inpainting:
ControlNet Inpainting
> Check Copy to ControlNet Inpaint and select the ControlNet panel for inpainting if you want to use multi-ControlNet. You can be either at the img2img tab or at the txt2img tab to use this functionality.
> Configure the ControlNet panel. Click Enable, choose inpaint_global_harmonious as the preprocessor and control_v11p_sd15_inpaint [ebff9138] as the model. There is no need to upload an image to the ControlNet inpainting panel, as the SAM extension will do that for you. Write your prompts, configure the A1111 panel and click Generate.
When using Txt2Img get a message to say there is no image input:
Error running process: D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\modules\scripts.py", line 417, in process
    script.process(p, *script_args)
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 779, in process
    raise ValueError('controlnet is enabled but no input image is given')
ValueError: controlnet is enabled but no input image is given
If I follow the same steps in Img2Img a different error appears, however it does seem to actually generate the image correctly.
Error in Img2Img is:
Error running process: D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\modules\scripts.py", line 417, in process
    script.process(p, *script_args)
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 808, in process
    detected_map, is_image = preprocessor(input_image, res=unit.processor_res, thr_a=unit.threshold_a, thr_b=unit.threshold_b)
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\processor.py", line 57, in inpaint
    mask = resize_image(img[:, :, 3:4], res)
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\util.py", line 33, in resize_image
    img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA)
cv2.error: Unknown C++ exception from OpenCV code
Included images of my settings:
The instructions state that I do not need to put an image into the ControlNet area as the extension will handle that. However it is giving an error saying there is no input image.
webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: 4ee968b
No response
--opt-sdp-attention --no-half-vae --opt-channelslast
Included in the error message above
No response
/AppleInternal/Library/BuildRoots/9e200cfa-7d96-11ed-886f-a23c4f261b56/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1309: failed assertion `Incompatible element type for parameter at index 0, mlir module expected element type f32 but received si32'
none
webui:
extension:
No response
none
/AppleInternal/Library/BuildRoots/9e200cfa-7d96-11ed-886f-a23c4f261b56/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1309: failed assertion `Incompatible element type for parameter at index 0, mlir module expected element type f32 but received si32'
No response
I tried to install this extension, but it fails with this error:
Traceback (most recent call last):
File "X:\stable-diffusion-webui\modules\scripts.py", line 270, in wrap_call
res = func(*args, **kwargs)
File "X:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 329, in ui
priorize_sam_scripts(is_img2img)
File "X:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 304, in priorize_sam_scripts
if cnet_idx < sam_idx:
TypeError: '<' not supported between instances of 'NoneType' and 'int'
Looking at it, it appears to fail on cnet_idx < sam_idx,
and I believe that is because I don't have the ControlNet extension installed, so cnet_idx is None.
The app should load and I should see this extension appear.
webui: ebd3758129d3dbfc9796273fea2022e0ef4e6daf ( should be latest )
extension: 4ee968b
Mozilla Firefox
no
not relevant
Maybe you could initialize cnet_idx to a very large number like 100000000 instead of None; then the comparison wouldn't fail.
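An alternative to the sentinel value is to skip the swap when either index is missing. A minimal sketch (the hypothetical `reorder` stands in for the logic in `priorize_sam_scripts`):

```python
def reorder(scripts, cnet_idx, sam_idx):
    """Swap the two script entries only when both extensions are
    present; cnet_idx is None when sd-webui-controlnet is not installed,
    so guarding avoids the NoneType comparison entirely."""
    if cnet_idx is not None and sam_idx is not None and cnet_idx < sam_idx:
        scripts[cnet_idx], scripts[sam_idx] = scripts[sam_idx], scripts[cnet_idx]
    return scripts
```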
https://github.com/geekyutao/Inpaint-Anything
Could you add this in, please?
Given that AUTOMATIC1111 has the "inpaint not masked" mask mode, ControlNet should have it too. Since Segment Anything has a ControlNet option, there should be a mask mode when sending to ControlNet from SAM. That way I can mask the small part of the image that I do not want disturbed and change the rest of it with ControlNet.
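If this were implemented, "inpaint not masked" is just the complement of the SAM mask before it is sent on to ControlNet. An illustrative sketch:

```python
import numpy as np

def invert_mask(mask: np.ndarray) -> np.ndarray:
    """Complement a mask: works for uint8 (0/255) masks as produced by
    PIL, and for boolean masks. Illustrative only, not extension code."""
    return 255 - mask if mask.dtype == np.uint8 else ~mask
```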
I'd also like to write a simple WebUI for it.
Hi,
Works great! One small problem I see is that the generated masks leave a very thin edge that is not inpainted. I tried playing with the inpaint options, mainly mask blur, without success in removing the edges.
I think some kind of mask expansion along the normal direction would be nice!
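Mask expansion of that kind is essentially a binary dilation. A dependency-light sketch using array shifts (a crude 4-neighborhood dilation; a real implementation could use cv2 or scipy instead):

```python
import numpy as np

def expand_mask(mask: np.ndarray, pixels: int = 4) -> np.ndarray:
    """Grow a boolean mask outward by `pixels` in every direction,
    covering the thin un-inpainted edge around the segment."""
    out = mask.copy()
    for _ in range(pixels):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # spread downward
        grown[:-1, :] |= out[1:, :]   # spread upward
        grown[:, 1:] |= out[:, :-1]   # spread right
        grown[:, :-1] |= out[:, 1:]   # spread left
        out = grown
    return out
```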
Hello! Amazing extension, but when I tick the box "Copy to Inpaint Upload" and then press "Switch to Inpaint Upload", the mask does not transfer, and I have to do it manually.
The segment anything menu now looks fairly cluttered, and there is a lot of unnecessary space taken up by GroundingDINO even when it is not being used. This could be solved by putting GroundingDINO inside of a drop down menu like what is done with the ControlNet canvas sliders.
When I use this plugin on a Mac (Apple M1), this error occurs even with the ViT-B SAM model, which has the smallest number of parameters. Is this problem caused by insufficient video memory? If so, is it because there is no way for Macs to run inference on the CPU?
Properly Segmented images
webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: c9340671 (Sat Mar 11 01:01:43 2023)
Google Chrome
In webui-user.sh
export COMMANDLINE_ARGS="--skip-version-check --upcast-sampling --no-half-vae --skip-torch-cuda-test --no-half --no-half-controlnet --use-cpu interrogate --api"
Initializing SAM
Running SAM Inference (638, 918, 3)
/AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1377: failed assertion `Incompatible element type for parameter at index 0, mlir module expected element type f32 but received si32'
zsh: abort ./webui.sh
buliuguyy@luyinyudeMacBook-Pro stable-diffusion-webui % /opt/homebrew/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
No response
Maybe I'm misunderstanding the API, but it seems like it only works with GroundingDINO right now.
Would it be possible to add a field to the API endpoint that takes in a series of points that are the equivalent of where you would click in the UI? Or maybe a mask of green and/or red that gets translated into include/exclude points?
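The green/red-mask idea could be translated server-side into SAM's point prompts (SamPredictor takes point_coords with labels 1 = include, 0 = exclude). A hypothetical helper sketching that translation:

```python
import numpy as np

def points_from_hint_mask(hint: np.ndarray):
    """Translate a hint image into SAM-style point prompts: green pixels
    become include points (label 1), red pixels become exclude points
    (label 0). `hint` is an (H, W, 3) uint8 array. Hypothetical helper
    for the proposed API field, not an existing endpoint."""
    coords, labels = [], []
    ys, xs = np.where((hint[..., 1] > 200) & (hint[..., 0] < 50))  # green
    coords += [[int(x), int(y)] for y, x in zip(ys, xs)]
    labels += [1] * len(xs)
    ys, xs = np.where((hint[..., 0] > 200) & (hint[..., 1] < 50))  # red
    coords += [[int(x), int(y)] for y, x in zip(ys, xs)]
    labels += [0] * len(xs)
    return np.array(coords), np.array(labels)
```

The resulting arrays match what `SamPredictor.predict(point_coords=..., point_labels=...)` expects.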
Firstly kudos and thanks for getting SAM working with webui.
If one could make a request, it'd be nice to have the ability to control
There could be sliders for each of these parameters.
Python version 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Commit hash: 3715ece0adce7bf7c5e9c5ab3710b2fdc3848f39
This bundle is produced by the NovelAI Chinese channel; resale is strictly prohibited.
Starting WebUI...
Web UI launch arguments: --autolaunch --no-half --precision full --opt-sub-quad-attention
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
No module 'xformers'. Proceeding without it.
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: D:\stable-diffusion-webui\extensions\Stable-Diffusion-Webui-Civitai-Helper-main\setting.json
Error loading script: api.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 9, in
from scripts.sam import init_sam_model, dilate_mask, sam_predict, sam_model_list
File "D:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 17, in
from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'
Error loading script: sam.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 17, in
from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'
SD-Webui API layer loaded
Trying to use the grounding dino mode.
I get an error when I press "generate bounding box"
The error is :
C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py:768: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Traceback (most recent call last):
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api result = await self.call_function(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 208, in dino_predict
boxes_filt, install_success = dino_predict_internal(input_image, dino_model_name, text_prompt, box_threshold)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 138, in dino_predict_internal
boxes_filt = get_grounding_output(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 114, in get_grounding_output
outputs = model(image[None], captions=[caption])
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\groundingdino.py", line 313, in forward
hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 258, in forward
memory, memory_text = self.encoder(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 576, in forward
output = checkpoint.checkpoint(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 249, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 107, in forward
outputs = run_function(*args)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 785, in forward
src2 = self.self_attn(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\ms_deform_attn.py", line 338, in forward
output = MultiScaleDeformableAttnFunction.apply(
File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\ms_deform_attn.py", line 53, in forward
output = _C.ms_deform_attn_forward(
NameError: name '_C' is not defined
Generating the bounding boxes images
webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: 1664834
Google Chrome
Launching Web UI with arguments: --xformers --api --gradio-img2img-tool color-sketch
venv "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Installing None
Installing onnxruntime-gpu...
Installing None
Installing opencv-python...
Installing None
Installing Pillow...
Installing sd-webui-controlnet requirement: fvcore
Installing sd-webui-controlnet requirement: pycocotools
Launching Web UI with arguments: --xformers --api --gradio-img2img-tool color-sketch
Loading weights [f93e6a50ac] from C:\Users\Jyce\Desktop\stable-diffusion-webui\models\Stable-diffusion\uberRealisticPornMerge_urpmv13.safetensors
Creating model from config: C:\Users\Jyce\Desktop\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Textual inversion embeddings skipped(1): nrealfixer
Model loaded in 5.7s (load weights from disk: 0.2s, create model: 0.4s, apply weights to model: 2.8s, apply half(): 0.5s, move model to device: 0.7s, load textual inversion embeddings: 1.0s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 15.7s (import torch: 1.5s, import gradio: 0.9s, import ldm: 0.6s, other imports: 1.0s, setup codeformer: 0.1s, load scripts: 3.2s, load SD checkpoint: 6.1s, create ui: 1.9s, gradio launch: 0.2s).
Installing sd-webui-segment-anything requirement: groundingdino
GroundingDINO install success.
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
final text_encoder_type: bert-base-uncased
Downloading (…)/main/tokenizer.json: 100%|██████████████████████████████████████████| 466k/466k [00:00<00:00, 6.83MB/s]
Downloading model.safetensors: 100%|████████████████████████████████████████████████| 440M/440M [00:06<00:00, 73.0MB/s]
Downloading: "https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swint_ogc.pth" to C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\models/grounding-dino\groundingdino_swint_ogc.pth
100%|███████████████████████████████████████████████████████████████████████████████| 662M/662M [00:09<00:00, 69.8MB/s]
(The same FutureWarning and `_C` traceback already shown above repeats here.)
No response
After checking "Copy to Inpaint" and running inpaint, saving the generated image raises an error and the image cannot be saved correctly. img2img without this extension saves fine, but once the error has occurred, saving keeps failing afterwards even without using this extension.
The image should be saved correctly to the specified directory.
webui: sd-webui
extension: multidiffusion-upscaler-for-automatic1111 (this extension was also in use)
Google Chrome
NO
Initializing SAM
Running SAM Inference (1024, 512, 3)
Creating output image
Initializing SAM
Running SAM Inference (1024, 512, 3)
Creating output image
Initializing SAM
Running SAM Inference (1024, 512, 3)
Creating output image
[Tiled VAE] VAE is on CPU. Please enable 'Move VAE to GPU' to use Tiled VAE.
Error completing request
Arguments: ('{"prompt": "blue hair,(pixel art:1.4), (retro aesthetics:1.2), nostalgic charm, blocky textures, limited color palette, digital design, 8-bit style", "all_prompts": ["blue hair,(pixel art:1.4), (retro aesthetics:1.2), nostalgic charm, blocky textures, limited color palette, digital design, 8-bit style"], "negative_prompt": "(EasyNegative),(worst quality, low quality:1.4), (bad anatomy), (inaccurate limb:1.2),poorly eyes, extra digit,fewer digits,six fingers,(extra arms,extra legs:1.2),text,cropped,jpegartifacts,(signature), (watermark), username,blurry,more than five fingers in one palm,no thumb,no nails, title, multiple view, Reference sheet, curvy, plump, fat, muscular female, strabismus,", "all_negative_prompts": ["(EasyNegative),(worst quality, low quality:1.4), (bad anatomy), (inaccurate limb:1.2),poorly eyes, extra digit,fewer digits,six fingers,(extra arms,extra legs:1.2),text,cropped,jpegartifacts,(signature), (watermark), username,blurry,more than five fingers in one palm,no thumb,no nails, title, multiple view, Reference sheet, curvy, plump, fat, muscular female, strabismus,"], "seed": 1033529053, "all_seeds": [1033529053], "subseed": 374074384, "all_subseeds": [374074384], "subseed_strength": 0, "width": 512, "height": 1024, "sampler_name": "DPM++ 2M Karras", "cfg_scale": 9, "steps": 70, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_hash": "1d1e459f9f", "seed_resize_from_w": 0, "seed_resize_from_h": 0, "denoising_strength": 0.6, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["blue hair,(pixel art:1.4), (retro aesthetics:1.2), nostalgic charm, blocky textures, limited color palette, digital design, 8-bit style\\nNegative prompt: (EasyNegative),(worst quality, low quality:1.4), (bad anatomy), (inaccurate limb:1.2),poorly eyes, extra digit,fewer digits,six fingers,(extra arms,extra legs:1.2),text,cropped,jpegartifacts,(signature), (watermark), username,blurry,more than five fingers 
in one palm,no thumb,no nails, title, multiple view, Reference sheet, curvy, plump, fat, muscular female, strabismus,\\nSteps: 70, Sampler: DPM++ 2M Karras, CFG scale: 9, Seed: 1033529053, Size: 512x1024, Model hash: 1d1e459f9f, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337"], "styles": [], "job_timestamp": "20230415105520", "clip_skip": 2, "is_using_inpainting_conditioning": false}', [{'name': 'C:\\Users\\admin\\AppData\\Local\\Temp\\tmps6i6f9i1.png', 'data': 'http://127.0.0.1:7860/file=C:\\Users\\admin\\AppData\\Local\\Temp\\tmps6i6f9i1.png', 'is_file': True}], False, 5) {}
Traceback (most recent call last):
File "F:\SD_WebUI_launcher\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "F:\SD_WebUI_launcher\modules\ui_common.py", line 56, in save_files
images = [images[index]]
IndexError: list index out of range
Note: the Python runtime threw an exception. Please check the troubleshooting page.
I suspect this may be related to the generated mask image?
After installation; trying to reproduce the video guide.
Looks like a missing Python prerequisite.
Expected a normal launch: the extension section should appear in img2img, but it does not.
webui: 22bcc7b
controlnet:
Mozilla Firefox
set COMMANDLINE_ARGS=--xformers --medvram
Traceback (most recent call last):
File "E:\Files\SD_auto\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\Files\SD_auto\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\Files\SD_auto\stable-diffusion-webui\extensions\sd-webui-segment-everything\scripts\sam.py", line 20, in <module>
from segment_anything import SamPredictor, build_sam
ModuleNotFoundError: No module named 'segment_anything'
No response
This plugin is very useful, but when changing colors, especially when a specific color is required, writing tags is not that effective. It reminded me of the earlier color-mask approach. Could a feature be added to change the color of the mask, achieving an effect similar to those color masks? I don't know programming, so I'm not sure whether this feature is easy to implement. Finally, thanks for the plugin.
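The requested effect amounts to recoloring the masked region. A minimal sketch (hypothetical helper, assuming a boolean mask and an RGB image, not part of the extension):

```python
import numpy as np

def apply_color_mask(image, mask, color, alpha=1.0):
    """Fill the masked region of an (H, W, 3) uint8 image with a solid
    color; alpha < 1 blends instead of replacing."""
    out = image.astype(np.float32)
    color = np.asarray(color, np.float32)
    out[mask] = (1 - alpha) * out[mask] + alpha * color
    return out.astype(np.uint8)
```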
After dropping an image onto the Segment Anything tab, I can't add point prompts or pick a box threshold after checking Enable GroundingDINO.
Dots should appear on the image after clicking on it.
webui: latest A1111 as of 14.04.23
extension: sd-webui-segment-anything 16.04.23
No response
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --xformers --no-half-vae
set ATTN_PRECISION=fp16
call webui.bat
No errors when clicking on the dropped image.
This error appears after entering text in the GroundingDINO Detection Prompt:
127.0.0.1/:1 Uncaught (in promise) API Error
Promise.then (async)
(anonymous) @ index.4395ab38.js:76
(anonymous) @ index.4395ab38.js:4
le @ index.4395ab38.js:4
x @ index.4395ab38.js:79
(anonymous) @ index.4395ab38.js:4
(anonymous) @ index.4395ab38.js:4
u @ index.4395ab38.js:78
t.$$.update @ index.4395ab38.js:78
ql @ index.4395ab38.js:4
bt @ index.4395ab38.js:4
Promise.then (async)
yo @ index.4395ab38.js:4
Yl @ index.4395ab38.js:4
(anonymous) @ index.4395ab38.js:4
g @ index.4395ab38.js:34
i @ index.4395ab38.js:34
(anonymous) @ index.4395ab38.js:4
S @ index.4395ab38.js:79
i @ index.4395ab38.js:79
(anonymous) @ index.4395ab38.js:4
k @ index.4395ab38.js:78
No response
Today, after updating sd-webui-segment-anything, when I use txt2img I get the error below in the log and cannot generate the image.
webui: can't use to make image
extension: after update
Google Chrome
nohup bash webui.sh >/home/ubuntu/stable-diffusion-webui/nohup.log 2>&1 &
Traceback (most recent call last):
File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1073, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 962, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components.py", line 1203, in preprocess
return self.choices.index(x)
ValueError: '0' is not in list
No response
Error loading script: sam.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 248, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "", line 850, in exec_module
  File "", line 228, in _call_with_frames_removed
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 15, in
    from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'
The error says there is no module named 'segment_anything', but isn't that model called sam_vit_h_4b8939.pth? Why does this error appear? After entering the webui there is no Segment Anything section.
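The missing piece here is the Python package, not the .pth checkpoint; the checkpoint is model weights, while `segment_anything` is the library that loads them. A quick check for whether the package is importable:

```python
import importlib.util

def has_segment_anything() -> bool:
    """True when the segment_anything Python package is importable;
    the ModuleNotFoundError above means this would return False."""
    return importlib.util.find_spec("segment_anything") is not None
```

If it returns False, one workaround (assuming the extension's own installer failed) is installing the package from the official facebookresearch/segment-anything repository, e.g. `pip install git+https://github.com/facebookresearch/segment-anything.git`.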
Can upload pictures
Clicking preview does nothing
python version 3.10
In txt2img, left click adds a black dot, but right click also adds a black dot.
In img2img, neither left nor right click shows anything.
Clicking Preview should show something.
webui: yes
controlnet:yes
Google Chrome
webui.sh --listen --share --xformers --enable-insecure-extension-access --disable-nan-check
No information is shown about this extension.
stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 911, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
IndexError: list index out of range
This is the error when uploading an image in the img2img box.
none
Hi, I want to build on your project: read an image file, cut out the segmented region, and save the result. I adapted the code from sam.py, but ran into this problem:
transformed_boxes = predictor.transform.apply_boxes_torch(boxes_filt, image_np.shape[:2])
File "/home/jerry/go/src/github.com/facebookresearch/segment-anything/segment_anything/utils/transforms.py", line 90, in apply_boxes_torch
boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size)
AttributeError: 'tuple' object has no attribute 'reshape'
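The traceback points at a one-element tuple. In the pasted script, the return statement of `dino_predict_internal` ends with a trailing comma (`return boxes_filt,`), which wraps the tensor in a tuple, and tuples have no `.reshape`. A tiny reproduction:

```python
def dino_predict_stub():
    """Stand-in for dino_predict_internal showing the pitfall."""
    boxes = [1, 2, 3]
    return boxes,          # trailing comma: returns a 1-tuple, not the list

result = dino_predict_stub()
print(type(result))        # <class 'tuple'> -- hence no .reshape attribute
```

Dropping the trailing comma returns the value itself and fixes the `apply_boxes_torch` call.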
#/usr/bin/env python3
#coding=utf-8
from skimage import io,data
import argparse
import os
import cv2
import os
import copy
import numpy as np
from PIL import Image
import torch
from segment_anything import SamPredictor, sam_model_registry
import groundingdino.datasets.transforms as T
from groundingdino.models import build_model
from groundingdino.util.slconfig import SLConfig
from groundingdino.util.utils import clean_state_dict
#from modules.devices import device, torch_gc, cpu
#from modules.safe import unsafe_torch_load, load
model_dir = "/home/jerry/workbench/download"
dino_batch_dest_dir="/home/jerry/go/src/github.com/JerryZhou343/AILab/demo/base/"
input_image_path = "/home/jerry/go/src/github.com/JerryZhou343/AILab/demo/base/20230415145253.jpg"
device = "cpu"
dino_batch_save_mask = True
dino_batch_save_image_with_mask=True
batch_dilation_amt= 10
dino_batch_output_per_image = 1
def dilate_mask(mask, dilation_amt):
# Create a dilation kernel
x, y = np.meshgrid(np.arange(dilation_amt), np.arange(dilation_amt))
center = dilation_amt // 2
dilation_kernel = ((x - center)**2 + (y - center)**2 <= center**2).astype(np.uint8)
# Dilate the image
dilated_binary_img = binary_dilation(mask, dilation_kernel)
# Convert the dilated binary numpy array back to a PIL image
dilated_mask = Image.fromarray(dilated_binary_img.astype(np.uint8) * 255)
return dilated_mask, dilated_binary_img
def show_boxes(image_np, boxes, color=(255, 0, 0, 255), thickness=2, show_index=False):
if boxes is None:
return image_np
image = copy.deepcopy(image_np)
for idx, box in enumerate(boxes):
x, y, w, h = box
cv2.rectangle(image, (x, y), (w, h), color, thickness)
if show_index:
font = cv2.FONT_HERSHEY_SIMPLEX
text = str(idx)
textsize = cv2.getTextSize(text, font, 1, 2)[0]
cv2.putText(image, text, (x, y+textsize[1]), font, 1, color, thickness)
return image
def show_masks(image_np, masks: np.ndarray, alpha=0.5):
image = copy.deepcopy(image_np)
np.random.seed(0)
for mask in masks:
color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
image[mask] = image[mask] * (1 - alpha) + 255 * color.reshape(1, 1, -1) * alpha
return image.astype(np.uint8)
def load_dino_image(image_pil):
import groundingdino.datasets.transforms as T
transform = T.Compose(
[
T.RandomResize([800], max_size=1333),
T.ToTensor(),
T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
]
)
image, _ = transform(image_pil, None) # 3, h, w
return image
def load_dino_model(dino_checkpoint):
args = SLConfig.fromfile("grd.cfg.py")
args.device = device
dino = build_model(args)
checkpoint = torch.load(os.path.join(model_dir,dino_checkpoint), map_location="cpu")
dino.load_state_dict(clean_state_dict(
checkpoint['model']), strict=False)
dino.to(device=device)
dino.eval()
return dino
def load_sam_model(sam_checkpoint):
model_type = '_'.join(sam_checkpoint.split('_')[1:-1])
sam_checkpoint = os.path.join(model_dir, sam_checkpoint)
#torch.load = unsafe_torch_load
sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
sam.to(device=device)
sam.eval()
#torch.load = load
return sam
def get_grounding_output(model, image, caption, box_threshold):
caption = caption.lower()
caption = caption.strip()
if not caption.endswith("."):
caption = caption + "."
image = image.to(device)
with torch.no_grad():
outputs = model(image[None], captions=[caption])
logits = outputs["pred_logits"].sigmoid()[0] # (nq, 256)
boxes = outputs["pred_boxes"][0] # (nq, 4)
# filter output
logits_filt = logits.clone()
boxes_filt = boxes.clone()
filt_mask = logits_filt.max(dim=1)[0] > box_threshold
logits_filt = logits_filt[filt_mask] # num_filt, 256
boxes_filt = boxes_filt[filt_mask] # num_filt, 4
return boxes_filt.cpu()
def dino_predict_internal(input_image, dino_model, text_prompt, box_threshold):
dino_image = load_dino_image(input_image.convert("RGB"))
boxes_filt = get_grounding_output(
dino_model, dino_image, text_prompt, box_threshold
)
H, W = input_image.size[1], input_image.size[0]
for i in range(boxes_filt.size(0)):
        boxes_filt[i] = boxes_filt[i] * torch.Tensor([W, H, W, H])
        boxes_filt[i][:2] -= boxes_filt[i][2:] / 2
        boxes_filt[i][2:] += boxes_filt[i][:2]
    return boxes_filt


if __name__ == "__main__":
    parser = argparse.ArgumentParser("example", add_help=True)
    parser.add_argument("--input_image", type=str, required=True, help="path to image file")
    parser.add_argument("--text_prompt", type=str, default="head", help="text prompt")
    args = parser.parse_args()
    input_image_path = args.input_image

    # Example settings mirroring the extension's batch-processing options
    # (the values here are placeholders, not the extension's defaults):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dino_batch_output_per_image = 1        # SAM returns 3 masks per box when 1
    batch_dilation_amt = 0                 # mask dilation in pixels, 0 = off
    dino_batch_dest_dir = "output"
    dino_batch_save_mask = True
    dino_batch_save_image_with_mask = True
    os.makedirs(dino_batch_dest_dir, exist_ok=True)

    sam = load_sam_model("sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)
    dino_model = load_dino_model("groundingdino_swinb_cogcoor.pth")

    input_image = Image.open(input_image_path).convert("RGBA")
    image_np = np.array(input_image)
    image_np_rgb = image_np[..., :3]

    boxes_filt = dino_predict_internal(input_image, dino_model, args.text_prompt, 0.3)
    predictor.set_image(image_np_rgb)
    transformed_boxes = predictor.transform.apply_boxes_torch(boxes_filt, image_np.shape[:2])
    masks, _, _ = predictor.predict_torch(
        point_coords=None,
        point_labels=None,
        boxes=transformed_boxes.to(device),
        multimask_output=(dino_batch_output_per_image == 1),
    )
    masks = masks.permute(1, 0, 2, 3).cpu().numpy()
    boxes_filt = boxes_filt.cpu().numpy().astype(int)

    filename, ext = os.path.splitext(os.path.basename(input_image_path))
    for idx, mask in enumerate(masks):
        blended_image = show_masks(show_boxes(image_np, boxes_filt), mask)
        merged_mask = np.any(mask, axis=0)
        if batch_dilation_amt:
            _, merged_mask = dilate_mask(merged_mask, batch_dilation_amt)
        image_np_copy = copy.deepcopy(image_np)
        image_np_copy[~merged_mask] = np.array([0, 0, 0, 0])
        output_image = Image.fromarray(image_np_copy)
        output_image.save(os.path.join(dino_batch_dest_dir, f"{filename}_{idx}_output{ext}"))
        if dino_batch_save_mask:
            output_mask = Image.fromarray(merged_mask.astype(np.uint8) * 255)  # bool -> 8-bit
            output_mask.save(os.path.join(dino_batch_dest_dir, f"{filename}_{idx}_mask{ext}"))
        if dino_batch_save_image_with_mask:
            output_blend = Image.fromarray(blended_image)
            output_blend.save(os.path.join(dino_batch_dest_dir, f"{filename}_{idx}_blend{ext}"))
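The coordinate transform at the top of this chunk converts GroundingDINO's normalized (center-x, center-y, width, height) boxes into absolute (x0, y0, x1, y1) pixel corners, which is what SAM's box prompt expects. A minimal sketch of the same math for a single box (`cxcywh_to_xyxy` is my name, not the extension's):

```python
def cxcywh_to_xyxy(box, w, h):
    """Same math as the loop above, for one normalized (cx, cy, bw, bh) box:
    scale to pixel units, then turn center/size into corner coordinates."""
    cx, cy, bw, bh = box[0] * w, box[1] * h, box[2] * w, box[3] * h
    x0, y0 = cx - bw / 2, cy - bh / 2   # top-left corner
    return [x0, y0, x0 + bw, y0 + bh]   # bottom-right = top-left + size

# A box centered in a 200x100 image, covering half of each dimension:
print(cxcywh_to_xyxy([0.5, 0.5, 0.5, 0.5], 200, 100))  # → [50.0, 25.0, 150.0, 75.0]
```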
GroundingDINO always accesses GPU 0 even if --device-id
is set to a non-zero value, and triggers an illegal memory access CUDA error when you generate the bounding box again.
1. ./webui.sh --device-id 1
2. Enable GroundingDINO
3. Check "I want to preview GroundingDINO detection result and select the boxes I want."
4. Generate bounding box
5. Generate bounding box again -> RuntimeError: CUDA error: an illegal memory access was encountered
Run nvidia-smi in another terminal and you should notice a process named python3
using both GPU 0 and the GPU you specified in step 1. GroundingDINO should not access GPU 0 at any moment.
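Until the extension pins its device explicitly, a common workaround for CUDA extensions that hard-code device 0 is to mask the visible GPUs before torch is imported. This is a hedged sketch (the helper name and argument handling are mine, not WebUI's):

```python
import os

def pin_single_gpu(argv):
    """Hypothetical workaround: translate WebUI's --device-id into
    CUDA_VISIBLE_DEVICES *before* torch/GroundingDINO load, so code that
    hard-codes device 0 still lands on the intended GPU."""
    if "--device-id" in argv:
        gpu = argv[argv.index("--device-id") + 1]
        os.environ["CUDA_VISIBLE_DEVICES"] = gpu
        return "cuda:0"  # after masking, the chosen GPU becomes cuda:0
    return "cuda"

device = pin_single_gpu(["./webui.sh", "-f", "--listen", "--device-id", "7"])
print(device)  # → cuda:0
```

Equivalently from the shell: `CUDA_VISIBLE_DEVICES=1 ./webui.sh --device-id 0`.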
webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: 724b4db
Google Chrome
cmdline:
./webui.sh -f --listen --device-id 7
modified webui-user.sh:
install_dir="/mnt"
I'm running WebUI inside a docker container with:
docker run --name stable-diffusion -it --runtime nvidia --gpus all --ipc host -v ${HOME}:/mnt -p 7860:7860 pytorch/pytorch:1.13.1-cuda11.6-cudnn8-devel
Launching Web UI with arguments: -f --listen --device-id 3
No module 'xformers'. Proceeding without it.
Loading weights [1a189f0be6] from /mnt/stable-diffusion-webui/models/Stable-diffusion/sdv1-5-pruned.safetensors
Creating model from config: /mnt/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 2.1s (load weights from disk: 0.6s, create model: 0.4s, apply weights to model: 0.2s, apply half(): 0.2s, load VAE: 0.2s, move model to device: 0.4s).
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 9.4s (import torch: 1.0s, import gradio: 1.1s, import ldm: 1.4s, other imports: 1.9s, load scripts: 1.1s, load SD checkpoint: 2.2s, create ui: 0.5s, gradio launch: 0.1s).
Start SAM Processing
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinB (938MB)
final text_encoder_type: bert-base-uncased
/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:768: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Initializing SAM
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/opt/conda/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/mnt/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 161, in sam_predict
sam = init_sam_model(sam_model_name)
File "/mnt/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 130, in init_sam_model
sam_model_cache[sam_model_name] = load_sam_model(sam_model_name)
File "/mnt/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 56, in load_sam_model
sam.to(device=device)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
return self._apply(convert)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
param_applied = fn(param)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Generated by neofetch on host machine:
OS: Ubuntu 20.04.5 LTS x86_64
Host: X660 G45 Whitley
Kernel: 5.4.0-147-generic
Uptime: 6 hours, 5 mins
Packages: 1199 (dpkg), 4 (snap)
Shell: zsh 5.8
Resolution: 1024x768
Terminal: /dev/pts/3
CPU: Intel Xeon Platinum 8369C (128) @ 3.500GHz
GPU: NVIDIA 8e:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 56:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA e8:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 8a:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA eb:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 6b:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 71:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 51:00.0 NVIDIA Corporation Device 20b2
Memory: 26134MiB / 1031335MiB
Error loading script: sam.py
Traceback (most recent call last):
File "/home/zetaphor/stable-diffusion-webui/modules/scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/zetaphor/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/zetaphor/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 10, in <module>
from modules.paths_internal import extensions_dir
ModuleNotFoundError: No module named 'modules.paths_internal'
The script should initialize
webui: a9eab236d7e8afa4d6205127904a385b2c43bb24
controlnet: 187ae88038af6f4daa91d5dc941564d9a4df90ef
No response
--api --cors-allow-origins=* --opt-split-attention --upcast-sampling --precision autocast --opt-sdp-attention
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on zetaphor user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.9.16 (main, Mar 6 2023, 19:01:01)
[GCC 12.2.1 20221121 (Red Hat 12.2.1-4)]
Commit hash: a9eab236d7e8afa4d6205127904a385b2c43bb24
Installing requirements for Web UI
Installing sd-dynamic-prompts requirements.txt
Launching Web UI with arguments: --api --cors-allow-origins=* --opt-split-attention --upcast-sampling --precision autocast --opt-sdp-attention
/home/zetaphor/stable-diffusion-webui/venv/lib/python3.9/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
No module 'xformers'. Proceeding without it.
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: /home/zetaphor/stable-diffusion-webui/extensions/Stable-Diffusion-Webui-Civitai-Helper/setting.json
Civitai Helper: No setting file, use default
Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
Error loading script: sam.py
Traceback (most recent call last):
File "/home/zetaphor/stable-diffusion-webui/modules/scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/zetaphor/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/zetaphor/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 10, in <module>
from modules.paths_internal import extensions_dir
ModuleNotFoundError: No module named 'modules.paths_internal'
Loading weights [26fc13daff] from /home/zetaphor/stable-diffusion-webui/models/Stable-diffusion/People/mishen-protogen34-5k-astria.ckpt
Creating model from config: /home/zetaphor/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: /home/zetaphor/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying scaled dot product cross attention optimization.
Model loaded in 4.6s (load weights from disk: 1.1s, create model: 0.4s, apply weights to model: 0.8s, apply half(): 0.5s, load VAE: 0.9s, move model to device: 0.5s, load textual inversion embeddings: 0.4s).
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 4 (delta 2), reused 3 (delta 2), pack-reused 0
Unpacking objects: 100% (4/4), 906 bytes | 906.00 KiB/s, done.
From https://github.com/zero01101/openOutpaint
64bc673..899c2cb main -> origin/main
Submodule path 'app': checked out '899c2cb59262c278314e87717ed01c566a4dd769'
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 14.5s (import gradio: 2.2s, import ldm: 0.6s, other imports: 0.9s, list extensions: 1.1s, load scripts: 1.8s, load SD checkpoint: 4.6s, create ui: 2.7s, gradio launch: 0.5s).
No response
sd version: 226d840
Error loading script: api.py
Traceback (most recent call last):
File "Y:\stable-diffusion-webui_23-03-10\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "Y:\stable-diffusion-webui_23-03-10\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "Y:\stable-diffusion-webui_23-03-10\extensions\sd-webui-segment-anything\scripts\api.py", line 9, in <module>
from scripts.sam import init_sam_model, dilate_mask, sam_predict, sam_model_list
File "Y:\stable-diffusion-webui_23-03-10\extensions\sd-webui-segment-anything\scripts\sam.py", line 17, in <module>
from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'
Error loading script: sam.py
Traceback (most recent call last):
File "Y:\stable-diffusion-webui_23-03-10\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "Y:\stable-diffusion-webui_23-03-10\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "Y:\stable-diffusion-webui_23-03-10\extensions\sd-webui-segment-anything\scripts\sam.py", line 17, in <module>
from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'
segment_anything is already installed from pip, but I get this error when launching webui:
Error loading script: api.py
Traceback (most recent call last):
File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 9, in <module>
from scripts.sam import init_sam_model, dilate_mask, sam_predict, sam_model_list
File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 20, in <module>
from scripts.auto import clear_sem_sam_cache, register_auto_sam, semantic_segmentation, sem_sam_garbage_collect, image_layer_internal, categorical_mask_image
File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\auto.py", line 11, in <module>
from modules.paths import extensions_dir
ImportError: cannot import name 'extensions_dir' from 'modules.paths' (D:\AI\StableDiffusion\stable-diffusion-webui\modules\paths.py)
Error loading script: auto.py
Traceback (most recent call last):
File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\auto.py", line 11, in <module>
from modules.paths import extensions_dir
ImportError: cannot import name 'extensions_dir' from 'modules.paths' (D:\AI\StableDiffusion\stable-diffusion-webui\modules\paths.py)
Error loading script: sam.py
Traceback (most recent call last):
File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 20, in <module>
from scripts.auto import clear_sem_sam_cache, register_auto_sam, semantic_segmentation, sem_sam_garbage_collect, image_layer_internal, categorical_mask_image
File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\auto.py", line 11, in <module>
from modules.paths import extensions_dir
ImportError: cannot import name 'extensions_dir' from 'modules.paths' (D:\AI\StableDiffusion\stable-diffusion-webui\modules\paths.py)
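Both the ModuleNotFoundError and the ImportError above stem from `extensions_dir` living in different modules across WebUI versions. A hedged compatibility sketch (the function name and the final path-derivation fallback are mine, not the extension's):

```python
import os

def resolve_extensions_dir(script_path):
    """Fallback chain for the moved import: prefer modules.paths_internal
    (newer WebUI), then modules.paths (some older builds), and finally
    derive the directory from the script's own location, assuming the
    standard extensions/<name>/scripts/ layout."""
    try:
        from modules.paths_internal import extensions_dir  # newer WebUI
        return extensions_dir
    except ImportError:
        pass
    try:
        from modules.paths import extensions_dir           # older builds
        return extensions_dir
    except ImportError:
        # extensions/<name>/scripts/sam.py -> two levels up is extensions/
        return os.path.abspath(
            os.path.join(os.path.dirname(script_path), "..", ".."))

print(resolve_extensions_dir(
    "/srv/webui/extensions/sd-webui-segment-anything/scripts/sam.py"))
```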
I cannot add points with this extension in img2img, but I can add points in txt2img.
Then, when I click preview, it cannot show the image and reports an error.
Successfully preview segmentation map
No response
No
Traceback (most recent call last):
File "xxx/stablediffusion/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "xxx/stablediffusion/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1013, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "xxx/stablediffusion/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/blocks.py", line 911, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
IndexError: list index out of range
No response
Traceback (most recent call last):
File "E:\stable-diffusion-webui_23-02-27_onedrive\python\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "E:\stable-diffusion-webui_23-02-27_onedrive\python\lib\site-packages\gradio\blocks.py", line 1013, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "E:\stable-diffusion-webui_23-02-27_onedrive\python\lib\site-packages\gradio\blocks.py", line 911, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
IndexError: list index out of range
Startup of webui produces this error:
ModuleNotFoundError: No module named 'groundingdino'
The README doesn't say anything about needing to have it installed. It says groundingdino is optional. If it's required, it should be installed automatically via requirements.txt or some other way.
start webui
no error
webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: b8f3c09
Mozilla Firefox
--api --disable-safe-unpickle --no-half-vae
No module 'xformers'. Proceeding without it.
Error loading script: dino.py
Traceback (most recent call last):
File "K:\AI\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "K:\AI\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "K:\AI\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 12, in <module>
import groundingdino.datasets.transforms as T
ModuleNotFoundError: No module named 'groundingdino'
Error loading script: sam.py
Traceback (most recent call last):
File "K:\AI\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "K:\AI\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "K:\AI\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 18, in <module>
from scripts.dino import dino_model_list, dino_predict_internal, show_boxes, clear_dino_cache
File "K:\AI\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 12, in <module>
import groundingdino.datasets.transforms as T
ModuleNotFoundError: No module named 'groundingdino'
Windows 10, not on WSL
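The missing-module reports above (`groundingdino`, `segment_anything`) point at dependencies not being declared for automatic installation. WebUI extensions usually handle this in an `install.py`; a generic, hedged sketch of the check-then-install pattern (the helper name is mine, and a real extension would typically go through WebUI's `launch` helpers instead of calling pip directly):

```python
import importlib.util
import subprocess
import sys

def ensure_installed(module, pip_name=None):
    """Install a pip package only when its module is missing, so users
    never hit ModuleNotFoundError at script-load time (hypothetical
    helper; real extensions often use launch.run_pip in install.py)."""
    if importlib.util.find_spec(module) is None:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", pip_name or module])

# Already-importable stdlib module: the check passes and pip never runs.
ensure_installed("json")
```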