klokinator / umi-ai-embeds
Wildcards and Code for the Umi AI Engine
Home Page: https://www.patreon.com/posts/umi-ai-official-73544634
License: The Unlicense
This would make updating and maintenance easier in the future, without having to copy files over every time.
See for context:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Extensions
Example with just a script
https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards
How to manage wildcards.
I can't find Wildcards Manager
I noticed that at https://civitai.com/models/45448?modelVersionId=164799 they recommended switching to this extension, but my wildcards are located in a shared location, and this extension can't find them.
Thanks for the prompt collections! I've played with them a bit, and it seems that when I uncheck the option in question in the WebUI, it only works for the 1st batch.
I ran batches of four. In the 1st batch, images 1.1, 1.2, and 1.3 had different prompts, while image 1.4 had the same one as 1.3. In the 2nd batch, image 2.1 was similar to 1.4 and 1.3; images 2.2, 2.3, 2.4, and 3.1 had a shared prompt; 3.2, 3.3, and 3.4 shared a prompt as well. From batch 4 onwards, the prompt was shared by the entire batch.
I appreciate the new functionality added to the wildcard system, but I don't understand why it comes with nearly 900 anime character embeddings and a bunch of porn wildcards.
I mean, I do understand it, but wouldn't it make more sense to have that as an opt-in thing?
With the embeddings recently added to the repo, it's now a 619.84 MB download to clone, the vast majority of which are embeddings. I use Umi-AI for the (fantastic) wildcard system. The embeddings are not something that I (and presumably a fair amount of other users) am that interested in using, but they are slowing down my automated builds and bloating my Docker images pretty significantly.
Is it possible to use Git LFS to store them, or perhaps move them to a separate repository?
So one user on Discord deleted all the wildcards and started chiselling their own. Valid use case. They complained that tag-autocomplete still suggested the old tags to them. I'm not sure if that's actually the case; I think the tag index is rebuilt on each WebUI start, during the extension install step, but I'm not certain and too tired to check right now.
ToDo later.
I'm using version 12d04605
It is currently possible to generate tensor size mismatch errors during batch generation, if there are negative prompts in the wildcards.
Example:
I have the following static negative prompt:
(abstract paint), (3d, realistic:1.3), (low quality, worst quality:1.4), (bad anatomy), extra digit, fewer digits, (extra arms:1.2), bad hands, by (bad-artist:0.6), bad-image-v2-39000, Asian-Less-Neg
-> this amounts to ~43 tokens according to the A1111 UI.
I also have the following somewhere in my positive prompt:
**(melee weapon, sword, axe, mace, blade, dagger, modern technology, modern weapons, automatic rifle, backpack, cigarette, jetpack, metal, cars:1.2),**
If I added this to the static negative prompt, it would bring it over the 75-token limit (~77 tokens), but I want to keep it in the positive prompt, as it is used as part of a wildcard.
If I run batch generation (batch size > 1), the following error occurs fairly frequently:
RuntimeError: The expanded size of the tensor (77) must match the existing size (154) at non-singleton dimension 0. Target sizes: [77, 768]. Tensor sizes: [154, 768]
I believe the root cause of the issue is indeed that the negative prompts of the individual images in the batch have different tensor sizes. A manual workaround is to insert BREAK at the end of the static negative prompt, so that the dynamically inserted negative prompts always start a new block, regardless of how long the static negative prompt is.
However, I suggest improving the extension to handle such cases. During batch generation, it should be possible to preprocess the negative prompts and align their tensor sizes, e.g. by inserting 'BREAK' at the end of negative prompts that are below the maximum tensor size, until they match.
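The suggested alignment step could be sketched roughly like this. This is my own sketch, not the extension's actual code; it assumes that A1111 conditions prompts in 75-token chunks and that BREAK forces a chunk boundary, and it approximates token counting with a word count (the real fix would use the model's CLIP tokenizer).

```python
# Sketch: pad shorter negative prompts with BREAK so every prompt in a
# batch occupies the same number of 75-token chunks (and thus the same
# tensor size). Assumption: BREAK starts a new chunk, as in A1111.

CHUNK_SIZE = 75  # tokens per conditioning chunk in A1111

def chunk_count(prompt: str) -> int:
    """Approximate how many 75-token chunks a prompt occupies.
    BREAK starts a new chunk, so count each BREAK-separated segment."""
    total = 0
    for segment in prompt.split("BREAK"):
        tokens = segment.split()  # stand-in for the real CLIP tokenizer
        total += max(1, -(-len(tokens) // CHUNK_SIZE))  # ceil division
    return total

def align_negative_prompts(prompts: list[str]) -> list[str]:
    """Pad the shorter negative prompts with BREAK so all prompts in
    the batch condition to the same tensor size."""
    target = max(chunk_count(p) for p in prompts)
    return [p + " BREAK" * (target - chunk_count(p)) for p in prompts]
```

With this, a short static negative prompt in the same batch as one that spills into a second chunk would get one trailing BREAK appended, matching the reported manual workaround.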
Would it be possible to add a prefix or something to change the result of {x-y$$a|b|c|...} to be something like "[a | c | d]" instead of "a , c , d"?
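A minimal sketch of the requested behavior, assuming `{x-y$$a|b|c|...}` means "pick between x and y of the listed options". The `expand` function, its `alternation` flag, and the regex reading of the syntax are my assumptions for illustration, not Umi's actual implementation:

```python
import random
import re

def expand(wildcard: str, alternation: bool = False) -> str:
    """Expand a {x-y$$a|b|c|...} wildcard: pick between x and y of the
    options. With alternation=True, emit the requested '[a | c | d]'
    form instead of the default 'a , c , d'."""
    m = re.fullmatch(r"\{(\d+)-(\d+)\$\$(.*)\}", wildcard)
    lo, hi = int(m.group(1)), int(m.group(2))
    options = m.group(3).split("|")
    picks = random.sample(options, k=random.randint(lo, hi))
    if alternation:
        return "[" + " | ".join(picks) + "]"
    return " , ".join(picks)
```

For example, `expand("{2-2$$a|b|c}", alternation=True)` might return something like `[c | a]`, which A1111 would treat as prompt alternation rather than a plain comma-separated list.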
I'm one of the maintainers of stable-diffusion-webui-extensions. We received a request to add this extension to the index,
but due to the extension's large file size, I'm currently a bit apprehensive about adding it.
For details, see PR AUTOMATIC1111/stable-diffusion-webui-extensions#184.