thereforegames / unprompted

Templating language written for Stable Diffusion workflows. Available as an extension for the Automatic1111 WebUI.

Python 63.81% Jupyter Notebook 18.85% Shell 0.15% CSS 0.06% Batchfile 0.01% JavaScript 0.03% C++ 9.51% Cuda 7.49% C 0.06% Makefile 0.01% HTML 0.01% Dockerfile 0.01%
a1111-stable-diffusion-webui ai-art deep-learning gpt gradio img2img python shortcode stable-diffusion template-engine text2image txt2img wildcards

unprompted's People

Contributors

bsweezy, heavytony2, kylechallis, maikotan, o0oradaro0o, pmajor74, thereforegames, vanhall, webersamuel


unprompted's Issues

Getting Error with Unprompted and Randomize

If I enable both the Unprompted and Randomize extensions, I am unable to do img2img: it always runs zero iterations regardless of the denoising strength I set. This is very weird, as Randomize should have no effect on img2img.

webui: 828438b4a190759807f9054932cae3a8b880ddf1
Unprompted: 76860e6
randomize: 1da87513c0f63c109f939a169d48c78f2451d948

text2mask is generating a mask even when not enabled

I have to uncheck the script at the top, because leaving "show mask"/"auto prompt" unchecked doesn't stop it from running. If I have nothing in the text field and click Generate, the output shows 'gu' in the middle where a word would go,
which produces a vague outline around my one character because it's close to 'guy'.
I'm not sure what causes it to be active at all with those options unchecked.

img2mask Not working after today's Auto1111 update

I am a giant fan of img2mask! I was using it a bunch yesterday, but it stopped working after today's Auto1111 update.

Error running process: C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py
Traceback (most recent call last):
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\lib\shortcodes.py", line 133, in render
    return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context, content))
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\lib\shared.py", line 57, in handler
    return(self.shortcode_objects[f"{keyword}"].run_block(pargs, kwargs, context, content))
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted/shortcodes\stable_diffusion\txt2mask.py", line 149, in run_block
    self.image_mask = get_mask().resize((self.Unprompted.shortcode_user_vars["init_images"][0].width,self.Unprompted.shortcode_user_vars["init_images"][0].height))
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted/shortcodes\stable_diffusion\txt2mask.py", line 102, in get_mask
    img = transform(self.Unprompted.shortcode_user_vars["init_images"][0]).unsqueeze(0)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py", line 94, in __call__
    img = t(img)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py", line 269, in forward
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py", line 360, in normalize
    return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional_tensor.py", line 959, in normalize
    tensor.sub_(mean).div_(std)
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\modules\scripts.py", line 338, in process
    script.process(p, *script_args)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py", line 105, in process
    Unprompted.shortcode_user_vars["prompt"] = Unprompted.process_string(apply_prompt_template(original_prompt,Unprompted.Config.templates.default))
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\lib\shared.py", line 77, in process_string
    string = self.shortcode_parser.parse(string,context)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\lib\shortcodes.py", line 219, in parse
    return stack.pop().render(context)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\lib\shortcodes.py", line 58, in render
    return ''.join(child.render(context) for child in self.children)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\lib\shortcodes.py", line 58, in <genexpr>
    return ''.join(child.render(context) for child in self.children)
  File "C:\Users\username\AUTOMATIC1111\stable-diffusion-webui\extensions\unprompted\lib\shortcodes.py", line 137, in render
    raise ShortcodeRenderingError(msg) from ex
lib.shortcodes.ShortcodeRenderingError: An exception was raised while rendering the 'txt2mask' shortcode in line 1.

Unprompt error with new unprompt_seed function TypeError: unhashable type: 'list'

While trying the new v7 (coming from 5.2), I hit an error that stops Unprompted.

The scenario below is with Unprompted seed = -1.

The code in question was generated by the wizard:

[txt2mask mode="add" show precision=100.0 smoothing=20.0 neg_precision=100.0 neg_smoothing=20.0]face[/txt2mask]

The call fails whether I let the wizard "Auto-Include" the code or insert it into the prompt manually.

----------------Error Trace----------------------------
Error running process: D:\AI\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py
Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
  File "D:\AI\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py", line 342, in process
    random.seed(unprompted_seed)
  File "C:\Users\Ray_3d\AppData\Local\Programs\Python\Python310\lib\random.py", line 167, in seed
    super().seed(a)
TypeError: unhashable type: 'list'
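A defensive normalization on the Unprompted side would sidestep this. Below is a minimal sketch (`safe_seed` is a hypothetical helper, not Unprompted's actual code), assuming the UI sometimes delivers the seed as a one-element list:

```python
import random

def safe_seed(unprompted_seed):
    """Normalize a seed value before handing it to random.seed().

    random.seed() raises TypeError on unhashable types such as lists,
    which appears to be what happens when the UI passes [-1] instead
    of -1. (Hypothetical helper for illustration only.)
    """
    if isinstance(unprompted_seed, list):
        # Take the first element, or fall back to -1 for an empty list.
        unprompted_seed = unprompted_seed[0] if unprompted_seed else -1
    # By convention, -1 means "random seed"; pass None so the RNG
    # seeds itself from system entropy.
    random.seed(None if unprompted_seed == -1 else unprompted_seed)
    return unprompted_seed
```

With this in place, `random.seed(safe_seed(unprompted_seed))`-style call sites cannot hit the unhashable-type error, whatever shape the UI delivers.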

Enabling the extension combines consecutive whitespace into single spaces

To organize my prompts, I like to use newlines to separate different prompt elements: style, subject, etc. When the extension is active, it removes all newlines, which means pasting past generations from PNG Info puts all the prompt elements on one line and makes them unmanageable. I have to add back all the newlines every time.

Another problem I encountered is that the variable syntax for prompt fusion uses newlines to determine the end of string values. Collapsing all newlines into single spaces makes this extension incompatible with prompt fusion. (As the main developer of that extension, and since the variable syntax is very new, I could change the syntax to use e.g. `;`, but I'm not 100% comfortable with that option, as it would make the syntax less user-friendly for non-developers or non-Python users.)

I'm open to changes on the side of the prompt fusion extension repo if it introduces a solution that scales better than the one I suggest here (i.e. preserving consecutive whitespace).
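For reference, collapsing runs of spaces and tabs while leaving newlines alone is a small change; the following is a sketch of the requested behavior, not Unprompted's actual implementation:

```python
import re

def collapse_spaces_preserving_newlines(text: str) -> str:
    """Collapse runs of spaces/tabs into a single space, but keep
    newlines intact so multi-line prompt organization survives
    processing. (Illustrative sketch only.)"""
    # [ \t]+ deliberately excludes \n, unlike \s+ which would eat it.
    return re.sub(r"[ \t]+", " ", text)

prompt = "masterpiece,  best quality\nportrait of a cat\t in winter"
print(collapse_spaces_preserving_newlines(prompt))
```

The key difference from a naive `re.sub(r"\s+", " ", text)` is that the character class excludes `\n`, so only horizontal whitespace is merged.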

Unprompted breaks when batch is used

Unprompted breaks on the third (and further) image in the batch.

Traceback (most recent call last):
  File "E:\sandbox\stable-diffusion-webui\modules\scripts.py", line 327, in process
    script.process(p, *script_args)
  File "E:\sandbox\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py", line 88, in process
    p.all_negative_prompts[i] = Unprompted.process_string(Unprompted.shortcode_user_vars["negative_prompt"])
KeyError: 'negative_prompt'

I believe the bug is on line 85 of scripts/unprompted.py:

					Unprompted.shortcode_user_vars = {} # < here you empty shortcode_user_var
					Unprompted.shortcode_user_vars["batch_index"] = i
					p.all_prompts[i] = Unprompted.process_string(original_prompt)
					p.all_negative_prompts[i] = Unprompted.process_string(Unprompted.shortcode_user_vars["negative_prompt"]) # < and here you use it
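One possible shape of a fix, sketched as a hypothetical helper (the real fix belongs inside unprompted.py and may differ): carry the negative prompt into the freshly cleared dict before anything reads it back out.

```python
def reset_user_vars(shortcode_user_vars, batch_index, original_negative_prompt):
    """Reset per-image shortcode variables without losing the keys
    that later lines depend on. (Hypothetical sketch of a fix for
    the KeyError above, not Unprompted's actual code.)"""
    shortcode_user_vars.clear()
    shortcode_user_vars["batch_index"] = batch_index
    # Re-seed the negative prompt so the subsequent lookup of
    # shortcode_user_vars["negative_prompt"] cannot raise KeyError.
    shortcode_user_vars["negative_prompt"] = original_negative_prompt
    return shortcode_user_vars
```

The essential point is ordering: the dict is emptied and then read on the very next lines, so any key consumed after the reset must be restored as part of the reset.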

About licensing

Hi, I just found this project and it looks pretty cool!

I see the project uses casefy's code (here). Casefy has MIT License, which basically lets you do whatever with the code, but it's still subject to: "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software." Do you think you could add it?

Yes, I'm the author of casefy and I hope this doesn't come across as rude or anything, but I believe developers' work should be recognized, especially when dealing with free software.

Also, I recommend that you add a license to your own project, since the lack of it means that it's copyrighted, which I'm not entirely sure is what you want? This tool can help you decide: https://choosealicense.com/.

[if] not working

The following :

[set format]
    {choose}portrait|landscape{/choose}
[/set]

[if format="portrait"]
    {sets width=512 height=768}
[/if]
[elif format="landscape"]
    {sets width=960 height=512}
[/elif]

does not work at all.

When printing the format variable I do get either "portrait" or "landscape", but the [if]/[elif] branches are never entered.
I tried adding dummy text to print inside the if, but it never prints.

I also tried switching from text to int, with the same result:

[set format]
    {choose}1|2{/choose}
[/set]

[if format=1]
    {sets width=512 height=768}
[/if]
[elif format=2]
    {sets width=960 height=512}
[/elif]

Am I doing something wrong, or is this a bug?

As a side note, when I write the following:

[set format]
    {choose}
        portrait
        landscape
    {/choose}
[/set]

It chooses between "portrait", "landscape" and "" (an empty string).
I feel this is not the intended result either.
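A guess at the cause, sketched in Python: a multi-line [set] block likely stores the surrounding newlines and indentation along with the value, so the equality test in [if] fails, and splitting a multi-line [choose] on newlines yields empty entries. Stripping the stored value would fix both. (This is a hypothesis about Unprompted's internals, not confirmed behavior.)

```python
def block_value(raw: str) -> str:
    """Return a [set] block's content with surrounding whitespace
    removed. (Hypothetical helper illustrating the suspected bug.)"""
    return raw.strip()

# What the multi-line [set format] block plausibly stores:
raw = "\n    portrait\n"

# The raw value compares unequal to "portrait", which would explain
# why [if format="portrait"] is never entered...
assert raw != "portrait"
# ...while the stripped value compares equal.
assert block_value(raw) == "portrait"
```

The same stripping, applied to each candidate in a multi-line [choose] block, would also eliminate the spurious empty-string option.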

full python interpreter via shortcodes/basic/eval.py

(tl;dr: the actual issue is at the very bottom of this post and asks how Unprompted's `context`, in its source code, works.)

The current source code of shortcodes/basic/eval.py suggests replacing return(str(self.Unprompted.parse_advanced(content,context))) with return str(eval(content)) to obtain a Python interpreter. This does not actually work, for reasons I forgot (… sorry). In any case, a full, multi-line Python interpreter (technically a complete REPL, since results contains all intermediate results on a line-by-line basis) can be embedded into a prompt like so (this replaces the contents of eval.py); the following code was blindly copy-pasted and slightly modified from https://stackoverflow.com/a/74224965.

[eval]
var = 0
var # final result is 0
[/eval]
import ast

class Shortcode():
	def __init__(self, Unprompted):
		self.Unprompted = Unprompted
	def run_block(self, pargs, kwargs, context, content):
		# For some reason, `CONTENT' is double-escaped.
		content_without_extra_escape = content.replace('\\n', '\n')
		results = []
		other_context = {}
		for node in ast.parse(content_without_extra_escape).body:
			if isinstance(node, ast.Expr):
				result = eval(compile(ast.Expression(node.value), '<string>', 'eval'), other_context)
				results.append(result)
			else:
				module = ast.Module([node], type_ignores=[])
				results.append(exec(compile(module, '<string>', 'exec'), other_context))
		return results[-1]

This works well. The following is annotated as if by a REPL.

«eval»
def foo():
    'hello'               ## Displays None, but returns <function foo at 0x7fbfb6767a30>
foo.__name__              ## Displays 'foo'

import pprint
pp = pprint.PrettyPrinter(indent = 4)
pp.pprint(None)           ## Displays None, but prints 'None' at the terminal.
pp.pprint(foo)            ## Displays None, but prints '<function foo at 0x7fbfb6767a30>'
                          ## at the terminal.

input('Type something: ') ## Displays Loading…, requests user input at the terminal, and
                          ## finally displays user input; typing 'hi' displays 'hi'. At
                          ## the terminal, hitting C-j to manually enter newlines results
                          ## in extremely strange delayed output over multiple eval's.

pp.pprint(foo())          ## Displays None, but prints 'None' at the terminal. INCORRECT!

x = 0
x                         ## Displays 0, but prints None at the terminal.
pp.pprint(x)              ## Displays None, but prints 0 at the terminal.

y = 0
def bar():
    y = 1
bar()                     ## Displays None, but prints None at the terminal. INCORRECT!
y                         ## Displays 0. Apparently correct in Python (bar()'s y is local), though one might expect 1 after invoking bar()

z = 0
def baz():
    return 1
z = baz()
z                         ## 1 CORRECT

global w
w = 0
def foobar():
    global w
    w = 1
foobar()
w                         ## 1 CORRECT python is so weird
«/eval»

(Like regular Python, this is sensitive to mixing spaces and tabs, but the error message in that case is different and incomprehensible.) This seems to work perfectly. The REPL-esque nature is particularly interesting if the user were to annotate his results as if by [[result_type_0, 'positive prompt'], [result_type_1, 'negative prompt'], [result_type_2, 'configuration']], but that is certainly going far outside the scope of the unprompted extension.

NOW FOR THE ISSUE. How does Unprompted's context work, and are there any suggestions for how variables and functions defined in one `eval` can persist into another? For example, I want to do the following; otherwise I can't use a positive prompt to parameterize a negative prompt or an x/y grid configuration:

«eval»
x = 0
«/eval»
«eval»
x # This fails; x is not defined
«/eval»
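One answer, sketched under the assumption that shortcode objects live for the whole session: keep the eval namespace on the Shortcode instance instead of rebuilding other_context on every call, so names defined in one [eval] survive into the next. This is a variant of the code above, not Unprompted's actual design.

```python
import ast

class Shortcode:
    """Variant of the eval shortcode above with a persistent
    namespace, so variables defined in one [eval] block survive
    into the next. (Sketch; assumes the same run_block signature
    as Unprompted's shortcode objects.)"""

    def __init__(self, Unprompted=None):
        self.Unprompted = Unprompted
        # One namespace for the lifetime of the object, shared
        # across every run_block call.
        self.persistent_context = {}

    def run_block(self, pargs, kwargs, context, content):
        # For some reason, content arrives double-escaped (see above).
        content = content.replace('\\n', '\n')
        results = []
        for node in ast.parse(content).body:
            if isinstance(node, ast.Expr):
                # Bare expressions are evaluated so their value is kept.
                results.append(eval(
                    compile(ast.Expression(node.value), '<string>', 'eval'),
                    self.persistent_context))
            else:
                # Statements (assignments, defs, ...) are executed for
                # their side effects on the persistent namespace.
                module = ast.Module([node], type_ignores=[])
                exec(compile(module, '<string>', 'exec'),
                     self.persistent_context)
                results.append(None)
        return results[-1]
```

With this variant, the two-block example above works: the second «eval» sees the x defined in the first, because both read and write self.persistent_context.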

txt2img2img does not work.

Hi there, unfortunately there seems to be a problem with txt2img2img, or with setting parameters for img2img. This is from the supplied examples/txt2img2img file:

Traceback (most recent call last):
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 117, in render
    return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context))
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted\lib_unprompted\shared.py", line 61, in handler
    return(self.shortcode_objects[f"{keyword}"].run_atomic(pargs, kwargs, context))
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted/shortcodes\stable_diffusion\img2img.py", line 20, in run_atomic
    img2img_result = modules.img2img.img2img(
  File "C:\Apps\stable-diffusion-webui\modules\img2img.py", line 96, in img2img
    assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'
TypeError: '<=' not supported between instances of 'float' and 'NoneType'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Apps\stable-diffusion-webui\modules\scripts.py", line 375, in postprocess
    script.postprocess(p, processed, *script_args)
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py", line 398, in postprocess
    Unprompted.shortcode_objects[i].after(p,processed)
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted/shortcodes\basic\after.py", line 26, in after
    self.Unprompted.process_string(self.Unprompted.parse_alt_tags(content,"after"))
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted\lib_unprompted\shared.py", line 146, in parse_alt_tags
    return(self.shortcode_parser.parse(string,context))
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 219, in parse
    return stack.pop().render(context)
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 58, in render
    return ''.join(child.render(context) for child in self.children)
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 58, in <genexpr>
    return ''.join(child.render(context) for child in self.children)
  File "C:\Apps\stable-diffusion-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 121, in render
    raise ShortcodeRenderingError(msg) from ex
lib_unprompted.shortcodes.ShortcodeRenderingError: An exception was raised while rendering the 'img2img' shortcode in line 1.

Cannot load Config

Error on launch:

(SETUP) Initializing Unprompted object...
(SETUP) Loading configuration files...
Error loading script: unprompted.py
Traceback (most recent call last):
  File "D:\Projects\Tools\stable-diffusion-webui\modules\scripts.py", line 155, in load_scripts
    exec(compiled, module.__dict__)
  File "D:\Projects\Tools\stable-diffusion-webui\scripts\unprompted.py", line 15, in <module>
    Unprompted = Unprompted()
  File "D:\Projects\Tools\stable-diffusion-webui\unprompted\lib\shared.py", line 22, in __init__
FileNotFoundError: [Errno 2] No such file or directory: './unprompted/config.json'

I placed the script files in the root and the unprompted files in the extensions folder. Is this not correct? It seems like a simple path issue.

Dynamic Prompts not populating when Unprompted is enabled

When Unprompted and Dynamic Prompts are installed and you run a prompt using Dynamic Prompts syntax while Unprompted is enabled (using the UI checkbox), the prompts are not being updated for every generated image.

The prompt is being populated once and stays the same (except the seed that iterates +1) for the whole job. Doesn't matter which Batch count or Batch size is chosen.

For example:
With A cat in the __seasons__, where __seasons__ pulls from a text file via Dynamic Prompts, the prompt will be populated with, for example, A cat in the winter. "Winter" then stays locked for the whole job; it is not updated.

If I disable Unprompted using the UI checkbox it starts working again.

So of course I don't know if that is a problem with Unprompted or Dynamic Prompts.

As a potential quick workaround would it be possible to have a parameter for config_user.json that disables Unprompted by default so it can be turned on selectively?

Love the templates and idea behind Unprompted. Keep up the good work!

Reuses mask on batch img2img

As per subject, as far as I can see it is creating a mask once, and then applying that to all images when using "batch img2img" (not to be confused with "batch count" or "batch size" - those are set to 1 here).

Is there some way to force it to regenerate the mask for each image in the batch? Looking at the code, batch img2img doesn't really go through the same steps as a single img2img AFAICT, so this would probably need a code change in Auto1111?

FWIW I also tried it with enhanced-img2img script on the inpaint tab, but same exact problem there it seems.

Or maybe there's some workaround that I'm not seeing?

unprompted fails to install via AUTOMATIC1111's webui

When installing unprompted, either through the list of available extensions or directly via "Install from URL", the following error gets thrown during installation:

GitCommandError: Cmd('git') failed due to: exit code(128)
  cmdline: git clone -v https://github.com/ThereforeGames/unprompted C:\Users\USER\Programs\stable-diffusion-webui\tmp\unprompted
  stderr: 'Cloning into 'C:\Users\USER\Programs\stable-diffusion-webui\tmp\unprompted'...
  POST git-upload-pack (185 bytes)
  POST git-upload-pack (227 bytes)
  Downloading lib/stable_diffusion/clipseg/weights/rd16-uni.pth (1.1 MB)
  Error downloading object: lib/stable_diffusion/clipseg/weights/rd16-uni.pth (61545cd): Smudge error: Error downloading lib/stable_diffusion/clipseg/weights/rd16-uni.pth (61545cdb3a28f99d33d457c64a9721ade835a9dfbda604c459de6831c504167a): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
  Errors logged to C:\Users\USER\Programs\stable-diffusion-webui\tmp\unprompted\.git\lfs\logs\20230112T224734.3787972.log
  Use `git lfs logs last` to view the log.
  error: external filter 'git-lfs filter-process' failed
  fatal: lib/stable_diffusion/clipseg/weights/rd16-uni.pth: smudge filter lfs failed
  warning: Clone succeeded, but checkout failed.
  You can inspect what was checked out with 'git status' and retry with 'git restore --source=HEAD :/' '

(I attempted to fork the repository and clone my fork, but that doesn't seem to use my LFS quota. It looks like it's tied to the original uploader of the file.)

Collapsible menu and advertising

Hello, I like your custom script, but I'd like to suggest making it collapsible, like aesthetic-gradients, because frankly the ad is a little invasive in the UI. I understand the reason for having it and have nothing against it, but I would appreciate having the menu collapsed by default, or being able to collapse it. Do you think that might be feasible? Would you consider it?
Thank you for your attention and consideration.

Permit user to change the "|" choose separator & question on tag delimiters.

Actually, I already did this and it works, but I don't know how to submit a pull request or use anything more complicated than git branch, git fetch, git checkout and git commit, and that's about it. Help would be appreciated in that regard!

Objective: without this change, we cannot use webui's prompt alternation feature. The following should only choose between two options, river and [magma|river]: [choose]river|[magma|river][/choose]. We solve this by using ¦ instead of | for Unprompted's own syntax.

Step 1. Add a new setting to unprompted/config.json: { … "syntax": { "choose_sep":"|", … }, … }

Step 2. It seems the separator is only ever used in one file, so change unprompted/shortcodes/basic/choose.py:9 to: parts = content.replace(self.Unprompted.Config.syntax.n_temp,self.Unprompted.Config.syntax.choose_sep).split(self.Unprompted.Config.syntax.choose_sep)

Elaboration: The webui's "alternating prompt" syntax uses the pipe character between brackets, and the more elaborate forms of prompt weighting, as in webui PR #1273, go even further with pipe-character usage. I use the EURKEY keyboard layout, so C-S-| gives me the Unicode ¦ character very easily, permitting me to shut off my brain and not think about anything else.

Aside: In practice, I also change the tag delimiter syntax things to beg:« end:» altbeg:‹ altend:› so that I don't have to bother thinking about when unprompted's syntax might clash with some extension's or webui's syntaxes. However, this would cause horrible problems if I used other people's template files. Is it possible to specify tag delimiters and other syntax rules at the very top of a file? For example, in emacs a line usually exists like # -*- var: 1; rule: (eval 2); constraint; -*- in the first three lines of a file to provide file-local variables to control things like indentation or syntax highlighting.
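The proposed change boils down to splitting on a configurable separator. A minimal standalone sketch (`split_choices` and its parameter names are hypothetical, loosely mirroring the config keys suggested above):

```python
def split_choices(content, choose_sep="¦", newline_placeholder="\\n"):
    """Split a [choose] block's content on a configurable separator
    instead of a hard-coded "|", leaving webui's own pipe syntax
    untouched. Newline placeholders are treated as separators too,
    mirroring the n_temp substitution in choose.py. (Hypothetical
    sketch of the proposed change.)"""
    parts = content.replace(newline_placeholder, choose_sep).split(choose_sep)
    # Drop empty/whitespace-only entries so the choice set is clean.
    return [p for p in parts if p.strip()]

# webui's [magma|river] alternation survives intact as one option:
print(split_choices("river¦[magma|river]"))
```

Because "|" no longer participates in the split, [choose]river¦[magma|river][/choose] yields exactly the two intended options.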

Video tutorial

Could you make a video tutorial showing us how to use this?

I get an error and it doesn't work.

After updating and running the webui, I get the following error, and prompts from the file do not load properly.

Prompt generated in 0.0 seconds
Error running process: F:\tools\stable-diffusion\extensions\unprompted\scripts\unprompted.py
Traceback (most recent call last):
  File "F:\tools\stable-diffusion\modules\scripts.py", line 347, in process
    script.process(p, *script_args)
  File "F:\tools\stable-diffusion\extensions\unprompted\scripts\unprompted.py", line 218, in process
    setattr(p,att,Unprompted.shortcode_user_vars[att])
AttributeError: can't set attribute 'sd_model'

get shortcode doesn't understand eval shortcode

Working, result is 0.

[sets n=0]
[get n]

Also working, result is 0.

[sets n=0]
[get "n"]

Fails:

[sets n=0]
[get "'n'"]

Fails:

[sets n=0]
[get "{eval}'n'{/eval}"]

I encountered this problem while trying to build a symbol name and then retrieve that symbol:

[sets max0=5]
[sets max1=10]
[sets n="{random _min 0 _max 1}"]
[get "'max' + str(n)"] <-- does nothing

Simpler example that fails:

«sets o0="one¦two"»
«sets o1="cat¦dog"»
«get "{choose}'o0'¦'o1'{/choose}"»

ModuleNotFoundError: No module named 'lib.shared'

After updating the SD Web UI to Commit hash: 8fba733c0906dd3a80c0a3873793cffa4c78ce04 this error message is being displayed in the console, right after the "Launching Web UI with arguments":

Error loading script: unprompted.py
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\scripts.py", line 184, in load_scripts
    module = script_loading.load_module(scriptfile.path)
  File "C:\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "C:\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py", line 18, in <module>
    from lib.shared import Unprompted
ModuleNotFoundError: No module named 'lib.shared'

Unprompted then doesn't appear available in the Web UI.

set sets variable only once

It only affects settings once per generation, without regard to batch count or batch size.
I'd really like to bring in some diversity by modifying various variables, but it looks like that's not there yet.

random_cfg.txt

[choose]
2
3
4
5
6
7
8
9
10
11
12
13
14
15
[/choose]

prompt:
[set cfg_scale][file random_cfg][/set]

Conflict with depthmap script

A different conflict with depthmap than before. I only get this error with the depthmap-script extension when unprompted is also in the extensions folder:

Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "D:\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "", line 883, in exec_module
  File "", line 241, in _call_with_frames_removed
  File "D:\stable-diffusion-webui\extensions\stable-diffusion-webui-depthmap-script\scripts\depthmap.py", line 40, in <module>
    from midas.dpt_depth import DPTDepthModel
  File "D:\stable-diffusion-webui\extensions/stable-diffusion-webui-depthmap-script/scripts\midas\dpt_depth.py", line 5, in <module>
    from .blocks import (
  File "D:\stable-diffusion-webui\extensions/stable-diffusion-webui-depthmap-script/scripts\midas\blocks.py", line 4, in <module>
    from .backbones.beit import (
  File "D:\stable-diffusion-webui\extensions/stable-diffusion-webui-depthmap-script/scripts\midas\backbones\beit.py", line 9, in <module>
    from timm.models.beit import gen_relative_position_index
ModuleNotFoundError: No module named 'timm.models.beit'

Can't set sampler_index with set shortcode due to it passing it as a float

Template text:

[set sampler_index]1[/set]

Exception:

Traceback (most recent call last):
  File "F:\sd\automatic-webui-win11\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "F:\sd\automatic-webui-win11\webui.py", line 55, in f
    res = func(*args, **kwargs)
  File "F:\sd\automatic-webui-win11\modules\txt2img.py", line 48, in txt2img
    processed = process_images(p)
  File "F:\sd\automatic-webui-win11\modules\processing.py", line 423, in process_images
    res = process_images_inner(p)
  File "F:\sd\automatic-webui-win11\modules\processing.py", line 519, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "F:\sd\automatic-webui-win11\modules\processing.py", line 648, in sample
    self.sampler = sd_samplers.create_sampler_with_index(sd_samplers.samplers, self.sampler_index, self.sd_model)
  File "F:\sd\automatic-webui-win11\modules\sd_samplers.py", line 51, in create_sampler_with_index
    config = list_of_configs[index]
TypeError: list indices must be integers or slices, not float
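A likely fix on the Unprompted side, sketched as a hypothetical helper: coerce numeric template values back to int when they are whole numbers before assigning them to attributes like sampler_index, so list indexing in the webui does not receive a float.

```python
def coerce_index(value):
    """Convert a template value to int when it is a whole number,
    otherwise leave it as a float. (Hypothetical sketch; not
    Unprompted's actual parsing code.)"""
    number = float(value)
    return int(number) if number.is_integer() else number

# A whole-number value becomes a valid list index again:
samplers = ["Euler a", "Euler", "LMS"]
print(samplers[coerce_index("1")])
```

The guard on `is_integer()` keeps genuinely fractional settings (e.g. a denoising strength of 0.55) untouched while restoring int semantics for index-like values.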

Error when using negative embeddings in a chance block in Automatic1111's web ui

I'm trying to run this prompt:

positive: solo, adult
negative: [chance 50]NG_DeepNegative_V1_75T,  stretched[/chance]

When I do so, I encounter the following error, and no images are generated:

Data shape for DDIM sampling is (3, 4, 64, 64), eta 0.0
Running DDIM Sampling with 20 timesteps
DDIM Sampler:   0%|                                                                                                        | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(r15arh0wq0a7kc4)', 'solo, adult', '[chance 50]NG_DeepNegative_V1_75T,  stretched[/chance]', [], 20, 17, False, False, 1, 3, 6.5, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.55, 1.5, 'Latent', 0, 0, 0, [], 0, '', '', True, -1.0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/home/foggy/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/foggy/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/foggy/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/home/foggy/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/home/foggy/stable-diffusion-webui/modules/processing.py", line 628, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/home/foggy/stable-diffusion-webui/modules/processing.py", line 828, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/home/foggy/stable-diffusion-webui/modules/sd_samplers_compvis.py", line 158, in sample
    samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
  File "/home/foggy/stable-diffusion-webui/modules/sd_samplers_compvis.py", line 43, in launch_sampling
    return func()
  File "/home/foggy/stable-diffusion-webui/modules/sd_samplers_compvis.py", line 158, in <lambda>
    samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
  File "/home/foggy/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/foggy/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddim.py", line 103, in sample
    samples, intermediates = self.ddim_sampling(conditioning, size,
  File "/home/foggy/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/foggy/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddim.py", line 163, in ddim_sampling
    outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "/home/foggy/stable-diffusion-webui/modules/sd_samplers_compvis.py", line 62, in p_sample_ddim_hook
    unconditional_conditioning = prompt_parser.reconstruct_cond_batch(unconditional_conditioning, self.step)
  File "/home/foggy/stable-diffusion-webui/modules/prompt_parser.py", line 223, in reconstruct_cond_batch
    res[i] = cond_schedule[target_index].cond
RuntimeError: The expanded size of the tensor (154) must match the existing size (77) at non-singleton dimension 0.  Target sizes: [154, 768].  Tensor sizes: [77, 768]

The negative embedding used may be found on Civit.ai.

Similar errors occur regardless of image size, sampler, or use of a VAE or hypernetwork. The error above occurred while using the Anything v3 f32 pruned model, but it's not dependent on the model either.

Some negative prompts change the final line of the error to this:

RuntimeError: The expanded size of the tensor (231) must match the existing size (77) at non-singleton dimension 0.  Target sizes: [231, 768].  Tensor sizes: [77, 768]

I am not able to reproduce that second case at the moment.

The argument to chance appears to be a factor. I cannot reproduce the issue with either [chance 1] or [chance 99].
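For context, [chance N] is a probabilistic gate that keeps its content N% of the time. A plain-Python equivalent of that coin flip (a sketch for illustration, not the extension's actual code):

```python
import random

def chance(percent, rng=random):
    # [chance N] keeps its block N% of the time; this is a plain-Python
    # equivalent of that coin flip (hypothetical sketch, not Unprompted's code).
    return rng.random() * 100 < percent
```

With [chance 1] the block is almost always dropped and with [chance 99] it is almost always kept, which may explain why the tensor-size mismatch only surfaces at intermediate values where the prompt length changes between the conditioning passes.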

Not working

If I put "Photo of a [choose]man|woman[/choose]" in the prompt with the script enabled, nothing happens: the output is just "Photo of a [choose]man|woman[/choose]" used verbatim as the prompt.

txt2mask non-zero padding causes mask to translate

Adding padding to the txt2mask parameters seems to cause the mask to translate/shift away from its starting point. The prompt used was:

[txt2mask smoothing=10 padding=]eyes[/txt2mask] closed eyes, masterpiece, best quality

With 0 denoising strength and a latent nothing fill to show the mask.

Original:
01064-421082352

0 padding:
01070-421082349-padding0

30 padding:
01069-421082349-padding30

50 padding:
01071-421082349-padding50

No module named 'lib.simpleeval'

Hey, since the latest Unprompted update I'm getting this error:

Traceback (most recent call last):
  File "D:\Téléchargement\Super Stable Diffusion 2.0\stable-diffusion-webui\modules\scripts.py", line 184, in load_scripts
    module = script_loading.load_module(scriptfile.path)
  File "D:\Téléchargement\Super Stable Diffusion 2.0\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "D:\Téléchargement\Super Stable Diffusion 2.0\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py", line 19, in <module>
    Unprompted = Unprompted(base_dir)
  File "D:\Téléchargement\Super Stable Diffusion 2.0\stable-diffusion-webui\extensions\unprompted\lib\shared.py", line 42, in __init__
    spec.loader.exec_module(self.shortcode_modules[shortcode_name])
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\Téléchargement\Super Stable Diffusion 2.0\stable-diffusion-webui\extensions\unprompted/shortcodes\basic\eval.py", line 1, in <module>
    from lib.simpleeval import simple_eval
ModuleNotFoundError: No module named 'lib.simpleeval'

I'm using the latest Automatic1111 and have the extensions below:

a1111-sd-webui-tagcomplete
deforum-for-automatic1111-webui
depthmap2mask
sd-dynamic-prompts
sd_dreambooth_extension
stable-diffusion-webui-artists-to-study
stable-diffusion-webui-inspiration
unprompted

I already tried pip installing simpleeval.

Thanks!

[switch] function not working

I tested [switch] as follows:
[set testy]2[/set]

[switch testy]
{case 1}one{/case}
{case 2}two{/case}
{case}three{/case}
[/switch]
It gave no output. The debug log notes that it loads the file, sets "testy" to 2, and then says "result 0:".
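For reference, the expected behavior is equivalent to a dict dispatch; a plain-Python sketch of the template above (illustrative only, not the shortcode's implementation):

```python
def switch(value, cases, default=""):
    # Dict-dispatch equivalent of the [switch]/{case} template above:
    # return the matching case's content, else the default {case} block.
    return cases.get(str(value), default)
```

So with testy set to 2, switch(2, {"1": "one", "2": "two"}, "three") should render "two", whereas the extension currently renders nothing.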

Getting an Error with Unprompted and Dynamic Prompting

Getting this every prompt when I'm using the Dynamic_Prompting extension:

Error running process: J:\SD\ASD\extensions\unprompted\scripts\unprompted.py
Traceback (most recent call last):
File "J:\SD\ASD\modules\scripts.py", line 307, in process
script.process(p, *script_args)
File "J:\SD\ASD\extensions\unprompted\scripts\unprompted.py", line 54, in process
Unprompted.shortcode_user_vars["prompt"] = Unprompted.process_string(original_prompt)
File "J:\SD\ASD\extensions\unprompted\lib\shared.py", line 68, in process_string
string = self.shortcode_parser.parse(string).replace(self.Config.syntax.n_temp," ")
AttributeError: 'list' object has no attribute 'replace'

Is there a way to tell Unprompted to just chill when it's not needed?
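The traceback suggests that when Dynamic Prompts runs first, process_string receives a list of prompt variants rather than a single string, so the .replace() call fails. A defensive guard might look like this (a hypothetical workaround sketch, not the extension's actual fix):

```python
def normalize_prompt(parsed):
    # Dynamic Prompts can hand over a list of prompt variants rather than a
    # single string; coerce to str before calling str methods on the result.
    # (Hypothetical workaround sketch, not Unprompted's actual code.)
    if isinstance(parsed, list):
        parsed = parsed[0] if parsed else ""
    return parsed.replace("\n", " ")
```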

How to use the negative_mask argument for [txt2mask]?

I am using this:
[txt2mask padding=80 smoothing=20 show ]person[/txt2mask], [txt2mask negative_mask show] head [/txt2mask], spiderman
I want to subtract the head mask from the person mask, but negative_mask doesn't seem to subtract anything from the content mask.

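For reference, the subtraction I expected is a pixelwise operation: keep a content pixel only where the negative mask is empty. A stdlib-only sketch of that behavior (masks as 2D lists of 0/255 values; this is what I assume negative_mask is meant to do, not the extension's code):

```python
def subtract_mask(content, negative):
    # Pixelwise subtraction: keep a content pixel only where the negative
    # mask is empty. Masks are 2D lists of 0/255 values -- a stdlib-only
    # sketch of the expected negative_mask behavior.
    return [[c if n == 0 else 0 for c, n in zip(crow, nrow)]
            for crow, nrow in zip(content, negative)]
```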

Ads

Hey, so yeah, can you get rid of the ad? It's annoying. I get it, you need money for food and whatnot, but I already gave you that (I bought the thing it's advertising), so... yeah, can I please not see the ad.

Thanks, -J

[Choose] Tags Don't Select Different Lines Between Batches

I have a template structure like so:

a.txt = "a scene with [file b]"

b.txt = "
[choose]
line1
line2
line3
[/choose]
"

When I run with a batch count > 1, [file b] always selects the same line for every batch.

Increasing depth such that

a.txt = "
[choose]
first scene with [file b]
second scene with [file b]
[/choose]
"

[file a] results in selecting the same line from a.txt and same line from b.txt for each run.

Is this a problem with [choose] or a problem with my setup?
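My guess is that a single seed is reused for the whole batch, so every deterministic pick lands on the same line. A small sketch of the behavior and a hypothetical fix (reseeding per batch index; this is not the extension's actual code):

```python
import random

def choose(lines, seed):
    # Deterministic pick, like [choose] under a fixed Unprompted seed.
    return random.Random(seed).choice(lines)

lines = ["line1", "line2", "line3"]
# One seed reused across the batch -> every image gets the same line
# (the behavior reported above):
same = {choose(lines, 42) for _ in range(4)}
# Reseeding per batch index (hypothetical fix) restores variety:
varied = [choose(lines, 42 + i) for i in range(4)]
```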

txt2img2img isn't working

This was already mentioned here: #53 (comment)

The example [file common/examples/txt2img2img] doesn't work. The following error is printed in the console and the second generated image is not an img2img image, just a normal txt2img of Walter White:

Error running process: D:\stable-diffusion\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py
Traceback (most recent call last):
  File "D:\stable-diffusion\stable-diffusion-webui\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
TypeError: Scripts.process() missing 2 required positional arguments: 'is_enabled' and 'unprompted_seed'

I'm also getting the error below with one of my own prompts. I'm not sure if it's related; the demo template doesn't print this. My script doesn't work either: it generates the source image and the mask, but img2img is not using the mask at all.

Traceback (most recent call last):
  File "D:\stable-diffusion\stable-diffusion-webui\modules\scripts.py", line 404, in postprocess
    script.postprocess(p, processed, *script_args)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\unprompted\scripts\unprompted.py", line 431, in postprocess
    Unprompted.shortcode_objects[i].after(p,processed)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\unprompted/shortcodes\stable_diffusion\txt2mask.py", line 228, in after
    overlayed_init_img = draw_segmentation_masks(pil_to_tensor(p.init_images[0]), pil_to_tensor(self.image_mask.convert("L")) > 0)
AttributeError: 'StableDiffusionProcessingTxt2Img' object has no attribute 'init_images'

The prompt was (replaced actual prompts with <>):

[set negative_prompt]<<negative prompt for a person>>[/set]
[set cfg_scale]7[/set]
[set sampling_steps]40[/set]
[txt2img]<<prompt of a person>>[/txt2img]
[after]
    {sets prompt="<<img2img prompt of a person>>" denoising_strength=0.98 mask_blur=4 sampling_steps=140 cfg_scale=11}
    {txt2mask padding=40 neg_padding=10 smoothing=20 neg_smoothing=10 negative_mask="head" show}person{/txt2mask}
    {img2img}
[/after]

WebUI versions from footer:
python: 3.10.9  •  torch: 1.13.1+cu117  •  xformers: 0.0.16rc425  •  gradio: 3.16.2  •  commit: 91c8d0dc  •  checkpoint: fe4efff1e1

ModuleNotFoundError: No module named 'sentence_transformers'

Wanted to try the img2pez feature, got this:

Error running process: G:\stable-webui\extensions\unprompted\scripts\unprompted.py
Traceback (most recent call last):
File "G:\stable-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 117, in render
return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context))
File "G:\stable-webui\extensions\unprompted\lib_unprompted\shared.py", line 61, in handler
return(self.shortcode_objects[f"{keyword}"].run_atomic(pargs, kwargs, context))
File "G:\stable-webui\extensions\unprompted/shortcodes\stable_diffusion\img2pez.py", line 8, in run_atomic
import lib_unprompted.hard_prompts_made_easy as pez
File "G:\stable-webui\extensions\unprompted\lib_unprompted\hard_prompts_made_easy.py", line 15, in <module>
from sentence_transformers.util import (semantic_search,
ModuleNotFoundError: No module named 'sentence_transformers'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "G:\stable-webui\modules\scripts.py", line 386, in process
script.process(p, *script_args)
File "G:\stable-webui\extensions\unprompted\scripts\unprompted.py", line 378, in process
Unprompted.shortcode_user_vars["prompt"] = Unprompted.process_string(apply_prompt_template(original_prompt,Unprompted.Config.templates.default))
File "G:\stable-webui\extensions\unprompted\lib_unprompted\shared.py", line 84, in process_string
string = self.shortcode_parser.parse(string,context)
File "G:\stable-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 219, in parse
return stack.pop().render(context)
File "G:\stable-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 58, in render
return ''.join(child.render(context) for child in self.children)
File "G:\stable-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 58, in <genexpr>
return ''.join(child.render(context) for child in self.children)
File "G:\stable-webui\extensions\unprompted\lib_unprompted\shortcodes.py", line 121, in render
raise ShortcodeRenderingError(msg) from ex
lib_unprompted.shortcodes.ShortcodeRenderingError: An exception was raised while rendering the 'img2pez' shortcode in line 1.

Error: local variable 'times' referenced before assignment

Hey, thanks for a great extension!

I'm experiencing an error though when running the basic example [file human/main] in AUTOMATIC1111.

Traceback (most recent call last):
  File "/Users/dvagala/temp/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 133, in render
    return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context, content))
  File "/Users/dvagala/temp/stable-diffusion-webui/extensions/unprompted/lib/shared.py", line 57, in handler
    return(self.shortcode_objects[f"{keyword}"].run_block(pargs, kwargs, context, content))
  File "/Users/dvagala/temp/stable-diffusion-webui/extensions/unprompted/shortcodes/basic/choose.py", line 46, in run_block
    for x in range(0, times):
UnboundLocalError: local variable 'times' referenced before assignment

Python 3.9.12
Apple M1 Pro
macOS Ventura 13.0
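The UnboundLocalError pattern is that 'times' is only assigned on some branches of choose.py's argument handling, yet the loop reads it unconditionally. A minimal sketch of the bug and the obvious fix (a default assignment before the branches; this is illustrative, not the shortcode's real code):

```python
def run_block(pargs, kwargs):
    # Minimal reproduction of the reported bug: 'times' was only assigned
    # when the _times argument was present, so "for x in range(0, times)"
    # raised UnboundLocalError otherwise. Fix: default it before branching.
    times = 1
    if "_times" in kwargs:
        times = int(kwargs["_times"])
    return list(range(0, times))
```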

Truncation of Strings

Thanks for the extension. I've been using Dynamic Prompts, but recently I'm exploring Unprompted for more versatile options.

So I've been converting some of my wildcard files over, learning the format as I go, and so far I've noticed some unusual truncation.

For the code

a woman wearing outfit [repeat _times="<random 2>"] and another thing[/repeat]

I receive:

a woman wearing outfi

When it triggers the repeat, there is no truncation.

a woman wearing outfit and another thing

I've also had this happening with [file] choices, even where they are in the middle of a prompt, and so it must be happening before that.

Adding a double space means only the extra space gets truncated, so that's a workaround for me, if perhaps not the ideal behavior.

Thanks for any assistance, and I appreciate the work.

Error with wildcard char

Getting an error when using [file b/places/*]. Below is the error:

Error running process: /home/aniket/stable-diffusion-webui/extensions/unprompted/scripts/unprompted.py
Traceback (most recent call last):
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 117, in render
return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context))
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shared.py", line 50, in handler
return(self.shortcode_objects[f"{keyword}"].run_atomic(pargs, kwargs, context))
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/shortcodes/basic/file.py", line 18, in run_atomic
file = random.choice(files)
File "/usr/lib/python3.10/random.py", line 378, in choice
return seq[self._randbelow(len(seq))]
IndexError: list index out of range

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 117, in render
return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context))
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shared.py", line 50, in handler
return(self.shortcode_objects[f"{keyword}"].run_atomic(pargs, kwargs, context))
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/shortcodes/basic/file.py", line 25, in run_atomic
return(self.Unprompted.strip_str(self.Unprompted.shortcode_parser.parse(file_contents,path),self.Unprompted.Config.syntax.n_temp))
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 219, in parse
return stack.pop().render(context)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 58, in render
return ''.join(child.render(context) for child in self.children)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 58, in <genexpr>
return ''.join(child.render(context) for child in self.children)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 121, in render
raise ShortcodeRenderingError(msg) from ex
lib.shortcodes.ShortcodeRenderingError: An exception was raised while rendering the 'file' shortcode in line 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 117, in render
return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context))
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shared.py", line 50, in handler
return(self.shortcode_objects[f"{keyword}"].run_atomic(pargs, kwargs, context))
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/shortcodes/basic/file.py", line 25, in run_atomic
return(self.Unprompted.strip_str(self.Unprompted.shortcode_parser.parse(file_contents,path),self.Unprompted.Config.syntax.n_temp))
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 219, in parse
return stack.pop().render(context)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 58, in render
return ''.join(child.render(context) for child in self.children)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 58, in <genexpr>
return ''.join(child.render(context) for child in self.children)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 121, in render
raise ShortcodeRenderingError(msg) from ex
lib.shortcodes.ShortcodeRenderingError: An exception was raised while rendering the 'file' shortcode in line 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/aniket/stable-diffusion-webui/modules/scripts.py", line 327, in process
script.process(p, *script_args)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/scripts/unprompted.py", line 58, in process
Unprompted.shortcode_user_vars["prompt"] = Unprompted.process_string(original_prompt)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shared.py", line 71, in process_string
string = self.shortcode_parser.parse(string).replace(self.Config.syntax.n_temp," ")
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 219, in parse
return stack.pop().render(context)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 58, in render
return ''.join(child.render(context) for child in self.children)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 58, in <genexpr>
return ''.join(child.render(context) for child in self.children)
File "/home/aniket/stable-diffusion-webui/extensions/unprompted/lib/shortcodes.py", line 121, in render
raise ShortcodeRenderingError(msg) from ex
lib.shortcodes.ShortcodeRenderingError: An exception was raised while rendering the 'file' shortcode in line 1.
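The root cause in the trace is random.choice raising IndexError on an empty sequence, i.e. the wildcard matched no files. A guarded version might look like this (an illustrative sketch; the "templates" base directory and ".txt" suffix are assumptions, not the extension's exact logic):

```python
import glob
import os
import random

def pick_template(pattern, base_dir="templates"):
    # random.choice raises IndexError on an empty sequence, which is what
    # happens when the wildcard matches no files. Guard the glob result
    # before choosing, and fail with a readable error instead.
    files = glob.glob(os.path.join(base_dir, pattern + ".txt"))
    if not files:
        raise FileNotFoundError(f"no template matches {pattern!r}")
    return random.choice(files)
```

In this case it would surface "no template matches 'b/places/*'" rather than a nested ShortcodeRenderingError.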

NLTK resources aren't downloaded when using the new -nyms features

Using the simple example of [hypernyms]food[/hypernyms] gives the following error. From the announcement, it sounds like the script should download this for me, yeah?
I've tried restarting webui, and I'm on the most recent commits for webui and Unprompted.

Error running process: C:\stable-diffusion\a1-sd-webui\extensions\unprompted\scripts\unprompted.py
Traceback (most recent call last):
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\nltk\corpus\util.py", line 84, in __load
    root = nltk.data.find(f"{self.subdir}/{zip_name}")
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\nltk\data.py", line 583, in find
    raise LookupError(resource_not_found)
LookupError:
**********************************************************************
  Resource wordnet not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('wordnet')

Lora inside [file] is not triggered

Case:
When placed inside an Unprompted file, Lora activation tags stop working.

How to reproduce:

  1. Download a Lora (let's say its name is "2b")
  2. Lock the seed
  3. Generate with the prompt <lora:2b:1>
  4. Create an Unprompted template file (let's call it "2bt.txt") with the same content: <lora:2b:1>
  5. Generate with the prompt [file 2bt]

Expected result:
Both prompts are the same, the images generated are the same.

What actually happens:
Both prompts are the same, but the images are very different.


If you generate without mentioning the Lora in the prompt and then generate with [file 2bt], the images are practically identical, so including a Lora in an Unprompted file does not do anything :(

Feature request: multi-step prompts with user guidance

The idea came from this post. The workflow would be as follows:

  1. generate a set of images using initial settings. It'd be great if users could skip the step
  2. let them pick some of the images and add their own if they wish
  3. run the next step only with images picked/added

couple comments on img2pez

I spent a while playing with the repo. Here are a few observations you might be interested in:

  • 3000 iterations is way more than necessary. I generally stop seeing much improvement after about 200. Beyond that the improvements are all marginal, and if you want more iterations you might as well just re-run it.
  • With a small tweak, you can get it to give you the best candidates as it finds them. That way if you stop it early you don't have to throw all your progress away.
  • You can pass it multiple images and it will optimize a prompt across all of them, which is good for style transfer.
  • If you use the "ViT-H-14"/"laion2b_s32b_b79k" config, the prompts actually work in Midjourney too. Not something you can directly incorporate but maybe worth documenting.
  • A prompt length of 16 tokens works better than 8, according to the paper (and in my experience).
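The second point (keep the best candidate as it is found) can be sketched generically. This is a hedged illustration of the tweak, with score_fn and the candidate stream as placeholders rather than the real img2pez internals:

```python
def optimize(score_fn, candidates, max_iters=200):
    # Track the best candidate as soon as it appears, so stopping the run
    # early never throws progress away -- the tweak described above.
    # score_fn and candidates are placeholders, not the real img2pez API.
    best, best_score = None, float("-inf")
    for i, cand in enumerate(candidates):
        if i >= max_iters:
            break
        score = score_fn(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

With this shape, interrupting the loop at any iteration still leaves the best prompt found so far available.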

Git Command Error Help

When I try to install Unprompted, this is the error I get. Suggestions?

GitCommandError: Cmd('git') failed due to: exit code(128) cmdline: git clone -v https://github.com/ThereforeGames/unprompted D:\Dreambooth 2\stable-diffusion-webui\tmp\unprompted stderr: 'Cloning into 'D:\Dreambooth 2\stable-diffusion-webui\tmp\unprompted'... POST git-upload-pack (185 bytes) POST git-upload-pack (227 bytes) Downloading lib/stable_diffusion/clipseg/weights/rd16-uni.pth (1.1 MB) Error downloading object: lib/stable_diffusion/clipseg/weights/rd16-uni.pth (61545cd): Smudge error: Error downloading lib/stable_diffusion/clipseg/weights/rd16-uni.pth (61545cdb3a28f99d33d457c64a9721ade835a9dfbda604c459de6831c504167a): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access. Errors logged to 'D:\Dreambooth 2\stable-diffusion-webui\tmp\unprompted.git\lfs\logs\20230107T231111.9983515.log'. Use git lfs logs last to view the log. error: external filter 'git-lfs filter-process' failed fatal: lib/stable_diffusion/clipseg/weights/rd16-uni.pth: smudge filter lfs failed warning: Clone succeeded, but checkout failed. You can inspect what was checked out with 'git status' and retry with 'git restore --source=HEAD :/' '

[file] tag does not get a random file from a folder

According to the manual, [file dirname] should get a random file from that directory; however, it produces nothing.

CASE:
I have a following structure in templates:
/pose

  • /standing.txt
  • /sitting.txt

I use a dry run on the following:
[file pose/standing]
It gives me the content of the file standing.

I use dry run on the following:
[file pose]
It gives me empty string.

I tried also:
[file pose/]
Also gives an empty string.

WHAT SHOULD HAPPEN:
According to the documentation, I should have gotten the contents of either pose/standing or pose/sitting.
Quote from the manual:
"If the given path is a directory as opposed to a file, [file] will return the contents of a random file in that directory".
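The documented behavior quoted above can be sketched in plain Python. This is an illustration of what I expected [file] to do, not the shortcode's actual implementation (the "templates" base directory and ".txt" suffix are assumptions):

```python
import os
import random

def read_template(path, base="templates"):
    # Sketch of the documented [file] behavior: if the path resolves to a
    # directory, read a random file inside it; otherwise read the file
    # itself (appending ".txt" when no extension is given).
    full = os.path.join(base, path)
    if os.path.isdir(full):
        choices = [f for f in os.listdir(full) if f.endswith(".txt")]
        full = os.path.join(full, random.choice(choices))
    elif not full.endswith(".txt"):
        full += ".txt"
    with open(full) as fh:
        return fh.read().strip()
```

Under this reading, [file pose] should return the contents of either standing.txt or sitting.txt instead of an empty string.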
