
stablefused's Introduction

StableFused

StableFused is a toy library to experiment with Stable Diffusion, inspired by 🤗 diffusers and various other sources! One of the main reasons I'm working on this project is to learn more about Stable Diffusion and generative models in general. It is my current area of research at university.

Installation

It is recommended to use a virtual environment. You can use venv or conda to create one.

Unix:

python -m venv venv
source venv/bin/activate

Windows:

python -m venv venv
venv\Scripts\activate

For usage, install the package from PyPI.

pip install stablefused

For development, fork the repository, clone it and install the package in editable mode.

git clone https://github.com/<YOUR_USERNAME>/stablefused.git
cd stablefused
pip install -e ".[dev]"

Usage

Check out the examples folder for notebooks 🥰
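If you want a quick taste before opening the notebooks, here is a minimal text-to-image sketch. The TextToImageDiffusion and TextToImageConfig names come from the pipelines mentioned on this page, but treat the exact signatures as a rough sketch; the notebooks are the source of truth:

from stablefused import TextToImageDiffusion, TextToImageConfig

# The checkpoint ID is an assumption; any Stable Diffusion checkpoint
# from the Hugging Face Hub should work similarly.
model = TextToImageDiffusion(model_id="runwayml/stable-diffusion-v1-5")

# Inference parameters are wrapped in a config object.
config = TextToImageConfig(
    prompt="The Renaissance Astronaut, octane render, realistic, 8k",
    num_inference_steps=20,
    guidance_scale=7.5,
)
images = model(config)
images[0].save("astronaut.png")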

Contributing

Contributions are welcome! Note that this project is not a serious implementation of training, inference, or fine-tuning for diffusion models. It is a toy library. I am working on it for fun and experimentation purposes (and because I'm too stupid to modify large codebases and understand what's going on).

As I'm not an expert in this field, I have probably made a lot of mistakes. If you find any, please open an issue or a PR. I'll be happy to learn from you!

Acknowledgements/Resources

The following sources have been very helpful to me in understanding Stable Diffusion. I highly recommend checking them out!

Results

Visualization of diffusion process

Refer to the notebooks for more details and enjoy the denoising process!

Text to Image

These results are generated using the Text to Image notebook.

text_to_image_diffusion.mp4
Image to Image

These results are generated using the Image to Image notebook.

Source image: The Renaissance Astronaut
Prompt 1: High quality and colorful photo of Robert J Oppenheimer, father of the atomic bomb, in a spacesuit, galaxy in the background, universe, octane render, realistic, 8k, bright colors
Prompt 2: Stylistic photorealistic photo of Margot Robbie, playing the role of astronaut, pretty, beautiful, high contrast, high quality, galaxies, intricate detail, colorful, 8k
image_to_image_diffusion.mp4
P.S. The results from Image to Image diffusion don't seem great in my experimentation. It might be some kind of bug in my implementation, which I'll have to look into later...

Text to Video

There is a lot of ongoing research on the generation of videos from text prompts. It is also my current area of research at university. The implementation here is adapted from AnimateDiff.

There is immense potential in developing this kind of technology, and its possible use cases are unlimited: personalized educational content, marketing and advertising, creativity and art, to name a few. Imagine a world where you have your own personal ChatGPT/Bard-like assistant for visual learning: a model that can generate 3Blue1Brown-style videos explaining science topics, or depict a story! Current models are not that capable yet, but this is where we are headed, I think, and it is what my team and I are researching. The future of this technology will be fascinating to witness!

Text to Video

These results are generated using the Text to Video notebook.

Prompt: An astronaut floating in space, interstellar, black background with stars, photorealistic, high quality, 8k
interstellar-astronaut.mp4

Prompt: A mighty pirate ship sailing through the sea, unpleasant, thundering roar, dark night, starry night, high quality, photorealistic, 8k
mighty-ship.mp4

Inpainting

Image inpainting is a technique that aims to fill in missing or damaged parts of an image. It is used to restore or repair images by extrapolating the surrounding information to recreate the missing regions seamlessly.
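To make the mask convention concrete, here is a rough sketch of building a mask with NumPy and PIL. The white-means-repaint convention follows the one commonly used by 🤗 diffusers; check the Inpainting notebook for the exact format expected here:

import numpy as np
from PIL import Image

# A simple rectangular mask: white (255) marks the region the model
# may repaint, black (0) marks pixels to keep untouched.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[128:384, 128:384] = 255
Image.fromarray(mask).save("mask.png")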

These results are generated using the Inpainting notebook.

Inpainting using a fixed mask and different prompts

Prompt 1: Digital illustration of a mythical creature, high quality, realistic, 8k
Prompt 2: Digital illustration of a mythical creature, high quality, realistic, 8k
Prompt 3: Digital illustration of a dragon, high quality, realistic, octane render, 8k
Prompt 4: Digital illustration of a ferocious lion, high quality, realistic, octane render, 8k
Prompt 5: Digital illustration of an evil white rabbit, high quality, realistic, 8k
Prompt 6: Digital illustration of samurai with a moon-like object in the background, high quality, realistic, octane render, 8k

Image | Mask
Infinite Zoom In

Prompt: A painting of a cat, in the style of Vincent Van Gogh, hanging in a room

infinite_zoom_in.mp4
Pan and Zoom Out

Prompt: Post-apocalyptic world with ruins, overgrown vegetation, and a lone survivor

pan_and_zoom.mp4

Understanding the effect of Guidance Scale

Guidance scale is a value inspired by the paper Classifier-Free Diffusion Guidance. How CFG works is out of scope here, but there are many online sources where you can read about it (linked below).

In short, guidance scale is a value that controls the amount of "guidance" used in the diffusion process: the higher the value, the more closely the diffusion process follows the prompt. A lower guidance scale allows the model to be more creative and deviate slightly from the exact prompt. Beyond a certain threshold, however, the results start to get worse: blurry and noisy.

In practice, guidance scale values are usually in the range 6-15, and the default value of 7.5 is used in many inference implementations. However, manipulating it can lead to some very interesting results. It also only makes sense when set to 1.0 or higher, which is why many implementations use a minimum value of 1.0.

But... what happens when we set guidance scale to 0? Or negative? Let's find out!

When you use a negative value for the guidance scale, the model will try to generate images that are the opposite of what you specify in the prompt. For example, if you prompt the model to generate an image of an astronaut and use a negative guidance scale, it will try to generate an image of everything but an astronaut. This can be a fun way to generate creative and unexpected images (sometimes NSFW or absolutely horrendous stuff if you are not using a safety-checker model, which is the case with StableFused).
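For intuition, here is a minimal sketch of how classifier-free guidance combines the unconditional and conditional noise predictions at each denoising step (an illustrative function, not the exact internals used here):

import torch

def apply_cfg(noise_uncond: torch.Tensor, noise_cond: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    # guidance_scale = 1.0 reproduces the conditional prediction,
    # 0.0 falls back to the unconditional one, and negative values
    # push the sample away from the prompt (hence the "opposite" images).
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

This one-liner is why a guidance scale of 0 simply ignores the prompt, while negative values actively move away from it.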

Results

The original images produced are too large to display in high quality here. You can find them in my Drive. These images are compressed from ~30 MB to ~6 MB in order for GitHub to accept uploads.

Effect of Guidance Scale on Different Prompts
Each image is sampled with the same prompt and seed to ensure only the guidance scale plays a role.
Column 1: Artistic image, very detailed cute cat, cinematic lighting effect, cute, charming, fantasy art, digital painting, photorealistic
Column 2: A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k
Column 3: A grand city in the year 2100, atmospheric, hyper realistic, 8k, epic composition, cinematic, octane render
Column 4: Starry Night, painting style of Vincent van Gogh, Oil paint on canvas, Landscape with a starry night sky, dreamy, peaceful
effect-of-guidance-scale-on-different-prompts.mp4
Effect of Guidance Scale with increased number of inference steps
Columns correspond to 3, 6, 12, 20, and 25 inference steps.
Prompt: Photorealistic illustration of a mystical alien creature, magnificent, strong, atomic, tyrannic, predator, unforgiving, full-body image
effect-of-guidance-scale-vs-steps-3.mp4
effect-of-guidance-scale-vs-steps-4.mp4

Latent Walk

Generative models, like the ones used in Stable Diffusion, learn a latent representation of the world: a low-dimensional vector-space embedding. In the case of SD, this latent representation is learned by training on text-image pairs, and it is used to generate samples given a prompt and a random noise vector. The model tries to predict and remove noise from the random noise vector while also aligning the vector with the prompt. This results in some interesting properties of the latent space.

Stable Diffusion models (at least, the models used here) learn two latent representations: one of the NLP space for prompts, and one of the image space. These latent representations are continuous. If we choose two vectors in the latent space to sample from, we get images that are more or less similar depending on how far apart the chosen vectors are. This is the basis of latent walking: we can choose two vectors in the latent space and sample along the path between them, producing a smooth transition between the two images.
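Here is a minimal sketch of latent walking using spherical interpolation (slerp), which preserves the norm of Gaussian latents better than plain linear interpolation (illustrative code, not the exact implementation used here):

import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float) -> np.ndarray:
    # Spherical linear interpolation between two latent tensors.
    dot = np.sum(v0 * v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if abs(theta) < 1e-6:
        # Vectors are nearly parallel: fall back to linear interpolation.
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Walk between two random latents; each point decodes to one frame.
a = np.random.randn(4, 64, 64)
b = np.random.randn(4, 64, 64)
frames = [slerp(a, b, t) for t in np.linspace(0.0, 1.0, num=30)]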

Similar Image Generation by sampling latent space

The results below show just how information-rich the latent space of these Stable Diffusion models is.

Source image and latent walks.
Prompt: Large futuristic mechanical robot in the foreground of a baroque-style battle scene, photorealistic, high quality, 8k
Generating Latent Walk videos
Prompt 1: A dog chasing a cat in a thrilling backyard scene, high quality and photorealistic
Prompt 2: A determined dog in hot pursuit, with stunning realism, octane render
Prompt 3: A thrilling chase, dog behind the cat, octane render, exceptional realism and quality
Prompt 4: The exciting moment of a cat outmaneuvering a chasing dog, high-quality and photorealistic detail
Prompt 5: A clever cat escaping a determined dog and soaring into space, rendered with octane render for stunning realism
Prompt 6: The cat's escape into the cosmos, leaving the dog behind in a scene, high quality and photorealistic style
dog-chasing-cat-story.mp4

Note that these results aren't very good. I tried different seeds, but I couldn't make a great video for this story. I did try some other prompts and got better results, but I like this story so I'm sticking with it 🤓 You can improve the results by using better prompts and increasing the number of interpolation and inference steps.

Future

At the moment, I'm not sure if I'll continue to expand on this project, but if I do, here are some things I have in mind (in no particular order, and for documentation purposes):

  • Add support for more inference techniques: explore new sampling techniques and optimize diffusion paths
  • Implement and stay up-to-date with the latest papers in the field
  • Remove 🧨 diffusers as a dependency by implementing all required components myself
  • Create user-friendly web demos or GUI tools to make experimentation easier
  • Add LoRA, training, and fine-tuning support
  • Improve the codebase, documentation, and tests
  • Improve support not only for Stable Diffusion but for other diffusion techniques, involving but not limited to audio, video, etc.

License

MIT


stablefused's Issues

Broken examples

The recent change to accept inference parameters as class instances in the various diffusion pipelines breaks everything under examples/. To fix the TextToImageDiffusion example, for instance, pass parameters as a TextToImageConfig object instead of as keyword arguments.
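A rough sketch of the change (field names are illustrative; the updated notebooks show the exact config fields):

from stablefused import TextToImageDiffusion, TextToImageConfig

model = TextToImageDiffusion(model_id="runwayml/stable-diffusion-v1-5")

# Before (broken): parameters passed as keyword arguments
# images = model(prompt="...", num_inference_steps=20)

# After: parameters wrapped in a config object
images = model(TextToImageConfig(prompt="...", num_inference_steps=20))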
