Stable Diffusion, the recently released image-generation model by Stability AI, has been showing groundbreaking results and promising new innovations since its open-source public release.
Researchers and developers have been experimenting to figure out all the possible fields where Stable Diffusion can be applied. We have compiled a list of a few of these contributions, available as Google Colab notebooks.
Deforum Stable Diffusion
For generating animations with simple prompts, Deforum created a notebook that allows users to input prompts along with the number of frames, frames per second, zoom, angle, and other such metrics.
Recently, the team released a newer version, v05, that adds options such as perspective 2D flipping, importing custom settings files, custom maths expressions, dynamic video masking, automatic download of models, and weighted prompts.
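Deforum expresses animated parameters like zoom and angle as keyframed schedule strings (for example "0:(1.0), 60:(1.04)"), with values interpolated for the frames in between. The helper below is a minimal sketch of how such a schedule can be expanded per frame; the string format follows Deforum's convention, but `parse_schedule` itself is a hypothetical illustration, not code from the notebook.

```python
def parse_schedule(schedule: str, max_frames: int) -> list[float]:
    """Expand a Deforum-style keyframe string, e.g. "0:(1.0), 60:(1.04)",
    into one value per frame using linear interpolation between keyframes."""
    keyframes = {}
    for part in schedule.split(","):
        frame, value = part.split(":")
        keyframes[int(frame.strip())] = float(value.strip().strip("()"))
    frames = sorted(keyframes)
    values = []
    for i in range(max_frames):
        # nearest keyframes on either side of frame i (clamped at the ends)
        prev = max([f for f in frames if f <= i], default=frames[0])
        nxt = min([f for f in frames if f >= i], default=frames[-1])
        if prev == nxt:
            values.append(keyframes[prev])
        else:
            t = (i - prev) / (nxt - prev)
            values.append(keyframes[prev] + t * (keyframes[nxt] - keyframes[prev]))
    return values
```

With a schedule like "0:(1.0), 4:(2.0)" and 5 frames, this yields an evenly ramped zoom of 1.0, 1.25, 1.5, 1.75, 2.0.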
Check out Deforum here.
Video Killed the Radio Star…Diffusion
Living up to its playful name, this notebook can make an animated music video for you from a YouTube video. It uses OpenAI’s Whisper speech-to-text model to transcribe the lyrics and drives a Stable Diffusion animation with prompts drawn from them.
First, an image is generated from a text prompt based on the lyrics. Subsequent images are then generated as variations of the first. Finally, the images are sequenced, reordered, and organised to create a smooth animation.
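The timing step can be sketched in a few lines: a speech-to-text model such as Whisper returns timestamped text segments, and each animation frame simply takes the prompt of the most recent segment. The function below is a hypothetical illustration of that mapping, not code from the notebook.

```python
def prompts_per_frame(segments, fps, duration):
    """Assign each animation frame the lyric line active at its timestamp.

    segments: list of (start_time_in_seconds, lyric_text), sorted by start
              time, e.g. as produced by a speech-to-text model like Whisper.
    fps: animation frame rate; duration: total length in seconds.
    """
    frames = []
    for i in range(int(duration * fps)):
        t = i / fps
        prompt = segments[0][1]
        for start, text in segments:
            if start <= t:
                prompt = text       # most recent lyric line wins
            else:
                break
        frames.append(prompt)
    return frames
```

Each frame's prompt is then fed to Stable Diffusion, with consecutive frames generated as variations of one another as described above.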
Click here to check out the notebook.
With CLIP guidance, perceptual guidance, and Perlin initial noise, this Colab notebook contains all the tools required to run Stable Diffusion.
The model uses Textual Inversion embeddings from the Hugging Face Hub and also allows loading Midjourney-style images.
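Perlin initial noise replaces the usual Gaussian starting latent with smoother, spatially correlated noise. Below is a self-contained NumPy sketch of classic 2D Perlin noise using the standard gradient-lattice construction; the notebook's actual implementation may differ.

```python
import numpy as np

def perlin_noise_2d(shape, res, seed=0):
    """Classic 2D Perlin noise: random unit gradients on a coarse lattice,
    dot products with corner offsets, then smoothstep interpolation.
    `shape` must be divisible by `res` (the lattice resolution)."""
    rng = np.random.default_rng(seed)
    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    # fractional coordinates of each pixel inside its lattice cell
    grid = np.mgrid[0:res[0]:delta[0], 0:res[1]:delta[1]].transpose(1, 2, 0) % 1
    # random unit gradient vectors at the lattice corners
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # dot products between gradients and offsets to the four cell corners
    n00 = np.sum(grid * g00, 2)
    n10 = np.sum(np.dstack((grid[:, :, 0] - 1, grid[:, :, 1])) * g10, 2)
    n01 = np.sum(np.dstack((grid[:, :, 0], grid[:, :, 1] - 1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[:, :, 0] - 1, grid[:, :, 1] - 1)) * g11, 2)
    # quintic smoothstep, then bilinear blend of the four corner values
    t = 6 * grid**5 - 15 * grid**4 + 10 * grid**3
    n0 = n00 * (1 - t[:, :, 0]) + t[:, :, 0] * n10
    n1 = n01 * (1 - t[:, :, 0]) + t[:, :, 0] * n11
    return np.sqrt(2) * ((1 - t[:, :, 1]) * n0 + t[:, :, 1] * n1)
```

The resulting array lies roughly in [-1, 1] and can be scaled and mixed into the diffusion model's initial latent in place of pure Gaussian noise.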
Click here to visit the Colab notebook.
Prompt Parrot
Prompt Parrot is a fine-tuned GPT-2 model designed to train on a user’s prompts, generate customised prompts in the same style, and render images from them.
The model was recently updated to V2.5, which integrates Stable Diffusion for image generation. It also ships with 185 built-in signature Kyrick prompts that run straight out of the box, along with a streamlined training UX.
To visit the notebook of Prompt Parrot, click here.
Stable Diffusion Interpolation
Two different Stable Diffusion prompts can now be interpolated seamlessly with this notebook. Since the V2.2 update, the interpolated images can also be rendered into a video. The update also adds support for multiple seeds and fixes the blurring issue present in previous releases.
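Prompt interpolation typically works by blending the two prompts' embeddings (and/or initial latents) at a series of mixing weights, often with spherical linear interpolation (slerp) so that intermediate vectors keep a sensible norm. The slerp formula below is standard; how this particular notebook mixes the embeddings is an assumption.

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical linear interpolation between two vectors, t in [0, 1]."""
    dot = np.dot(v0 / np.linalg.norm(v0), v1 / np.linalg.norm(v1))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if np.isclose(theta, 0.0):          # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# e.g. blend two (hypothetical) prompt embeddings over 5 steps:
# frames = [slerp(t, emb_a, emb_b) for t in np.linspace(0, 1, 5)]
```

Rendering one image per interpolation step and concatenating them produces the smooth morphing video the notebook generates.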
To start interpolating images with Stable Diffusion, click here.
This notebook has combined Stable Diffusion and Google’s DreamFusion to create 3D objects from simple text prompts.
Google’s DreamFusion uses a NeRF to create 3D models from the 2D images that Stable Diffusion generates from text prompts.
Click here to check out the training and testing process.
Stable Diffusion DreamBooth Inference
This notebook elaborates on teaching Stable Diffusion new concepts via Google’s DreamBooth. With a set of just 3-5 input images, developers can personalise the model with a new subject or style.
Unlike Textual Inversion, the DreamBooth approach fine-tunes the whole model, which yields better results.
Check out the Colab notebook here.
Combining Stable Diffusion and CraiyonAI, this notebook reinterprets images generated with Craiyon using Stable Diffusion to improve the quality of the output.
The creator of the notebook also dropped a tutorial on how to use the notebook.
Click here for the Colab notebook and here for the GitHub repository.
Seamless Texture Inpainting
Using Stable Diffusion, MetaSemantic released an inpainting tool to generate seamless textures that do not look like they have been repeated at all.
Though the method is built around tiling, users have also been using it to generate symmetrical and abstract images.
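Whether a texture actually tiles can be checked numerically: in a seamless texture, the jump across the wrap-around boundary should be no larger than a typical interior pixel step. The small NumPy diagnostic below is a hypothetical illustration of such a check, not part of the notebook.

```python
import numpy as np

def seam_score(tex):
    """Mean absolute jump across the horizontal and vertical wrap boundaries
    of a 2D texture array. Values near zero indicate a seamless tile."""
    horizontal = np.abs(tex[:, 0] - tex[:, -1]).mean()   # left vs right edge
    vertical = np.abs(tex[0, :] - tex[-1, :]).mean()     # top vs bottom edge
    return (horizontal + vertical) / 2
```

A perfectly periodic pattern scores near zero, while an ordinary non-tiling image scores much higher at the seams.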
Click here for the Colab notebook of this interesting computer graphics application.
Stable Worlds
Apart from generating 3D worlds and 2D images, Stable Diffusion can also be used to create immersive panoramic worlds. Stable Worlds generates images with Stable Diffusion and then stitches them together into seamless panoramas.
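Stitching overlapping generations into one panorama can be done by cross-fading the overlap region between neighbouring images. The NumPy sketch below shows a simple linear blend; the overlap width and blending choice here are assumptions, and the actual notebook may instead rely on Stable Diffusion's own inpainting/outpainting to fill the seams.

```python
import numpy as np

def blend_pair(left, right, overlap):
    """Concatenate two images (H x W arrays) side by side, linearly
    cross-fading the last/first `overlap` columns into each other."""
    alpha = np.linspace(0.0, 1.0, overlap)[None, :]   # 0 = keep left, 1 = keep right
    seam = left[:, -overlap:] * (1 - alpha) + right[:, :overlap] * alpha
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)
```

Applying this pairwise along a sequence of generated views yields one wide, continuous panorama strip.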
Click here to check out the code.