Hands-on with Gemini: Interacting with multimodal AI
Gemini is our natively multimodal AI model capable of reasoning across text, images, audio, video and code. This video highlights some of our favorite intera...
SDXL ComfyUI Stability Workflow - What I use internally at Stability for my AI Art
Since we have released SDXL to the world, I might as well show you how to get the most from the models, as this is the same workflow I use on a daily basis at stability.ai. In this video I cover the basics of using the models to generate your best AI artwork. You will need some of the custom nodes over at Civitai, but you can choose the package that works best for you, as they are all pretty similar.
We will start with a basic workflow, then complicate it with a refinement pass, and then add in another special twist I am sure you will enjoy. #stablediffusion #sdxl #comfyui
Grab some of the custom nodes from Civitai: https://civitai.com/tag/comfyui
Grab the SDXL model from here (OFFICIAL): (bonus LoRA also here)
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
The refiner is also available here (OFFICIAL):
https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0
Additional VAE (only needed if you plan to not use the built-in version)
https://huggingface.co/stabilityai/sdxl-vae
ComfyUI - Getting Started : Episode 2 - Custom Nodes Everyone Should Have
In this tutorial we cover how to install the Manager custom node for ComfyUI to improve our stable diffusion process for creating AI Art. I also cover the nesting of nodes, installing the efficient nodes, and a few other little nuances that will set you up for success as you use this amazing tool.
#comfyUI #stablediffusion
Grab git here if you don't already have it: https://git-scm.com/
Grab the manager custom node from here: https://github.com/ltdrdata/ComfyUI-Manager
Install ComfyUI from this link: https://github.com/comfyanonymous/ComfyUI
Twitter: https://twitter.com/sedetweiler
Threads: https://www.threads.net/sedetweiler
ComfyUI - Getting Started : Episode 1 - Better than AUTO1111 for Stable Diffusion AI Art generation
Today we cover the basics of how to use ComfyUI to create AI art using Stable Diffusion models. This node-based editor is an ideal workflow tool for learning how AI art is generated, and it lets you mess with the internal elements far more than any other AI art interface out there today. #comfyUI #stablediffusion
Install ComfyUI from this link: https://github.com/comfyanonymous/ComfyUI
Twitter: https://twitter.com/sedetweiler
Threads: https://www.threads.net/sedetweiler
NEW Outpaint for ControlNET - Inpaint_only + Lama is EPIC!!!! A1111 + Vlad Diffusion
The new outpainting for ControlNET is amazing! This uses the new inpaint_only + Lama Method in ControlNET for A1111 and Vlad Diffusion. The method is very ea...
Map Bashing - NEW Technique for PERFECT Composition - ControlNET A1111
Map Bashing is a NEW technique for combining ControlNet maps for full control. This allows you to create amazing art and have full artistic control over your AI works. You can define exactly where elements in your image go. At the same time you keep full prompt control, because the ControlNet maps carry no color, daylight, weather, or other information, so you can create many variations from the same composition.
#### Links from the Video ####
Make Ads in A1111: https://youtu.be/LBTAT5WhFko
Woman Sitting https://unsplash.com/photos/b9Z6TOnHtXE
Goose https://unsplash.com/photos/eObAZAgVAcc
Pillar https://www.pexels.com/photo/a-brown-concrete-ruined-structure-near-a-city-under-blue-sky-5484812/
explorer: https://unsplash.com/photos/8tY7wHckcM8
castle: https://unsplash.com/photos/8tY7wHckcM8
mountains https://unsplash.com/photos/lSXpV8bDeMA
Ruins https://unsplash.com/photos/d57A7x85f3w
#### Join and Support me ####
Buy me a Coffee: https://www.buymeacoffee.com/oliviotutorials
Join my Facebook Group: https://www.facebook.com/groups/theairevolution
Join my Discord Group: https://discord.gg/XKAk7GUzAW
A recent Reddit post showcased a series of artistic QR codes created with Stable Diffusion. Those QR codes were generated with a custom-trained ControlNet model. As is typical in the Stable Diffusion community, people quickly figured out how to make QR codes with Stable Diffusion WITHOUT a custom model.
A QR code, short for Quick Response code, is a common way to encode text or a URL in a 2D image. You can typically use your phone’s camera app to read the code.
In this post, you will learn how to generate QR codes like these.
Software
We will use AUTOMATIC1111 Stable Diffusion GUI to create QR codes. You can use this GUI on Google Colab, Windows, or Mac.
You will need the ControlNet extension installed. Follow this tutorial to install it.
If you are using our Colab Notebook, simply select ControlNet at startup.
Generating QR code
You will first need a QR Code. To increase your chance of success, use a QR code that meets the following criteria.
Use a high fault tolerance (error correction) setting of 30%, i.e. level H.
Have a white margin around the QR Code (the quiet zone).
Use the most basic square fill with a black-and-white pattern.
Avoid using generators that introduce a thin white line between black elements.
We will use this QR Code generator in this tutorial.
Step 1: Select the text type and enter the text for the QR code.
Step 2: Set fault tolerance to 30%.
Step 3: Press Generate.
Step 4: Download the QR Code as a PNG file.
Img2img
This method starts by generating an image similar to the QR code using img2img. That alone is not enough to produce a valid QR code, so ControlNet is turned on during the sampling steps to imprint the QR code onto the image. Near the end of the sampling steps, ControlNet is turned off to improve the consistency of the image.
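The on/off scheduling can be sketched as a simple fraction-of-steps check. A minimal sketch (the function name is mine, not A1111's), mirroring how fractional Starting/Ending Control Step settings map onto sampling-step indices:

```python
def controlnet_active(step: int, total_steps: int,
                      start_frac: float, end_frac: float) -> bool:
    """Return True if ControlNet guidance applies at this sampling step,
    mirroring the fractional Starting/Ending Control Step sliders."""
    frac = step / total_steps
    return start_frac <= frac <= end_frac

# With 50 sampling steps and the settings used below (0.23 / 0.9),
# ControlNet is active from step 12 through step 45 inclusive.
active = [s for s in range(50) if controlnet_active(s, 50, 0.23, 0.9)]
print(active[0], active[-1])
```

This is why raising the starting step or lowering the ending step shows more of the prompt: it shrinks the window in which the QR code is imprinted.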
Step-by-step guide
In AUTOMATIC1111 WebUI, navigate to the Img2img page.
Step 1: Select a checkpoint model. We will use GhostMix.
Step 2: Enter a prompt and a negative prompt.
The prompt is quite important to your success. Some prompts blend naturally with your QR Code.
We will use the following prompt.
a cubism painting of a town with a lot of houses in the snow with a sky background, Andreas Rocha, matte painting concept art, a detailed matte painting
And the following negative prompt.
ugly, disfigured, low quality, blurry, nsfw
Step 3: Upload the QR code to the img2img canvas.
Step 4: Enter the following image-to-image settings.
Resize mode: Just resize
Sampling method: DPM++ 2M Karras
Sampling steps: 50
Width: 768
Height: 768
CFG Scale: 7
Denoising strength: 0.75
Step 5: Upload the QR code to ControlNet’s image canvas.
Step 6: Enter the following ControlNet settings.
Enable: Yes
Control Type: Tile
Preprocessor: tile_resample
Model: control_xxx_tile
Control Weight: 0.87
Starting Control Step: 0.23
Ending Control Step: 0.9
Step 7: Press Generate.
You won’t get a functional QR code with every single image; the success rate is about one in four. Generate more images and check them with your phone.
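The step-by-step settings above can also be driven programmatically, which makes batch-generating candidates easier. A minimal sketch, assuming AUTOMATIC1111 is started with the --api flag (exposing /sdapi/v1/img2img); the ControlNet argument names vary with the extension version, and control_xxx_tile is the guide's placeholder for your installed tile model:

```python
import base64

# Prompt and negative prompt from the guide above
PROMPT = ("a cubism painting of a town with a lot of houses in the snow with "
          "a sky background, Andreas Rocha, matte painting concept art, "
          "a detailed matte painting")
NEGATIVE = "ugly, disfigured, low quality, blurry, nsfw"

def build_qr_img2img_payload(qr_png: bytes) -> dict:
    """Assemble a request body mirroring the settings above for
    AUTOMATIC1111's /sdapi/v1/img2img endpoint. The ControlNet unit
    rides along via alwayson_scripts; exact field names depend on the
    installed ControlNet extension version."""
    qr_b64 = base64.b64encode(qr_png).decode()
    return {
        "init_images": [qr_b64],
        "prompt": PROMPT,
        "negative_prompt": NEGATIVE,
        "sampler_name": "DPM++ 2M Karras",
        "steps": 50,
        "width": 768,
        "height": 768,
        "cfg_scale": 7,
        "denoising_strength": 0.75,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": qr_b64,
                    "module": "tile_resample",
                    "model": "control_xxx_tile",  # placeholder: your tile model filename
                    "weight": 0.87,
                    "guidance_start": 0.23,
                    "guidance_end": 0.9,
                }]
            }
        },
    }
```

Posting this payload as JSON to http://127.0.0.1:7860/sdapi/v1/img2img returns base64-encoded images you can decode and scan-test in a loop.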
Tips
QR codes with shorter text have a higher success rate because the patterns are simpler.
Not all QR codes work the same. Some could be marginally working and can only be read at a certain distance.
Some prompts blend more naturally with QR codes. For example, the prompt for generating houses with snow on rooftops you saw previously blends well with QR codes simply because they share similar visual elements.
The parameters that work differ between models and prompts. You may need to adjust the following parameters slightly to blend the QR code and the prompt well.
Denoising strength: Decrease it to make the initial composition follow the QR code more closely. But if you reduce it too much, you will only see the QR code. It is typically set higher than 0.7.
Control Weight: Decrease to show the prompt more.
Starting Control Step: Increase to show the prompt more.
Ending Control Step: Decrease to stop ControlNet earlier so that the QR code and the image blend more naturally.
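Since the working values shift with each model and prompt, one way to dial them in is a small parameter sweep bracketing the defaults above. The ranges here are illustrative starting points, not values from the guide:

```python
from itertools import product

# Candidate values bracketing the settings used in this guide
denoising = [0.70, 0.75, 0.80]
control_weight = [0.80, 0.87, 1.00]
start_step = [0.20, 0.23, 0.27]

combos = list(product(denoising, control_weight, start_step))
print(f"{len(combos)} combinations to render and scan-test")
for d, w, s in combos[:3]:
    print(f"denoise={d} weight={w} start={s}")
```

Render one image per combination, then keep the settings whose outputs both scan reliably and look good.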
Other prompts
Mechanical girl
1mechanical girl,ultra realistic details, portrait, global illumination, shadows, octane render, 8k, ultra sharp,intricate, ornaments detailed, cold colors, metal, egypician detail, highly intricate details, realistic light, trending on cgsociety, glowing eyes, facing camera, neon details, machanical limbs,blood vessels connected to tubes,mechanical vertebra attaching to back,mechanical cervial attaching to neck,sitting,wires and cables connecting to head
ugly, disfigured, low quality, blurry
Denoising strength: 0.75
Control weight: 1
Starting Control Step: 0.23
Ending Control Step: 0.7
Robot
light, futobot, cyborg, ((masterpiece),(best quality),(ultra-detailed), (full body:1.2), 1male, solo, hood up, upper body, mask, 1boy, male focus,white gloves, cloak, long sleeves, spaceship, lightning, hires
ugly, disfigured, low quality, blurry
Denoising strength: 0.75
Control weight: 1
Starting Control Step: 0.24
Ending Control Step: 0.9
Outdoor market
A photo-realistic rendering of a busy market, ((street vendors, fruits, vegetable, shops)), (Photorealistic:1.3), (Highly detailed:1.2), (Natural light:1.2), art inspired by Architectural Digest, Vogue Living, and Elle Decor
ugly, disfigured, low quality, blurry, nsfw
Denoising strength: 0.75
Control weight: 0.87
Starting Control Step: 0.23
Ending Control Step: 0.9
Some interesting reads
A QR Code of a food stall
QR Code of a pool
An excellent summary of Stable Diffusion QR Codes
The post How to make a QR code with Stable Diffusion first appeared on Stable Diffusion Art.
Stable diffusion color grading tutorial. Quick trick!
In this stable diffusion tutorial I'll show you a little trick of how to work with color grading before generating your images. Ultimate stable diffusion gui...
In this outpainting tutorial for Stable diffusion and ControlNet, I'll show you how to easily push the boundaries of Stable diffusion and outpaint or expand your image.
FREE Prompt styles here:
https://bit.ly/3HtFxiR
Support me on Patreon to get access to unique perks! https://bit.ly/3ngMJYv
Chat with me in our community discord: https://bit.ly/3p19ssk
My Weekly AI Art Challenges https://www.youtube.com/playlist?list=PLXS4AwfYDUi7RvFm4K6lKBH_acaZQMKY4
My Stable diffusion workflow to Perfect Images https://youtu.be/4u-Ytioi3DM
ControlNet tutorial and install guide https://youtu.be/vFZgPyCJflE
Famous Scenes Remade by ControlNet AI https://youtu.be/wVbWZ-Ph9lE
LIVE Pose in Stable Diffusion https://youtu.be/uAI_FBK6UPc
Control Lights in Stable Diffusion https://youtu.be/_xHC3bT5GBU
Ultimate Stable diffusion guide https://youtu.be/DHaL56P6f5M
Inpainting Tutorial - Stable Diffusion https://youtu.be/No1_sq-i_5U
The Rise of AI Art: A Creative Revolution https://youtu.be/Ujpr62w7qcU
7 Secrets to writing with ChatGPT (Don't tell your boss!) https://youtu.be/G5pld_ELBI0
Ultimate Animation guide in Stable diffusion https://youtu.be/lztn6qLc9UE
Dreambooth tutorial for Stable diffusion https://youtu.be/Z-hyKADmHmE
5 tricks you're not using in Stable diffusion https://youtu.be/-5TaeHvnVxE
Avoid these 7 mistakes in Stable diffusion https://youtu.be/b8xWjrzTAPY
How to ChatGPT. ChatGPT explained in 1 minute https://youtu.be/APvEaj19Io4
This is Adobe Firefly. AI For Professionals https://youtu.be/TInvekF6NRw
Adobe Firefly Tutorial https://youtu.be/ifnAjKiMVaU
ChatGPT Playlist https://www.youtube.com/playlist?list=PLXS4AwfYDUi7USniHdj0RmGhOgWHitmCN
I'll teach you what you need to know about Inpainting in this Stable diffusion tutorial. Learn how to fix any Stable diffusion generated image through inpain...
Stable diffusion tutorial. ULTIMATE guide - everything you need to know!
In this tutorial I'll go through everything to get you started with #stablediffusion from installation to finished image. We'll talk about txt2img, img2img, ...
Dreambooth tutorial for stable diffusion. Quick, free and easy!
In this tutorial I'll go through #dreambooth for #stablediffusion and how you can train your own stable diffusion model based off of your own images. You can...
[Tutorial] Complete Guide to ControlNet for Stable Diffusion img2img - AiTuts
What exactly is ControlNet and why are Stable Diffusion users so excited about it? Think of Stable Diffusion’s img2img feature on steroids. With regular img2img, you had no control over which parts of the original image you wanted to keep and which parts you wanted to ignore. With ControlNet, you can choose exactly which parts ...
ANYONE can make a cartoon with this groundbreaking technique. Want to learn how? We made a ONE-HOUR, CLICK-BY-CLICK TUTORIAL on http://www.corridordigital.co...