Face Restoration AI

Generative Art
Runway - Online Video Editor | Everything you need to edit video, fast.
Discover advanced video editing capabilities to take your creations to the next level.
Artbreeder
12 awesome free image editing tools to supercharge your DALL·E generations
Free tools to animate images, fix facial details, create videos and more.
Disco Diffusion AI Guide – Eliso's Generative Art Guides
Get Started With Disco Diffusion to Create AI Generated Art
Disco Diffusion is a free tool for creating "AI"-generated art; you can use it to make machine-learning-generated images and videos.
GitHub - filipecalegario/awesome-generative-deep-art
References of Generative Deep Learning tools, works, models, etc.
BIG.art: Using Machine Learning to Create High-Res Fine Art
How to use GLIDE and BSRGAN to create ultra-high-resolution digital paintings with fine details
Into the world of DALL·E 2 and Midjourney enters open-source Disco Diffusion
Google's latest research takes a leap toward resolving diffusion models' image-resolution limits by chaining SR3 and CDM.
DALL·E prompts from Art History, Part 1: Prehistoric to Medieval
Exploring DALL·E’s ability to capture art styles from cave paintings, ancient cultures, and on to the Dark Ages.
GitHub - kuprel/min-dalle
min(DALL·E) is a fast, minimal port of DALL·E Mega to PyTorch.
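A rough usage sketch, based on the project's README at the time of writing (argument names and defaults may have changed, so check the repo before copying this):

```python
# Minimal min(DALL·E) usage sketch. Weights are downloaded to models_root on
# the first run; the exact parameters mirror the README and may differ in
# newer versions of the library.
import torch
from min_dalle import MinDalle

model = MinDalle(
    models_root="./pretrained",
    dtype=torch.float32,
    device="cuda" if torch.cuda.is_available() else "cpu",
    is_mega=True,          # DALL·E Mega weights; set False for the smaller model
)

image = model.generate_image(
    text="a watercolor painting of a lighthouse at dusk",
    seed=42,
    grid_size=2,           # returns a 2x2 grid of samples as one PIL image
)
image.save("lighthouse.png")
```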
AIAIART Lesson #7 - Diffusion Models
Time to dive into diffusion models and see what is going on underneath the magic that is making the news at the moment :) (a toy sketch of the forward noising step is included after the links below).
If you'd prefer a slightly longer, more conversational version, the livestream recording is up here: https://youtu.be/jkSoMlfuUm0 (this also covers a few additional topics thanks to questions from the Twitch chat).
Lesson notebook: https://ift.tt/ngptx7o
Github (includes discord invite) : https://ift.tt/lvEuAy0
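For orientation, here is a toy illustration of the forward (noising) process that diffusion models are built around. It is not taken from the lesson notebook; the linear schedule, tensor shapes, and constants are illustrative placeholders.

```python
# Forward diffusion: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * noise.
# A denoising network is then trained to predict `noise` from x_t and t.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)             # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) for a batch of images x0 and timesteps t."""
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)   # broadcast over (C, H, W)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

x0 = torch.rand(4, 3, 64, 64) * 2 - 1             # fake batch of images in [-1, 1]
xt = q_sample(x0, t=torch.randint(0, T, (4,)))
print(xt.shape)                                    # torch.Size([4, 3, 64, 64])
```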
AIAIART Lesson #5
Welcome back to AIAIART! This lesson recaps some of the core ideas from part 1 (lessons 1-4) and sets us up for the next few weeks, where we'll look into some advanced new techniques like transformers for image synthesis and the recently famous diffusion models.
The live stream of this lesson ran a little long and had a couple of technical hiccups, so this video is a re-recording that gives a higher-level summary. If you'd prefer to follow along with the full-length video in which I actually run all the code and explain in more depth, that video is up on https://ift.tt/bMYi7rx and as an unlisted video here: https://youtu.be/BHkLbzspdt8
See links to past lessons and our Discord where you can ask questions and share your projects via the aiaiart github repository: https://ift.tt/AKGc1ZW
The colab link for lesson 5: https://ift.tt/IKPSQ63
AIAIART Lesson 7.5
Informal chat where we read through a few papers and look at a recent project.
No central notebook for this lesson, but some resources we'll be talking about:
- A brief shoutout to https://multimodal.art/ as a great way to keep up with things
- CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers (https://ift.tt/QgkA92Y)
- Denoising Diffusion GAN (https://ift.tt/J4wdPlS)
- My project, CLOOB Conditioned Latent Denoising Diffusion GANs (https://ift.tt/WIfpH7a) and specifically the demo notebook (https://ift.tt/xmiLsDy)
I forgot to mention a few things:
1) If you're curious how I organised the code into that library with nice docs and such: check out NBDev.
2) The demo grids shown run from no conditioning (left) to fairly extreme conditioning (right) using classifier-free guidance (a rough sketch of that guidance step follows below).
-- Watch live at https://ift.tt/bMYi7rx
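A rough sketch of the classifier-free guidance step behind those grids: the model's conditional and unconditional noise predictions are blended, and the guidance scale sweeps from 0 (unconditional, left of the grid) to large values (heavily conditioned, right of the grid). `model`, `x_t`, `t`, `cond`, and `uncond` are placeholders, not names from the project's codebase.

```python
# Classifier-free guidance: eps = eps_uncond + w * (eps_cond - eps_uncond).
import torch

def guided_noise_prediction(model, x_t, t, cond, uncond, guidance_scale: float):
    eps_uncond = model(x_t, t, uncond)   # prediction with "empty" conditioning
    eps_cond = model(x_t, t, cond)       # prediction with the real conditioning
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# guidance_scale = 0 reproduces the unconditional model; guidance_scale = 1 is
# plain conditional sampling; larger values push samples harder toward the
# conditioning at the cost of diversity.
```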
AIAIART Lesson 6
AIAIART Lesson 6, diving into transformer models and their application to image synthesis. We'll start by playing with text generation and build all the way up to creating our own version of the original 'dall-e' model for text-to-image synthesis (a toy sampling loop of that flavour is sketched after the links below).
Github: https://ift.tt/lvEuAy0
The Colab notebook for this lesson: https://ift.tt/Ejvemku
The original live-streamed version of this lesson isn't actually all that much longer than this video, coming in at just over an hour! You can find it on https://ift.tt/bMYi7rx, and I'll also upload it to YouTube at some point and update this description then.
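As a rough sketch of the kind of autoregressive generation the lesson starts from: a causal model maps a token sequence to next-token logits, and tokens are sampled one at a time. `model` is a stand-in for any such network, not code from the lesson notebook.

```python
# Toy autoregressive sampling loop with temperature.
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample(model, prompt_ids: torch.Tensor, max_new_tokens: int = 50,
           temperature: float = 1.0) -> torch.Tensor:
    ids = prompt_ids                                  # shape (1, seq_len), token ids
    for _ in range(max_new_tokens):
        logits = model(ids)[:, -1, :]                 # logits for the next token
        probs = F.softmax(logits / temperature, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=1)        # append and continue
    return ids

# The same loop drives a DALL·E-style model: after the text tokens, the sampled
# tokens index a learned image codebook instead of a text vocabulary.
```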
Text2Art
Generate your own art from text with AI technologies!
Pollinations.AI
Make generative art
Generating AI "Art" with VQGAN+CLIP | Adafruit Learning System
Hands-on neural networks for mere mortals