Stable Diffusion is a text-to-image machine learning model created by Stability AI in conjunction with EleutherAI and LAION; it produces digital images from descriptions in natural language.
AI-produced art has been around for a while. However, software released this year (DALL-E 2, Midjourney AI, and Stable Diffusion) has allowed even the most inexperienced artists to produce intricate, abstract, or photorealistic compositions by merely typing a few words into a text box.
What is Stable Diffusion AI art generator?
Stable Diffusion is an open-source AI art generator released on August 22, 2022 by Stability AI. It is written in Python, and it is a latent diffusion model that uses a transformer-based text encoder to interpret prompts. It can run on any operating system that supports CUDA.
Thanks to the open-source Stable Diffusion image synthesis model, anyone with a PC and a respectable GPU can create practically any visual reality they can envision. If you give it a descriptive sentence, it can mimic almost any visual style, and the results magically appear on your screen.
Unlike approaches like DALL-E, Stable Diffusion makes its source code available. Its license forbids certain dangerous use cases.
The model has drawn criticism on AI-ethics grounds: detractors note that it can be used to produce deepfakes, and they question whether it is permissible to generate images with a model trained on a dataset that contains copyrighted content used without the original creators’ permission.
A subset of the LAION-Aesthetics V2 dataset served as the training set for Stable Diffusion. Training used 256 Nvidia A100 GPUs and cost roughly $600,000.
The business underlying Stable Diffusion, Stability AI, is in discussions to raise money at a valuation of up to $1 billion as of September 2022.
There are a lot of use cases for artificial intelligence in everyday life.
Are you scared of AI jargon? We have already created a detailed AI glossary for the most commonly used artificial intelligence terms and explained the basics of artificial intelligence as well as the risks and benefits of artificial intelligence for organizations and others.
Stable Diffusion download requirements
As of 2022, Stable Diffusion runs on a typical gaming PC, but not on your phone or most laptops. These are the Stable Diffusion download requirements that you need to fulfill:
- A GPU with at least 6 gigabytes (GB) of VRAM
- This includes most modern NVIDIA GPUs
- 10GB (ish) of storage space on your hard drive or solid-state drive
- The Miniconda3 installer
- The Stable Diffusion files from GitHub
- The Latest Checkpoints (Version 1.4, as of the time of writing, but 1.5 should be released soon)
- The Git Installer
- Windows 8, 10, or 11
- Stable Diffusion can also be run on Linux and macOS
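If you're not sure whether your GPU meets the VRAM requirement above, Nvidia's nvidia-smi utility (installed alongside the GPU driver) can report it. This is just a quick check, not part of the installation itself:

```shell
# Print the GPU name and total VRAM (requires an Nvidia driver to be installed)
nvidia-smi --query-gpu=name,memory.total --format=csv
```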
If you don’t have the hardware, you can use Midjourney AI or other web-based AI generators.
How to run Stable Diffusion AI?
- Install Git
- Install Miniconda3
- Download the Stable Diffusion GitHub repository and the Latest Checkpoint
Is it as simple as it sounds? Not quite.
Git is a tool that enables programmers to control various iterations of the software they’re creating. They can let other developers contribute to the project while simultaneously maintaining several versions of the software they’re working on in a common repository.
Git offers an easy way to access and download these projects if you’re not a developer. Therefore, we’ll use it in this situation. Installing Git requires running the Windows x64 installer that can be downloaded from the Git website.
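As an aside, once Git is installed, an entire repository can be fetched with a single command. The URL below is the official CompVis repository that hosts Stable Diffusion, though this guide downloads the ZIP from GitHub instead:

```shell
# Clone the Stable Diffusion repository (an alternative to the ZIP download used below)
git clone https://github.com/CompVis/stable-diffusion.git
```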
While the installer runs, you’ll be presented with several options; keep them all set to the default values. One page is crucial: “Adjusting Your PATH Environment.” On it, “Git From The Command Line And Also From 3rd-Party Software” should be the selected option.
Stable Diffusion uses several different Python libraries. Don’t worry too much if you don’t know much about Python; suffice it to say that libraries are software packages your computer can use to carry out particular tasks, like altering an image or performing complex math.
In essence, Miniconda3 is a convenience tool. It lets you manage all the libraries Stable Diffusion needs without a lot of manual work. It is also how we will run Stable Diffusion in practice.
Get the most recent installation by visiting the Miniconda3 download page and selecting “Miniconda3 Windows 64-bit.”
Once it has been downloaded, double-click the executable to launch the installer. Miniconda3’s installer has fewer pages to click through than Git’s, but you should be careful with this choice:
Before selecting the next button and completing the installation, ensure that “All Users” is selected.
After setting up Git and Miniconda3, your computer will ask you to restart.
Download the Stable Diffusion GitHub repository and the Latest Checkpoint
Now that the necessary software has been set up, we can download and install Stable Diffusion.
Download the latest checkpoint first. You must create an account to download it, but all it requires is a name and email address.
The “sd-v1-4.ckpt” link will launch the download. Although “sd-v1-4-full-ema.ckpt,” the other file, is roughly twice as large, it might yield better results. Either can be used.
We now need to set up a couple of folders where we can unpack the Stable Diffusion files. Click the Start button, type “miniconda3” into the search bar, then select “Open” or press Enter.
Using the command line, we’ll make a folder called “stable-diffusion” at the root of the C: drive. Paste the following code block into the Miniconda3 window and press Enter.
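The command block referenced above appears to be missing from the text; based on the folder names used in the later steps, it was most likely this pair of Windows commands (a reconstruction, not verbatim):

```shell
cd C:\
mkdir stable-diffusion
```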
If everything goes according to plan, the folder will be created without any error messages. We’ll need Miniconda3 again in a moment, so keep the window open.
Open the “stable-diffusion-main.zip” ZIP archive that you downloaded from GitHub in your preferred file archiver. If you don’t have one, Windows can open ZIP files on its own. While keeping the ZIP file open in the first window, open a second File Explorer window and navigate to the “C:\stable-diffusion” folder we just created.
Drag and drop the “stable-diffusion-main” folder from the ZIP archive into the “stable-diffusion” folder.
Return to Miniconda3 and paste the commands below into the window:
cd C:\stable-diffusion\stable-diffusion-main
conda env create -f environment.yaml
conda activate ldm
mkdir models\ldm\stable-diffusion-v1
Don’t halt the procedure. Some of the files are larger than a gigabyte, so the download can take some time. If you mistakenly interrupt the process, delete the “ldm” folder in “C:\Users\(Your User Account)\.conda\envs” and rerun conda env create -f environment.yaml.
We’ve reached the installation’s last phase. Copy and paste the checkpoint file (sd-v1-4.ckpt) into the “C:\stable-diffusion\stable-diffusion-main\models\ldm\stable-diffusion-v1” folder using File Explorer.
After the file has finished transferring, choose “Rename” from the context menu when you right-click “sd-v1-4.ckpt.” To rename the file, enter “model.ckpt” in the highlighted area and press Enter.
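If you’d rather stay in the terminal, the same rename can be done from the Miniconda3 window with Windows’ built-in ren command (paths taken from the steps above):

```shell
cd C:\stable-diffusion\stable-diffusion-main\models\ldm\stable-diffusion-v1
ren sd-v1-4.ckpt model.ckpt
```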
And with that, we are done.
We are now ready to use stable diffusion. But how?
How to use the Stable Diffusion AI art generator?
- Activate the ldm environment
- Change the directory
- Run txt2img.py with your text prompt
- Wait for the process to finish
- Check the results
How does Stable Diffusion work in practice? Each time you wish to use Stable Diffusion, you must first activate the ldm environment we built. In the Miniconda3 window, type conda activate ldm and press Enter. The (ldm) on the left-hand side indicates that the ldm environment is active.
Before creating any images, we must first change the directory (hence the cd command) to “C:\stable-diffusion\stable-diffusion-main”. Enter cd C:\stable-diffusion\stable-diffusion-main on the command line.
We will use a script called txt2img.py to turn text prompts into 512×512 images.
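A typical invocation looks like the following. The prompt text is whatever you want to render, and the flags shown (the --plms sampler, five batches of one sample each) are the commonly used options from the CompVis repository; check its README in case they have changed:

```shell
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms --n_iter 5 --n_samples 1
```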
Your console will show you a progress bar as it creates the images.
All generated images are saved to “C:\stable-diffusion\stable-diffusion-main\outputs\txt2img-samples\samples”.
Stable Diffusion examples
These are some of the best Stable Diffusion examples:
What is Lexica art?
Lexica is a search engine and gallery for artwork produced with Stable Diffusion. Alongside each image, it shows the prompt used to generate it, which makes it a handy place to find prompt ideas.
How big is the Stable Diffusion AI art generator?
You need 10GB (ish) of storage space on your hard drive or solid-state drive.
Is stable diffusion open source?
Yes. Stable Diffusion became open source on August 22. Its images are produced by a neural network trained on millions of photographs downloaded from the Internet.
Stability AI’s open-source image-generating model is comparable to DALL-E 2 in quality. The company also launched DreamStudio, a for-profit website that sells compute time for creating images with Stable Diffusion. Unlike DALL-E 2, anybody can use Stable Diffusion, and because the code is open source, projects can build on it with few limitations.
There is a new sheriff in town. DALL-E and Midjourney AI just got a new competitor, and most importantly, it is free. We will see how the AI art generator wars play out.
Artificial intelligence careers are hot and on the rise, along with data architects, cloud computing jobs, data engineer jobs, and machine learning engineers. Check out the best master’s in artificial intelligence and improve your skillset.