By gerogero
Updated: November 16, 2024
In August 2024, a team of ex-Stability AI developers announced the formation of Black Forest Labs and the release of their first model, FLUX.1: a 12-billion-parameter text-to-image model built on a novel transformer architecture.
Reception to the model was overwhelmingly positive. Users were shocked by how good the image quality and prompt adherence were.
What makes Flux.1 all the more special is that it is an open-weight model, so the community can build on top of it by training custom models and LoRAs with different capabilities.
Flux.1 launched in three variants: Pro (API-only), Dev (open weights under a non-commercial license), and Schnell (a faster, distilled version under Apache 2.0).
Described by many AI enthusiasts as “the model we’ve been waiting for” (especially after the disappointment of SD3), Flux has been embraced with open arms. Image fidelity, prompt adherence, and overall image quality are exceptional, setting a new standard in the text2img landscape.
Just take a look at these examples:
And let’s not forget the NSFW capabilities (more NSFW prompts here):
You can use FLUX.1 for free (limited use) on Hugging Face. Here are the generators for the Dev and Schnell models.
You can also run FLUX.1 on Replicate.com: Dev and Schnell (also limited free use).
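If you’d rather call Replicate from code than from the website, here’s a minimal sketch using Replicate’s official Python client. The model slug and the exact shape of the output are assumptions based on Replicate’s listing at the time of writing, so treat it as a starting point rather than a guaranteed recipe.

```python
# Hedged sketch: generating with FLUX.1 [schnell] via Replicate's Python client.
# Assumes `pip install replicate` and a REPLICATE_API_TOKEN set in your environment.
# The "black-forest-labs/flux-schnell" slug reflects Replicate's current listing and may change.
import replicate

output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={
        "prompt": "a cozy cabin in a snowy black forest, golden window light",
    },
)
print(output)  # typically a list of URLs / file outputs for the generated image(s)
```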
Currently, there are a couple of options for local generation, depending on your hardware!
At the time of writing, there is no Automatic1111 support.
Let’s take a look at our options:
Here’s the download link:
https://github.com/mcmonkeyprojects/SwarmUI
Follow the instructions, which are repeated here:
Note: if you’re on Windows 10, you may need to manually install git and DotNET 8 first (on Windows 11, this is automated).
The installer should finish by offering to download the SDXL Base model.
To start it, double-click the “Launch-Windows.bat” file. It will have also put a shortcut on your desktop, unless you told it not to.
Try creating an image with the XL model. If that works, great!
Download the Flux model from here:
Also download the corresponding “ae.safetensors” file for whichever model you choose.
Put your chosen FLUX file in your unet folder:
SwarmUI\Models\unet
Then put the “ae.safetensors” file in your VAE folder:
SwarmUI\Models\VAE
Close the app, both the browser and the console.
Restart Swarm with the Launch-Windows.bat file.
You should now be able to select Flux as the model; try creating an image.
It will tell you it is in the queue.
You will have to wait, because Swarm is downloading large files. You can check progress in the console.
When the download is complete, your first image should start to appear!
Flux.1 launched with day-1 ComfyUI support, making it one of the quickest and easiest ways to dive into generating with the original Black Forest Labs models. To get started using Flux with ComfyUI, you’ll need the following components:
| Flux.1 Models | HF Link | Civitai Link |
|---|---|---|
| ae.safetensors (VAE, required) | Black Forest Labs HF Repository | |
| flux1-dev.safetensors | Black Forest Labs HF Repository | Civitai Download Link |
| flux1-schnell.safetensors | Black Forest Labs HF Repository | Civitai Download Link |
| clip_l.safetensors | Comfyanonymous HF Repository | |
| t5xxl_fp16.safetensors | Comfyanonymous HF Repository | |
| t5xxl_fp8_e4m3fn.safetensors | Comfyanonymous HF Repository | |
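If you prefer scripting the downloads over clicking through the links above, here’s a hedged sketch using huggingface_hub. The repo IDs and the ComfyUI folder layout are assumptions based on the table above; note that FLUX.1-dev is a gated repository, so you’ll need to accept its license on Hugging Face and log in (e.g. via `huggingface-cli login`) first.

```python
# Hedged sketch: fetching the ComfyUI components with huggingface_hub.
# Repo IDs and folder names are assumptions based on the links in the table above.
from huggingface_hub import hf_hub_download

COMFY = "ComfyUI/models"  # adjust to your ComfyUI install path

# Diffusion model -> models/unet (use the schnell file instead if that's your pick)
hf_hub_download("black-forest-labs/FLUX.1-dev", "flux1-dev.safetensors",
                local_dir=f"{COMFY}/unet")

# VAE -> models/vae
hf_hub_download("black-forest-labs/FLUX.1-dev", "ae.safetensors",
                local_dir=f"{COMFY}/vae")

# Text encoders -> models/clip (swap in t5xxl_fp8_e4m3fn.safetensors if VRAM is tight)
for fname in ("clip_l.safetensors", "t5xxl_fp16.safetensors"):
    hf_hub_download("comfyanonymous/flux_text_encoders", fname,
                    local_dir=f"{COMFY}/clip")
```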
⚠️ If you’re unable to run the ‘full fat’ official models, creator Kijai has released compressed fp8 versions of both flux1-dev and flux1-schnell. While there might be some reduction in image quality, these versions make it possible for users with less available VRAM to generate with Flux.
| Flux.1 Models | HF Link |
|---|---|
| flux1-dev-fp8.safetensors | Kijai HF Repository |
| flux1-schnell-fp8.safetensors | Kijai HF Repository |
You’ll also need a basic text-to-image workflow to get started. The download link below provides a straightforward setup with some solid pre-set options. Additionally, we’ve included a LoRA Loader (bypassed by default), as we’re already seeing the first Flux LoRAs appearing on Civitai!
civitai_flux_t2i_workflow (download)
One of our favorite interfaces, Forge, has received Flux support in a surprise Major Update! If you’re familiar with Automatic1111’s interface, you’ll be right at home with Forge; the Gradio front-ends are practically identical.
Forge supports the original Flux models and text encoders listed above for ComfyUI.
To use the full models and text encoders, Forge provides new fields where the models and encoders can be loaded:
Forge creator lllyasviel has released a “compressed” NF4 model, which is currently the recommended way to use Flux with Forge: “NF4 is significantly faster than FP8 on 6GB/8GB/12GB devices and slightly faster for >16GB vram devices. For GPUs with 6GB/8GB VRAM, the speed-up is about 1.3x to 2.5x (pytorch 2.4, cuda 12.4) or about 1.3x to 4x (pytorch 2.1, cuda 12.1).”
| Model | HF Link |
|---|---|
| flux1-dev-bnb-nf4-v2.safetensors | lllyasviel HF Repository |
| flux1-dev-fp8.safetensors | lllyasviel HF Repository |
Note: if your GPU supports a CUDA version newer than 11.7, you can use the NF4 model (most RTX 30xx and 40xx GPUs do).
If your GPU is a GTX 10xx or 20xx series card, it may not support NF4; in that case, use the fp8 model.
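If you’re not sure which CUDA build and GPU you’re running, here’s a tiny sketch that prints both so you can apply the rule of thumb above. It assumes PyTorch is installed (Forge ships with it); the “newer than 11.7 supports NF4” threshold is taken from the note above, not verified by the script.

```python
# Quick sanity check before choosing NF4 vs fp8 in Forge.
# Reports the CUDA version your PyTorch build uses and the detected GPU;
# compare the CUDA version against the >11.7 rule of thumb mentioned above.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("PyTorch CUDA build:", torch.version.cuda)
else:
    print("No CUDA-capable GPU detected; Flux will be extremely slow on CPU.")
```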