By gerogero
Updated: April 4, 2025
This document provides a comprehensive and up-to-date introduction to the NoobAI-XL model.
NoobAI-XL is a Text-to-Image diffusion model developed by Laxhar Dream Lab and sponsored by Lanyun.
The model is licensed under fair-ai-public-license-1.0-sd, which carries some restrictions (see the NoobAI-XL model license). The model is based on the SDXL architecture; its base model is Illustrious-xl-early-release-v0. It was trained on the complete Danbooru and e621 datasets (about 13 million images) for many epochs, giving it rich knowledge and excellent performance.
NoobAI-XL has a huge amount of knowledge: it can reproduce the styles of tens of thousands of 2D characters and artists, recognizes a large number of special 2D concepts, and has rich furry knowledge.
NoobAI-XL provides both noise prediction (also called epsilon prediction) and V-prediction versions. In short, the noise prediction version generates more diverse and creative images, while the V-prediction version follows prompts more closely and generates images with a wider color gamut and stronger lighting.
NoobAI-XL has an increasingly rich ecosystem of community support, including various LoRAs, ControlNets, IP-Adapters, and so on.
NoobAI-XL comprises a series of models, mainly noise prediction and V prediction variants, which are described in detail later.
Before reading this section, readers should already understand the basic usage of an image generation UI such as WebUI, ComfyUI, Forge, or reForge. Otherwise, readers can learn from here or from the Internet (such as Bilibili).
| Site | Link |
| --- | --- |
| CivitAI | Click here |
| LiblibAI | Click here |
| Huggingface | Click here |
If you don’t know which model to download, you can browse here.
NoobAI-XL models are divided into two categories: noise prediction (epsilon prediction, abbreviated eps-pred) models and V prediction (v-prediction, abbreviated v-pred) models. Models with “eps”, “epsilon-pred”, or “eps-pred” in their names are noise prediction models, which behave much like other models; if you use them, you can skip this section. Models with “v” or “v-pred” in their names are V prediction models, which differ from most conventional models. Please read the installation guide in this section carefully! The principle behind V prediction models can be found in this article.
V prediction is a relatively rare model training technique, and models trained with it are called V prediction models. Compared with noise prediction, V prediction models are known for stronger prompt adherence, a more comprehensive color gamut, and stronger light and shadow; representative examples are NovelAI Diffusion V3 and COSXL. Because the technique appeared late and few models of this type exist, some mainstream image generation projects and UIs do not support it directly. Therefore, if you plan to use V prediction models, some additional steps are required. This section introduces the specific usage. If you encounter any difficulties, you can also contact any model author directly for help.
Forge and reForge are two AI image generation UIs developed by lllyasviel and Panchovix respectively; both are extended versions of WebUI. Their main branches support the V prediction model, and they operate almost identically to WebUI, so they are recommended. If you have installed one of them, you only need to run git pull in its installation directory to update it and then restart; if not, you can refer to online tutorials for installation and use.
ComfyUI is an image generation UI developed by comfyanonymous that lets users freely wire up nodes, named for its flexibility and professionalism. Using the V prediction model only requires adding extra nodes (typically a ModelSamplingDiscrete node set to v_prediction with zsnr enabled).
WebUI is the stable-diffusion-webui project developed by AUTOMATIC1111. Currently its main branch does not support the V prediction model, so you need to switch to the dev branch. Please note that this method is unstable and may have bugs; improper use can even cause irreversible damage to WebUI, so back up your WebUI in advance. The specific method is as follows: open a terminal in the WebUI installation directory, type
```
git checkout dev
```
and press Enter.

Diffusers is a Python library dedicated to diffusion models. Using it requires some coding ability, and it is recommended for developers and researchers. Code example:
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

ckpt_path = "/path/to/model.safetensors"

# Load the single-file checkpoint in half precision.
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)

# Switch the scheduler to v-prediction with zero-terminal-SNR rescaling,
# which the V prediction versions of NoobAI-XL require.
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)

pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")

# Raw string so the escaped parentheses (see the prompt guide below) survive as-is.
prompt = r"""masterpiece, best quality, john_kafka, nixeu, quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=5,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```
NoobAI-XL has no hard requirements for prompts; the recommendations in this article are optional refinements.
NoobAI-XL recommends describing the desired content with tags: English words or phrases separated by commas “,”. Tags taken directly from Danbooru and e621 have stronger effects. For further improvement, see the prompt specification later.
We suggest always adding the aesthetic tag “very awa” and the quality tag “masterpiece” to your prompt.
NoobAI-XL supports generating high-fidelity characters and artist styles, both triggered by tags that we call “trigger words”. The trigger words for characters are their character names; the trigger words for artist styles are the artists’ names. The complete trigger word tables can be downloaded from noob-wiki. Detailed explanations of trigger words can be found below.
Similar to NovelAI, NoobAI-XL supports auxiliary special tags such as quality, aesthetics, year of creation, period of creation, and safety rating. Interested readers can see the detailed introduction below.
The following table recommends three generation parameters: sampler, sampling steps, and CFG scale. Using parameter values outside these ranges may produce unexpected results.
| Version | Recommended parameters |
| --- | --- |
| All noise prediction versions | Sampler: Euler A; CFG: 5~7; sampling steps: 28~35 |
| V prediction 1.0 | Sampler: Euler, CFG 3.5~5.5, steps 32~40; or Sampler: Euler A, CFG 3~5, steps 28~40 |
| V prediction 0.9r | Sampler: Euler, CFG 3.5~5.5, steps 32~40; or Sampler: Euler A, CFG 3~5, steps 28~40 |
| V prediction 0.75s | Sampler: Euler A; CFG: 3~5; sampling steps: 28~40 |
| V prediction 0.65s | Sampler: Euler A or Euler, CFG 3.5~5.5, steps 32~40; or Sampler: Euler A, CFG 5~7, steps 28~40 |
| V prediction 0.6 | Sampler: Euler; CFG: 3.5~5.5; sampling steps: 28~35 |
| V prediction 0.5 | Sampler: Euler; CFG: 3.5~5.5; sampling steps: 28~40 |
| V prediction Beta | Sampler: Euler A; CFG: 5~7; sampling steps: 28~35 |
For the V prediction models, the following additional setting is recommended to (i) optimize color, lighting, and detail; (ii) eliminate oversaturation and overexposure; and (iii) enhance semantic understanding.
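A setting commonly used for this purpose is CFG rescale. In Diffusers it corresponds to the `guidance_rescale` argument of the pipeline call; below is a minimal sketch reusing `pipe`, `prompt`, and `negative_prompt` from the example above. The 0.7 value is an illustrative assumption, not an official recommendation.

```python
# Reuses `pipe`, `prompt`, and `negative_prompt` from the Diffusers example above.
# `guidance_rescale` implements the CFG rescale trick and helps curb the
# oversaturation/overexposure that v-prediction models can exhibit.
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=32,   # within the recommended 32~40 range for Euler
    guidance_scale=4.5,       # within the recommended 3.5~5.5 range
    guidance_rescale=0.7,     # assumed value; tune to taste
).images[0]
image.save("output_rescaled.png")
```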
The resolution (width × height) of the generated image is an important parameter. Generally speaking, for architectural reasons, all SDXL models, including NoobAI-XL, need specific resolutions to achieve the best results; deviating from them weakens image quality. The recommended resolutions for NoobAI-XL are shown in the table below:
| Resolution (W×H) | Aspect ratio |
| --- | --- |
| 768×1344 | 9:16 |
| 832×1216 | 2:3 |
| 896×1152 | 3:4 |
| 1024×1024 | 1:1 |
| 1152×896 | 4:3 |
| 1216×832 | 3:2 |
| 1344×768 | 16:9 |
You can also use resolutions with a larger area, although this is less stable. (According to SD3 research, when the generated area becomes $k$ times the original, the model’s uncertainty grows accordingly.) We recommend that the generated image area not exceed 1.5 times the recommended area, for example 1024×1536.
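If you want to snap a desired aspect ratio to the nearest recommended resolution programmatically, here is a small sketch; the helper name is ours, not part of any official tooling.

```python
# Recommended SDXL resolutions (width, height) from the table above.
RECOMMENDED = [
    (768, 1344), (832, 1216), (896, 1152), (1024, 1024),
    (1152, 896), (1216, 832), (1344, 768),
]

def closest_resolution(aspect: float) -> tuple[int, int]:
    """Pick the recommended resolution whose aspect ratio is nearest."""
    return min(RECOMMENDED, key=lambda wh: abs(wh[0] / wh[1] - aspect))

print(closest_resolution(2 / 3))  # -> (832, 1216)
```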
If you are interested in the model and would like to learn more about it, this section provides a detailed guide to using the model.
NoobAI-XL comprises multiple versions of the base model. The following table summarizes the features of each version.
| Version | Prediction type | Download | Iterated from | Features |
| --- | --- | --- | --- | --- |
| Early-Access | Noise prediction | CivitAI / Huggingface | Illustrious-xl-early-release-v0 | The earliest version, but already sufficiently trained. |
| Epsilon-pred 0.5 | Noise prediction | CivitAI / Huggingface | Early-Access | (Recommended) The most stable version; its only drawback is limited knowledge of obscure concepts. |
| Epsilon-pred 0.6 | Noise prediction | Huggingface | Epsilon-pred 0.5 | (Recommended) The last UNet-only training version, with excellent convergence. Known in the test community as “178000” and liked by many. |
| Epsilon-pred 0.75 | Noise prediction | CivitAI / Huggingface | Epsilon-pred 0.6 | The text encoder (TE) was also trained, learning more obscure knowledge, but quality deteriorated. |
| Epsilon-pred 0.77 | Noise prediction | Huggingface | Epsilon-pred 0.75 | Trained two more epochs on top of Epsilon-pred 0.75, mitigating the quality degradation. |
| Epsilon-pred 1.0 | Noise prediction | CivitAI / Huggingface | Epsilon-pred 0.77 | (Recommended) An additional 10 epochs of training to consolidate the text encoder’s new knowledge; well-balanced performance. |
| Pre-test | V prediction | CivitAI / Huggingface | Epsilon-pred 0.5 | (Not recommended) Initial experimental V prediction version. |
| V-pred 0.5 | V prediction | CivitAI / Huggingface | Epsilon-pred 1.0 | (Not recommended) Suffers from oversaturation. |
| V-pred 0.6 | V prediction | CivitAI / Huggingface | V-pred 0.5 | (Not recommended) In preliminary evaluations, V-pred 0.6 covers rare knowledge well, the best among models published at the time, and significantly improves the quality degradation problem. |
| V-pred 0.65 | V prediction | Huggingface | V-pred 0.6 | (Not recommended) Has a saturation issue. |
| V-pred 0.65s | V prediction | CivitAI / Huggingface | V-pred 0.6 | The saturation problem is almost solved, but it has an artifact issue, fixed in the next version. |
| Epsilon-pred 1.1 | Noise prediction | CivitAI / Huggingface | Epsilon-pred 1.0 | (Recommended) The average brightness problem is solved, with improvements across the board. |
| V-pred 0.75 | V prediction | Huggingface | V-pred 0.65 | (Not recommended) Has a saturation issue. |
| V-pred 0.75s | V prediction | CivitAI / Huggingface | V-pred 0.65 | (Recommended) Fixes extreme-case saturation, residual noise, and artifact issues. |
| V-pred 0.9r | V prediction | CivitAI | V-pred 0.75 | Trained with ~10% realism data; shows some quality degradation. |
| V-pred 1.0 | V prediction | CivitAI | V-pred 0.75 | (Recommended) The best balance of quality, performance, and color. |
| Prediction type | ControlNet type | Link | Preprocessor type | Remarks |
| --- | --- | --- | --- | --- |
| Noise prediction | HED soft edge | CivitAI / Huggingface | softedge_hed | |
| Noise prediction | Anime lineart | CivitAI / Huggingface | lineart_anime | |
| Noise prediction | Midas normal map | CivitAI / Huggingface | normal_midas | |
| Noise prediction | Midas depth map | CivitAI / Huggingface | depth_midas | |
| Noise prediction | Canny edge | CivitAI / Huggingface | canny | |
| Noise prediction | Openpose human skeleton | CivitAI / Huggingface | openpose | |
| Noise prediction | Manga line | CivitAI / Huggingface | manga_line / lineart_anime / lineart_realistic | |
| Noise prediction | Realistic lineart | CivitAI / Huggingface | lineart_realistic | |
| Noise prediction | Midas depth map | CivitAI / Huggingface | depth_midas | New version |
| Noise prediction | HED scribble | CivitAI / Huggingface | scribble_hed | |
| Noise prediction | Pidinet scribble | CivitAI / Huggingface | scribble_pidinet | |
| Noise prediction | Tile | CivitAI / Huggingface | tile | |
Note that when using a ControlNet, you MUST match your preprocessor to the type the ControlNet expects. However, you do not necessarily need to match the prediction type of the base model with the prediction type of the ControlNet.
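As a hedged illustration of this rule in Diffusers (all paths and parameter values are assumptions; the component-reuse pattern follows the Diffusers ControlNet documentation), the sketch below pairs a canny ControlNet with a canny edge preprocessor:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import (
    ControlNetModel,
    StableDiffusionXLControlNetPipeline,
    StableDiffusionXLPipeline,
)

# Hypothetical local paths to a NoobAI-XL checkpoint and its canny ControlNet.
controlnet = ControlNetModel.from_single_file(
    "/path/to/noobai_canny_controlnet.safetensors", torch_dtype=torch.float16
)
base = StableDiffusionXLPipeline.from_single_file(
    "/path/to/model.safetensors", torch_dtype=torch.float16
)
# Reuse the base pipeline's components and attach the ControlNet.
pipe = StableDiffusionXLControlNetPipeline(**base.components, controlnet=controlnet).to("cuda")

# The preprocessor must match the ControlNet type: a canny ControlNet expects canny edge maps.
source = np.array(Image.open("input.png").convert("L"))
edges = cv2.Canny(source, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    "1girl, masterpiece, best quality",
    image=control_image,
    controlnet_conditioning_scale=0.8,  # assumed value; tune to taste
    guidance_scale=5,
).images[0]
image.save("controlnet_output.png")
```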
Coming soon.
Most LoRAs trained for the noise prediction versions also work with the V prediction versions, and vice versa.
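For example, in Diffusers a LoRA can be layered onto the pipeline from the earlier example; the LoRA path below is hypothetical.

```python
# Reuses `pipe` from the Diffusers example above; the LoRA path is hypothetical.
pipe.load_lora_weights("/path/to/some_noobai_lora.safetensors")
image = pipe(
    prompt="1girl, masterpiece, best quality",
    guidance_scale=5,
).images[0]
image.save("lora_output.png")
```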
First, it helps to understand that prompts guide rather than dictate. Good prompts can unleash the model’s potential, but bad or even incorrect prompts do not necessarily make results worse. Different models have different optimal prompt usage, the effect of misuse is often subtle, and in some cases results may even improve. This guide records the theoretically best prompt engineering for the model; capable readers may also apply it flexibly.
This section provides a detailed guide to writing prompts, including prompt writing standards, the specific usage of character and style trigger words, the usage of special tags, and so on. Readers interested in prompt engineering can read selectively.
NoobAI-XL follows the same prompt specification as other anime-style base models. This section systematically introduces the basic writing conventions of prompts and helps readers avoid common community misconceptions.
Prompts fall roughly into two categories by format: tags and natural language. The former is mostly used for anime models, the latter for photorealistic models. Either way, unless the model specifies otherwise, a prompt should contain only English letters, numbers, and symbols.
Tags are lowercase English words or phrases separated by English commas “,”. For example, “1girl, solo, blue hair” contains three tags: “1girl”, “solo”, and “blue hair”.
Extra spaces and newline characters in the prompt do not affect the generated result. In other words, “1girl, solo, blue hair” and “1girl,  solo,   blue hair” have exactly the same effect.
Prompts should not contain underscores “_”. Influenced by websites such as Danbooru, the habit of using underscores “_” instead of spaces between words within tags has spread; this is actually a misuse and produces results different from using spaces. Most models, including NoobAI-XL, recommend against underscores in prompts. At best this misuse degrades generation quality; at worst it renders trigger words completely ineffective.
Escape parentheses when necessary. Parentheses, including round brackets (), square brackets [], and curly braces {}, are special symbols in prompts. Unlike ordinary symbols, most image generation UIs interpret parentheses as weighting the enclosed content, so parentheses used for weighting lose their literal meaning. But what if the prompt itself needs parentheses, as some trigger words do? The answer: their weighting function can be disabled by adding a backslash “\” before each parenthesis. Changing a character’s original meaning this way is called escaping, and the backslash is called an escape character. For example, without escaping, the prompt “1girl, ganyu (genshin impact)” is incorrectly interpreted as “1girl, ganyu genshin impact”, where “genshin impact” is weighted and the parentheses disappear. With escape characters, the prompt becomes “1girl, ganyu \(genshin impact\)”, as intended.
In short, tag normalization has two steps: (i) replace underscores with spaces in each tag, and (ii) add a backslash “\” before each parenthesis.
Tags taken directly from Danbooru and e621 are more expressive. Therefore, instead of inventing your own tags, we recommend searching for tags directly on these two websites. Note that tags obtained this way use underscores “_” between words and leave parentheses unescaped, so before adding them to a prompt you need to replace the underscores with spaces and escape the parentheses. For example, turn the Danbooru tag “ganyu_(genshin_impact)” into “ganyu \(genshin impact\)” before use.
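A minimal Python helper implementing the two normalization steps (a sketch of ours, not part of any official tooling):

```python
def normalize_tag(tag: str) -> str:
    """Normalize a Danbooru/e621 tag for use in a prompt:
    (i) replace underscores with spaces, (ii) escape parentheses."""
    tag = tag.replace("_", " ")                          # step (i)
    return tag.replace("(", r"\(").replace(")", r"\)")   # step (ii)

print(normalize_tag("ganyu_(genshin_impact)"))  # -> ganyu \(genshin impact\)
```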
Do not use meaningless meta tags. Meta tags are a special type of Danbooru tag describing properties of the image file or post itself. Some relate to image content or form: “highres” indicates a high-resolution image, and “oil_painting_(medium)” indicates an oil painting. But not all do: “commentary_request”, for instance, indicates that the Danbooru post has a pending translation request, which has no direct relationship to the work itself and therefore has no effect.
Ordered prompts work better. NoobAI-XL recommends writing prompts in a logical order, from primary to secondary. One possible order follows, for reference only:
<1girl/1boy/1other/female/male/…>, <character>, <series>, <artist(s)>, <general tags>, <other tags>, <quality & aesthetic tags>
The <quality & aesthetic tags> may also be placed at the front.
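As an illustration, a prompt assembled in the recommended order might look like this in Python (all tag values below are examples, not recommendations):

```python
# Illustrative only: assemble a prompt following the recommended order.
parts = [
    "1girl",                                 # subject count/type
    r"ganyu \(genshin impact\)",             # character (normalized trigger)
    "genshin impact",                        # series
    "wlop",                                  # artist
    "blue hair, solo, smile",                # general tags
    "year 2024, newest",                     # other/special tags
    "masterpiece, best quality, very awa",   # quality & aesthetic tags
]
prompt = ", ".join(parts)
print(prompt)
```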
Natural language prompts consist of sentences, each beginning with a capital letter and ending with a period “.”. Most anime models, including NoobAI-XL, understand tags better, so natural language usually serves as an auxiliary rather than the primary component of a prompt.
NoobAI-XL can directly generate a large number of characters and artist styles. Both are triggered by names, which are tags called trigger words. You can search directly on Danbooru or e621 and normalize the resulting tags before using them as prompts.
There are some differences in the way characters and artists are triggered.
The following table demonstrates some correct and incorrect cases of character and style triggering:
| Type | Prompt | Correct? | Reason |
| --- | --- | --- | --- |
| Character | Rei Ayanami | Wrong | The character name should be “ayanami rei”; the series tag “neon genesis evangelion” is also missing. |
| Character | character:ganyu \(genshin impact\), genshin impact | Wrong | Added a spurious “character:” prefix. |
| Character | ganyu_\(genshin impact\) | Wrong | Tag not fully normalized: it should not contain underscores. The series tag is also missing. |
| Character | ganyu (genshin impact), genshin impact | Wrong | Tag not fully normalized: the parentheses are not escaped. |
| Character | ganyu (genshin impact\), genshin impact | Wrong | Tag not fully normalized: the left parenthesis is not escaped. |
| Character | ganyu \(genshin impact\)，genshin impact | Wrong | The two tags are separated by a full-width (Chinese) comma. |
| Character | ganyu \(genshin impact\), genshin impact | Correct | |
| Artist style | by wlop | Wrong | Added a spurious “by” prefix. |
| Artist style | artist:wlop | Wrong | Added a spurious “artist:” prefix. |
| Artist style | dino | Wrong | Wrong artist name: do not use the aidxl/artiwaifu artist name; follow Danbooru, i.e. “dino \(dinoartforame\)”. |
| Artist style | wlop | Correct | |
For your convenience, we also provide complete trigger word tables in the noob-wiki for reference:
| Table type | Download link |
| --- | --- |
| Danbooru characters | Click here |
| Danbooru artist styles | Click here |
| e621 characters | Click here |
| e621 artist styles | Click here |
Each file contains a trigger word table from one of the two databases, Danbooru and e621. Each row represents a character or artist style. Find the row for the character or style you want, copy the “trigger” field, and paste it into your prompt as-is. If you are unsure about a character or artist style, you can click the link in the “url” column to view example images on the source website. The table below explains the meaning of each column; not every table contains all columns.
| Column | Meaning | Remarks |
| --- | --- | --- |
| character | The character’s tag name on the source website. | |
| artist | The artist’s tag name on the source website. | |
| trigger | The normalized trigger words. | Copy and paste into the prompt as-is. |
| count | The number of images with this tag. | An indication of how faithfully the concept can be reproduced. Characters with a count above 200 and styles with a count above 100 are reproduced well. |
| url | The tag’s page on the source website. | |
| solo_count | The number of images in the dataset with this tag that contain only one character. | Character tables only. Characters with a solo_count above 50 are reproduced well. For judging fidelity, count is noisier and less accurate, while solo_count is a more precise indicator. |
| core_tags | The character’s core feature tags: appearance, gender, and clothing, comma-separated and already normalized. | Danbooru character table only. When an obscure character is not reproduced faithfully enough, adding a few core feature tags can improve fidelity. |
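As a sketch, assuming one of these tables has been downloaded locally as a CSV file with the columns described above (the file name is hypothetical):

```python
import pandas as pd

# Assumption: the Danbooru character table from noob-wiki, saved locally as
# "danbooru_character.csv" with the columns described above.
df = pd.read_csv("danbooru_character.csv")

# Keep characters the model is expected to reproduce well (see the column notes).
reliable = df[(df["count"] >= 200) & (df["solo_count"] >= 50)]

# Look up the ready-to-paste trigger words for one character.
row = reliable[reliable["character"] == "ganyu_(genshin_impact)"]
print(row["trigger"].iloc[0])  # prints the normalized trigger string
```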
Special tags are tags with special meanings and effects that serve an auxiliary function.
Quality tags are in fact popularity tags derived statistically from Danbooru and e621 user preferences. In order of quality from high to low:
masterpiece > best quality > high quality / good quality > normal quality > low quality / bad quality > worst quality
Aesthetic tags are assigned by aesthetic scoring models. There are only two so far: “very awa” and “worst aesthetic”. The former marks data in the top 5% of scores weighted across waifu-scorer-v3 and waifu-scorer-v4-beta; the latter marks the bottom 5%. It is named “very awa” because its aesthetic standard resembles that of the ArtiWaifu Diffusion model. In addition, an aesthetic tag still in training and without an obvious effect is “very as2”, which marks data in the top 5% of aesthetic-shadow-v2-5 scores.
(Figure: comparison of the effects of aesthetic tags)
There are four safety rating tags: general, sensitive, nsfw, and explicit.
We hope that users will consciously add “nsfw” in negative prompts to filter out inappropriate content. 😀
Year tags indicate the year a work was created, indirectly affecting quality, style, character fidelity, etc. The format is “year xxxx”, where “xxxx” is a specific year, such as “year 2024”.
Period tags are coarse-grained year tags that also significantly affect image quality. The correspondence between tags and years is shown in the table below.
| Period tag | Year range |
| --- | --- |
| newest | 2021~2024 |
| recent | 2018~2020 |
| mid | 2014~2017 |
| early | 2011~2013 |
| old | 2005~2010 |
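A small helper mapping a creation year to its period tag, per the table above (the function name is ours):

```python
def period_tag(year: int) -> str | None:
    """Map a creation year to its period tag (ranges from the table above)."""
    if 2021 <= year <= 2024:
        return "newest"
    if 2018 <= year <= 2020:
        return "recent"
    if 2014 <= year <= 2017:
        return "mid"
    if 2011 <= year <= 2013:
        return "early"
    if 2005 <= year <= 2010:
        return "old"
    return None

print(period_tag(2019))  # -> recent
```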
This section provides recommended usage examples of prompts for reference only.
The following recommended starting point uses the special tags most strongly correlated with image quality:
very awa, masterpiece, best quality, year 2024, newest, highres, absurdres
The following table lists common negative tags and their sources. Not all negative tags are necessarily bad; used deliberately, some can yield unexpected effects.
| Tag | Meaning | Remarks | Source |
| --- | --- | --- | --- |
| **Quality tags** | | | |
| worst aesthetic | Worst aesthetic | Covers low-aesthetic concepts such as low quality, watermarks, comics, multiple views, and unfinished sketches | Aesthetic |
| worst quality | Worst quality | | Quality |
| low quality | Low quality | Danbooru’s low quality | Quality |
| bad quality | Low quality | e621’s low quality | Quality |
| lowres | Low resolution | | Danbooru |
| scan artifacts | Scanning artifacts | | Danbooru |
| jpeg artifacts | JPEG compression artifacts | | Danbooru |
| lossy-lossless | – | Images converted from a lossy to a lossless format, usually full of artifacts | Danbooru |
| **Composition and art form tags** | | | |
| ai-generated | AI-generated | Generated by AI, often with a typical greasy AI look | Danbooru |
| abstract | Abstract | Eliminates cluttered lines | Danbooru |
| official art | Official art | Illustrations by the official company/artist of the series or character; a copyright notice, company, or artist name may be printed somewhere on the image | Danbooru |
| old | Early images | | Period |
| 4koma | Four-panel comic | | Danbooru |
| multiple views | Multiple views | | Danbooru |
| reference sheet | Character reference sheet | | Danbooru |
| dakimakura \(medium\) | Body pillow image | | Danbooru |
| turnaround | Full-body turnaround | | Danbooru |
| comic | Comic | | Danbooru |
| greyscale | Grayscale | Black-and-white image | Danbooru |
| monochrome | Monochrome | Black-and-white image | Danbooru |
| sketch | Sketch | | Danbooru |
| unfinished | Unfinished work | | Danbooru |
| **e621 tags** | | | |
| furry | Furry | | e621 |
| anthro | Anthropomorphic furry | | e621 |
| feral | Feral (animal-form furry) | | e621 |
| semi-anthro | Semi-anthropomorphic furry | | e621 |
| mammal | Mammal (furry) | | e621 |
| **Watermark tags** | | | |
| watermark | Watermark | | Danbooru |
| logo | Logo | | Danbooru |
| signature | Artist signature | | Danbooru |
| text | Text | | Danbooru |
| artist name | Artist name | | Danbooru |
| dated | Date | | Danbooru |
| username | Username | | Danbooru |
| web address | Website address | | Danbooru |
| **Anatomy tags** | | | |
| bad hands | Bad hands | | Danbooru |
| bad feet | Bad feet | | Danbooru |
| extra digits | Extra fingers | | Danbooru |
| fewer digits | Missing fingers | | Danbooru |
| extra arms | Extra arms | | Danbooru |
| extra faces | Extra faces | | Danbooru |
| multiple heads | Multiple heads | | Danbooru |
| missing limb | Missing limb | | Danbooru |
| amputee | Amputee | | Danbooru |
| severed limb | Severed limb | | Danbooru |
| mutated hands | Mutated hands | | – |
| distorted anatomy | Distorted anatomy | | – |
| **Content tags** | | | |
| nsfw | Not safe for work | | Safety |
| explicit | Explicit | | Safety |
| censored | Censored | | Danbooru |
Some meaningless tags also circulate in the community. This section lists commonly misused ones.
| Tag | Meaning | Remarks | Source |
| --- | --- | --- | --- |
| bad id | Broken image ID | Relates to image metadata, not image content | Danbooru |
| bad link | Broken original image link | Relates to image metadata, not image content | Danbooru |
| duplicate | Image duplicated on the website | Somewhat correlated with quality, but does not mean duplicated content within the image | Danbooru |