    Ultimate Guide to NoobAI

    By gerogero

    Updated: April 4, 2025

    Welcome to the NoobAI guide

    This document provides a comprehensive and up-to-date introduction to the NoobAI-XL model.

    What is Noob?

    NoobAI-XL is a Text-to-Image diffusion model developed by Laxhar Dream Lab and sponsored by Lanyun.

    The model's license is inherited from fair-ai-public-license-1.0-sd and carries some restrictions (see the NoobAI-XL model license). The model is based on the SDXL architecture, and its base model is Illustrious-xl-early-release-v0. It was trained on the complete Danbooru and e621 datasets (about 13 million images) for many epochs, giving it broad knowledge and excellent performance.

    Epsilon vs V-pred

    NoobAI-XL has a huge amount of knowledge: it can reproduce the styles of tens of thousands of anime characters and artists, recognizes a large number of anime-specific concepts, and has rich furry knowledge.

    NoobAI-XL is available in both noise prediction (also called epsilon prediction) and V-prediction versions. In short, the noise prediction versions generate more diverse and creative images, while the V-prediction versions follow prompts more closely and produce images with a wider color gamut and stronger lighting.

    NoobAI-XL enjoys increasingly rich ecosystem support from the community, including various LoRAs, ControlNets, IP-Adapters, and so on.

    NoobAI-XL is a series of models, mainly divided into noise prediction and V prediction versions, which are described in detail later.

    Quick Start

    Before reading this section, you should already understand the basic usage of an image generation UI such as WebUI, ComfyUI, Forge, or reForge. Otherwise, learn the basics first from online tutorials (for example, on Bilibili).

    Site | Link
    CivitAI | Click here
    LiblibAI | Click here
    Huggingface | Click here

    If you don't know which model to download, you can browse the model list below.

    Model Loading

    NoobAI-XL models fall into two categories: noise prediction (epsilon prediction, abbreviated eps-pred) models and V prediction (v-prediction, abbreviated v-pred) models. Models with "eps", "epsilon-pred", or "eps-pred" in their names are noise prediction models, which behave much like other models; if you use them, you can skip this section. Models with "v" or "v-pred" in their names are V prediction models, which differ from most conventional models. Please read the installation guide in this section carefully! The principle behind V prediction models can be found in this article.

    Loading of V-prediction Models

    V prediction is a relatively rare model training technique, and models trained with it are called V prediction models. Compared with noise prediction models, V prediction models are known for stronger prompt adherence, a more comprehensive color gamut, and stronger light and shadow; representative examples are NovelAI Diffusion V3 and COSXL. Because the technique appeared late and few models of this type exist, some mainstream image generation projects and UIs do not support it directly. Therefore, if you plan to use V prediction models, some additional setup is needed; this section introduces the specifics. If you run into difficulties, you can also contact any of the model authors for help.

    A. Use in forge or reForge

    Forge and reForge are two AI image generation UIs developed by lllyasviel and Panchovix respectively; both are extended versions of WebUI. Their main branches support V prediction models, and they operate almost identically to WebUI, so they are recommended. If you have one of them installed, you only need to run git pull in its installation directory to update, then restart it; if not, refer to an online tutorial for installation and usage.

    B. Use in ComfyUI

    ComfyUI is an image generation UI developed by comfyanonymous that lets users freely wire up nodes, and is known for its flexibility and professional orientation. Using a V prediction model only requires adding an extra node, as sketched below.
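
    As an example, here is a minimal sketch of such a workflow using ComfyUI's ModelSamplingDiscrete node (found among the advanced model nodes in current ComfyUI releases; the exact wiring may vary with your setup):

    Load Checkpoint (the v-pred model)
        -> ModelSamplingDiscrete (sampling: v_prediction, zsnr: enabled)
        -> KSampler -> VAE Decode -> Save Image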

    C. Use in WebUI

    WebUI refers to the stable-diffusion-webui project developed by AUTOMATIC1111. Currently, the main branch of WebUI does not support V prediction models, so you need to switch to the dev branch. Please note that this method is unstable and may have bugs; improper use can even cause irreversible damage to WebUI, so back up your WebUI in advance. The specific steps are as follows:

    1. If you haven’t installed WebUI yet, please refer to the online tutorial to install it.
    2. Open the Console or Terminal in the installation directory of your stable-diffusion-webui.
    3. Enter the command git checkout dev and press Enter.
    4. Restart WebUI.

    D. Use in Diffusers

    Diffusers is a Python library dedicated to diffusion models. Using it requires some coding ability and is recommended for developers and researchers. Code example:

    import torch
    from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
    
    # Load a local NoobAI-XL checkpoint from a single .safetensors file.
    ckpt_path = "/path/to/model.safetensors"
    pipe = StableDiffusionXLPipeline.from_single_file(
        ckpt_path,
        use_safetensors=True,
        torch_dtype=torch.float16,
    )
    
    # Both scheduler settings are required for V prediction checkpoints;
    # omit them when loading a noise (epsilon) prediction checkpoint.
    scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
    pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
    
    # Optional memory-efficient attention; requires the xformers package.
    pipe.enable_xformers_memory_efficient_attention()
    pipe = pipe.to("cuda")
    
    prompt = """masterpiece, best quality, john_kafka, nixeu, quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
    negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"
    
    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        width=832,
        height=1216,
        num_inference_steps=28,
        guidance_scale=5,
        generator=torch.Generator().manual_seed(42),
    ).images[0]
    
    image.save("output.png")

    Model Usage

    Prompt

    NoobAI-XL has no hard requirements for prompts; the recommendations in this article are icing on the cake.

    NoobAI-XL recommends describing the desired content with tags. Each tag is an English word or phrase, tags are separated by English commas ",", and tags taken directly from Danbooru and e621 have stronger effects. For further refinements, see the prompt specification later in this guide.

    We suggest always adding the aesthetic tag “very awa” and the quality tag “masterpiece” to your prompt.

    NoobAI-XL can generate high-fidelity characters and artist styles, both triggered by tags that we call "trigger words". The trigger words for characters are the character names; the trigger words for artist styles are the artist names. The complete trigger word tables can be downloaded from noob-wiki. Detailed explanations of trigger words can be found below.

    Similar to NovelAI, NoobAI-XL supports special auxiliary tags for quality, aesthetics, year of creation, period of creation, and safety rating. Interested readers can find a detailed introduction further below.

    Generation Parameters

    A. Basic Parameters

    The following table gives recommended values for three generation parameters: sampler, sampling steps, and CFG scale. Using values outside these ranges may produce unexpected results.

    Version | Sampler | CFG scale | Sampling steps
    All noise prediction versions | Euler A | 5~7 | 28~35
    V-pred 1.0 | Euler | 3.5~5.5 | 32~40
    V-pred 0.9r | Euler A | 3~5 | 28~40
    V-pred 0.75s | Euler | 3.5~5.5 | 32~40
    V-pred 0.65s | Euler A | 3~5 | 28~40
    V-pred 0.6 | Euler A | 3~5 | 28~40
    V-pred 0.5 | Euler A or Euler | 3.5~5.5 | 32~40
    V-pred Beta | Euler A | 5~7 | 28~40
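
    In Diffusers, for example, the Euler A sampler corresponds to EulerAncestralDiscreteScheduler. A minimal sketch for a noise prediction checkpoint, reusing the pipe from the earlier code example (loaded without the v_prediction scheduler arguments):

    from diffusers import EulerAncestralDiscreteScheduler
    
    # Euler A with CFG 5~7 and 28~35 steps, as recommended for the
    # noise prediction versions in the table above.
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=30,
        guidance_scale=6,
    ).images[0]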

    B. How to Use V-prediction Models Much Better

    For V prediction models, the following settings are recommended to (i) optimize color, lighting, and detail; (ii) eliminate oversaturation and overexposure; and (iii) enhance semantic understanding.

    1. Use a sampler or UI option that supports Rescale CFG, set to around 0.7. Some image generation UIs do not support it.
    2. Alternatively, use the Euler Ancestral CFG++ sampler and set the CFG scale between 1 and 1.8. Some image generation UIs do not support it.
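
    In Diffusers, for example, Rescale CFG is exposed as the SDXL pipeline's guidance_rescale argument. A minimal sketch reusing the pipe from the earlier V prediction example:

    # Rescale CFG (~0.7) counteracts the oversaturation and overexposure
    # that V prediction models can show at higher CFG values.
    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        width=832,
        height=1216,
        num_inference_steps=32,
        guidance_scale=4.5,
        guidance_rescale=0.7,
    ).images[0]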

    C. Resolutions

    The resolution (width × height) of the generated image is an important parameter. For architectural reasons, all SDXL models, including NoobAI-XL, need specific resolutions to achieve the best results; deviating from them weakens the quality of the generated image. The recommended resolutions for NoobAI-XL are shown in the table below:

    Resolution (W×H) | Aspect ratio
    768×1344 | 9:16
    832×1216 | 2:3
    896×1152 | 3:4
    1024×1024 | 1:1
    1152×896 | 4:3
    1216×832 | 3:2
    1344×768 | 16:9

    You can also use a resolution with a larger area, although this is less stable. (According to SD3 research, when the generated area becomes k times the original, the model's uncertainty grows roughly as a power of k.) We recommend that the generated image area not exceed 1.5 times the original, for example 1024×1536.
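
    A small illustrative helper (not part of any official tooling) that snaps an arbitrary size to the recommended resolution with the nearest aspect ratio:

    # Recommended NoobAI-XL resolutions, from the table above.
    RECOMMENDED_RESOLUTIONS = [
        (768, 1344), (832, 1216), (896, 1152), (1024, 1024),
        (1152, 896), (1216, 832), (1344, 768),
    ]
    
    def nearest_resolution(width: int, height: int) -> tuple[int, int]:
        """Return the recommended resolution whose aspect ratio is closest to width/height."""
        target = width / height
        return min(RECOMMENDED_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target))
    
    print(nearest_resolution(800, 1200))  # -> (832, 1216)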

    Other Notes

    1. The V prediction models are more sensitive to prompts and generation parameters.
    2. CLIP skip does not apply to SDXL-architecture models, so there is no need to set it.
    3. The model does not need any external VAE.

    Other Resources

    • This article provides a beginner's guide to NoobAI-XL and is recommended reading for beginners.

    Advanced Usage

    If you are interested in the model and would like to learn more about it, this section provides a detailed guide to using the model.

    Model List

    Base Model

    The NoobAI-XL series includes multiple versions of the base model. The following table summarizes the features of each version.

    Version | Prediction type | Download | Iterated from | Features
    Early-Access | Noise prediction | CivitAI, Huggingface | Illustrious-xl-early-release-v0 | The earliest version, but already sufficiently trained.
    Epsilon-pred 0.5 | Noise prediction | CivitAI, Huggingface | Early-Access | (Recommended) The most stable version; its only drawback is a lack of knowledge of obscure concepts.
    Epsilon-pred 0.6 | Noise prediction | Huggingface | Epsilon-pred 0.5 | (Recommended) The last UNet-only training version, with excellent convergence; known to the test team as "178000" and liked by many.
    Epsilon-pred 0.75 | Noise prediction | CivitAI, Huggingface | Epsilon-pred 0.6 | The text encoder was additionally trained to learn more obscure knowledge, but quality performance deteriorated.
    Epsilon-pred 0.77 | Noise prediction | Huggingface | Epsilon-pred 0.75 | Trained two more epochs on top of Epsilon-pred 0.75, mitigating the quality degradation.
    Epsilon-pred 1.0 | Noise prediction | CivitAI, Huggingface | Epsilon-pred 0.77 | (Recommended) An additional 10 epochs of training to consolidate the text encoder's new knowledge; well-balanced performance.
    Pre-test | V prediction | CivitAI, Huggingface | Epsilon-pred 0.5 | (Not recommended) Initial experimental V prediction version.
    V-pred 0.5 | V prediction | CivitAI, Huggingface | Epsilon-pred 1.0 | (Not recommended) Suffers from oversaturation.
    V-pred 0.6 | V prediction | CivitAI, Huggingface | V-pred 0.5 | (Not recommended) In preliminary evaluations, the best rare-knowledge coverage of any published version at the time, and significantly improves the quality degradation problem.
    V-pred 0.65 | V prediction | Huggingface | V-pred 0.6 | (Not recommended) Has a saturation issue.
    V-pred 0.65s | V prediction | CivitAI, Huggingface | V-pred 0.6 | The saturation problem is almost solved, but it has an artifact issue, resolved in the next version.
    Epsilon-pred 1.1 | Noise prediction | CivitAI, Huggingface | Epsilon-pred 1.0 | (Recommended) The average brightness problem is solved, and all aspects have improved.
    V-pred 0.75 | V prediction | Huggingface | V-pred 0.65 | (Not recommended) Has a saturation issue.
    V-pred 0.75s | V prediction | CivitAI, Huggingface | V-pred 0.65 | (Recommended) Fixes saturation in extreme cases, residual noise, and artifacts.
    V-pred 0.9r | V prediction | CivitAI | V-pred 0.75 | Trained with ~10% realism data; has some degradation.
    V-pred 1.0 | V prediction | CivitAI | V-pred 0.75 | (Recommended) The best balance of quality, performance, and color.

    Extended Model: ControlNet

    Prediction type | ControlNet type | Link | Preprocessor type | Remarks
    Noise prediction | HED soft edge | CivitAI, Huggingface | softedge_hed |
    Noise prediction | Anime lineart | CivitAI, Huggingface | lineart_anime |
    Noise prediction | Midas normal map | CivitAI, Huggingface | normal_midas |
    Noise prediction | Midas depth map | CivitAI, Huggingface | depth_midas |
    Noise prediction | Canny edge | CivitAI, Huggingface | canny |
    Noise prediction | OpenPose human pose | CivitAI, Huggingface | openpose |
    Noise prediction | Manga line | CivitAI, Huggingface | manga_line / lineart_anime / lineart_realistic |
    Noise prediction | Realistic lineart | CivitAI, Huggingface | lineart_realistic |
    Noise prediction | Midas depth map | CivitAI, Huggingface | depth_midas | New version
    Noise prediction | HED scribble | CivitAI, Huggingface | scribble_hed |
    Noise prediction | Pidinet scribble | CivitAI, Huggingface | scribble_pidinet |
    Noise prediction | Tile | CivitAI, Huggingface | tile |

    Note that when using a ControlNet, you MUST match the preprocessor you use to the preprocessor type that the ControlNet requires. However, you do not necessarily need to match the prediction type of the base model to that of the ControlNet.

    Extension Model: IP-Adapter

    Coming soon.

    LoRA Models

    Most LoRAs trained for the noise prediction versions also work with the V prediction versions, and vice versa.

    Prompt Guidance

    First, we need to clarify that the role of prompts is to guide. Good prompts can unleash the model's potential, but bad or even incorrect prompts do not necessarily make the results worse. Different models have different optimal prompt usage, the effect of misuse is often subtle, and in some cases results can even improve. This guide records the theoretically best prompt engineering for the model; capable readers are free to depart from it.

    This section provides a detailed guide to writing prompts, including the prompt writing standard, the specific usage of character and style trigger words, the usage of special tags, and so on. Readers interested in prompt engineering can read selectively.

    Prompt Specification

    NoobAI-XL follows the same prompt specification as other anime-style base models. This section systematically introduces the basic writing standard for prompts and helps readers clear up common misconceptions circulating in the community.

    By format, prompts fall roughly into two categories: tags and natural language. The former is mostly used for anime models, the latter mostly for photorealistic models. Either way, unless the model specifies otherwise, a prompt should contain only English letters, numbers, and symbols.

    Tags are lowercase English words or phrases separated by English commas ",". For example, "1girl, solo, blue hair" contains three tags: "1girl", "solo", and "blue hair".

    Extra spaces and newline characters in the prompt do not affect the generated result. In other words, "1girl, solo, blue hair" written on one line and the same tags written with extra spaces or across several lines have exactly the same effect.

    Prompts should not contain any underscores "_". Influenced by websites such as Danbooru, the habit of writing tags with underscores instead of spaces between words has spread; this is actually a misuse and produces results different from using spaces. Most models, including NoobAI-XL, recommend against any underscores in prompts. At best this misuse degrades generation quality; at worst it renders trigger words completely ineffective.

    Escape parentheses when necessary. Parentheses, including round brackets (), square brackets [], and curly braces {}, are special symbols in prompts. In most image generation UIs they are interpreted as weighting the enclosed content, and parentheses used for weighting lose their original meaning. But what if the prompt itself needs to contain parentheses, as some trigger words do? The answer is to cancel the weighting function by adding a backslash "\" before each parenthesis. Changing a character's original meaning this way is called escaping, and the backslash is called an escape character. For example, without escaping, the prompt "1girl, ganyu (genshin impact)" is incorrectly interpreted as "1girl, ganyu genshin impact", with "genshin impact" weighted and the parentheses swallowed. With escape characters, the prompt "1girl, ganyu \(genshin impact\)" is read as intended.

    In short, tag standardization has two steps: (i) replace underscores with spaces in each tag, and (ii) add a backslash "\" before each parenthesis.

    Tags taken directly from Danbooru and e621 have a stronger effect. Therefore, instead of inventing your own tags, we recommend searching for tags directly on these two websites. Note that tags obtained this way use underscores "_" between words and have unescaped parentheses, so before adding them to a prompt you need to replace the underscores with spaces and escape the parentheses. For example, convert the Danbooru tag "ganyu_(genshin_impact)" to "ganyu \(genshin impact\)" before use.
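
    A minimal sketch of these two normalization steps in Python (the helper name is illustrative, not from any official tool):

    def normalize_tag(tag: str) -> str:
        """Normalize a Danbooru/e621 tag: replace underscores with spaces, escape parentheses."""
        tag = tag.replace("_", " ")
        return tag.replace("(", "\\(").replace(")", "\\)")
    
    print(normalize_tag("ganyu_(genshin_impact)"))  # -> ganyu \(genshin impact\)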

    Do not use ineffective meta tags. Meta tags are a special type of Danbooru tag that describes properties of the image file or post itself. For example, "highres" indicates that the image has high resolution, and "oil_painting_(medium)" indicates that the image is in oil painting style. However, not all meta tags relate to the content or form of the image: "commentary_request", for instance, indicates that the Danbooru post has a pending translation request, which has no direct relationship to the work itself and therefore has no effect.

    Ordered prompts are better. NoobAI-XL recommends writing prompts in logical order, from primary to secondary. One possible order, for reference only, is as follows:

    <1girl/1boy/1other/female/male/…>, <character>, <series>, <artist(s)>, <general tags>, <other tags>, <quality & aesthetic tags>

    The <quality & aesthetic tags> can also be placed at the front instead.
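
    For example, an illustrative prompt following this order (character, artist, and tags chosen purely for demonstration):

    1girl, ganyu \(genshin impact\), genshin impact, wlop, blue hair, horns, solo, upper body, very awa, masterpiece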

    Natural language prompts consist of sentences, each beginning with a capital letter and ending with a period ".". Most anime models, including NoobAI-XL, understand tags better, so natural language usually serves as an auxiliary rather than the main component of a prompt.

    Characters and Artist Styles

    NoobAI-XL can directly generate a large number of existing characters and artist styles. Characters and styles are triggered by their names, tags that we call trigger words. You can search directly on Danbooru or e621 and standardize the resulting tags before using them in prompts.

    Usage

    There are some differences in the way characters and artists are triggered.

    • For an artist style, just add the artist's name to the prompt without any prefix, suffix, or extra decoration: neither "by xxx" nor "artist:xxx", just "xxx".
    • For a character, use the form "character name + series". That is, in addition to the character name, add the series tag immediately after the character trigger word to indicate which work the character comes from. If a character has multiple series tags, one or more of them can be added. Note that even if the character name already contains the series name, you still need to add the series tag; do not worry about duplication. Usually "character name + series name" is enough to reproduce the character. For example, the trigger word for the character "ganyu_(genshin_impact)" from the series "genshin_impact" is "ganyu \(genshin impact\), genshin impact". As with artists, character trigger words need no prefixes, suffixes, or extra modifiers.

    The following table demonstrates some correct and incorrect cases of character and style triggering:

    Type | Prompt | Verdict | Reason
    Character | Rei Ayanami | Wrong | The character name should be "ayanami rei", and the series tag "neon genesis evangelion" is missing.
    Character | character:ganyu \(genshin impact\), genshin impact | Wrong | Added the superfluous prefix "character:".
    Character | ganyu_\(genshin impact\) | Wrong | Not fully normalized: it should not contain underscores. The series tag is also missing.
    Character | ganyu (genshin impact), genshin impact | Wrong | Not fully normalized: the parentheses are not escaped.
    Character | ganyu (genshin impact\), genshin impact | Wrong | Not fully normalized: the left parenthesis is not escaped.
    Character | ganyu \(genshin impact\)，genshin impact | Wrong | The two tags are separated by a Chinese comma.
    Character | ganyu \(genshin impact\), genshin impact | Correct |
    Artist style | by wlop | Wrong | Added the superfluous prefix "by".
    Artist style | artist:wlop | Wrong | Added the superfluous prefix "artist:".
    Artist style | dino | Wrong | Wrong artist name: do not use the aidxl/artiwaifu name; follow Danbooru, which gives "dino \(dinoartforame\)".
    Artist style | wlop | Correct |

    Trigger Words Wiki

    For your convenience, we also provide complete trigger word tables in the noob-wiki for reference:

    Table type | Download link
    Danbooru characters | Click here
    Danbooru artist styles | Click here
    e621 characters | Click here
    e621 artist styles | Click here

    Each of these tables contains trigger words from one of the two databases, Danbooru or e621. Each row represents a character or artist style. Find the row for the desired character or artist style, copy the "trigger" cell, and paste it into your prompt as-is. If you are unsure about a character or artist style, click the link in the "url" column to view example images on the source website. The following table explains the meaning of each column; not every table contains all columns.

    Column | Meaning | Remarks
    character | The character's tag name on the source website. |
    artist | The artist style's tag name on the source website. |
    trigger | The standardized trigger word. | Copy and paste it into your prompt as-is.
    count | The number of images with this tag. | An indicator of how faithfully the concept can be reproduced. Characters with a count above 200, and styles with a count above 100, are reproduced well.
    url | The tag's page on the source website. |
    solo_count | The number of dataset images with this tag in which only one character appears. | Character tables only. Characters with a solo_count above 50 are reproduced well. As a fidelity indicator, count has larger deviation and lower accuracy; solo_count is more accurate.
    core_tags | The character's core feature tags, covering appearance, gender, and clothing; comma-separated, each standardized. | Danbooru character table only. When an unpopular character is not reproduced faithfully, adding several core feature tags improves fidelity.
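
    For example, assuming you have exported one of these tables to a local CSV file with the columns above (the filename here is hypothetical), you can filter for characters that are likely to reproduce well:

    import csv
    
    # Hypothetical local export of the Danbooru character table from noob-wiki.
    with open("danbooru_character.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    
    # Per the column notes above, characters with solo_count above 50 reproduce well.
    reliable = [row["trigger"] for row in rows if int(row.get("solo_count") or 0) > 50]
    print(reliable[:10])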

    Special Tags

    Special tags are tags with special meanings and effects that serve an auxiliary function.

    Quality Tags

    Quality tags are in effect popularity tags, derived statistically from Danbooru and e621 user preferences. In order of quality from high to low:

    masterpiece > best quality > high quality / good quality > normal quality > low quality / bad quality > worst quality

    Aesthetic Tags

    Aesthetic tags are assigned according to aesthetic scoring models. There are only two so far: "very awa" and "worst aesthetic". The former marks the data whose weighted waifu-scorer-v3 and waifu-scorer-v4-beta scores fall in the top 5%, the latter the bottom 5%. "very awa" is so named because its aesthetic standard is similar to that of the ArtiWaifu Diffusion model. In addition, "very as2" is an aesthetic tag still in training with no obvious effect yet, marking the data with aesthetic-shadow-v2-5 scores in the top 5%.

    (Image: comparison of the effects of aesthetic tags.)

    Safety/Rating Tags

    There are four safety/rating tags: general, sensitive, nsfw, and explicit.

    We hope users will conscientiously add "nsfw" to their negative prompts to filter out inappropriate content. 😀

    Year and Period Tags

    The year tag indicates the year a work was created, indirectly affecting quality, style, character fidelity, and so on. Its format is "year xxxx", where "xxxx" is a specific year, such as "year 2024".

    Period tags group year ranges and also have a significant impact on image quality. The correspondence between tags and years is shown in the table below.

    Period tag | Year range
    newest | 2021~2024
    recent | 2018~2020
    mid | 2014~2017
    early | 2011~2013
    old | 2005~2010
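
    A minimal sketch of this mapping in Python (the helper is illustrative):

    def period_tag(year: int) -> str | None:
        """Map a creation year to its period tag, per the table above."""
        periods = [
            (2021, 2024, "newest"),
            (2018, 2020, "recent"),
            (2014, 2017, "mid"),
            (2011, 2013, "early"),
            (2005, 2010, "old"),
        ]
        for start, end, tag in periods:
            if start <= year <= end:
                return tag
        return None
    
    print(period_tag(2024))  # -> newest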

    Other Tips

    This section provides recommended prompt examples, for reference only.

    Quality Prompts

    The following recommended starting point uses the special tags most strongly correlated with image quality:

    very awa, masterpiece, best quality, year 2024, newest, highres, absurdres

    Negative Prompts

    The following table introduces common negative tags and their sources. Not all negative tags are necessarily bad; used properly, they can produce unexpectedly good results.

    Quality tags

    Tag | Meaning / Remarks | Source
    worst aesthetic | Covers low-aesthetic concepts such as low quality, watermarks, comics, multiple views, and unfinished sketches | Aesthetic
    worst quality | Worst quality | Quality
    low quality | Danbooru's low quality | Quality
    bad quality | e621's low quality | Quality
    lowres | Low resolution | Danbooru
    scan artifacts | Scanning artifacts | Danbooru
    jpeg artifacts | JPEG compression artifacts | Danbooru
    lossy-lossless | Images converted from a lossy to a lossless format, usually full of artifacts | Danbooru

    Composition and art form tags

    Tag | Meaning / Remarks | Source
    ai-generated | AI-generated images, which often have a typical greasy AI look | Danbooru
    abstract | Abstract art; helps eliminate cluttered lines | Danbooru
    official art | Official illustrations by the series' company or artist; a copyright notice, company name, or artist name may be printed somewhere on the image | Danbooru
    old | Early images | Period
    4koma | Four-panel comic | Danbooru
    multiple views | Multiple views | Danbooru
    reference sheet | Character design sheet | Danbooru
    dakimakura \(medium\) | Body pillow image | Danbooru
    turnaround | Full-body character turnaround | Danbooru
    comic | Comic | Danbooru
    greyscale | Greyscale (black-and-white) image | Danbooru
    monochrome | Monochrome (black-and-white) image | Danbooru
    sketch | Sketch / line draft | Danbooru
    unfinished | Unfinished work | Danbooru

    e621 tags

    Tag | Meaning / Remarks | Source
    furry | Furry | e621
    anthro | Anthropomorphic furry | e621
    feral | Feral (animal-form) furry | e621
    semi-anthro | Semi-anthropomorphic furry | e621
    mammal | Mammal (furry) | e621

    Watermark tags

    Tag | Meaning / Remarks | Source
    watermark | Watermark | Danbooru
    logo | Logo | Danbooru
    signature | Artist signature | Danbooru
    text | Text | Danbooru
    artist name | Artist name | Danbooru
    dated | Date | Danbooru
    username | Username | Danbooru
    web address | Website address | Danbooru

    Anatomy tags

    Tag | Meaning / Remarks | Source
    bad hands | Badly drawn hands | Danbooru
    bad feet | Badly drawn feet | Danbooru
    extra digits | Extra fingers | Danbooru
    fewer digits | Missing fingers | Danbooru
    extra arms | Extra arms | Danbooru
    extra faces | Extra faces | Danbooru
    multiple heads | Multiple heads | Danbooru
    missing limb | Missing limb | Danbooru
    amputee | Amputee | Danbooru
    severed limb | Severed limb | Danbooru
    mutated hands | Mutated hands | –
    distorted anatomy | Distorted anatomy | –

    Content tags

    Tag | Meaning / Remarks | Source
    nsfw | Not safe for work | Safety
    explicit | Explicit content | Safety
    censored | Censored (mosaic, bars, etc.) | Danbooru
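
    As an illustrative starting point only, a negative prompt assembled from the tags above might look like:

    nsfw, worst aesthetic, worst quality, low quality, lowres, jpeg artifacts, watermark, signature, username, bad hands, extra digits, fewer digits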

    Tag Misuse

    Some meaningless tags circulate widely in prompts. This section lists commonly misused tags.

    Tag | Meaning | Why it is misused | Source
    bad id | The image's ID on the website is broken | Relates to image metadata, not image content | Danbooru
    bad link | The image's original link is broken | Relates to image metadata, not image content | Danbooru
    duplicate | The image is duplicated on the website | Correlates somewhat with quality, but does not mean duplicated content within the image | Danbooru
