
    How to run Stable Diffusion on CPU

    Guides

    By gerogero

    Updated: April 4, 2025

    Now, some of us don’t have fancy GPUs. That’s fine. We can run Stable Diffusion on our CPUs.

    This is an incredibly barebones implementation of Stable Diffusion, so do not expect cutting-edge features.
    If you have a compatible GPU with 2-4 GB of VRAM or more, try the Voldy guide instead.
    For most purposes, it may be more practical to use a web service or a Colab notebook for Stable Diffusion.
    But there is something special about being able to generate on your own humble CPU.
    All credit goes to bes-dev and rpyth.

    • Features
      Txt2img/img2img
      Negative prompts
      Prompt queueing
      Upscaling
      Waifu Diffusion support
    • Minimum Requirements
      Windows/Linux
      Python 3.8+ (included in Miniconda)
      CPU compatible with OpenVINO (most CPUs)
      8 GB RAM (barely enough; 16 GB+ recommended)
    • How fast is it?
      It will not be nearly as fast as a dedicated GPU, since memory speed becomes the bottleneck, but it is no slouch either.
      For any CPU from the past 10 years, including laptop ones, it shouldn't take much longer than a couple of minutes per 512x512 result.
      The OpenVINO framework is heavily optimized, especially for Intel CPUs, and will squeeze the maximum potential out of your hardware.

    Guide

    Step 1. Install Git if you do not have it already
    -When installing, make sure to select the Windows Explorer integration > Git Bash

    Step 2. (W10) Press Windows Key + I to open Settings and search for "Developer Mode", then turn it on

    Step 3. Download Miniconda HERE (choose Miniconda 3)
    -Install Miniconda in the default location. Install for all users.

    Step 4. Clone the repo
    -Right click in your desired location and select ‘Git Bash here’
    -Enter git clone https://github.com/bes-dev/stable_diffusion.openvino
    Alternatively, you can download it as a .zip HERE and extract it

    Step 5. Open Anaconda Prompt (miniconda3).
    Navigate to the /stable-diffusion-v1-4-openvino folder (wherever you downloaded it), using "cd" to jump between folders.
    (Or just type “cd” followed by a space and then drag the folder into the Anaconda prompt.)

    Step 6. Enter the following commands into Miniconda to set up your environment:

    • conda create --name vin python=3.9 pip
    • conda activate vin
    • conda install pip
    • pip install -r requirements.txt
    • pip install Pillow pyyaml sv-ttk
      Wait patiently while the necessary packages are installed; this may take a while
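
    Optional: before moving on, you can sanity-check that OpenVINO was installed correctly and can see your CPU. A minimal sketch, run inside the activated vin environment; which import path is available depends on the OpenVINO version pinned in requirements.txt (newer releases expose openvino.runtime, older 2021.x releases expose openvino.inference_engine), so it tries both:

      # check_openvino.py - confirm OpenVINO can see your CPU.
      # Run inside the activated "vin" environment: python check_openvino.py
      try:
          # OpenVINO 2022.x and newer expose the runtime API
          from openvino.runtime import Core
          devices = Core().available_devices
      except ImportError:
          # Older 2021.x releases use the Inference Engine API instead
          from openvino.inference_engine import IECore
          devices = IECore().available_devices

      print("Available devices:", devices)  # should include "CPU"

    If "CPU" shows up in the list, OpenVINO can use your processor and the rest of the guide should work.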

    Step 7. Download the pyGUI scripts
    Extract it and copy all of the files inside to your main /stable-diffusion-v1-4-openvino folder, hitting Replace on any file conflicts

    Step 8. Download the RealESRGAN upscaler (linux ver)
    Unzip and place the folder inside /stable-diffusion-v1-4-openvino
    And you’re done
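
    The GUI will handle upscaling for you, but you can also call Real-ESRGAN on a finished image yourself. A minimal sketch, assuming the download is the standalone realesrgan-ncnn-vulkan executable and that it takes the usual -i/-o input and output flags (the binary name, folder path, and flags are assumptions, so check its README if yours differ):

      # upscale.py - run the Real-ESRGAN executable on a finished output.
      # Binary name, path, and flags are assumptions; adjust to match your download.
      import subprocess
      from pathlib import Path

      upscaler = Path("realesrgan-ncnn-vulkan") / "realesrgan-ncnn-vulkan"  # hypothetical location
      subprocess.run(
          [str(upscaler), "-i", "output.png", "-o", "output_upscaled.png"],
          check=True,
      )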

    Usage

    1. Open the Miniconda prompt and navigate to /stable-diffusion-v1-4-openvino like before
    2. Type conda activate vin (You will need to do this every time you run the script)
    3. Type python pygui.py
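
    The GUI is a front end for the repo's demo.py script (which is why you link demo.py in the first-time setup below), so you can also generate without it. A minimal sketch of a direct run; the flag names follow the stable_diffusion.openvino README but may differ between versions, so confirm them with python demo.py --help:

      # run_once.py - generate a single image by calling demo.py directly,
      # skipping the GUI. Run from the repo folder with the "vin" env active.
      # Flag names are assumptions based on the upstream README.
      import subprocess

      subprocess.run(
          [
              "python", "demo.py",
              "--prompt", "a cozy cabin in a snowy forest, warm lighting",
              "--num-inference-steps", "40",  # 35-55 is the sweet spot (see Generation below)
              "--seed", "42",                 # fixed seed so the result is reproducible
              "--output", "cabin.png",
          ],
          check=True,
      )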

    FIRST TIME SETUP
    Go to Settings -> Configure in the GUI
    -Hit [?] to open the file browser and link the RealESRGAN executable by hitting 'open'
    -Link your demo.py file from the openvino folder the same way
    -Add the path to your Python executable; it should be C:\ProgramData\Miniconda3\python.exe
    -Hit save

    Generation

    • Go to Queue -> Add Item to enter a new prompt
    • Or Queue -> Restore Item to load your last entered prompt
      Prompt: Keywords describing what you want; be descriptive for best results
      Unprompt: Keywords describing what you don't want in your image
      Output: Path and filename of the output .png
      Image: Img2img; select an image file to create variants of it
      Steps: How many iterations to run for the output. More steps generally means better quality, with diminishing returns; 35-55 is the sweet spot, and >75 is overkill
      Seed: Seed for the output, randomized by default
      Upscale: Choose how you want your image upscaled
      Config: Save info about your output
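
    The Config option saves info about your output, presumably the settings that produced it, so you can reproduce or tweak the image later. If you ever want to do the same by hand, here is a minimal sketch that writes a settings sidecar with pyyaml (installed back in Step 6); the field names and file format used by the GUI itself are assumptions:

      # save_config.py - write the generation settings next to the output image,
      # similar in spirit to the GUI's "Config" option (format is hypothetical).
      import yaml  # pyyaml was installed in Step 6

      settings = {
          "prompt": "a cozy cabin in a snowy forest, warm lighting",
          "unprompt": "blurry, low quality, extra limbs",
          "steps": 40,
          "seed": 42,
          "output": "cabin.png",
      }

      with open("cabin.yaml", "w") as f:
          yaml.safe_dump(settings, f, sort_keys=False)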

    Links/Notes

    • If you are getting Python version errors with 3.10 and don’t want to have conflicting installations, try the portable Winpython 3.9
    • You can queue up multiple different prompts to run one after another.
      This is very convenient since you don't need to wait for an output to finish before entering a new prompt.
    • If you don't select an output folder, images are saved to /appdata/local/tmp. Hit 'Save As' so you don't lose them.
    • If your outputs are or become unusually slow (10-15+ minutes),
      it's likely that your RAM limit was exceeded and SD is using the swap partition on your drive as makeshift RAM (a common issue with 8 GB).
      Close all other programs to free up more memory; see the sketch after this list for a quick way to check available RAM.
    • Stable Diffusion OpenVINO GitHub
    • Stable Diffusion OpenVINO page
    • Litechan page
    • Progrock upscaler (compatible with OpenVINO)
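
    A quick way to check whether you are about to run out of RAM (this needs psutil, which is not in requirements.txt: pip install psutil):

      # check_memory.py - rough check of free RAM before a long generation run.
      import psutil  # extra dependency: pip install psutil

      mem = psutil.virtual_memory()
      print(f"Total RAM:     {mem.total / 2**30:.1f} GiB")
      print(f"Available RAM: {mem.available / 2**30:.1f} GiB")

      # If only a GiB or two is available, close other programs first;
      # otherwise the run spills into swap and slows to the 10-15+ minute symptom above.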

    –SPEED PER RESULT–
    Intel Core i5-8279U: 7.4 s/it (3.59 min per result)
    AMD Ryzen Threadripper 1900X: 5.34 s/it (2.58 min per result)
    Intel Xeon Gold 6154: 1 s/it (33 s per result)
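
    To estimate your own total time, multiply your seconds-per-iteration by the number of steps you use; the figures above work out to roughly 30 iterations per result:

      # Rough runtime estimate: seconds per iteration x number of steps.
      def estimate_minutes(sec_per_it: float, steps: int = 30) -> float:
          return sec_per_it * steps / 60

      print(f"{estimate_minutes(7.4):.1f} min")   # ~3.7 min, close to the 3.59 min measured above
      print(f"{estimate_minutes(5.34):.1f} min")  # ~2.7 min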
