[Stable Diffusion] How to Fix Common Errors During Installation

2023-11-13

Reposted from: https://openai.wiki/stable-diffusion-error.html

How to Read Error Messages

During installation you will often run into all sorts of problems, and faced with a wall of unfamiliar English and assorted error messages you may not know where to start. Below I will show you how to read an error report.

(base) C:\OpenAI.Wiki\stable-diffusion-webui>webui-user.bat
venv “C:\OpenAI.Wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash: 955df7751eef11bb7697e2d77f6b8a6226b21e13
Installing clip
Installing open_clip
Cloning Stable Diffusion into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai…
Cloning Taming Transformers into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\taming-transformers…
Cloning K-diffusion into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\k-diffusion…
Cloning CodeFormer into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\CodeFormer…
Traceback (most recent call last):
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 351, in
prepare_environment()
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 287, in prepare_environment
git_clone(codeformer_repo, repo_dir(‘CodeFormer’), “CodeFormer”, codeformer_commit_hash)
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 151, in git_clone
run(f'”{git}” clone “{url}” “{dir}”‘, f”Cloning {name} into {dir}…”, f”Couldn’t clone {name}”)
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 97, in run
raise RuntimeError(message)
RuntimeError: Couldn’t clone CodeFormer.
Command: “git” clone “https://github.com/sczhou/CodeFormer.git” “C:\OpenAI.Wiki\stable-diffusion-webui\repositories\CodeFormer”
Error code: 128
stdout:
stderr: Cloning into ‘C:\OpenAI.Wiki\stable-diffusion-webui\repositories\CodeFormer’…

Start with the first line, (base) C:\OpenAI.Wiki\stable-diffusion-webui>webui-user.bat. The (base) prefix is the name of the conda virtual environment currently in use, i.e. which environment we are running in. This clearly does not match this site's tutorial: it should normally read (D:\openai.wiki\stable-diffusion-webui\automatic), where that path is the name of the virtual environment.

Lines 5 and 6 of the log, Installing clip and Installing open_clip, are two dependency libraries being installed. Nothing has gone wrong at this point, because the log then continues with cloning several other dependencies.

Line 11, Traceback (most recent call last):, is where a problem is reported. The line just before it reads Cloning CodeFormer into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\CodeFormer…, which tells us that this is the step that failed.

Network Problems

Solutions

Almost everything in this section is caused by network issues, and most installation problems can be solved by switching to a domestic (China) mirror.

For example, when installing the requirements.txt file we can try pip install -i https://mirrors.aliyun.com/pypi/simple/ -r D:/openai.wiki/stable-diffusion-webui/requirements.txt.

This command tells pip not to use the official download location but to fetch the dependencies from the Alibaba Cloud mirror inside China instead.
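
If you would rather not pass -i on every command, recent versions of pip can remember the mirror permanently (this is optional; the URL is simply the same Aliyun mirror used above):

pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/

After this, plain pip install commands will go through the mirror automatically.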

If that still does not solve it, search for how to make your proxy work inside CMD. Some proxy tools can reach YouTube, Google, and similar sites in the browser, yet still fail in CMD, because the proxy settings are not inherited by the command-line session.
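
As a rough sketch, if your proxy tool exposes a local HTTP port (127.0.0.1:7890 is only a common example; substitute the port your own tool actually listens on), you can set it for the current CMD window so that git and pip inherit it:

set HTTP_PROXY=http://127.0.0.1:7890
set HTTPS_PROXY=http://127.0.0.1:7890

These variables only affect the current CMD window, so run webui-user.bat from that same window.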

Stable-Diffusion-Stability-AI

venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Cloning Stable Diffusion into D:\openai.wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai…
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 355, in
prepare_environment()
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 288, in prepare_environment
git_clone(stable_diffusion_repo, repo_dir(‘stable-diffusion-stability-ai’), “Stable Diffusion”, stable_diffusion_commit_hash)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 151, in git_clone
run(f'”{git}” clone “{url}” “{dir}”‘, f”Cloning {name} into {dir}…”, f”Couldn’t clone {name}”)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 97, in run
raise RuntimeError(message)
RuntimeError: Couldn’t clone Stable Diffusion.
Command: “git” clone “https://github.com/Stability-AI/stablediffusion.git” “D:\openai.wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai”
Error code: 128
stdout:
stderr: Cloning into ‘D:\openai.wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai’…
fatal: unable to access ‘https://github.com/Stability-AI/stablediffusion.git/’: OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to github.com:443

The script was unable to download the Stable-Diffusion-Stability-AI repository while running git clone https://github.com/Stability-AI/stablediffusion.git.
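
To confirm whether git can reach GitHub at all from your CMD session, a quick connectivity check (it only lists remote refs and downloads nothing) is:

git ls-remote https://github.com/Stability-AI/stablediffusion.git

If this also fails with an SSL or connection error, the problem is your network or proxy rather than the WebUI itself.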

PyPI Errors

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Encountered error while generating package metadata.
See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

The package metadata from PyPI is broken; try running pip cache purge to clear pip's cache.

Incorrect Hash

(base) tianjihuideMBP:stable-diffusion-webui color$ ./webui.sh
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on color user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py…
################################################################
Python 3.10.10 (v3.10.10:aad5f6a891, Feb 7 2023, 08:47:40) [Clang 13.0.0 (clang-1300.0.29.30)]
Commit hash:
Traceback (most recent call last):
File “/Users/color/stable-diffusion-webui/launch.py”, line 378, in
prepare_environment()
File “/Users/color/stable-diffusion-webui/launch.py”, line 315, in prepare_environment
git_clone(stable_diffusion_repo, repo_dir(‘stable-diffusion-stability-ai’), “Stable Diffusion”,
File “/Users/color/stable-diffusion-webui/launch.py”, line 152, in git_clone
current_hash = run(f'”{git}” -C “{dir}” rev-parse HEAD’, None,
File “/Users/color/stable-diffusion-webui/launch.py”, line 105, in run
raise RuntimeError(message)
RuntimeError: Couldn’t determine Stable Diffusion’s hash: 47b6b607fdd31875c9279cd2f4f16b92e4ea958e.
Command: “git” -C “repositories/stable-diffusion-stability-ai” rev-parse HEAD
Error code: 128
stdout: HEAD
stderr: fatal: ambiguous argument ‘HEAD’: unknown revision or path not in the working tree.
Use ‘–‘ to separate paths from revisions, like this:
‘git […] — […]’
(base) tianjihuideMBP:stable-diffusion-webui color$

This error occurred while running a Python script. According to the output, the script tried to clone the Stable Diffusion git repository but could not determine the repository's commit hash.

The recommendation is to delete the entire project and reinstall from scratch.

Component Mismatch

ERROR: Could not find a version that satisfies the requirement opencv-contrib-python (from versions: none)
ERROR: No matching distribution found for opencv-contrib-python

This error means that no matching version of opencv-contrib-python could be found for the Python environment in use, either because PyPI does not publish a compatible build for it or because the package index could not be reached. Note that pip search no longer works against PyPI, so check the available versions on pypi.org (or with pip index versions, as shown below) and install a compatible one.

In principle this problem should not occur if you follow the tutorial exactly; if it does, it is best to delete everything and start over, following every step strictly.
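
A quick way to see which versions pip can actually find (pip index is available in recent pip releases and is still marked experimental; the package name matches the error above):

pip index versions opencv-contrib-python

If no versions are listed, the Python version or the index/mirror being used is the likely culprit.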

CodeFormer

(base) C:\OpenAI.Wiki\stable-diffusion-webui>webui-user.bat
venv “C:\OpenAI.Wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash: 955df7751eef11bb7697e2d77f6b8a6226b21e13
Installing clip
Installing open_clip
Cloning Stable Diffusion into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai…
Cloning Taming Transformers into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\taming-transformers…
Cloning K-diffusion into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\k-diffusion…
Cloning CodeFormer into C:\OpenAI.Wiki\stable-diffusion-webui\repositories\CodeFormer…
Traceback (most recent call last):
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 351, in
prepare_environment()
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 287, in prepare_environment
git_clone(codeformer_repo, repo_dir(‘CodeFormer’), “CodeFormer”, codeformer_commit_hash)
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 151, in git_clone
run(f'”{git}” clone “{url}” “{dir}”‘, f”Cloning {name} into {dir}…”, f”Couldn’t clone {name}”)
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 97, in run
raise RuntimeError(message)
RuntimeError: Couldn’t clone CodeFormer.
Command: “git” clone “https://github.com/sczhou/CodeFormer.git” “C:\OpenAI.Wiki\stable-diffusion-webui\repositories\CodeFormer”
Error code: 128
stdout:
stderr: Cloning into ‘C:\OpenAI.Wiki\stable-diffusion-webui\repositories\CodeFormer’…

Here the failure happens while cloning CodeFormer: the error message shows that Git could not clone the repository. Make sure your computer can reach https://github.com/sczhou/CodeFormer.git and that you can access the repository. You can also try cloning it manually to see what error comes back.
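
A sketch of that manual clone, using the same target path that launch.py uses in the log above (adjust the drive and path to your own install):

git clone https://github.com/sczhou/CodeFormer.git C:\OpenAI.Wiki\stable-diffusion-webui\repositories\CodeFormer

If the manual clone succeeds, re-running webui-user.bat should pick the repository up and continue.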

Unable to Connect to the SD Repository

venv “C:\OpenAI.Wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash: 64da5c46ef0d68b9048747c2e0d46ce3495f9f29
Fetching updates for Stable Diffusion…
Traceback (most recent call last):
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 351, in
prepare_environment()
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 284, in prepare_environment
git_clone(stable_diffusion_repo, repo_dir(‘stable-diffusion-stability-ai’), “Stable Diffusion”, stable_diffusion_com
mit_hash)
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 147, in git_clone
run(f'”{git}” -C “{dir}” fetch’, f”Fetching updates for {name}…”, f”Couldn’t fetch {name}”)
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 97, in run
raise RuntimeError(message)
RuntimeError: Couldn’t fetch Stable Diffusion.
Command: “git” -C “C:\OpenAI.Wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai” fetch
Error code: 128
stdout:
stderr: fatal: unable to access ‘https://github.com/AUTOMATIC1111/stable-diffusion-webui.git/’: OpenSSL SSL_read: Connec
tion was reset, errno 10054

This error occurred while downloading Stable Diffusion from GitHub and is most likely a network connection problem. Check whether your network connection, and your proxy if you use one, is actually working inside CMD.

Incomplete SD Repository

venv “C:\OpenAI.Wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash: 64da5c46ef0d68b9048747c2e0d46ce3495f9f29
Fetching updates for Stable Diffusion…
Checking out commit for Stable Diffusion with hash: 47b6b607fdd31875c9279cd2f4f16b92e4ea958e…
Traceback (most recent call last):
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 351, in
prepare_environment()
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 284, in prepare_environment
git_clone(stable_diffusion_repo, repo_dir(‘stable-diffusion-stability-ai’), “Stable Diffusion”, stable_diffusion_com
mit_hash)
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 148, in git_clone
run(f'”{git}” -C “{dir}” checkout {commithash}’, f”Checking out commit for {name} with hash: {commithash}…”, f”Cou
ldn’t checkout commit {commithash} for {name}”)
File “C:\OpenAI.Wiki\stable-diffusion-webui\launch.py”, line 97, in run
raise RuntimeError(message)
RuntimeError: Couldn’t checkout commit 47b6b607fdd31875c9279cd2f4f16b92e4ea958e for Stable Diffusion.
Command: “git” -C “C:\OpenAI.Wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai” checkout 47b6b607fd
d31875c9279cd2f4f16b92e4ea958e
Error code: 128
stdout:
stderr: fatal: reference is not a tree: 47b6b607fdd31875c9279cd2f4f16b92e4ea958e

This error means the checkout of Stable Diffusion failed: the local clone is incomplete or out of date, so the required commit cannot be found. Clear the local copy of the repository, re-clone Stable Diffusion, and run the script again.
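
A minimal sketch of that clean-up on Windows, using the path from the log above (double-check the path before deleting; webui-user.bat will clone the repository again on the next run):

rmdir /s /q C:\OpenAI.Wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai
webui-user.bat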

pip Not Found

venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Commit hash: 4c1ad743e3baf1246db0711aa0107debf036a12b
Installing torch and torchvision
D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe: No module named pip
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 351, in
prepare_environment()
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 253, in prepare_environment
run(f'”{python}” -m {torch_command}’, “Installing torch and torchvision”, “Couldn’t install torch”, live=True)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 81, in run
raise RuntimeError(f”””{errdesc or ‘Error running command’}.
RuntimeError: Couldn’t install torch.
Command: “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe” -m pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 –extra-index-url https://download.pytorch.org/whl/cu117
Error code: 1

According to the error log, launch.py was trying to install torch and torchvision, but the command failed with the message No module named pip.

This usually means pip is not installed in the Python environment (here, the venv), so running any pip command fails.

Install pip first, then run launch.py again. The simplest way is Python's built-in bootstrapper:

python -m ensurepip --upgrade

Alternatively, download get-pip.py from https://bootstrap.pypa.io/get-pip.py and run it with python get-pip.py. Either way, pip will be installed into your Python environment and you can then run the startup file again.

open_clip

F:\openai.wiki\stable-diffusion-webui>webui-user.bat
venv “F:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
Commit hash: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
Installing open_clip
Traceback (most recent call last):
File “F:\openai.wiki\stable-diffusion-webui\launch.py”, line 380, in
prepare_environment()
File “F:\openai.wiki\stable-diffusion-webui\launch.py”, line 296, in prepare_environment
run_pip(f”install {openclip_package}”, “open_clip”)
File “F:\openai.wiki\stable-diffusion-webui\launch.py”, line 145, in run_pip
return run(f'”{python}” -m pip {args} –prefer-binary{index_url_line}’, desc=f”Installing {desc}”, errdesc=f”Couldn’t install {desc}”)
File “F:\openai.wiki\stable-diffusion-webui\launch.py”, line 113, in run
raise RuntimeError(message)
RuntimeError: Couldn’t install open_clip.
Command: “F:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe” -m pip install git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b –prefer-binary
Error code: 1
stdout: Collecting git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b
Cloning https://github.com/mlfoundations/open_clip.git (to revision bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b) to c:\users\administrator\appdata\local\temp\pip-req-build-0bs_j2f1
stderr: Running command git clone –filter=blob:none –quiet https://github.com/mlfoundations/open_clip.git ‘C:\Users\Administrator\AppData\Local\Temp\pip-req-build-0bs_j2f1’
error: RPC failed; curl 18 HTTP/2 stream 3 was not closed cleanly before end of the underlying stream
fatal: expected flush after ref listing
error: subprocess-exited-with-error
git clone –filter=blob:none –quiet https://github.com/mlfoundations/open_clip.git ‘C:\Users\Administrator\AppData\Local\Temp\pip-req-build-0bs_j2f1’ did not run successfully.
exit code: 128
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
git clone –filter=blob:none –quiet https://github.com/mlfoundations/open_clip.git ‘C:\Users\Administrator\AppData\Local\Temp\pip-req-build-0bs_j2f1’ did not run successfully.
exit code: 128
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: F:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe -m pip install –upgrade pip

This error appears while running webui-user.bat: the program tried to install the open_clip package, and the installation failed. The relevant part of the error output is:

Running command git clone –filter=blob:none –quiet https://github.com/mlfoundations/open_clip.git ‘C:\Users\Administrator\AppData\Local\Temp\pip-req-build-0bs_j2f1’
error: RPC failed; curl 18 HTTP/2 stream 3 was not closed cleanly before end of the underlying stream
fatal: expected flush after ref listing
error: subprocess-exited-with-error
git clone –filter=blob:none –quiet https://github.com/mlfoundations/open_clip.git ‘C:\Users\Administrator\AppData\Local\Temp\pip-req-build-0bs_j2f1’ did not run successfully.
exit code: 128
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.

From the error output, pip tried to clone the open_clip repository from GitHub while installing the package, and the clone failed, most likely because of a network problem or an issue on GitHub's side. You can re-run the script, or try again a bit later, and see whether the package installs.
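
The "RPC failed; curl 18 HTTP/2 stream was not closed cleanly" message is a common symptom of an unstable connection to GitHub. One workaround worth trying (an assumption, not a guaranteed fix) is to force git to use HTTP/1.1 and then re-run webui-user.bat:

git config --global http.version HTTP/1.1

You can undo this later with git config --global --unset http.version.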

Incorrect SD Repository

venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
Fetching updates for Stable Diffusion…
Checking out commit for Stable Diffusion with hash: 47b6b607fdd31875c9279cd2f4f16b92e4ea958e…
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 380, in
prepare_environment()
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 315, in prepare_environment
git_clone(stable_diffusion_repo, repo_dir(‘stable-diffusion-stability-ai’), “Stable Diffusion”, stable_diffusion_commit_hash)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 164, in git_clone
run(f'”{git}” -C “{dir}” checkout {commithash}’, f”Checking out commit for {name} with hash: {commithash}…”, f”Couldn’t checkout commit {commithash} for {name}”)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 113, in run
raise RuntimeError(message)
RuntimeError: Couldn’t checkout commit 47b6b607fdd31875c9279cd2f4f16b92e4ea958e for Stable Diffusion.
Command: “git” -C “D:\openai.wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai” checkout 47b6b607fdd31875c9279cd2f4f16b92e4ea958e
Error code: 128
stdout:
stderr: fatal: reference is not a tree: 47b6b607fdd31875c9279cd2f4f16b92e4ea958e

From the information provided, the launch.py script failed because Git could not check out the required commit. This can happen when the local Git repository is in a bad state or when the network connection is unreliable. You can run the following command to check your network connection:

ping files.pythonhosted.org

pip Timeout

ERROR: Exception:
Traceback (most recent call last):
File “E:\conda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 437, in _error_catcher
yield
File “E:\conda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 560, in read
data = self._fp_read(amt) if not fp_closed else b””
File “E:\conda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 526, in _fp_read
return self._fp.read(amt) if amt is not None else self._fp.read()
File “E:\conda\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py”, line 90, in read
data = self.__fp.read(amt)
File “E:\conda\lib\http\client.py”, line 465, in read
s = self.fp.read(amt)
File “E:\conda\lib\socket.py”, line 705, in readinto
return self._sock.recv_into(b)
File “E:\conda\lib\ssl.py”, line 1274, in recv_into
return self.read(nbytes, buffer)
File “E:\conda\lib\ssl.py”, line 1130, in read
return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “E:\conda\lib\site-packages\pip\_internal\cli\base_command.py”, line 160, in exc_logging_wrapper
status = run_func(*args)
File “E:\conda\lib\site-packages\pip\_internal\cli\req_command.py”, line 247, in wrapper
return func(self, options, args)
File “E:\conda\lib\site-packages\pip\_internal\commands\install.py”, line 400, in run
requirement_set = resolver.resolve(
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py”, line 92, in resolve
result = self._result = resolver.resolve(
File “E:\conda\lib\site-packages\pip\_vendor\resolvelib\resolvers.py”, line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File “E:\conda\lib\site-packages\pip\_vendor\resolvelib\resolvers.py”, line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File “E:\conda\lib\site-packages\pip\_vendor\resolvelib\resolvers.py”, line 172, in _add_to_criteria
if not criterion.candidates:
File “E:\conda\lib\site-packages\pip\_vendor\resolvelib\structs.py”, line 151, in __bool__
return bool(self._sequence)
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py”, line 155, in __bool__
return any(self)
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py”, line 143, in
return (c for c in iterator if id(c) not in self._incompatible_ids)
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py”, line 47, in _iter_built
candidate = func()
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py”, line 206, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py”, line 297, in __init__
super().__init__(
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py”, line 162, in __init__
self.dist = self._prepare()
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py”, line 231, in _prepare
dist = self._prepare_distribution()
File “E:\conda\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py”, line 308, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
File “E:\conda\lib\site-packages\pip\_internal\operations\prepare.py”, line 491, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File “E:\conda\lib\site-packages\pip\_internal\operations\prepare.py”, line 536, in _prepare_linked_requirement
local_file = unpack_url(
File “E:\conda\lib\site-packages\pip\_internal\operations\prepare.py”, line 166, in unpack_url
file = get_http_url(
File “E:\conda\lib\site-packages\pip\_internal\operations\prepare.py”, line 107, in get_http_url
from_path, content_type = download(link, temp_dir.path)
File “E:\conda\lib\site-packages\pip\_internal\network\download.py”, line 147, in __call__
for chunk in chunks:
File “E:\conda\lib\site-packages\pip\_internal\cli\progress_bars.py”, line 53, in _rich_progress_bar
for chunk in iterable:
File “E:\conda\lib\site-packages\pip\_internal\network\utils.py”, line 63, in response_chunks
for chunk in response.raw.stream(
File “E:\conda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 621, in stream
data = self.read(amt=amt, decode_content=decode_content)
File “E:\conda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 559, in read
with self._error_catcher():
File “E:\conda\lib\contextlib.py”, line 153, in __exit__
self.gen.throw(typ, value, traceback)
File “E:\conda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 442, in _error_catcher
raise ReadTimeoutError(self._pool, None, “Read timed out.”)
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host=’files.pythonhosted.org’, port=443): Read timed out.

This error usually means the connection timed out while pip was downloading a package from the remote server. Re-run the command, or increase the timeout with pip's --default-timeout option, like this:

pip install --default-timeout=1000 package_name

Here the timeout is set to 1000 seconds. If that still does not solve the problem, try a VPN/proxy or a different network connection.

requirements

The installation gets stuck at the "Installing requirements for Web UI" step.
The output shows an SSL error, which is most likely caused by a network problem. Try a different network environment, or retry the installation when the connection is stable.

You can also run the following command to update pip, then re-run the installation: python -m pip install --upgrade pip

xformers

No module ‘xformers’. Proceeding without it.
This message appears on every launch, but it is not an error: xformers is simply not installed or enabled by default, and you can enable it manually if you want it.

The library's main purpose is to speed up image generation, but on some machines installing it actually makes generation slower, so it is fine to leave things as they are and ignore the message.
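
If you do want to try it, the usual route is to pass --xformers via COMMANDLINE_ARGS in webui-user.bat, which makes launch.py install and enable it on the next start:

set COMMANDLINE_ARGS=--xformers

Remove the flag again if generation turns out slower on your hardware.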

GFPGAN

venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
Commit hash:
Installing gfpgan
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 380, in
prepare_environment()
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 290, in prepare_environment
run_pip(f”install {gfpgan_package}”, “gfpgan”)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 145, in run_pip
return run(f'”{python}” -m pip {args} –prefer-binary{index_url_line}’, desc=f”Installing {desc}”, errdesc=f”Couldn’t install {desc}”)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 113, in run
raise RuntimeError(message)
RuntimeError: Couldn’t install gfpgan.
Command: “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe” -m pip install git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 –prefer-binary
Error code: 1
stdout: Collecting git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379
Cloning https://github.com/TencentARC/GFPGAN.git (to revision 8d2447a2d918f8eba5a4a01463fd48e45126a379) to c:\users\17742\appdata\local\temp\pip-req-build-ocz06mwj
stderr: Running command git clone –filter=blob:none –quiet https://github.com/TencentARC/GFPGAN.git ‘C:\Users\17742\AppData\Local\Temp\pip-req-build-ocz06mwj’
fatal: unable to access ‘https://github.com/TencentARC/GFPGAN.git/’: Failed to connect to github.com port 443 after 21046 ms: Couldn’t connect to server
error: subprocess-exited-with-error
git clone –filter=blob:none –quiet https://github.com/TencentARC/GFPGAN.git ‘C:\Users\17742\AppData\Local\Temp\pip-req-build-ocz06mwj’ did not run successfully.
exit code: 128
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
git clone –filter=blob:none –quiet https://github.com/TencentARC/GFPGAN.git ‘C:\Users\17742\AppData\Local\Temp\pip-req-build-ocz06mwj’ did not run successfully.
exit code: 128
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe -m pip install –upgrade pip

This error occurred while installing gfpgan: the attempt to clone the code repository from GitHub failed.

The most likely cause is that your computer cannot connect to GitHub, or that GitHub itself is currently having problems.

Check your network connection and make sure you can reach GitHub, or try the installation again later. You may also want to upgrade your pip version.
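
The "Failed to connect to github.com port 443" message usually means git has no working route to GitHub. If you run a local proxy, you can tell git to use it (a sketch only; 127.0.0.1:7890 stands in for whatever address and port your own proxy actually uses):

git config --global http.proxy http://127.0.0.1:7890
git config --global https.proxy http://127.0.0.1:7890

Undo the settings afterwards with git config --global --unset http.proxy and git config --global --unset https.proxy.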

subprocess-exited-with-error

error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [109 lines of output]
D:\Miniconda\lib\site-packages\setuptools\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
warnings.warn(
ERROR: Exception:
Traceback (most recent call last):
File “D:\Miniconda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 437, in _error_catcher
yield
File “D:\Miniconda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 560, in read
data = self._fp_read(amt) if not fp_closed else b””
File “D:\Miniconda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 526, in _fp_read
return self._fp.read(amt) if amt is not None else self._fp.read()
File “D:\Miniconda\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py”, line 90, in read
data = self.__fp.read(amt)
File “D:\Miniconda\lib\http\client.py”, line 465, in read
s = self.fp.read(amt)
File “D:\Miniconda\lib\socket.py”, line 705, in readinto
return self._sock.recv_into(b)
File “D:\Miniconda\lib\ssl.py”, line 1274, in recv_into
return self.read(nbytes, buffer)
File “D:\Miniconda\lib\ssl.py”, line 1130, in read
return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “D:\Miniconda\lib\site-packages\pip\_internal\cli\base_command.py”, line 160, in exc_logging_wrapper
status = run_func(*args)
File “D:\Miniconda\lib\site-packages\pip\_internal\cli\req_command.py”, line 247, in wrapper
return func(self, options, args)
File “D:\Miniconda\lib\site-packages\pip\_internal\commands\wheel.py”, line 170, in run
requirement_set = resolver.resolve(reqs, check_supported_wheels=True)
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py”, line 92, in resolve
result = self._result = resolver.resolve(
File “D:\Miniconda\lib\site-packages\pip\_vendor\resolvelib\resolvers.py”, line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File “D:\Miniconda\lib\site-packages\pip\_vendor\resolvelib\resolvers.py”, line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File “D:\Miniconda\lib\site-packages\pip\_vendor\resolvelib\resolvers.py”, line 172, in _add_to_criteria
if not criterion.candidates:
File “D:\Miniconda\lib\site-packages\pip\_vendor\resolvelib\structs.py”, line 151, in __bool__
return bool(self._sequence)
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py”, line 155, in __bool__
return any(self)
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py”, line 143, in
return (c for c in iterator if id(c) not in self._incompatible_ids)
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py”, line 47, in _iter_built
candidate = func()
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py”, line 206, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py”, line 297, in __init__
super().__init__(
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py”, line 162, in __init__
self.dist = self._prepare()
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py”, line 231, in _prepare
dist = self._prepare_distribution()
File “D:\Miniconda\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py”, line 308, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
File “D:\Miniconda\lib\site-packages\pip\_internal\operations\prepare.py”, line 491, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File “D:\Miniconda\lib\site-packages\pip\_internal\operations\prepare.py”, line 536, in _prepare_linked_requirement
local_file = unpack_url(
File “D:\Miniconda\lib\site-packages\pip\_internal\operations\prepare.py”, line 166, in unpack_url
file = get_http_url(
File “D:\Miniconda\lib\site-packages\pip\_internal\operations\prepare.py”, line 107, in get_http_url
from_path, content_type = download(link, temp_dir.path)
File “D:\Miniconda\lib\site-packages\pip\_internal\network\download.py”, line 147, in __call__
for chunk in chunks:
File “D:\Miniconda\lib\site-packages\pip\_internal\network\utils.py”, line 63, in response_chunks
for chunk in response.raw.stream(
File “D:\Miniconda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 621, in stream
data = self.read(amt=amt, decode_content=decode_content)
File “D:\Miniconda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 559, in read
with self._error_catcher():
File “D:\Miniconda\lib\contextlib.py”, line 153, in __exit__
self.gen.throw(typ, value, traceback)
File “D:\Miniconda\lib\site-packages\pip\_vendor\urllib3\response.py”, line 442, in _error_catcher
raise ReadTimeoutError(self._pool, None, “Read timed out.”)
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host=’files.pythonhosted.org’, port=443): Read timed out.
Traceback (most recent call last):
File “D:\Miniconda\lib\site-packages\setuptools\installer.py”, line 82, in fetch_build_egg
subprocess.check_call(cmd)
File “D:\Miniconda\lib\subprocess.py”, line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command ‘[‘D:\\Miniconda\\python.exe’, ‘-m’, ‘pip’, ‘–disable-pip-version-check’, ‘wheel’, ‘–no-deps’, ‘-w’, ‘C:\\Users\\23243\\AppData\\Local\\Temp\\tmpk82g_bqu’, ‘–quiet’, ‘torch’]’ returned non-zero exit status 2.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File “”, line 2, in
File “”, line 34, in
File “C:\Users\23243\AppData\Local\Temp\pip-install-hauelb5m\basicsr_ff9fa51afe15482f8d5ed318666d9752\setup.py”, line 147, in
setup(
File “D:\Miniconda\lib\site-packages\setuptools\__init__.py”, line 86, in setup
_install_setup_requires(attrs)
File “D:\Miniconda\lib\site-packages\setuptools\__init__.py”, line 80, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File “D:\Miniconda\lib\site-packages\setuptools\dist.py”, line 874, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File “D:\Miniconda\lib\site-packages\pkg_resources\__init__.py”, line 789, in resolve
dist = best[req.key] = env.best_match(
File “D:\Miniconda\lib\site-packages\pkg_resources\__init__.py”, line 1075, in best_match
return self.obtain(req, installer)
File “D:\Miniconda\lib\site-packages\pkg_resources\__init__.py”, line 1087, in obtain
return installer(requirement)
File “D:\Miniconda\lib\site-packages\setuptools\dist.py”, line 944, in fetch_build_egg
return fetch_build_egg(self, req)
File “D:\Miniconda\lib\site-packages\setuptools\installer.py”, line 84, in fetch_build_egg
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command ‘[‘D:\\Miniconda\\python.exe’, ‘-m’, ‘pip’, ‘–disable-pip-version-check’, ‘wheel’, ‘–no-deps’, ‘-w’, ‘C:\\Users\\23243\\AppData\\Local\\Temp\\tmpk82g_bqu’, ‘–quiet’, ‘torch’]’ returned non-zero exit status 2.
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

This error is most likely a network problem while pip was downloading packages (the traceback ends in a read timeout from files.pythonhosted.org), which made the installation fail. Re-run the command, check that your network connection is working, or install the package from a domestic mirror, for example Tsinghua University's mirror:

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple some-package
Replace some-package with the name of the package you want to install.

BLIP Error

F:\openai.wiki\stable-diffusion-webui-master>webui-user.bat
venv “F:\openai.wiki\stable-diffusion-webui-master\venv\Scripts\Python.exe”
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
Commit hash:
Cloning Taming Transformers into F:\openai.wiki\stable-diffusion-webui-master\repositories\taming-transformers…
Cloning K-diffusion into F:\openai.wiki\stable-diffusion-webui-master\repositories\k-diffusion…
Cloning CodeFormer into F:\openai.wiki\stable-diffusion-webui-master\repositories\CodeFormer…
Cloning BLIP into F:\openai.wiki\stable-diffusion-webui-master\repositories\BLIP…
Traceback (most recent call last):
File “F:\openai.wiki\stable-diffusion-webui-master\launch.py”, line 380, in
prepare_environment()
File “F:\openai.wiki\stable-diffusion-webui-master\launch.py”, line 319, in prepare_environment
git_clone(blip_repo, repo_dir(‘BLIP’), “BLIP”, blip_commit_hash)
File “F:\openai.wiki\stable-diffusion-webui-master\launch.py”, line 167, in git_clone
run(f'”{git}” clone “{url}” “{dir}”‘, f”Cloning {name} into {dir}…”, f”Couldn’t clone {name}”)
File “F:\openai.wiki\stable-diffusion-webui-master\launch.py”, line 113, in run
raise RuntimeError(message)
RuntimeError: Couldn’t clone BLIP.
Command: “git” clone “https://github.com/salesforce/BLIP.git” “F:\openai.wiki\stable-diffusion-webui-master\repositories\BLIP”
Error code: 128
stdout:
stderr: Cloning into ‘F:\openai.wiki\stable-diffusion-webui-master\repositories\BLIP’…
fatal: unable to access ‘https://github.com/salesforce/BLIP.git/’: OpenSSL SSL_read: Connection was reset, errno 10054

This error means a connection error occurred while trying to clone the BLIP repository from GitHub, caused either by a network problem or by an issue on GitHub's servers.

taming-transformers

(D:\openai.wiki\stable-diffusion-webui\automatic) D:\openai.wiki\stable-diffusion-webui>webui-user.bat
venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
Commit hash: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 380, in
prepare_environment()
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 316, in prepare_environment
git_clone(taming_transformers_repo, repo_dir(‘taming-transformers’), “Taming Transformers”, taming_transformers_commit_hash)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 159, in git_clone
current_hash = run(f'”{git}” -C “{dir}” rev-parse HEAD’, None, f”Couldn’t determine {name}’s hash: {commithash}”).strip()
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 113, in run
raise RuntimeError(message)
RuntimeError: Couldn’t determine Taming Transformers’s hash: 24268930bf1dce879235a7fddd0b2355b84d7ea6.
Command: “git” -C “D:\openai.wiki\stable-diffusion-webui\repositories\taming-transformers” rev-parse HEAD
Error code: 128
stdout: HEAD
stderr: fatal: ambiguous argument ‘HEAD’: unknown revision or path not in the working tree.
Use ‘–‘ to separate paths from revisions, like this:
‘git […] — […]’

This error shows that the script could not resolve the taming-transformers repository to its expected commit hash: when it asked Git for HEAD, Git reported that HEAD is not a known revision, which means the local clone is broken or empty. Delete the taming-transformers folder under .\openai.wiki\stable-diffusion-webui\repositories\ in your SD install directory, as shown below.
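
The same clean-up as in the Incomplete SD Repository section, applied to this folder (adjust the drive letter to your own install; webui-user.bat re-clones the repository on the next run):

rmdir /s /q D:\openai.wiki\stable-diffusion-webui\repositories\taming-transformers
webui-user.bat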

pip Connection Timeout

WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ReadTimeoutError(“HTTPSConnectionPool(host=’files.pythonhosted.org’, port=443): Read timed out. (read timeout=15)”)’: /packages/bc/bf/58dbe1f382ecac2c0571c43b6e95028b14e159d67d75e49a00c26ef63d8f/lazy_loader-0.1-py3-none-any.whl
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ConnectTimeoutError(, ‘Connection to files.pythonhosted.org timed out. (connect timeout=15)’)’: /packages/bc/bf/58dbe1f382ecac2c0571c43b6e95028b14e159d67d75e49a00c26ef63d8f/lazy_loader-0.1-py3-none-any.whl
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ConnectTimeoutError(, ‘Connection to files.pythonhosted.org timed out. (connect timeout=15)’)’: /packages/bc/bf/58dbe1f382ecac2c0571c43b6e95028b14e159d67d75e49a00c26ef63d8f/lazy_loader-0.1-py3-none-any.whl
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ConnectTimeoutError(, ‘Connection to files.pythonhosted.org timed out. (connect timeout=15)’)’: /packages/bc/bf/58dbe1f382ecac2c0571c43b6e95028b14e159d67d75e49a00c26ef63d8f/lazy_loader-0.1-py3-none-any.whl
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ConnectTimeoutError(, ‘Connection to files.pythonhosted.org timed out. (connect timeout=15)’)’: /packages/bc/bf/58dbe1f382ecac2c0571c43b6e95028b14e159d67d75e49a00c26ef63d8f/lazy_loader-0.1-py3-none-any.whl
ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host=’files.pythonhosted.org’, port=443): Max retries exceeded with url: /packages/bc/bf/58dbe1f382ecac2c0571c43b6e95028b14e159d67d75e49a00c26ef63d8f/lazy_loader-0.1-py3-none-any.whl (Caused by ConnectTimeoutError(, ‘Connection to files.pythonhosted.org timed out. (connect timeout=15)’))

No proxy is active in this session; the recommended fix is to add a mirror source (see the mirror commands in the Network Problems section above).

CondaHTTPError

CondaHTTPError: HTTP 000 CONNECTION FAILED for url
Elapsed: –
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
‘https://conda.anaconda.org/pytorch/win-64’

This error means Conda hit an HTTP failure and could not download data from the given URL, usually because of a network connection problem or a server outage.

If you installed Anaconda, uninstall it and install Miniconda instead; otherwise, check your network environment.

Python Problems

The problems in this category are all errors related to Python itself.

pip Upgrade

[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip

pip needs upgrading. The fix is simple: copy the exact command your machine prints (everything after "run:") and execute it in CMD, for example:

D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip

Python Not Found

Couldn’t launch python
exit code: 9009

When you see the error "Couldn't launch python, exit code: 9009", it usually means the system cannot find the Python interpreter, either because Python was not installed correctly or because it was not added to the system environment variables (PATH).

Check that Python is installed correctly and added to the system environment variables. If it already is, you may need to restart the terminal or the computer for the change to take effect.
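
You can quickly verify what this CMD session can actually see; if the commands below print nothing or an error, Python is not on the PATH of that session:

where python
python --version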

Python Environment Variable Error

Expecting value: line 1 column 1 (char 0)

Python probably can no longer be found through the system environment variables. Open a CMD window and type python to see whether the interpreter starts; if it cannot be found, search for how to repair the Python environment variables.

If you do not want to repair it and things otherwise work, you can run everything from the Miniconda terminal instead by executing:

conda activate your-environment-name-or-path

then cd into your SD root directory and run webui-user.bat.

pip Not Installed

(D:\openai.wiki\stable-diffusion-webui\automatic) C:\Users\86173>cd /d D:\openai.wiki\stable-diffusion-webui
(D:\openai.wiki\stable-diffusion-webui\automatic) D:\openai.wiki\stable-diffusion-webui>webui-user.bat
venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash: 955df7751eef11bb7697e2d77f6b8a6226b21e13
Fetching updates for Taming Transformers…
Checking out commit for Taming Transformers with hash: 24268930bf1dce879235a7fddd0b2355b84d7ea6…
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 351, in
prepare_environment()
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 285, in prepare_environment
git_clone(taming_transformers_repo, repo_dir(‘taming-transformers’), “Taming Transformers”, taming_transformers_commit_hash)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 148, in git_clone
run(f'”{git}” -C “{dir}” checkout {commithash}’, f”Checking out commit for {name} with hash: {commithash}…”, f”Couldn’t checkout commit {commithash} for {name}”)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 97, in run
raise RuntimeError(message)
RuntimeError: Couldn’t checkout commit 24268930bf1dce879235a7fddd0b2355b84d7ea6 for Taming Transformers.
Command: “git” -C “D:\openai.wiki\stable-diffusion-webui\repositories\taming-transformers” checkout 24268930bf1dce879235a7fddd0b2355b84d7ea6
Error code: 128
stdout:
stderr: fatal: reference is not a tree: 24268930bf1dce879235a7fddd0b2355b84d7ea6

This may be caused by pip not being installed; try running python -m ensurepip.

Incorrect Python Version

venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
==============================================================================================================
INCOMPATIBLE PYTHON VERSION
This program is tested with 3.10.6 Python, but you have 3.9.12.
If you encounter an error with “RuntimeError: Couldn’t install torch.” message,
or any other error regarding unsuccessful package (library) installation,
please downgrade (or upgrade) to the latest version of 3.10 Python
and delete current Python and “venv” folder in WebUI’s directory.
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3109/
Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases
Use –skip-python-version-check to suppress this warning.
==============================================================================================================
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Launching Web UI with arguments:
No module ‘xformers’. Proceeding without it.
Loading weights [a7529df023] from D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\final-pruned.ckpt
Creating model from config: D:\openai.wiki\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights found near the checkpoint: D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\final-pruned.vae.pt
loading stable diffusion model: OutOfMemoryError
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\webui.py”, line 139, in initialize
modules.sd_models.load_model()
File “D:\openai.wiki\stable-diffusion-webui\modules\sd_models.py”, line 449, in load_model
sd_model.to(shared.device)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py”, line 54, in to
return super().to(*args, **kwargs)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 989, in to
return self._apply(convert)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 641, in _apply
module._apply(fn)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 641, in _apply
module._apply(fn)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 641, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 664, in _apply
param_applied = fn(param)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 2.00 GiB total capacity; 1.66 GiB already allocated; 0 bytes free; 1.70 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Stable diffusion model failed to load, exiting

This error occurs because the Web UI is tested with Python 3.10.6 while your Python is 3.9.12, so the recommendation is to download and install a 3.10 release of Python.

Alternatively, remove the current Python install and the venv folder inside the Web UI directory, then upgrade to the latest 3.10 version.
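
After switching Python versions the old venv has to go, because it still points at the previous interpreter. A minimal sketch, using the install path from this article:

rmdir /s /q D:\openai.wiki\stable-diffusion-webui\venv
webui-user.bat

webui-user.bat will recreate the venv with whichever Python it now finds.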

Other Problems

launch.py Startup File Not Found
(D:\openai.wiki\stable-diffusion-webui\automatic) D:\openai.wiki\stable-diffusion-webui>webui-user.bat
venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
D:\openai.wiki\stable-diffusion-webui\automatic\python.exe: can’t open file ‘D:\\openai.wiki\\stable-diffusion-webui\\launch.py’: [Errno 2] No such file or directory
请按任意键继续. . .

The Conda environment was activated successfully, but there is no launch.py file inside D:\openai.wiki\stable-diffusion-webui, which means the SD WebUI repository was not fully cloned with Git, or you are in the wrong directory.
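
If the clone really is incomplete, one option (a sketch; move or remove the old folder first, since git will not clone into a non-empty directory) is to clone the WebUI again:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git D:\openai.wiki\stable-diffusion-webui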

Outdated GPU Driver

[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe -m pip install –upgrade pip
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 380, in
prepare_environment()
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 287, in prepare_environment
run_python(“import torch; assert torch.cuda.is_available(), ‘Torch is not able to use GPU; add –skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'”)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 137, in run_python
return run(f'”{python}” -c “{code}”‘, desc, errdesc)
File “D:\openai.wiki\stable-diffusion-webui\launch.py”, line 113, in run
raise RuntimeError(message)
RuntimeError: Error running command.
Command: “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe” -c “import torch; assert torch.cuda.is_available(), ‘Torch is not able to use GPU; add –skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'”
Error code: 1
stdout:
stderr: D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py:88: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 10020). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
File “”, line 1, in
AssertionError: Torch is not able to use GPU; add –skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Your GPU driver is too old and needs to be updated; installing a newer driver from NVIDIA may resolve these errors.
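
You can check the driver currently installed, and the highest CUDA version it supports, with NVIDIA's own tool (it ships with the driver):

nvidia-smi

The driver shown needs to support the CUDA build of PyTorch being installed (cu117 in the logs above).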

Corrupted Model

venv “G:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
Commit hash: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
Installing requirements for Web UI
Launching Web UI with arguments:
No module ‘xformers’. Proceeding without it.
Loading weights [a586d5a51a] from G:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File “G:\openai.wiki\stable-diffusion-webui\webui.py”, line 136, in initialize
modules.sd_models.load_model()
File “G:\openai.wiki\stable-diffusion-webui\modules\sd_models.py”, line 407, in load_model
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
File “G:\openai.wiki\stable-diffusion-webui\modules\sd_models.py”, line 262, in get_checkpoint_state_dict
res = read_state_dict(checkpoint_info.filename)
File “G:\openai.wiki\stable-diffusion-webui\modules\sd_models.py”, line 241, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File “G:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\safetensors\torch.py”, line 100, in load_file
result[k] = f.get_tensor(k)
RuntimeError: self.size(-1) must be divisible by 4 to view Byte as Float (different element sizes), but got 6545423
Stable diffusion model failed to load, exiting
请按任意键继续. . .

According to the error message, the Stable Diffusion model failed to load because self.size(-1) must be divisible by 4 but was 6545423, which is not divisible by 4. This almost always means the model weights file is damaged or incomplete. Re-download the model file and run the program again.
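
If the model's download page publishes a SHA256 checksum, you can verify the file before re-downloading, using the built-in Windows tool (substitute your own file path; the one below matches the log above):

certutil -hashfile G:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors SHA256

If the printed hash does not match the published one, the download is incomplete or corrupted.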

YAML File Not Found

EnvironmentFileNotFound: ‘D:\openai.wiki\stable-diffusion-webui\environment-wsl2.yaml’ file not found

This error means the environment-wsl2.yaml file cannot be found at the given path. Make sure the path and file name are correct, and check manually whether the file actually exists there.

Cannot Switch Models

After clicking the model-switch button in the upper-left corner, it stays stuck on loading and never becomes usable.

Fix: click the Reload UI button a few more times, or switch to a different browser. In some cases the browser simply fails to establish a connection with the CMD backend.

Model Failed to Load

(D:\openai.wiki\stable-diffusion-webui\automatic) D:\openai.wiki\stable-diffusion-webui>webui-user.bat
Creating venv in directory D:\openai.wiki\stable-diffusion-webui\venv using python “D:\openai.wiki\stable-diffusion-webui\automatic\python.exe”
venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash:
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
Collecting torch==1.13.1+cu117
Downloading https://download.pytorch.org/whl/cu117/torch-1.13.1%2Bcu117-cp310-cp310-win_amd64.whl (2255.4 MB)
—————————————- 2.3/2.3 GB 1.0 MB/s eta 0:00:00
Collecting torchvision==0.14.1+cu117
Downloading https://download.pytorch.org/whl/cu117/torchvision-0.14.1%2Bcu117-cp310-cp310-win_amd64.whl (4.8 MB)
—————————————- 4.8/4.8 MB 8.1 MB/s eta 0:00:00
Collecting typing-extensions
Downloading typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Collecting numpy
Downloading numpy-1.24.2-cp310-cp310-win_amd64.whl (14.8 MB)
—————————————- 14.8/14.8 MB 13.6 MB/s eta 0:00:00
Collecting requests
Downloading requests-2.28.2-py3-none-any.whl (62 kB)
—————————————- 62.8/62.8 kB 1.7 MB/s eta 0:00:00
Collecting pillow!=8.3.*,>=5.3.0
Downloading Pillow-9.4.0-cp310-cp310-win_amd64.whl (2.5 MB)
—————————————- 2.5/2.5 MB 19.7 MB/s eta 0:00:00
Collecting charset-normalizer=2
Downloading charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl (97 kB)
—————————————- 97.1/97.1 kB 5.8 MB/s eta 0:00:00
Collecting idna=2.5
Downloading https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
—————————————- 61.5/61.5 kB 3.2 MB/s eta 0:00:00
Collecting urllib3=1.21.1
Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
—————————————- 140.9/140.9 kB 8.7 MB/s eta 0:00:00
Collecting certifi>=2017.4.17
Downloading https://download.pytorch.org/whl/certifi-2022.12.7-py3-none-any.whl (155 kB)
—————————————- 155.3/155.3 kB 9.1 MB/s eta 0:00:00
Installing collected packages: urllib3, typing-extensions, pillow, numpy, idna, charset-normalizer, certifi, torch, requests, torchvision
Successfully installed certifi-2022.12.7 charset-normalizer-3.1.0 idna-3.4 numpy-1.24.2 pillow-9.4.0 requests-2.28.2 torch-1.13.1+cu117 torchvision-0.14.1+cu117 typing-extensions-4.5.0 urllib3-1.26.15
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: D:\openai.wiki\stable-diffusion-webui\venv\Scripts\python.exe -m pip install –upgrade pip
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into D:\openai.wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai…
Cloning Taming Transformers into D:\openai.wiki\stable-diffusion-webui\repositories\taming-transformers…
Cloning K-diffusion into D:\openai.wiki\stable-diffusion-webui\repositories\k-diffusion…
Cloning CodeFormer into D:\openai.wiki\stable-diffusion-webui\repositories\CodeFormer…
Cloning BLIP into D:\openai.wiki\stable-diffusion-webui\repositories\BLIP…
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments:
No module ‘xformers’. Proceeding without it.
Downloading: “https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors” to D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
100%|█████████████████████████████████████████████████████████████████████████████| 3.97G/3.97G [05:34<00:00, 12.7MB/s]
Calculating sha256 for D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors: 6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\openai.wiki\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100%|███████████████████████████████████████████| 961k/961k [00:01<00:00, 862kB/s]
Downloading (…)olve/main/merges.txt: 100%|███████████████████████████████████████████| 525k/525k [00:00<00:00, 548kB/s]
Downloading (…)cial_tokens_map.json: 100%|█████████████████████████████████████████████| 389/389 [00:00<00:00, 282kB/s]
Downloading (…)okenizer_config.json: 100%|█████████████████████████████████████████████| 905/905 [00:00<00:00, 900kB/s]
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████| 4.52k/4.52k [00:00…]
…
torch.cuda.OutOfMemoryError: CUDA out of memory. … If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Stable diffusion model failed to load, exiting

An error occurred here: the Stable Diffusion model failed to load. Check your network connection and make sure the downloaded model file is complete and undamaged. (The visible tail of this log is PyTorch's CUDA out-of-memory hint, so if the file itself is fine, insufficient VRAM is the other likely cause.)

You can try running the command again, or re-download and reinstall the Stable Diffusion model file.

Incorrect Directory

(E:\ai\stable-diffusion-webui\automatic) E:\ai\stable-diffusion-webui>webui-user.bat
fatal: not a git repository (or any of the parent directories): .git
venv “E:\ai\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash:
Installing requirements for Web UI
Launching Web UI with arguments: –xformers
Loading weights [fe4efff1e1] from E:\ai\stable-diffusion-webui\models\Stable-diffusion\Model.ckpt
Creating model from config: E:\ai\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 4.9s (load weights from disk: 1.7s, create model: 0.4s, apply weights to model: 0.5s, apply half(): 0.7s, move model to device: 0.7s, load textual inversion embeddings: 0.9s).
Traceback (most recent call last):
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py”, line 412, in send
conn = self.get_connection(request.url, proxies)
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py”, line 305, in get_connection
proxy_url = parse_url(proxy)
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\url.py”, line 397, in parse_url
return six.raise_from(LocationParseError(source_url), None)
File “”, line 3, in raise_from
urllib3.exceptions.LocationParseError: Failed to parse: http://127.0.0.1:7890
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “E:\ai\stable-diffusion-webui\launch.py”, line 352, in
start()
File “E:\ai\stable-diffusion-webui\launch.py”, line 347, in start
webui.webui()
File “E:\ai\stable-diffusion-webui\webui.py”, line 257, in webui
app, local_url, share_url = shared.demo.launch(
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py”, line 1483, in launch
requests.get(f”{self.local_url}startup-events”)
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\requests\api.py”, line 76, in get
return request(‘get’, url, params=params, **kwargs)
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\requests\api.py”, line 61, in request
return session.request(method=method, url=url, **kwargs)
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py”, line 542, in request
resp = self.send(prep, **send_kwargs)
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py”, line 655, in send
r = adapter.send(request, **kwargs)
File “E:\ai\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py”, line 414, in send
raise InvalidURL(e, request=request)
requests.exceptions.InvalidURL: Failed to parse: http://127.0.0.1:7890
请按任意键继续. . .

According to the error messages, your command-line working directory is probably not inside the Git repository, so when Git ran git rev-parse HEAD it reported fatal: not a git repository (or any of the parent directories): .git. Switch to the repository directory on the command line before launching, or specify the Git repository path in the startup script. Separately, the traceback that actually stops the WebUI ends with requests.exceptions.InvalidURL: Failed to parse: http://127.0.0.1:7890, which points to a proxy variable set in this CMD session that the libraries could not handle; try clearing the proxy settings in the same window.
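A minimal sketch of both fixes, using the E:\ai\stable-diffusion-webui path from the log above: clear the proxy variables this CMD session may have inherited, switch into the repository root so Git can find the .git directory, then start the launcher again.

REM Clear proxy variables for this CMD window only
set http_proxy=
set https_proxy=
REM Switch drive and directory to the repository root
cd /d E:\ai\stable-diffusion-webui
webui-user.bat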

Long loading time

(base) D:\openai.wiki\stable-diffusion-webui>webui-user.bat
venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 1 2023, 18:18:15) [MSC v.1916 64 bit (AMD64)]
Commit hash: 955df7751eef11bb7697e2d77f6b8a6226b21e13
Installing requirements for Web UI
Launching Web UI with arguments:
No module ‘xformers’. Proceeding without it.
Calculating sha256 for D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\final-pruned.ckpt: a7529df02340e5b4c3870c894c1ae84f22ea7b37fd0633e5bacfad9618228032
Loading weights [a7529df023] from D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\final-pruned.ckpt
Creating model from config: D:\openai.wiki\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 16.0s (calculate hash: 5.9s, load weights from disk: 3.8s, create model: 0.5s, apply weights to model: 2.0s, apply half(): 0.9s, move model to device: 1.0s, load textual inversion embeddings: 1.8s).

Nothing is wrong here; the machine is just slow, so wait a little longer. Note that the WebUI also has to calculate the checkpoint's sha256 hash (the calculate hash: 5.9s entry above), typically only the first time a checkpoint is loaded.
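If you would rather skip the hash calculation entirely, and assuming your webui build already supports the --no-hashing flag, a sketch of webui-user.bat would look like this.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
REM --no-hashing skips the sha256 calculation when loading checkpoints (assumes this flag exists in your build)
set COMMANDLINE_ARGS=--no-hashing
call webui.bat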

Port already in use

venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Commit hash:
Installing requirements for Web UI
Launching Web UI with arguments:
No module ‘xformers’. Proceeding without it.
Loading weights [cc6cb27103] from D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.ckpt
Creating model from config: D:\openai.wiki\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 4.5s (load weights from disk: 1.3s, create model: 0.4s, apply weights to model: 0.5s, apply half(): 0.7s, move model to device: 0.7s, load textual inversion embeddings: 0.9s).
Running on local URL: http://127.0.0.1:7861
To create a public link, set share=True in launch().
Startup time: 10.4s (import torch: 1.4s, import gradio: 1.0s, import ldm: 0.5s, other imports: 1.0s, setup codeformer: 0.2s, load scripts: 1.0s, load SD checkpoint: 4.8s, create ui: 0.4s, gradio launch: 0.1s).

Hello, after installing I ran webui-user.bat and got the output above. An error popped up during installation, but retrying finished it successfully. Now opening the URL gives no response at all, just a blank page. Could you help me figure out what is causing this?

Normally the URL printed here should use port 7860, but yours is http://127.0.0.1:7861. That usually means a proxy is enabled in this CMD session, or some other application is already occupying port 7860; it is worth checking the port.

The simplest fix is to close this CMD window, open a new one in the stable-diffusion-webui project root, and run the webui-user.bat file again; the commands below show how to find out which process is occupying port 7860.
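To see which process is holding port 7860, the built-in netstat and tasklist commands are enough; alternatively, assuming your webui build supports the --port argument, you can pin the WebUI to a fixed port of your choice in webui-user.bat.

REM Show the PID (last column) of the process listening on port 7860
netstat -ano | findstr :7860
REM Look up that process by PID (replace 1234 with the PID printed above)
tasklist /FI "PID eq 1234"
REM Or force the WebUI onto a specific port (assumes the --port flag is available)
set COMMANDLINE_ARGS=--port 7861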

Non-Windows systems

Python 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Traceback (most recent call last):
File “/home/mark/Desktop/123/stable-diffusion-webui_23-03-10/launch.py”, line 355, in
prepare_environment()
File “/home/mark/Desktop/123/stable-diffusion-webui_23-03-10/launch.py”, line 288, in prepare_environment
git_clone(stable_diffusion_repo, repo_dir(‘stable-diffusion-stability-ai’), “Stable Diffusion”, stable_diffusion_commit_hash)
File “/home/mark/Desktop/123/stable-diffusion-webui_23-03-10/launch.py”, line 143, in git_clone
current_hash = run(f'”{git}” -C “{dir}” rev-parse HEAD’, None, f”Couldn’t determine {name}’s hash: {commithash}”).strip()
File “/home/mark/Desktop/123/stable-diffusion-webui_23-03-10/launch.py”, line 97, in run
raise RuntimeError(message)
RuntimeError: Couldn’t determine Stable Diffusion’s hash: cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf.
Command: “git” -C “/home/mark/Desktop/123/stable-diffusion-webui_23-03-10/repositories/stable-diffusion-stability-ai” rev-parse HEAD
Error code: 129
stdout:
stderr: Unknown option: -C
usage: git [--version] [--help] [-c name=value]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]

This error comes from git itself: running git -C failed because the installed git does not recognize the -C option, which usually means the git on that system is very old (the option was added in git 1.8.5).

This case is on a non-Windows system; this tutorial targets Windows only and does not cover Linux in detail, but upgrading git to a recent version should make the error go away. You can check the installed version as shown below.
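For reference, a quick check of the installed version usually confirms the cause; upgrading git through your system's package manager should then resolve it.

git --version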

Syntax error

stderr: Error: Error [WinError 2] The system cannot find the file specified, while executing the command git version. Error: Couldn't find the command 'git'. Have you installed it and added it to the PATH environment variable?

Set-Location: A positional parameter cannot be found that accepts argument 'D:openai.wiki\stable-diffusion-webui'. At line:1 char:1
+ cd、d D:\openai.wiki\stable-diffusion-webui
The first error means Git itself cannot be found: it is either not installed or not on the PATH environment variable. Install Git for Windows and then reopen the command window.

The second error appears while trying to change the current directory (cd) to D:\openai.wiki\stable-diffusion-webui. PowerShell cannot parse the command because cd、d path uses a Chinese full-width comma where cd /d path is required; also double-check that the path is typed correctly and that the directory actually exists. Pay attention to full-width versus half-width punctuation. The correct commands are shown below.
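For reference, the correct commands look like this; note that /d is a forward slash followed by the letter d, not a Chinese comma.

REM Switch drive and directory in one step
cd /d D:\openai.wiki\stable-diffusion-webui
REM Confirm that git can be found on the PATH (no output means it is missing or not on PATH)
where git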

Out of VRAM

(base) D:\openai.wiki\stable-diffusion-webui>webui-user.bat
venv “D:\openai.wiki\stable-diffusion-webui\venv\Scripts\Python.exe”
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
Commit hash: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
Installing requirements for Web UI
Launching Web UI with arguments:
No module ‘xformers’. Proceeding without it.
Loading weights [ad2a33c361] from D:\openai.wiki\stable-diffusion-webui\models\Stable-diffusion\v2-1_768-ema-pruned.ckpt
Creating model from config: D:\openai.wiki\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
loading stable diffusion model: OutOfMemoryError
Traceback (most recent call last):
File “D:\openai.wiki\stable-diffusion-webui\webui.py”, line 136, in initialize
modules.sd_models.load_model()
File “D:\openai.wiki\stable-diffusion-webui\modules\sd_models.py”, line 441, in load_model
sd_model.to(shared.device)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py”, line 113, in to
return super().to(*args, **kwargs)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 989, in to
return self._apply(convert)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 641, in _apply
module._apply(fn)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 641, in _apply
module._apply(fn)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 641, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 664, in _apply
param_applied = fn(param)
File “D:\openai.wiki\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py”, line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 2.00 GiB total capacity; 1.66 GiB already allocated; 0 bytes free; 1.71 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Stable diffusion model failed to load, exiting
请按任意键继续. . .

The error message shows that loading the Stable Diffusion model hit a CUDA out-of-memory error. Put simply, the graphics card is not up to it: there is not enough VRAM to allocate to this model.

Solution 1:

Switch to a better graphics card, at least an NVIDIA RTX 2060 or higher; the commands below show how to check how much VRAM your current card has.
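If you are not sure how much VRAM your card actually has, nvidia-smi (installed together with the NVIDIA driver) will report it.

REM Show GPU model, total VRAM and current usage
nvidia-smi
REM Or print just the card name and total memory
nvidia-smi --query-gpu=name,memory.total --format=csv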

Solution 2:

Enable low-VRAM mode. Here is how.

In the stable-diffusion-webui folder, find webui-user.bat and open the file with a text or code editor; you will see the following content.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=
call webui.bat

We only need to modify the set COMMANDLINE_ARGS= line; it holds the launch arguments.

Less than 3 GB of VRAM

If your card has less than 3 GB of VRAM, append --lowvram --always-batch-cond-uncond after set COMMANDLINE_ARGS= and save; the modified file looks like this.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--lowvram --always-batch-cond-uncond
call webui.bat

4 GB of VRAM

If your card has only 4 GB of VRAM, append --precision full --no-half --lowvram --always-batch-cond-uncond after set COMMANDLINE_ARGS= and save; the modified file looks like this.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond
call webui.bat

Less than 5 GB of VRAM

If your card has less than 5 GB of VRAM, append --medvram after set COMMANDLINE_ARGS= and save; the modified file looks like this.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram
call webui.bat

6 GB of VRAM

If your card has 6 GB of VRAM, append --precision full --no-half --medvram after set COMMANDLINE_ARGS= and save; the modified file looks like this.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--precision full --no-half --medvram
call webui.bat
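Besides the launch arguments above, the out-of-memory message itself suggests setting max_split_size_mb to reduce memory fragmentation. A minimal sketch of how that could be combined with --medvram in webui-user.bat; the 128 value is only an example, not a recommendation from the original article.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
REM Tell PyTorch's CUDA allocator to split blocks larger than 128 MB, which can reduce fragmentation (example value)
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
set COMMANDLINE_ARGS=--medvram
call webui.bat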