- runs in containers for enhanced host OS separation
- work with `docker` (and `compose`) or `podman`
  - using Windows Subsystem for Linux 2 (WSL2) on Windows (i.e. a Linux Guest Virtual Machine on your Windows host)
- can run multiple setups with an independent `run` folder (for virtual environment management and source code) and a shared `basedir` folder (for user files, input, output, custom nodes, models, etc.)
- drops privileges to a regular user/preserves user permissions with custom UID/GID mapping (the running user's `id -u` and `id -g` as specified on the command line)
- Integrated `ComfyUI-Manager` for hassle-free updates
  - permits modification of `ComfyUI-Manager`'s security level (`SECURITY_LEVEL`)
- Localhost-only access by default (`-p 127.0.0.1:8188:8188`)
- built on official NVIDIA CUDA containers for optimal GPU performance
- multiple `Ubuntu`+`CUDA` version combinations available -- for older hardware: down to `CUDA 12.3.2` / for 50xx GPUs: `CUDA 12.8` -- see the tags list
- separate `run` and `basedir` folders
  - the `run` folder is used to store the ComfyUI setup (virtual environment, source code)
  - the `basedir` folder is used to store user files, input, output, custom nodes, models, etc.
- command-line override
  - using the `COMFY_CMDLINE_EXTRA` environment variable to pass additional command-line arguments set during the init script
  - ability to run `user_script.bash` from within the container for complex customizations, installations (`pip`, `apt`, ...) and command-line overrides
- pre-built container images available on DockerHub
  - including Unraid compatible images
- open-source: build it yourself using the corresponding `Dockerfile` present in the directory of the same name and review the `init.bash` (i.e. the setup logic)
When using Blackwell (RTX50xx) GPUs:
- you must use NVIDIA driver 570 (or above).
- use the `ubuntu24_cuda12.8` container tag (or above).

When using GTX 10xx GPUs (a combined `docker run` example is sketched after this list):
- use the `ubuntu24_cuda12.6.3` container tag.
- set `PREINSTALL_TORCH=true` to enable the automatic installation of a CUDA 12.6 version of PyTorch.
- (recommended) set `USE_PIPUPGRADE=false` to disable the use of `pip3 --upgrade`.
- (optional) after the initial installation only, set `DISABLE_UPGRADES=true` to disable any Python package upgrade when starting the container (it also disables `USE_PIPUPGRADE` and `PREINSTALL_TORCH`, so set it only after the initial installation). Use ComfyUI Manager to upgrade components.
- (optional) set `PREINSTALL_TORCH_CMD="pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126"` to install torch with the `cu126` index-url.
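As a sketch (not an authoritative command), those GTX 10xx settings can be combined with the standard `docker run` invocation described later in this document; adapt paths, ports, and user IDs to your setup:

```bash
# Hedged example for a GTX 10xx GPU: CUDA 12.6.3 tag, Torch preinstall, no pip upgrades
docker run --rm -it --runtime nvidia --gpus all \
  -v `pwd`/run:/comfy/mnt -v `pwd`/basedir:/basedir \
  -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` \
  -e BASE_DIRECTORY=/basedir -e SECURITY_LEVEL=normal \
  -e PREINSTALL_TORCH=true -e USE_PIPUPGRADE=false \
  -p 127.0.0.1:8188:8188 --name comfyui-nvidia \
  mmartial/comfyui-nvidia-docker:ubuntu24_cuda12.6.3-latest
```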
latest now points to the ubuntu24_cuda12.6.3 tag (as announced in the 20250320 release)
Some installed custom_nodes might need to be fixed (Try Fix) for Import Failed nodes in ComfyUI-Manager.
This manual step is only needed once per the latest tag update to a different Ubuntu+CUDA version.
To avoid latest changing your container's Ubuntu or CUDA version, manually select the docker image tag from the list of available tags.
Windows users, see the "Windows: WSL2 and podman" section
Blackwell (RTX 50xx series GPUs) users, see the "Blackwell support" section
Make sure you have the NVIDIA Container Toolkit installed. More details: https://www.gkr.one/blg-20240523-u24-nvidia-docker-podman
To run the container on an NVIDIA GPU, mount the specified directory, expose only to localhost on port 8188 (remove 127.0.0.1 to expose to your subnet, and change the port by altering the -p local:container port mapping), pass the calling user's UID and GID to the container, and select the SECURITY_LEVEL:
# 'run' will contain your virtual environment(s), ComfyUI source code, and Hugging Face Hub data
# 'basedir' will contain your custom nodes, input, output, user and models directories
mkdir run basedir
# Using docker
docker run --rm -it --runtime nvidia --gpus all -v `pwd`/run:/comfy/mnt -v `pwd`/basedir:/basedir -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` -e BASE_DIRECTORY=/basedir -e SECURITY_LEVEL=normal -p 127.0.0.1:8188:8188 --name comfyui-nvidia mmartial/comfyui-nvidia-docker:latest
# Using podman
podman run --rm -it --userns=keep-id --device nvidia.com/gpu=all -v `pwd`/run:/comfy/mnt -v `pwd`/basedir:/basedir -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` -e BASE_DIRECTORY=/basedir -e SECURITY_LEVEL=normal -p 127.0.0.1:8188:8188 --name comfyui-nvidia docker.io/mmartial/comfyui-nvidia-docker:latest

ComfyUI is a Stable Diffusion WebUI.
With the addition in August 2024 of a Flux example, I created this container builder to test it.
This container was built to benefit from the process isolation that containers bring and to drop the container's main process privileges to that of a regular user (the container's comfy user, which is sudo capable).
The base container size is usually over 8GB, as new releases are now based on Nvidia's devel images. It contains the required components on an Ubuntu image with Nvidia CUDA and CuDNN (the base container is available from Nvidia's DockerHub); we add the components required to support an installation of ComfyUI.
Multiple images are available. Each image's name contains a tag reflecting its core components. For example, ubuntu24_cuda12.5.1 is based on Ubuntu 24.04 with CUDA 12.5.1. Depending on the version of the Nvidia drivers installed, the Docker container runtime will only support a certain version of CUDA. For example, Driver 550 supports up to CUDA 12.4 and will not be able to run the CUDA 12.4.1 or 12.5.1 versions. The recently released 570 driver supports up to CUDA 12.8 and RTX 50xx GPUs.
For more details on CUDA/GPU support, see https://en.wikipedia.org/wiki/CUDA#GPUs_supported
Use the nvidia-smi command on your system to obtain the CUDA Version: entry. It will show you the maximum CUDA version supported by your driver. If the printout shows CUDA Version: 12.6, your driver will support up to the cuda12.5.1 version of the container (below the maximum CUDA version supported by the driver) but not cuda12.8. With this information, check for a usable tag in the table below.
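For example, this extracts the relevant line from the default printout:

```bash
# The maximum CUDA version supported by the installed driver appears in nvidia-smi's header
nvidia-smi | grep "CUDA Version"
```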
The latest tag will point to the most up-to-date build (i.e., the most recent OS+CUDA).
If this version is incompatible with your container runtime, please see the list of alternative builds.
| tag | aka | note |
|---|---|---|
| ubuntu22_cuda12.2.2-latest | | |
| ubuntu22_cuda12.3.2-latest | | |
| ubuntu22_cuda12.4.1-latest | | |
| ubuntu24_cuda12.5.1-latest | | was latest up to 20250320 release |
| ubuntu24_cuda12.6.3-latest | latest | latest as of 20250413 release |
| ubuntu24_cuda12.8-latest | | minimum required for Blackwell (inc. RTX 50xx) hardware (see "Blackwell support" section) |
| ubuntu24_cuda12.9-latest | | |
| ubuntu24_cuda13.0-latest | | |
For more details on driver capabilities and how to update those, please see Setting up NVIDIA docker & podman (Ubuntu 24.04).
During its first run, the container will download ComfyUI from git (into the run/ComfyUI folder), create a Python virtual environment (in run/venv) for all the Python packages needed by the tool, and install ComfyUI Manager into ComfyUI's custom_nodes directory.
This adds about 5GB of content to the installation. The download time depends on your internet connection.
Python virtual environments (venv) might not be compatible from one OS+CUDA version to another, so the tool will create a new venv when the current one is not for the expected version.
An installation might end up with multiple venv-based directories in the run folder, as the tool will rename existing unusable ones as "venv-OS+CUDA" (for example, venv-ubuntu22_cuda12.3.2). To support downgrading if needed, the script will not delete the previous version; removing it when no longer needed is currently left to the end user.
Using an alternate venv means that some installed custom nodes might have an import failed error. We attempt to make use of cm-cli before starting ComfyUI. If that fails, start the Manager -> Custom Nodes Manager, filter by Import Failed, and use the Try fix button, as this will download the required packages and install them in the used venv. A restart and UI reload will be required to fix issues with the nodes.
You will know the ComfyUI WebUI is running when you check the docker logs and see `To see the GUI go to: http://0.0.0.0:8188`
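For example, to follow the logs from the host (assuming the container was named comfyui-nvidia as in the commands above):

```bash
# Follow the container logs; installation progress and the WebUI-ready message appear here
docker logs -f comfyui-nvidia
```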
About 15GB of space between the container and the virtual environment installation is needed. This does not consider the models, additional package installations, or custom nodes that the end user might perform.
ComfyUI's security_levels are not accessible until the configuration file is created during the first run.
It is recommended that a container monitoring tool be available to watch the logs and see when installations are completed or other relevant messages. Some installations and updates (updating packages, downloading content, etc.) will take a long time, and the lack of updates on the WebUI is not a sign of failure. Dozzle is a good solution for following the logs from a WebUI.
- 1. Preamble
- 2. Running the container
- 3. Docker image
- 4. Screenshots
- 5. FAQ
- 6. Troubleshooting
- 7. Changelog
The container is made to run as the comfy user, NOT as root user.
Within the container, the final user is comfy and the UID/GID is requested at docker run time; if none are provided, the container will use 1024/1024.
This is done to allow end users to have local directory structures for all the side data (input, output, temp, user), Hugging Face HF_HOME if used, and the entire models, which are separate from the container and able to be altered by the user.
To request a different UID/GID at run time, use the WANTED_UID and WANTED_GID environment variables when calling the container.
Note:
- for details on how to set up a Docker to support an NVIDIA GPU on an Ubuntu 24.04 system, please see Setting up NVIDIA docker & podman (Ubuntu 24.04)
- If you are new to ComfyUI, see OpenArt's ComfyUI Academy
- Some ComfyUI examples:
- Some additional reads:
In the directory where we intend to run the container, create the run and basedir folders as the user with whom we want to share the UID/GID (or give them other names and adapt the -v mappings in the docker run below). This needs to be done before the container is run: the container is started as root, so if the folders do not exist, they will be created as root.
That run folder will be populated with a few sub-directories created with the UID/GID passed on the command line (see the command line below).
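For example, assuming you will run the container as your current user:

```bash
# Create the folders as the user whose UID/GID will be passed to the container
mkdir -p run basedir
# Confirm the numeric owner matches what you will pass as WANTED_UID/WANTED_GID
ls -ldn run basedir
id -u; id -g
```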
Among the folders that will be created within `run` are `HF`, `ComfyUI`, and `venv`:
- `HF` is the expected location of the `HF_HOME` (Hugging Face installation directory)
- `ComfyUI` is the git-cloned version of the tool, with all its sub-directories, among which:
  - `custom_nodes` for additional support nodes, for example ComfyUI-Manager,
  - `models` and all its sub-directories, where `checkpoints`, `clip`, `loras`, `unet`, etc. have to be placed,
  - `input` and `output`, where input images will be placed and generated images will end up,
  - `user`, where the user's customizations, saved `workflows`, and ComfyUI Manager's configuration are stored.
- `venv` is the virtual environment where all the required Python packages for ComfyUI and other additions will be placed. A default ComfyUI package installation requires about 5GB in addition to the container itself; those packages will be in this `venv` folder.
Currently, it is not recommended to volume map folders within the ComfyUI folder. Doing so is likely to prevent proper installation (during the first run) or update, as any volume mapping (docker ... -v or - local_path:container_path for compose) creates those directories within a directory structure that is not supposed to exist during the initial execution.
The use of the basedir is recommended. This folder will be populated at run time with the content from ComfyUI's input, output, user and models folders. This allows for the separation of the run time components (within the run folder) from the user files. In particular, if you were to delete the run folder, you would still have your model files in the basedir folder.
This is possible because of a new CLI option --basedir that was added to the code at the end of January 2025. This option will not be available unless ComfyUI is updated for existing installations.
When starting, the container image executes the init.bash script (existing as /comfyui-nvidia_init.bash within the container) that performs a few operations:
- load the `/comfyui-nvidia_config.sh` script (`source` it). This script is copied from the host at build time (the `config.sh` file) and can contain overrides for command-line environment variables.
- When starting, the container uses the `comfytoo` user. This user has UID/GID 1025/1025 (i.e. not a value existing by default in a default Ubuntu installation).
  - As the `sudo`-capable `comfytoo` user, the script will modify the existing `comfy` user to use the `WANTED_UID` and `WANTED_GID`.
  - Then, it will re-start the initialization script by becoming the newly modified `comfy` user (which can write in the `run` and `basedir` folders with the provided `WANTED_UID` and `WANTED_GID`).
  - Environment variables for the `comfytoo` user will be shared with the `comfy` user.
- After restarting as the `comfy` user:
  - Check that the NVIDIA driver is loaded and show details for the detected GPUs.
  - Obtain the latest version of ComfyUI from GitHub if not already present in the mounted `run` folder.
  - Create the virtual environment (`venv`) if one does not already exist:
    - if one exists, confirm it is the one for this OS+CUDA pair
    - if not, rename it and look for a renamed one that would match
    - if none is found, create a new one
  - Activate this virtual environment.
  - Install all the ComfyUI-required Python packages. If those are already present, additional content should not need to be downloaded.
  - Install ComfyUI-Manager if it is not present.
    - During additional runs, the user can change the `security_level` from `normal` to another value using the `SECURITY_LEVEL` environment variable passed to the container (see the "Security Levels" section of this document for details) to grant the tool more or fewer functionalities.
  - Populate the `BASE_DIRECTORY` with the `input`, `output`, `user` and `models` directories from ComfyUI's `run` folder if none are present in the `basedir` folder.
    - extend the `COMFY_CMDLINE_EXTRA` environment variable with the `--basedir` option. This variable is `export`ed so that it can be used with any `user_script.bash` if the `BASE_DIRECTORY` is used.
  - Run independent user scripts if a `/userscript_dir` is mounted:
    - only executable `.sh` scripts are executed, in alphanumerical order
    - if any script fails, the container will stop with an error
    - environment variables set by a script will be available to the following scripts if they are saved in the `/tmp/comfy_${userscript_name}_env.txt` file, adapting `userscript_name` to the script name (ex: `00-nvidiaDev.sh` would be `/tmp/comfy_00-nvidiaDev_env.txt`)
  - Check for a user custom script in the `run` directory. It must be named `user_script.bash`. If one exists, run it.
    - Make sure to use the `COMFY_CMDLINE_EXTRA` environment variable to pass the `--basedir` option to the tool if running the tool from within this script.
  - Run the ComfyUI WebUI. For the exact command run, please see the last line of `init.bash`.
If the FORCE_CHOWN environment variable is set to any non-empty value (ex: "yes"), the script will force-change directory ownership to the comfy user during script startup (this might be slow).
To run the container on an NVIDIA GPU, mount the specified directory, expose only to localhost on port 8188 (remove 127.0.0.1 to expose to your subnet, and change the port by altering the -p local:container port mapping), pass the calling user's UID and GID to the container, provide a BASE_DIRECTORY and select the SECURITY_LEVEL:
mkdir run basedir
docker run --rm -it --runtime nvidia --gpus all -v `pwd`/run:/comfy/mnt -v `pwd`/basedir:/basedir -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` -e BASE_DIRECTORY=/basedir -e SECURITY_LEVEL=normal -p 127.0.0.1:8188:8188 --name comfyui-nvidia mmartial/comfyui-nvidia-docker:latest

In the directory where you want to run the compose stack, create the compose.yaml file with content similar to the compose.yaml file in this repository.
This will use port 8188 (host:container). Use a run directory local to the directory where this compose.yaml is, and set WANTED_UID and WANTED_GID to 1000 (adapt to reflect the user and group you want to run as, which can be obtained using the id command in a terminal). Make sure to create the run and basedir directories as the user with the desired uid and gid before running docker compose for the first time.
Start it with docker compose up (add -d to run the container detached, in the background).
Please see docker compose up reference manual for additional details.
For users interested in adding it to a Dockge (a self-hosted Docker Compose stacks management tool) stack, please see my Dockge blog post, where we discuss directory and bind mounts (models take a lot of space).
Tailscale is an overlay network solution: a secure, peer-to-peer service that lets devices connect directly over the internet as if they were on the same local network (using 100.x -- i.e. CGNAT -- IPs). Tailscale is a significant part of my homelab access, as described in "Tailscale: subnet router & remote services access".
It is possible to run the container without exposing any ports that can be accessed on the local network, and access it only through tailscale.
To do so, adapt the compose-tailscale.yaml file to be your compose.yaml file.
For more details on how we are achieving this, review the video from their Knowledge Base at https://tailscale.com/kb/1282/docker
It is also possible to run the tool using podman. Before doing so, ensure the Container Device Interface (CDI) is properly set for your driver. Please see https://www.gkr.one/blg-20240523-u24-nvidia-docker-podman for instructions.
To run the container on an NVIDIA GPU, mount the specified directory, expose only to localhost on port 8188 (remove 127.0.0.1 to expose to your subnet, and change the port by altering the -p local:container port mapping), pass the calling user's UID and GID to the container, provide a BASE_DIRECTORY and select the SECURITY_LEVEL:
mkdir run basedir
podman run --rm -it --userns=keep-id --device nvidia.com/gpu=all -v `pwd`/run:/comfy/mnt -v `pwd`/basedir:/basedir -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` -e BASE_DIRECTORY=/basedir -e SECURITY_LEVEL=normal -p 127.0.0.1:8188:8188 --name comfyui-nvidia docker.io/mmartial/comfyui-nvidia-docker:latest

To use podman compose up, modify the deploy: section of the compose.yaml file as follows:
deploy:
resources:
reservations:
devices:
- driver: cdi
device_ids: ['nvidia.com/gpu=all']
capabilities: [gpu]
userns_mode: "keep-id"

The first time we run the container, we will go to our host's IP on port 8188 (likely http://127.0.0.1:8188) and see the latest run or the bottle-generating example.
Before attempting to run this example, restarting the container is recommended.
The default security model of normal is used unless specified otherwise, but the needed configuration file is only created during the first run of the container. As such, ComfyUI Manager's default security_level cannot be modified until the first container restart (after the WebUI has run for the first time).
This example requires the v1-5-pruned-emaonly.ckpt file which can be downloaded directly from the Manager's "Model Manager".
It is also possible to manually install Stable Diffusion checkpoints, upscalers, or LoRAs (and more) by placing them directly in their respective directories under the models folder. For example, to manually install the checkpoint required for the "bottle example", as the user with the wanted uid/gid:
cd <YOUR_BASE_DIRECTORY>/models/checkpoints
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt

After the download is complete, click "Refresh" on the WebUI and "Queue Prompt".
Depending on the workflow, some "custom nodes" might be needed. Those should usually be available in the "Manager"'s "Install Missing Custom Nodes". Other needed files could be found on HuggingFace or CivitAI.
"Custom nodes" should be installed using the "Manager". The ability to install those manually depends on the security_levels selected.
Running make will show the different build targets. That list will differ depending on the available base files in the components directory.
For example, you might see:
Run:
% make
Available comfyui-nvidia-docker docker images to be built (make targets):
ubuntu22_cuda12.3.2
ubuntu22_cuda12.4.1
ubuntu24_cuda12.5.1
build: builds all

It is possible to build a specific target, such as make ubuntu22_cuda12.3.2, or all the available containers.
Running a given target will build the corresponding comfyui-nvidia-docker Docker image using docker buildx.
As long as none are present, this will initiate a build without caching.
The process will create the Dockerfile used within the Dockerfile folder. For example, when using make ubuntu22_cuda12.3.2 a Dockerfile/Dockerfile-ubuntu22_cuda12.3.2 file is created that will contain the steps used to build the local comfyui-nvidia-docker:ubuntu22_cuda12.3.2 Docker image.
It is also possible to use one of the generated Dockerfiles to build a specific image.
After selecting the image to build from the OS+CUDA name within the Dockerfile folder, proceed with a docker build command in the directory where this README.md is located.
To build the ubuntu24_cuda12.5.1 container, run:
docker build --build-arg COMFYUI_NVIDIA_DOCKER_VERSION="20250817" --build-arg BUILD_BASE="ubuntu24_cuda12.5.1" --tag comfyui-nvidia-docker:ubuntu24_cuda12.5.1 -f Dockerfile/ubuntu24_cuda12.5.1.Dockerfile .

The COMFYUI_NVIDIA_DOCKER_VERSION is the version of the release (in doubt, use the latest release value).
The BUILD_BASE must match the OS+CUDA version of the Dockerfile to build. It is used to create the virtual environment. For example, if the Dockerfile used is Dockerfile-ubuntu24_cuda12.5.1, the BUILD_BASE must be ubuntu24_cuda12.5.1.
Upon completion of the build, we will have a newly created local comfyui-nvidia-docker:ubuntu24_cuda12.5.1 Docker image.
Builds are available on DockerHub at mmartial/comfyui-nvidia-docker, built from this repository's Dockerfile(s).
The table at the top of this document shows the list of available versions on DockerHub. Make sure your NVIDIA container runtime supports the proposed CUDA version. This is particularly important if you use the latest tag, as it is expected to refer to the most recent OS+CUDA release.
The container has been tested on Unraid and was added to Community Apps on 2024-09-02.
FYSA, if interested, you can see the template from https://raw.githubusercontent.com/mmartial/unraid-templates/main/templates/ComfyUI-Nvidia-Docker.xml
Note that the original Dockerfile FROM is from Nvidia, as such:
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
Template at Flux example
"API Nodes are ComfyUI’s new way of calling closed-source models through API requests" https://docs.comfy.org/tutorials/api-nodes/overview
The container is not designed to be used with API nodes (even with API keys), only to run self-hosted models. If you want to use those, it is recommended to use the Desktop version of ComfyUI from https://www.comfy.org/download
The container pip-installs all required packages in the container and then creates a virtual environment (in /comfy/mnt/venv, with /comfy/mnt mounted via the docker run [...] -v option).
This allows for the installation of Python packages using pip3 install.
After running docker exec -it comfyui-nvidia /bin/bash, from the provided bash, activate the venv with source /comfy/mnt/venv/bin/activate.
From this bash prompt, you can now run pip3 freeze or other pip3 commands such as pip3 install civitai
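Putting it together (a sketch; per the "docker exec" section later in this document, the exec shell starts as the comfytoo user, so we switch to comfy first):

```bash
# From the host: open a shell in the running container
docker exec -it comfyui-nvidia /bin/bash
# Inside the container: switch to the comfy user, then activate the venv
sudo su -l comfy
source /comfy/mnt/venv/bin/activate
# pip3 operations now target the venv stored in the mounted 'run' folder
pip3 install civitai
pip3 freeze | grep -i civitai
```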
Because a venv is tied to an OS+CUDA version, the tool attempts to create some internal logic so that the venv folder matches the OS+CUDA of the started container.
Starting two comfyui-nvidia-docker containers with different OS+CUDA tags at the same time is likely to cause some issues
For illustration, let's say we last ran ubuntu22_cuda12.3.1, exited the container, and now attempt to run ubuntu24_cuda12.5.1. The script initialization is as follows:
- check for an existing `venv`; there is one
- check that this `venv` is for `ubuntu24_cuda12.5.1`: it is not, it is for `ubuntu22_cuda12.3.1`
- move `venv` to `venv-ubuntu22_cuda12.3.1`
- check if there is a `venv-ubuntu24_cuda12.5.1` to rename as `venv`: there is not
- the script continues as if there were no `venv`, and a new one for `ubuntu24_cuda12.5.1` is created
Because of this, it is possible to have multiple venv-based folders in the "run" folder.
A side effect of the multiple virtual environment integration is that some installed custom nodes might have an import failed error when switching from one OS+CUDA version to another.
When the container is initialized, we run cm-cli.py fix all to attempt to fix this.
If this does not resolve the issue, start the Manager -> Custom Nodes Manager, filter by Import Failed, and use the Try fix button. This will download the required packages and install them in the used venv. A restart and UI reload will be required, but this ought to fix issues with the nodes.
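To trigger the same fix manually from a shell inside the container, something like the following should work (a sketch; the cm-cli.py path follows the "cm-cli" section of this document, where it is also noted that COMFYUI_PATH should be set):

```bash
# Activate the shared venv, then ask ComfyUI-Manager's CLI to fix all custom nodes
source /comfy/mnt/venv/bin/activate
python3 /comfy/mnt/custom_nodes/ComfyUI-Manager/cm-cli.py fix all
```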
The run/postvenv_script.bash script can perform additional operations directly after the virtual environment is created and before ComfyUI is installed (or updated).
This proves useful for installing Python packages that are required for the ComfyUI installation.
Please see additional details about the use of user scripts in the user_script.bash section.
The run/user_script.bash user script can perform additional operations.
Because this is a Docker container, updating the container will remove any additional installations not in the "run" directory, so it is possible to force a reinstall at runtime.
It is also possible to bypass the ComfyUI command started (for people interested in trying the --fast, for example).
To perform those changes, be aware that:
- The container image is Ubuntu-based.
- The `comfy` user is `sudo` capable.
An example of one could be:
#!/bin/bash
echo "== Adding system package"
# pass DEBIAN_FRONTEND through sudo so apt actually sees it
sudo DEBIAN_FRONTEND=noninteractive apt update
sudo DEBIAN_FRONTEND=noninteractive apt install -y nvtop
echo "== Adding python package"
source /comfy/mnt/venv/bin/activate
pip3 install pipx
echo "== Adding nvitop"
# nvitop will be installed in the user's .local/bin directory which will be removed when the container is updated
pipx install nvitop
# extend the path to include the installation directory
export PATH=/comfy/.local/bin:${PATH}
# when starting a new docker exec, will still need to be run as ~/.local/bin/nvitop
# but will be in the PATH for commands run from within this script
echo "== Override ComfyUI launch command"
# Make sure to have 1) activated the venv before running this command
# 2) use the COMFY_CMDLINE_EXTRA environment variable to pass additional command-line arguments set during the init script
cd /comfy/mnt/ComfyUI
python3 ./main.py --listen 0.0.0.0 --disable-auto-launch --fast ${COMFY_CMDLINE_EXTRA}
echo "== To prevent the regular Comfy command from starting, we 'exit 1'"
echo " If we had not overridden it, we could simply end with an ok exit: 'exit 0'"
exit 1

The script will be placed in the run directory and must be named user_script.bash to be found.
If you encounter an error, it is recommended to check the container logs; this script must be executable and readable by the comfy user.
If the file is not executable, the tool will attempt to make it executable, but if another user owns it, the step will fail.
🏗️ Please contribute to the /userscripts_dir if you have optimization scripts (such as SageAttention or other solutions that require software compilation) that you think would be useful to others, or if you find an issue in the scripts provided. The recommended location for scripts that install Python packages is the user_script.bash file.
Scripts (which may not work -- please feel free to contribute) are provided to demonstrate the capability. None are executable by default. These scripts were added to enable end users to install components that needed to be built at the time, were not yet supported by ComfyUI Manager, or for which no compatible packages were available.
If a pip version is available, it is recommended to use it instead of the /userscripts_dir.
The /userscripts_dir is a directory that can be mounted to the container: add it to your command line with -v /path/to/userscripts_dir:/userscripts_dir.
docker run [...] -v /path/to/userscripts_dir:/userscripts_dir [...] mmartial/comfyui-nvidia-docker:latest

This directory is used to run independent user scripts in order to perform additional operations. Please note that the container will only run executable .sh scripts in this directory, in alphanumerical order (chmod -x script.sh to disable execution of a given script). None of the scripts in the folder are executable by default.
Each script differs, so it is recommended to read the comments at the beginning of each script to understand what it does and how to use it (in particular, some will not perform compilation if a previous version of the download folder is present). An example of enabling one of the provided scripts follows the list below.
A few scripts are provided in the userscripts_dir folder:
- 11-onnxruntime-gpu.sh
- 12-xformers.sh
- 13-nunchaku.sh
- 20-SageAttention.sh
- 30-PortAudio.sh
- 90_Fix_libmvec-2_error.sh
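For example (a sketch using one of the provided script names; adapt the host path to your checkout):

```bash
# Make the chosen script executable (none are by default), then mount the directory
chmod +x userscripts_dir/20-SageAttention.sh
docker run [...] -v `pwd`/userscripts_dir:/userscripts_dir [...] mmartial/comfyui-nvidia-docker:latest
```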
FAQ:
- Reserve its usage for installing custom nodes NOT available in ComfyUI Manager.
- The scripts will be run as the `comfy` user, so use `sudo` commands if needed.
- Some scripts might depend on previous scripts, so the order of execution is important: confirm that needed dependencies are met before performing installations.
- If any script fails, the container will stop with an error.
- Environment variables set by a script will be available to the calling script if they are saved in the `/tmp/comfy_${userscript_name}_env.txt` file, adapting `userscript_name` to the script name (`00-nvidiaDev.sh` uses this feature and stores its environment variables in `/tmp/comfy_00-nvidiaDev_env.txt`)
- The scripts will be run BEFORE the user script (`user_script.bash`, if any). Those scripts should not start ComfyUI.
- See the existing scripts for details of what can be done.
The /comfyui-nvidia_config.sh is a file that can be mounted within the container and can be used to load the entire configuration for the container, instead of setting environment variables on the command line.
Copy and adapt the config.sh file to create your own configuration file, uncommenting each section and setting their appropriate values. Then it is possible to run something similar to:
docker run -it --runtime nvidia --gpus all -v `pwd`/config.sh:/comfyui-nvidia_config.sh -v `pwd`/run:/comfy/mnt -v `pwd`/basedir:/basedir -v `pwd`/userscripts_dir:/userscripts_dir -p 8188:8188 mmartial/comfyui-nvidia-docker:latest, i.e. the same command as before but without any -e options (WANTED_UID, WANTED_GID, BASE_DIRECTORY and SECURITY_LEVEL are set in the config file)
Note: the file is loaded AFTER the environment variables set on the command line, so the config file will override any environment variables set on the command line.
The WANTED_UID and WANTED_GID environment variables will be used to set the comfy user within the container.
It is recommended that those be set to the end user's uid and gid to allow the addition of files, models, and other content within the run directory.
Content added within the run directory must be created with that uid and gid.
The running user's uid and gid can be obtained using id -u and id -g in a terminal.
Note: It is not recommended to override the default starting user of the script (comfytoo), as it is used to set up the comfy user to run with the provided WANTED_UID and WANTED_GID. The script checks for the comfytoo user to do so, then after restarting as the comfy user, the script checks that the comfy user has the correct uid and gid and will fail if it has not been able to set it up.
You can add extra parameters by adding ComfyUI-compatible command-line arguments to the COMFY_CMDLINE_EXTRA environment variable.
For example: docker run [...] -e COMFY_CMDLINE_EXTRA="--fast --reserve-vram 2.0 --lowvram"
Note that the COMFY_CMDLINE_EXTRA variable might be extended by the init script to match additional parameters such as the BASE_DIRECTORY variable.
The default command line used by the script to start ComfyUI is python3 ./main.py --listen 0.0.0.0 --disable-auto-launch
This is also the default value set to the COMFY_CMDLINE_BASE variable during the initialization script. It is recommended not to alter the value of this variable, as this might prevent the tool from starting successfully.
The tool will run the combination of COMFY_CMDLINE_BASE followed by COMFY_CMDLINE_EXTRA. In the above example:
python3 ./main.py --listen 0.0.0.0 --disable-auto-launch --fast --reserve-vram 2.0 --lowvram

In case of container failure, checking the container logs for error messages is recommended.
The tool does not attempt to resolve quotes or special shell characters, so it is recommended that you prefer the user_script.bash method.
It is also possible to use the environment variables in combination with the user_script.bash by 1) not starting ComfyUI from the script and 2) exiting with exit 0 (i.e., success), which will allow the rest of the init script to continue. The following example installs additional Ubuntu packages and allows for the environment variables to be used:
#!/bin/bash
#echo "== Update installed packages"
DEBIAN_FRONTEND=noninteractive sudo apt-get update
DEBIAN_FRONTEND=noninteractive sudo apt-get upgrade -y
# Exit with an "okay" status to allow the init script to run the regular ComfyUI command
exit 0

Note that pip installation of custom nodes is not possible in the normal security level; weak should be used instead (see the "Security levels" section for details).
The BASE_DIRECTORY environment variable is used to specify the directory where ComfyUI will look for the models, input, output, user and custom_nodes folders. This is a good option to separate the virtual environment and ComfyUI's code (in the run folder) from the end user's files (in the basedir folder). For Unraid in particular, you can use this to place the basedir on a separate volume, outside of the appdata folder.
This option was added to ComfyUI at the end of January 2025. If you are using an already existing installation, update ComfyUI using the manager before enabling this option.
Once enabled, this option should not be disabled in future runs.
During the first run with this option, the tool will move existing content from the run directory to the BASE_DIRECTORY specified.
This is to avoid having multiple copies of downloaded models (taking multiple GB of storage) in both locations.
If your models directory is large, I recommend doing a manual mv run/ComfyUI/models basedir/. before running the container. The volumes are considered separate within the container, so a move performed inside the container will 1) copy each file within the folder (which will take a while) and 2) temporarily double the model directory's disk usage until the copy finishes and the previous folder can be deleted.
The same logic can be applied to the input, output, user, and custom_nodes folders.
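A sketch of that pre-move, run on the host before the first BASE_DIRECTORY-enabled start, assuming run and basedir sit side by side and the container has run at least once so these folders exist:

```bash
# Inside the container these are separate volumes, so moving on the host avoids a slow copy
mv run/ComfyUI/models basedir/.
mv run/ComfyUI/input run/ComfyUI/output run/ComfyUI/user run/ComfyUI/custom_nodes basedir/.
```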
After the initial run, the SECURITY_LEVEL environment variable can be used to alter the default security level imposed by ComfyUI Manager.
When following the rules defined at https://github.com/ltdrdata/ComfyUI-Manager?tab=readme-ov-file#security-policy the user should decide if normal will work for their use case.
You will prefer `weak` if you manually install or alter custom nodes.
WARNING: Using normal- will prevent access to the WebUI unless the USE_SOCAT environment variable is set to true.
The USE_SOCAT environment variable is used to enable an alternate service behavior: have ComfyUI listen on 127.0.0.1:8181 and use socat to expose the service on 0.0.0.0:8188.
The default is to run ComfyUI within the container to listen on 0.0.0.0:8188 (i.e., all network interfaces), which is needed to make it accessible on the exposed port outside of the running container.
Some SECURITY_LEVEL settings might prevent access to the WebUI unless the tool is running on 127.0.0.1 (i.e., only the host). The USE_SOCAT=true environment variable can be used to support this behavior.
The FORCE_CHOWN environment variable is used to force change directory ownership as the comfy user during script startup (this process might be slow).
This option was added to support users who mount the run and basedir folders onto other hosts which might not respect the UID/GID of the comfy user.
When set to any non-empty value other than false, FORCE_CHOWN will be enabled.
When set, it will "force chown" every sub-folder in the run and basedir folders when it first attempts to access them, before verifying they are owned by the proper user.
The USE_PIPUPGRADE environment variable is used to enable the use of pip3 install --upgrade to upgrade ComfyUI and other Python packages to the latest version during startup. If not set, it will use pip3 install to install packages.
This option is enabled by default: with the separation of the UI from the core ComfyUI code, it is possible to get out of sync with the latest version of the UI.
It can be disabled by setting USE_PIPUPGRADE=false.
The DISABLE_UPGRADES environment variable is used to disable upgrades when starting the container (also disables USE_PIPUPGRADE and PREINSTALL_TORCH).
This option is disabled by default (set to false), as it is recommended to keep the UI up to date (since it was decoupled from the core code). To enable it, set DISABLE_UPGRADES=true.
It is recommended to use it only on an installation after its initial setup (especially if you plan to use PREINSTALL_TORCH, which it bypasses as well), as it will prevent ComfyUI and other Python packages from being upgraded outside of the WebUI. Any package update will then have to be performed through the WebUI (ComfyUI Manager).
The PREINSTALL_TORCH environment variable will attempt to automatically install/upgrade torch after the virtual environment is created.
It will also check the version of CUDA supported by the container such that for CUDA 12.8, it will install torch with the cu128 index-url.
This option is enabled by default. It can be disabled by setting PREINSTALL_TORCH=false.
The PREINSTALL_TORCH_CMD environment variable can be used to override the torch installation command with the one specified in the variable. For example, for a GTX 1080, try PREINSTALL_TORCH_CMD="pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126". It is likely also recommended to set USE_PIPUPGRADE=false in this case.
Please note that the PREINSTALL_TORCH_CMD variable is not added to the Unraid template, and must be manually added if used.
ComfyUI Manager is installed and available in the container.
ComfyUI listens on 0.0.0.0 internally to the container (i.e., all network interfaces), but it is only accessible on the exposed port outside of the running container.
To modify the security_level:
- manually: go into your "run" folder directory and edit either `ComfyUI/user/default/ComfyUI-Manager/config.ini` if present, otherwise `custom_nodes/ComfyUI-Manager/config.ini`, and alter the `security_level =` line to match your requirements (then reload ComfyUI)
- automatically: use the `SECURITY_LEVEL` docker environment variable at run time to set it for this run.
Note that if this is the first time starting the container, the file will not yet exist; it is created the first time ComfyUI is run. After this step, stop and restart the container; the config.ini will be there on consecutive restarts.
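For the manual method, a one-liner sketch from the host (assuming the file location noted above; adapt the path if your installation uses the custom_nodes location instead):

```bash
# Set ComfyUI-Manager's security level to 'weak' after the first run, then restart the container
sed -i 's/^security_level = .*/security_level = weak/' run/ComfyUI/user/default/ComfyUI-Manager/config.ini
```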
To use cm-cli, from the virtualenv, use: python3 /comfy/mnt/custom_nodes/ComfyUI-Manager/cm-cli.py.
For example: python3 /comfy/mnt/custom_nodes/ComfyUI-Manager/cm-cli.py show installed (COMFYUI_PATH=/ComfyUI should be set)
When starting a docker exec -it comfyui-nvidia /bin/bash (or getting a bash terminal from docker compose), you will be logged in as the comfytoo user.
Switch to the comfy user with:
sudo su -l comfy

As the comfy user, you will be using the WANTED_UID and WANTED_GID provided.
You will be able to cd into the mounted locations for the run and basedir folders.
Run source /comfy/mnt/venv/bin/activate to get the virtual environment activated (allowing you to perform pip3 install operations, as those will be done within the run folder, i.e. outside of the container) and other operations that the comfy user is allowed to perform.
Note: as a reminder, the comfy user is sudo capable, but apt installs might not persist across a container restart; use the user_script.bash method to perform apt installs when the container is started.
It is possible to pass a command line override to the container by adding it to the docker run ... mmartial/comfyui-nvidia-docker:latest command.
For example: docker run ... -it ... mmartial/comfyui-nvidia-docker:latest /bin/bash
This will start a container and drop you into a bash shell as the comfy user with all mounts and permissions set up.
See [extras/FAQ.md] for additional FAQ topics, among which:
- Updating ComfyUI
- Updating ComfyUI-Manager
- Installing a custom node from git
Note: per #26, you must use -v /usr/lib/wsl:/usr/lib/wsl -e LD_LIBRARY_PATH=/usr/lib/wsl/lib to passthrough the nvidia drivers related to opengl.
Note: per #82, "do not mount windows os directories for basedir, store them inside the WSL environment, and mount those to the docker". More details and links in the issue.
The container can be used on Windows using "Windows Subsystem for Linux 2" (WSL2). For additional details on WSL, please read https://learn.microsoft.com/en-us/windows/wsl/about
WSL2 is a Linux guest Virtual Machine on a Windows host (for a slightly longer explanation of what this means, please see the first section of https://www.gkr.one/blg-20240501-docker101). The started container is Linux-based (Ubuntu Linux) and will perform a full installation of ComfyUI from sources. Some experience with the Linux and Python command-line interfaces is relevant for any modifications of the virtual environment or container after the container starts.
If you have Docker Desktop installed on your Windows machine, you can use it to run the container with GPU support enabled.
For additional details on Docker Desktop and WSL2, please read https://docs.docker.com/desktop/features/wsl/
Please see https://docs.docker.com/desktop/features/gpu/ for details on how to enable GPU support with Docker.
Once installation is complete, you can start the container with GPU support enabled by using the docker run and docker compose up commands as described in earlier sections of this document.
Warning: currently podman compose (sudo apt install -y podman-compose) does not appear to work on WSL2. If you have any suggestions, please let us know in #73
For additional details on podman, please read https://docs.podman.io/latest/getting_started/
In the following, we will describe the method to use the podman command line interface.
First, follow the steps in Section 2 ("Getting Started with CUDA on WSL 2") of https://docs.nvidia.com/cuda/wsl-user-guide/index.html
Once you have your Ubuntu Virtual Machine installed, start its terminal and follow the instructions to create your new user account (in the rest of this section we will use USER to refer to it, adapt as needed) and set a password (which you will use for sudo commands). Check your UID and GID using id; by default those should be 1000 and 1000.
Then, from the terminal, run the following commands (for further details on some of the steps below, see https://www.gkr.one/blg-20240523-u24-nvidia-docker-podman):
# Update the package list & Upgrade the already installed packages
sudo apt update && sudo apt upgrade -y
# Install podman
sudo apt install -y podman
# Install the nvidia-container-toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
# Generate the Container Device Interface (CDI) for podman
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# note that when you update the Nvidia driver, you will need to regenerate the CDI

Then you can confirm the CUDA version your driver supports with:
podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi

With the latest driver, you can support CUDA 12.8 or above, which is needed for RTX 50xx GPUs.
In the following, we will run the latest tag but you can modify this depending on the CUDA version you want to support.
To run the container:
# Create the needed data directories
# 'run' will contain your virtual environment(s), ComfyUI source code, and Hugging Face Hub data
# 'basedir' will contain your custom nodes, input, output, user and models directories
mkdir run basedir
# Download and start the container
# - the directories will be written with your user's UID and GID
# - the ComfyUI-Manager security levels will be set to "normal"
# - we will expose the WebUI to http://127.0.0.1:8188
# please see other sections of this README.md for options
podman run --rm -it --userns=keep-id --device nvidia.com/gpu=all -v `pwd`/run:/comfy/mnt -v `pwd`/basedir:/basedir -v /usr/lib/wsl:/usr/lib/wsl -e LD_LIBRARY_PATH=/usr/lib/wsl/lib -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` -e BASE_DIRECTORY=/basedir -e SECURITY_LEVEL=normal -p 127.0.0.1:8188:8188 --name comfyui-nvidia docker.io/mmartial/comfyui-nvidia-docker:latest

Once started, go to http://127.0.0.1:8188 and enjoy your first workflow (the bottle example). With this workflow, ComfyUI-Manager should offer to download the model, but since your browser runs on the Windows side, we will need to move the downloaded file to the Ubuntu VM. In another Ubuntu terminal, run (adapt USER): mv /mnt/c/Users/USER/Downloads/v1-5-pruned-emaonly-fp16.safetensors basedir/models/checkpoints/. You will see that basedir and run are owned by your USER.
After using ComfyUI, Ctrl+C in the podman terminal will terminate the WebUI. Use the podman run ... command from the same folder in the Ubuntu terminal to restart it and use the same run and basedir as before.
To use the Blackwell GPU (RTX 5080/5090), you will need to make sure to install NVIDIA driver 570 or above. This driver brings support for the RTX 50xx series of GPUs and CUDA 12.8.
On 20250424, PyTorch 2.7.0 was released with support for CUDA 12.8.
With the 20250713 release, we are attempting to automatically select the cu128 index-url when CUDA 12.8 (or above) is detected, making the PyTorch2.7-CUDA12.8.sh script unnecessary.
The postvenv_script.bash feature was added because with the release of PyTorch 2.7.0, for the time being, when installing on a CUDA 12.8 base image, the PyTorch wheel used appears to be for CUDA 12.6, which is incompatible.
Until cu128 is the default, a workaround script extras/PyTorch2.7-CUDA12.8.sh is provided that will install PyTorch 2.7.0 with CUDA 12.8 support.
To use it, copy the extras/PyTorch2.7-CUDA12.8.sh file into the run folder as postvenv_script.bash before starting the container.
## Adapt <YOUR_EXEC_FOLDER> to the folder that contains your run folder
cd <YOUR_EXEC_FOLDER>
## Only use one of the wget or curl depending on what is installed on your system
# wget
wget https://raw.githubusercontent.com/mmartial/ComfyUI-Nvidia-Docker/refs/heads/main/extras/PyTorch2.7-CUDA12.8.sh -O ./run/postvenv_script.bash
# curl
curl -s -L https://raw.githubusercontent.com/mmartial/ComfyUI-Nvidia-Docker/refs/heads/main/extras/PyTorch2.7-CUDA12.8.sh -o ./run/postvenv_script.bash

You can then run the ubuntu24_cuda12.8 container.
For example:
docker run --rm -it --runtime nvidia --gpus all -v `pwd`/run:/comfy/mnt -v `pwd`/basedir:/basedir -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` -e BASE_DIRECTORY=/basedir -e SECURITY_LEVEL=normal -p 8188:8188 mmartial/comfyui-nvidia-docker:ubuntu24_cuda12.8-latest

During the initial run, you will see something similar to:
[...]
== Checking for post-venv script: /comfy/mnt/postvenv_script.bash
== Attempting to make user script executable
++ Running user script: /comfy/mnt/postvenv_script.bash
PyTorch is not installed, need to install
Looking in indexes: https://download.pytorch.org/whl/cu128
Collecting torch==2.7.0
[...]
This step is run after the virtual environment is created and before the ComfyUI installation.
It will install PyTorch 2.7.0 (torch, torchvision, torchaudio) with CUDA 12.8 support.
In the log, you can confirm ComfyUI uses the proper version of PyTorch:
===================
== Running ComfyUI
[...]
pytorch version: 2.7.0+cu128
Additional details on this can be found in this issue.
The BASE_DIRECTORY environment variable can be used to specify an alternate folder location for input, output, temp, user, models and custom_nodes.
The ComfyUI CLI provides means to specify the location of some of those folders from the command line.
- `--output-directory` for `output`
- `--input-directory` for `input`
- `--temp-directory` for `temp`
- `--user-directory` for `user`

Each one of those options overrides `--base-directory`.
The logic in init.bash moves the content of input, output, temp, user and models to the specified BASE_DIRECTORY the first time it is used if the destination folder does not exist.
The script logic is based on the BASE_DIRECTORY environment variable alone. For end-users who prefer to use one of those alternate folder command lines, those can be added to either the COMFY_CMDLINE_EXTRA environment variable or the user_script.bash script (please refer to the other sections of this document that describe those options).
Independent of the method used, the core logic is the same (this example specifies the output folder):
- make sure a new folder is mounted within the container (ex: `docker run ... -v /preferredlocation/output:/output`)
- tell the ComfyUI command line to use that location for its outputs: `python3 ./main.py [...] --output-directory /output`
- (optional) copy the already existing content of `output` to the new location if you want consistency.
Please note that an output folder will still exist in the basedir location (per the BASE_DIRECTORY logic), but the command-line option will tell ComfyUI to override it.
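A hedged example combining both steps via the COMFY_CMDLINE_EXTRA variable (the host path /preferredlocation/output is illustrative):

```bash
# Mount the alternate output location and point ComfyUI at it
docker run [...] \
  -v /preferredlocation/output:/output \
  -e COMFY_CMDLINE_EXTRA="--output-directory /output" \
  [...] mmartial/comfyui-nvidia-docker:latest
```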
For Unraid users, those steps can be done by editing the template from the Docker tab: edit the container and use Add another Path, Port, Variable, Label or Device to:
- add a new `Path` entry (name it `output directory`) with a `Container Path` value of `/output`, a `Host Path` with your selected location (for example `/preferredlocation/output`), and an `Access Mode` of `Read/Write`.
- edit the existing `COMFY_CMDLINE_EXTRA` variable to add the `--output-directory /output` option.
If the run/pip_cache and run/tmp folders are present, they will be used as the cache folder for pip and the temporary directory for the comfy user. They should be created in the run folder, by the user with the WANTED_UID and WANTED_GID, before the container starts.
# For example
mkdir -p run basedir run/pip_cache run/tmp

If used, various pip install commands will write content to those folders (mounted as part of run) instead of within the container.
This can be useful to avoid using the container's writeable layers, which might be limited in size on Unraid systems.
Those are temporary folders, and can be deleted when the container is stopped.
GPU Trader is using the container as the base for their ComfyUI deployment and their first "One-Click Deployments", which provides "ComfyUI, a file browser, Jupyter Notebook, and a terminal providing persistent storage and full access to create AI imagery".
If you are curious and want to learn more, check this video.
It is possible that despite the container starting and setting itself up correctly, ComfyUI will crash.
You will recognize this in the Docker log as !! ERROR: ComfyUI failed or exited with an error.
The log might contain the reason for the crash, but you can also look up the ComfyUI logs in the container to get more details. Those are in basedir/user/*.log.
If ComfyUI crashes after it successfully performs its internal checks and starts (look for Starting server in the log), it is likely a ComfyUI/custom-node error and not related to the container itself.
The venv in the "run" directory contains all the Python packages the tool requires.
In case of an issue, it is recommended that you terminate the container, delete (or rename) the venv directory, and restart the container.
The virtual environment will be recreated; any custom nodes should then re-install their requirements; please see the "Fixing Failed Custom Nodes" section for additional details.
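A sketch of that recovery, run from the host folder that contains run (the examples in this document start the container with --rm, so it is simply re-run afterwards):

```bash
# Keep the old venv around for inspection rather than deleting it outright
mv run/venv run/venv.broken
# Re-run your usual command; a fresh venv matching the container's OS+CUDA is created on startup
docker run [...] mmartial/comfyui-nvidia-docker:latest
```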
It is also possible to rename the entire "run" directory to get a clean installation of ComfyUI and its virtual environment. This method is preferred over deleting the "run" directory, as it allows us to copy the content of the various downloaded ComfyUI/models, ComfyUI/custom_nodes, generated ComfyUI/outputs, ComfyUI/user, added ComfyUI/inputs, and other folders present within the old "run" directory.
If using the BASE_DIRECTORY environment variable, please note that some of that run directory content will be moved to the BASE_DIRECTORY specified.
If using the BASE_DIRECTORY option and the program exits saying the --base-directory option does not exist, this is due to an outdated ComfyUI installation. A possible solution is to disable the option, restart the container, and use the ComfyUI-Manager to update ComfyUI. Another option is to manually update the code: cd run/ComfyUI; git pull
In some cases, it is easier to create a simple user_script.bash to perform those steps, particularly on Unraid.
The run/user_script.bash file content would be (on Unraid, it would go in /mnt/user/appdata/comfyui-nvidia/mnt):
#!/bin/bash
cd /comfy/mnt/ComfyUI
git pull
exit 0

Make sure to change file ownership to the user with the WANTED_UID and WANTED_GID environment variables and to make it executable (on Unraid, in the directory, run chown nobody:users user_script.bash; chmod +x user_script.bash)
After the process completes, you should be presented with the WebUI. Make sure to delete or rename the script to avoid upgrading ComfyUI at start time, and use ComfyUI Manager instead.
Following the conversation in #32
Use a user_script.bash to install a specific version of ComfyUI
#!/bin/bash
# Checkout based on SHA (commit)
cd /comfy/mnt/ComfyUI
git checkout SHAvalue
# Install required packages (note that this might cause some downgrades -- some might not be possible)
source /comfy/mnt/venv/bin/activate
pip3 install -r requirements.txt
exit 0

Adapt the SHAvalue to match your desired version.
Make sure to change file ownership to the user with the WANTED_UID and WANTED_GID environment variables and to make it executable
After the process completes, you should be presented with the WebUI. Make sure to delete or rename the script to avoid it being run again.
Sometimes a custom node might cause the WebUI to fail to start, or error out with a message (ex: Loading aborted due to error reloading workflow data). In such cases, it is recommended to start from brand-new run and basedir folders, since run contains ComfyUI and the venv (virtual environment) required to run the WebUI, and basedir contains the models and custom_nodes. Because we would prefer not to have to re-download the models, the following describes a method to copy the content of the models folder from the old run and basedir folders to the new ones.
Process:
- `docker stop comfyui-nvidia` and `docker rm comfyui-nvidia` the container. We will need to start a new one so that no cached data is used. This will require a fresh installation of all the packages used by ComfyUI.
- in the folder where your `run` and `basedir` are located, move the old folders to `run_off` and `basedir_off` and recreate new empty ones: `mv run run_off; mv basedir basedir_off; mkdir run basedir`
- `docker run ...` a new container, which will reinstall everything as new. Note that none of the custom nodes will be installed; you will need to install each custom node manually after the process is complete (or re-download them from the ComfyUI-Manager by using older workflows' embedded images)
- after successful installation and confirmation that the WebUI is working, `docker stop comfyui-nvidia` the container but do not delete it
- in the folder where your new `run` and `basedir` are located, replace the models with the `_off` ones: `rm -rf basedir/models; mv basedir_off/models basedir/`
- `docker start comfyui-nvidia` to restart the container, and test custom node installation by using the manager to enable `ComfyUI-Crystools`; follow the instructions and reload the WebUI
You will still have previous content in the run_off and basedir_off folders, such as basedir_off/output, ...
From run_off/custom_nodes, you will be able to see the list of custom nodes that were installed in the old container and can decide to reinstall them from the manager.
Once you are confident that you have migrated content from the old container's folders, you can delete the run_off and basedir_off folders.
- 20251006: Fix environment variable loading (side effect of fix for loading configuration file overrides, as reported in Issue 78)
- 20251005: Fix loading configuration file overrides (e.g. `comfyui-nvidia_config.sh`) + extended `userscripts_dir` to support Nunchaku and xformers.
- 20251001: Added additional libraries to the base image to support a few custom nodes + new `userscripts_dir` script for onnxruntime + added Tailscale Docker Compose usage.
- 20250817: Added automatic PyTorch selection for the `PREINSTALL_TORCH` environment variable based on CUDA version + added `DISABLE_UPGRADES` and `PREINSTALL_TORCH_CMD` environment variables.
- 20250713: Attempting to automatically select the `cu128` index-url when CUDA 12.8 (or above) is detected when using the `PREINSTALL_TORCH` environment variable (enabled by default).
- 20250607: Added `USE_PIPUPGRADE` and `USE_SOCAT` environment variables + added CUDA 12.9 build.
- 20250503: Future-proofed extras/PyTorch2.7-CUDA12.8.sh to use `torch>=2.7` instead of `torch==2.7.0` + added `run/pip_cache` and `run/tmp` folders support
- 20250426: Added support for the `postvenv_script.bash` script (run after the virtual environment is set but before ComfyUI is installed/updated -- with a direct application to Blackwell GPUs to install PyTorch 2.7.0 with CUDA 12.8 support). See extras/PyTorch2.7-CUDA12.8.sh for details.
- 20250424: No RTX50xx special case needed: PyTorch 2.7.0 is available for CUDA 12.8, only re-releasing this image
- 20250418: Use ENTRYPOINT to run the init script: replaced previous command-line arguments to support command-line override + Added content in `README.md` to explain the use of the `comfytoo` user & a section on reinstallation without losing our existing models folder.
- 20250413: Made CUDA 12.6.3 the new `latest` tag + Added support for `/userscripts_dir` and `/comfyui-nvidia_config.sh`
- 20250320: Made CUDA 12.6.3 image, which will be the new `latest` as of the next release + Added checks for directory ownership + added `FORCE_CHOWN` + added libEGL/Vulkan ICD loaders and libraries (per #26), including an extension to the Windows usage section related to this addition
- 20250227: Simplified user switching logic using the `comfytoo` user as the default entry-point user that will set up the `comfy` user
- 20250216: Fix issue with empty `BASE_DIRECTORY` variable
- 20250202: Added `BASE_DIRECTORY` variable
- 20250116: Happy 2nd Birthday ComfyUI -- added multiple builds for different base Ubuntu OS and CUDA combinations + added `ffmpeg` into the base container.
- 20250109: Integrated `SECURITY_LEVELS` within the docker arguments + added `libGL` into the base container.
- 20240915: Added `COMFY_CMDLINE_BASE` and `COMFY_CMDLINE_EXTRA` variables
- 20240824: Tag 0.2: shift to pull at first run-time, user upgradable with lighter base container
- 20240824: Tag 0.1: builds were based on ComfyUI release, not user upgradable
- 20240810: Initial Release


