This repository contains a shell script to set up and configure Open WebUI with Ollama, along with an AUTOMATIC1111 Stable Diffusion service for image generation.
*Demo video: Open.WebUI.Intro.mp4*
Hardware used for this setup:
- Processor: Intel Core i7-11800H (8 cores, 16 threads) @ 2.30GHz
- RAM: 16GB
- GPU: NVIDIA GeForce RTX 3050 Ti Laptop GPU
Before running the script, ensure you have the following installed on your system:
- Docker
- GPU drivers compatible with Docker and NVIDIA (a quick verification sketch follows this list)
- Sufficient disk space for the models and data volumes
- The `models` folder from the AUTOMATIC1111 GitHub repository, copied into the same directory as the `docker-compose.yml`
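If you are unsure whether Docker can reach the GPU, a one-line check along the following lines should confirm it. The CUDA image tag is only an example and may need adjusting to a tag available for your driver version.

```bash
# Run nvidia-smi inside a throwaway CUDA container; if this prints the GPU
# table, the NVIDIA Container Toolkit is wired into Docker correctly.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If this fails, install or reconfigure the NVIDIA Container Toolkit before running the setup script.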
Setup steps:
- Clone the repository or copy the script: save the `setup_open_webui.sh` and `docker-compose.yml` files in your preferred directory.
- Make the script executable: `chmod +x setup_open_webui.sh`
- Run the script: `./setup_open_webui.sh`
The script performs the following steps:
- Runs the Docker containers (see the verification sketch after this list):
  - Initializes and builds the necessary containers using `docker compose up -d --build`. This command starts both the Open WebUI and Stable Diffusion services as defined in `docker-compose.yml`.
  - Allows the containers to utilize GPU resources for accelerated processing.
  - Configures volumes:
    - `auto1111-data` and `auto1111-output` for Stable Diffusion.
    - `ollama` and `open-webui` for Open WebUI and Ollama model storage.
  - Exposes ports:
    - Port 7860 is exposed for Stable Diffusion, enabling API access to the diffusion models.
    - Port 3000 is exposed for Open WebUI, providing access to the application interface.
  - Automatically restarts the containers on failure.
- Downloads the required AI models (see the pull-and-check sketch after this list):
  - Pulls the `llama3.2` model.
  - Pulls the `llava` model.
  - Pulls the `DeepSeek-R1-Distill-Qwen-7B` model.
  - Pulls the `stable-diffusion-prompt-generator` model.
  - Pulls the `codellama` model.
  - Pulls the `DeepSeek-R1-Distill-Llama-8B` model.
- Ensures a smooth setup with error handling:
  - Checks the exit status of each command and aborts if an error occurs.
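Once the script has run, the container, volume, and port configuration described above can be verified with standard Docker commands. This is only a verification sketch: the volume names are taken from the list above, and the service name passed to `exec` should be whichever service in your `docker-compose.yml` owns the GPU.

```bash
# List the services started from docker-compose.yml together with their
# published ports (3000 for Open WebUI, 7860 for Stable Diffusion).
docker compose ps

# Confirm the named volumes for Stable Diffusion and Ollama data exist.
docker volume ls | grep -E 'auto1111-data|auto1111-output|ollama|open-webui'

# Check GPU passthrough from inside a container; replace open-webui with the
# service that runs Ollama in your compose file if it differs.
docker compose exec open-webui nvidia-smi
```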
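The model downloads and the error handling boil down to a pull-and-check pattern. Below is a minimal sketch of that pattern, not the script itself: it assumes Ollama is reachable via `docker exec` in a container named `ollama` (adjust to your compose setup), and the model tags are illustrative rather than the exact ones the script pulls.

```bash
#!/usr/bin/env bash
# Illustrative only: pull a list of Ollama models and stop at the first failure.
set -euo pipefail

models=(
  "llama3.2"
  "llava"
  "codellama"
  # The DeepSeek-R1 distills and the stable-diffusion-prompt-generator model
  # are pulled the same way; their exact tags depend on the script.
)

for model in "${models[@]}"; do
  echo "Pulling ${model}..."
  if ! docker exec ollama ollama pull "${model}"; then
    echo "Error: failed to pull ${model}" >&2
    exit 1
  fi
done
echo "All models pulled successfully."
```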
Once the script finishes:
- Open a browser and navigate to http://localhost:3000 to access the web interface.
- Start interacting with the models.
- To enable image generation, go to Profile Icon -> Settings -> Admin Settings -> Images.
- Select Automatic1111 as the Image Generation Engine.
- Paste `http://stable-diffusion:7860` into the AUTOMATIC1111 Base URL field and click the refresh icon on the right side of the text box to connect to the Stable Diffusion server. Note that `stable-diffusion` in the URL is the name of the Docker container, which resolves on Docker's internal network. A quick connectivity check is sketched after this list.
- Turn on Image Generation.
- Set `v1-5-pruned-emaonly` as the default model.
- Save the settings.
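If the refresh icon reports a connection error, the AUTOMATIC1111 API can be probed directly from the host, since port 7860 is published. This assumes the Stable Diffusion container starts the AUTOMATIC1111 web UI with its API enabled (the `--api` flag); the endpoint below is part of AUTOMATIC1111's standard API.

```bash
# List the checkpoints the AUTOMATIC1111 API can see. v1-5-pruned-emaonly
# should appear here if the models folder was copied correctly.
curl -s http://localhost:7860/sdapi/v1/sd-models
```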

*Demo video: Open.WebUI.RAG.mp4*
If the setup script fails:
- Ensure Docker is installed and running.
- Verify that your GPU drivers and Docker GPU support are configured properly.
- Check that the required ports are free (3000 for Open WebUI, 7860 for Stable Diffusion). A few generic diagnostic commands are sketched below.
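These commands cover the most common failure points; they are general diagnostics, not something the script runs itself.

```bash
# Is the Docker daemon running, and is the NVIDIA runtime registered?
docker info | grep -iE 'server version|runtimes'

# Are ports 3000 or 7860 already taken by another process?
ss -tlnp | grep -E ':3000|:7860'

# What do the services report? Run from the directory containing docker-compose.yml.
docker compose logs --tail=50
```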
For additional support, please refer to the Open WebUI Documentation.