
How to setup Open WebUI with Ollama and Docker Desktop on Mac


With over 50K GitHub stars, Open WebUI is a self-hosted, feature-rich, and user-friendly interface designed for managing and interacting with large language models (LLMs). It operates entirely offline and provides extensive support for various LLM runners, including Ollama and OpenAI-compatible APIs. With its focus on flexibility and ease of use, Open WebUI caters to developers, administrators, and hobbyists looking to enhance their LLM workflows.

Open WebUI is here to make working with LLMs super simple, whether you’re a tech pro or just curious. It’s a self-hosted platform that works offline, giving you complete control and a feature-packed experience. Here’s why Open WebUI is a game-changer:

  • 🛠️ Easy Setup
    No tech headaches here! Install Open WebUI effortlessly using Docker or Kubernetes, with pre-configured options for different needs.
  • 🤝 Works with Popular Tools
    Connect Open WebUI with Ollama and with OpenAI-compatible APIs such as LM Studio and OpenRouter to chat with a wide range of models. Customize it to fit your favorite tools!
  • 🔐 Safe and Secure
    You can set up user roles and permissions to keep things secure. Admins can control who has access, so only the right people can make changes.
  • 📱 Use It Anywhere
    Whether you’re on a laptop, desktop, or mobile device, Open WebUI works seamlessly. You can even use it offline with a Progressive Web App (PWA) for a smooth app-like experience.
  • ✒️ Create Better Content
    Supports Markdown and LaTeX for easy formatting of text and equations, making conversations more interactive and professional.

If you’re exploring how to deploy and customize Open WebUI, this guide provides a straightforward, hands-on approach. Let’s get started!

Step 1: Clone the Repository

git clone https://github.com/open-webui/open-webui/
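The Compose file used in the following steps lives at the root of the cloned repository, so switch into that directory before running any Compose commands:

cd open-webui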

Step 2: Examine the Compose File

services:
  ollama:
    volumes:
      - ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}

  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
    container_name: open-webui
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - ${OPEN_WEBUI_PORT-3001}:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}

This Docker Compose file defines two services, ollama and open-webui, plus named volumes for data persistence.

The ollama service runs a container named ollama from the ollama/ollama image (defaulting to the latest tag if OLLAMA_DOCKER_TAG is not set). It stores its data in the ollama volume, keeps a TTY open, always pulls the image, and restarts automatically unless explicitly stopped.

The open-webui service defines both a build section (the local Dockerfile, with the OLLAMA_BASE_URL build argument set to /ollama) and a prebuilt image reference (ghcr.io/open-webui/open-webui, defaulting to the main tag), so Compose can either build locally or use the published image. It runs a container named open-webui, maps a local port (default 3001) to the container’s port 8080, and persists its data in the open-webui volume. It depends on the ollama service and sets the OLLAMA_BASE_URL environment variable to http://ollama:11434 so the UI reaches Ollama over the Compose network. The extra_hosts entry maps host.docker.internal to the host gateway so the container can also reach services running on the host, and this service likewise restarts unless stopped.

Finally, the two named volumes, ollama and open-webui, provide persistent storage so model files and application data survive container restarts.

Key Modifications:

I changed the default port from 3000 to 3001 to avoid conflicts with my existing applications. The Compose file allows customization, so feel free to adjust as needed.
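If you would rather not edit the Compose file directly, Docker Compose resolves the ${VAR-default} expressions from your shell environment or from a .env file placed next to docker-compose.yaml. A minimal sketch of such a .env file (the values below are just examples mirroring the defaults and the port change above):

# .env
OPEN_WEBUI_PORT=3001
OLLAMA_DOCKER_TAG=latest
WEBUI_DOCKER_TAG=main

With this file in place, docker compose picks the values up automatically.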

Step 3: Start the Services

Bring the services up with Docker Compose:

docker compose -f docker-compose.yaml up
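If you prefer to get your terminal back, the same command can run the containers in the background with the -d (detached) flag:

docker compose -f docker-compose.yaml up -d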

Step 4: Verify the Services

Ensure all services are running as expected:

[Screenshot: the running Compose services]
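From the terminal, you can also check the containers directly:

docker compose ps

Both ollama and open-webui should report a running status; docker compose logs -f open-webui is handy if the UI takes a moment to come up.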

Step 5: Access Open WebUI

Once the containers are up, point your browser at http://localhost:3001 (or whichever port you mapped in the Compose file). On the first visit, Open WebUI asks you to create an account, and the first account created becomes the administrator.
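On macOS you can open it straight from the same terminal:

open http://localhost:3001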

Step 6: Download a Model

Downloading and managing models is straightforward. You can pull a model (for example, Llama 2) directly from the Open WebUI interface; once the download completes, the model shows up in the model selector, ready to chat with.

If you encounter an issue like Ollama: 500, message='Internal Server Error', it might be due to pulling an unsupported model; refer to this discussion for solutions.

Try pulling the llama3.2:1b model instead and it will work flawlessly.
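If you prefer the command line, you can also pull a model inside the ollama container (the container name comes from the Compose file above) and then list what is installed:

docker exec -it ollama ollama pull llama3.2:1b
docker exec -it ollama ollama list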

By following these steps, you’ll successfully deploy and customize Open WebUI using Docker Compose. With flexibility and ease of use, this setup is ideal for exploring AI-powered solutions. Happy building!

Have Queries? Join https://launchpass.com/collabnix

Ajeet Singh Raina is a former Docker Captain, Community Leader, and Distinguished Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 700 blogs on Docker, Kubernetes, and cloud-native technology. He runs a community Slack of 9,800+ members and a Discord server of close to 2,600 members. You can follow him on Twitter (@ajeetsraina).