
How to set up Ollama with Ollama-WebUI using Docker Compose

2 min read

Ollama is an open-source tool for running large language models (LLMs) locally. The models it serves are trained on massive datasets of text and code, which lets them perform diverse tasks, including:

  • Text generation: Ollama can generate creative text formats like poems, code snippets, scripts, musical pieces, and even emails and letters.
  • Translation: Ollama facilitates seamless translation between multiple languages with remarkable accuracy.
  • Code completion: Programmers can leverage Ollama for code completion suggestions and error identification, enhancing their development workflow.
  • Question answering: Ollama serves as a valuable knowledge base, providing informative answers to your queries.
  • And beyond: Ollama’s potential applications are constantly expanding, with promising ventures in various fields.

How does Ollama work?

The models Ollama runs are based on the transformer architecture, a deep learning design used in most modern LLMs. By analyzing vast amounts of text data, these models learn the intricacies of relationships between words and phrases. This enables them to:

  • Grasp the context of prompts and questions presented to it.
  • Generate grammatically correct and meaningful text.
  • Translate languages effectively by comprehending the source text’s meaning and expressing it accurately in the target language.

This blog post introduces a simplified approach that lets you access Ollama through the user-friendly Ollama WebUI in just two minutes, without deploying any Kubernetes pods!

Prerequisites:

  • Docker Desktop (or Docker Engine with the Docker Compose plugin) installed on your machine

Running Ollama in a Docker container

Open your terminal and use the following command to pull the official Ollama image from Docker Hub and start a container from it:

docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v ollama_volume:/root/.ollama \
  ollama/ollama:latest

This command retrieves the latest Ollama image, which contains all the necessary libraries and dependencies for running models locally.

  • docker run: This is the command to create and start a new Docker container.
  • -d: Run the container in detached mode, meaning it runs in the background rather than attached to your terminal.
  • --name ollama: Assigns the container a name, which is “ollama” in this case. Naming your container is useful for easily referring to it later with other Docker commands.
  • -p 11434:11434: Maps port 11434 of the container to port 11434 on the host system. This allows you to interact with the application running inside the container through the host system’s port 11434.
  • -v ollama_volume:/root/.ollama: Mounts a volume named “ollama_volume” to /root/.ollama inside the container. This is used for persistent storage, ensuring that data saved by the application inside the container persists across container restarts and recreations. If “ollama_volume” doesn’t already exist, Docker will automatically create it for you (you can verify this with the commands shown right after this list).
  • ollama/ollama:latest: Specifies the image to use for the container. In this case, it’s using the “latest” version of the “ollama/ollama” image from a Docker registry (like Docker Hub).
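
To confirm the persistence setup described above, you can inspect the named volume that Docker created. These are standard Docker CLI commands, not specific to Ollama:

docker volume ls
docker volume inspect ollama_volume

The inspect output shows the volume’s mountpoint on the host, which is where pulled models and other Ollama data will live across container restarts.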

Verify that Ollama is running

Here’s the magic: execute the following commands in your terminal:

$ docker ps 
aa492e7068d7   ollama/ollama:latest        "/bin/ollama serve"      9 seconds ago   Up 8 seconds   0.0.0.0:11434->11434/tcp   ollama

$ curl localhost:11434
Ollama is running
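
You can also query Ollama’s REST API directly. For example, /api/tags returns the models currently available locally (right after a fresh install the list will be empty):

curl http://localhost:11434/api/tags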

Running Ollama WebUI

Clone the official Ollama WebUI repository:

git clone https://github.com/ollama-webui/ollama-webui
cd ollama-webui

Open the Compose file shipped with the repository and review the YAML:

version: '3.6'

services:
  ollama:
    volumes:
      - ollama:/root/.ollama
    # Uncomment below to expose Ollama API outside the container stack
    # ports:
    #   - 11434:11434
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest

  ollama-webui:
    build:
      context: .
      args:
        OLLAMA_API_BASE_URL: '/ollama/api'
      dockerfile: Dockerfile
    image: ollama-webui:latest
    container_name: ollama-webui
    depends_on:
      - ollama
    ports:
      - 3000:8080
    environment:
      - "OLLAMA_API_BASE_URL=http://ollama:11434/api"
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
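
Note the commented-out ports section under the ollama service: uncommenting it exposes the Ollama API on the host, outside the container stack. If you prefer not to edit the cloned file, a docker-compose.override.yml achieves the same result, since Docker Compose merges an override file automatically. This is a standard Compose feature, shown here only as a sketch:

# docker-compose.override.yml
services:
  ollama:
    ports:
      - 11434:11434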

The Compose file also uses the container name “ollama”, so make sure you stop and remove the standalone Ollama container started earlier before bringing up the stack.
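
Assuming the container was started with the docker run command from the first section, this will do it:

docker stop ollama
docker rm ollama

Then start both services with Docker Compose: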

docker compose up -d
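
To confirm that both services came up, you can check the stack’s status (a standard Compose command, not specific to this project):

docker compose ps

You should see both the ollama and ollama-webui containers listed as running.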

Access the Ollama WebUI

Open Docker Dashboard > Containers and click the published port for the ollama-webui container, or simply browse to http://localhost:3000 (the host port mapped to the WebUI in the Compose file).

Congratulations! You’ve successfully accessed Ollama with Ollama WebUI in just two minutes, bypassing the need for pod deployments.

Pulling a Model

Mistral is a 7.3B-parameter model distributed under the Apache 2.0 license. It is available in both instruct (instruction-following) and text-completion variants.

Update: the model has been updated to Mistral v0.2. The original model is still available as mistral:7b-instruct-q4_0.
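
The WebUI also lets you download models from its settings, but you can pull Mistral from the terminal as well by running the ollama CLI inside the Compose-managed container. A minimal sketch, assuming the stack from the previous section is running:

docker compose exec ollama ollama pull mistral

To get the original instruct build mentioned above instead, pull it by tag:

docker compose exec ollama ollama pull mistral:7b-instruct-q4_0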

Now, you can explore the various features and functionalities offered by the WebUI, including:

  • Text generation: Prompt Ollama to generate different creative text formats, like poems, code, scripts, musical pieces, email, letters, etc. (an API-based alternative is sketched right after this list).
  • Translation: Translate text between various languages with ease.
  • Code completion: Get assistance with code completion and suggestions.
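
If you prefer the command line over the WebUI, text generation is also available straight from Ollama’s REST API. This assumes port 11434 is reachable from the host (either via the standalone container from the first section or the ports override shown earlier) and that the mistral model has been pulled:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Write a haiku about Docker containers."
}'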

Have Queries? Join https://launchpass.com/collabnix

Ajeet Singh Raina is a former Docker Captain, Community Leader and Distinguished Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 700 blogs on Docker, Kubernetes and cloud-native technology. He runs a community Slack of over 9,800 members and a Discord server of close to 2,600 members. You can follow him on Twitter (@ajeetsraina).