
How to Install and Run Ollama with Docker: A Beginner’s Guide


Let’s create our own local ChatGPT.

In the rapidly evolving landscape of natural language processing, Ollama stands out as a game-changer, offering a seamless experience for running large language models locally. If you’re eager to harness the power of Ollama and Docker, this guide will walk you through the process step by step.

Why Ollama and Docker?

Think of Ollama as your personal LLM concierge. It sets up and manages your chosen model, making it readily available for your creative endeavors. Docker, on the other hand, acts as the friendly neighborhood containerizer, ensuring a smooth, isolated environment for your LLM to operate in. Together, they create a dream team for hassle-free LLM exploration.

Benefits of using Ollama and Docker together

  • Simplicity: Ditch the complex configurations and dependencies. Docker handles the heavy lifting, letting you focus on what matters – interacting with your LLM.
  • Isolation: Multiple LLMs? No problem! Docker keeps them neatly separated, preventing any unwanted intermingling.
  • Portability: Take your LLM playground anywhere. Docker containers are platform-agnostic, so your AI companion can travel across Linux, macOS, and even Windows.
  • Scalability: Need more LLM power? No worries! Docker lets you scale your environment by running multiple containers with ease.

Getting started:

  1. Install Docker: Download and install Docker Desktop for Windows and macOS, or Docker Engine for Linux.
  2. Grab your LLM model: Choose your preferred model from the Ollama library (Llama 2, Mistral, Code Llama, and more!).
  3. Download the Ollama Docker image: One simple command (docker pull ollama/ollama) gives you access to the magic.
  4. Run the Ollama container: Customize it for your CPU or Nvidia GPU setup using the provided instructions.
  5. Say hello to your LLM: The container exposes the Ollama API at http://localhost:11434 (a quick check follows this list); a browser-based WebUI is covered later in this guide.
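
If you want to confirm that the API is reachable, a quick check from the host works (this assumes the default port mapping used throughout this guide):

curl http://localhost:11434

A healthy instance should answer with a short "Ollama is running" message.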

Beyond the basics:

  • Send text prompts and receive creative responses. Imagine LLM-powered poems, code generation, and even scriptwriting! (A sample API call follows this list.)
  • Fine-tune your model on your own data. Make your LLM a true reflection of your unique voice and interests.
  • Explore a variety of LLM tasks: Translation, question-answering, code completion – the possibilities are endless.
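
As a rough sketch of what sending a prompt looks like over Ollama's REST API (the model name and prompt here are just examples, and the llama2 model must already be pulled):

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Write a short poem about containers",
  "stream": false
}'

The reply is a JSON object whose response field contains the generated text.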

Installing Ollama with Docker

CPU Only

To get started with the CPU-only version, simply run the following Docker command:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Nvidia GPU

Add the NVIDIA Container Toolkit repository:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
    | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
    | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
    | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update

Install the NVIDIA Container Toolkit packages:

sudo apt-get install -y nvidia-container-toolkit
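
After the toolkit is installed, configure Docker to use the NVIDIA runtime and restart the daemon (the standard NVIDIA Container Toolkit steps):

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker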

Start the container:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
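
To confirm that the GPU is actually visible inside the container, you can run nvidia-smi through it (assuming the toolkit setup above succeeded, the utility is injected by the NVIDIA runtime):

docker exec -it ollama nvidia-smi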

Running Models Locally

Now that Ollama is up and running, execute the following command to run a model:

docker exec -it ollama ollama run llama2

You can even wrap both steps in a single alias (it creates the container, so it only works while no container named ollama exists yet):

$ alias ollama='docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama && docker exec -it ollama ollama run llama2'
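
Since the model runner lives inside the container, any other Ollama subcommand can be invoked the same way, for example listing the models that have already been downloaded:

docker exec -it ollama ollama list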

Let’s run a model and ask Ollama to create a docker compose file for WordPress.

docker exec -it ollama ollama run llama2
pulling manifest
pulling 22f7f8ef5f4c... 100% ▕████████████████▏ 3.8 GB
pulling 8c17c2ebb0ea... 100% ▕████████████████▏ 7.0 KB
pulling 7c23fb36d801... 100% ▕████████████████▏ 4.8 KB
pulling 2e0493f67d0c... 100% ▕████████████████▏   59 B
pulling 2759286baa87... 100% ▕████████████████▏  105 B
pulling 5407e3188df9... 100% ▕████████████████▏  529 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> Write a docker compose file for WordPress
Sure, here is an example `docker-compose.yml` file for running WordPress:
```
version: '3'
services:
  wordpress:
    image: woorkshop/wordpress:latest
    ports:
      - "80:80"
    volumes:
      - ./wp:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: username
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_SITE_URL: http://localhost
      WORDPRESS_SITE_WIDTH: 100%
      WORDPRESS_SITE_HEIGHT: 100%
    depends_on:
      - db....
Let's go through the file line by line:

* `version`: This line specifies the version of Docker Compose that the file is written for. In this case, we are using version 3.
* `services`: This line defines a service called "wordpress".
* `image`: This line specifies the Docker image to use for the WordPress container. In this case, we are using the official Woorkshop/WordPress image.
* `ports`: This line maps port 80 from the host machine to port 80 in the container. This allows us to access the WordPress site through our web browser.
* `volumes`: This line mounts a directory on the host machine ("./wp") to the `/var/www/html` directory inside the WordPress container. This allows us to store WordPress files outside of the container, but make them accessible to the container.
* `environment`: This line defines environment variables that will be available inside the WordPress container. In this case, we are setting the following variables:
	+ `WORDPRESS_DB_HOST`: The hostname of the database server.
	+ `WORDPRESS_DB_USER`: The username to use for database authentication.
	+ `WORDPRESS_DB_PASSWORD`: The password to use for database authentication.
	+ `WORDPRESS_SITE_URL`: The URL of the WordPress site.
	+ `WORDPRESS_SITE_WIDTH`: The width of the WordPress site (in percentages).
	+ `WORDPRESS_SITE_HEIGHT`: The height of the WordPress site (in percentages)....
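
A word of caution: the generated file shouldn't be used as-is. The woorkshop/wordpress image does not exist, and variables such as WORDPRESS_SITE_URL, WORDPRESS_SITE_WIDTH, and WORDPRESS_SITE_HEIGHT are not recognized by the official WordPress image. A minimal working sketch, using the official wordpress and mariadb images with purely illustrative credentials, would look more like this:

version: '3'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "80:80"
    volumes:
      - ./wp:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: wordpress
    depends_on:
      - db
  db:
    image: mariadb:latest
    environment:
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wordpress
      MARIADB_PASSWORD: password
      MARIADB_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: {}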

Ollama WebUI using Docker Compose

If you’re not a fan of the CLI, take a look at the Ollama WebUI project repository:

git clone https://github.com/ollama-webui/ollama-webui
cd ollama-webui

The repository includes a docker-compose.yaml along the following lines:

version: '3.6'

services:
  ollama:
    # Uncomment below for GPU support
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities:
    #             - gpu
    volumes:
      - ollama:/root/.ollama
    # Uncomment below to expose Ollama API outside the container stack
    # ports:
    #   - 11434:11434
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest

  ollama-webui:
    build:
      context: .
      args:
        OLLAMA_API_BASE_URL: '/ollama/api'
      dockerfile: Dockerfile
    image: ollama-webui:latest
    container_name: ollama-webui
    depends_on:
      - ollama
    ports:
      - 3000:8080
    environment:
      - "OLLAMA_API_BASE_URL=http://ollama:11434/api"
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
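
With the Compose file in place, bring the stack up from the repository root (the --build flag ensures the WebUI image is built locally from the cloned Dockerfile):

docker compose up -d --build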

Access the Ollama WebUI

Open the Docker Dashboard > Containers and click on the published port for the ollama-webui container (3000), or simply browse to http://localhost:3000.


Pulling a Model

Mistral is a 7.3B-parameter model distributed under the Apache 2.0 license. It is available in both instruct (instruction-following) and text-completion variants.

Update: This model has been updated to Mistral v0.2. The original model is available as mistral:7b-instruct-q4_0.
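
With the stack above running, the model can be pulled either from the WebUI or straight from the CLI, for example:

docker exec -it ollama ollama pull mistral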


Keep Reading

  • Testcontainers and Playwright

    Testcontainers and Playwright

    Discover how Testcontainers-Playwright simplifies browser automation and testing without local Playwright installations. Learn about its features, limitations, compatibility, and usage with code examples.

    Read More

  • Docker and Wasm Containers – Better Together

    Docker and Wasm Containers – Better Together

    Learn how Docker Desktop and CLI both manages Linux containers and Wasm containers side by side.

    Read More

  • How to Run Load Tests in AWS EKS

    How to Run Load Tests in AWS EKS

    Running load tests in AWS Elastic Kubernetes Service (EKS) is a powerful way to validate your applications’ performance and scalability. By leveraging tools likeΒ LocustΒ for load generation and Kubernetes features such asΒ Horizontal Pod Autoscaler (HPA)Β andΒ Cluster Autoscaler, you can test your application’s ability to handle real-world traffic scenarios effectively. This blog will guide you through setting up…

    Read More

  • Top 5 AI Voice Cloning Software with Built-In Voice Changer Features

    Top 5 AI Voice Cloning Software with Built-In Voice Changer Features

    The scope of AI voice cloning is transforming people’s interaction with technology, media, and communication tools. It can offer the replication of human voices in the most accurate manner, among other features such as live-pitch-changing tones, which can adapt to a person’s style. This blend is transforming industries from content creation and gaming to customer…

    Read More

  • All Things Cloud Native Meetup: Join Us in Bengaluru! 🌟

    All Things Cloud Native Meetup: Join Us in Bengaluru! 🌟

    Are you passionate about Cloud-Native technologies? Do you enjoy exploring topics like Docker, Kubernetes, GitOps, and cloud transformation? Then mark your calendars! Devtron, Nokia, and Collabnix are collaborating to host “All Things Cloud-Native,” an extraordinary gathering for cloud-native enthusiasts, technologists, and DevOps experts. It’s an opportunity to immerse yourself in the latest trends, tools, and…

    Read More

Have Queries? Join https://launchpass.com/collabnix

Ajeet Singh Raina is a former Docker Captain, Community Leader, and Distinguished Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 700 blogs on Docker, Kubernetes, and cloud-native technology. He runs a community Slack of 9,800+ members and a Discord server of close to 2,600 members. You can follow him on Twitter (@ajeetsraina).