Introduction
Large Language Models (LLMs) have become increasingly accessible to developers and enthusiasts, allowing anyone to run powerful AI models locally on their own hardware. Ollama has emerged as one of the leading frameworks for deploying, running, and customizing these models without requiring extensive computational resources or cloud infrastructure.
One of Ollama’s most powerful features is the Modelfile – a configuration blueprint that allows you to create customized versions of popular LLMs. This guide will show you how to customize your own models, interact with them via the command line or Open WebUI, and unlock the power of large language models.
What is an Ollama Modelfile?
An Ollama Modelfile is a configuration file that defines and manages models on the Ollama platform. It allows you to:
- Create new models based on existing ones
- Modify parameters like temperature and context length
- Customize system prompts and templates
- Reduce unnecessary output and refine responses
- Define licensing and terms of use
Unlike full model fine-tuning, which requires significant computational resources, Modelfiles provide a lightweight approach to adjusting model parameters for specific applications.
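As a quick illustration, a complete Modelfile can be as short as two instructions. The sketch below is a minimal example, assuming a llama3.2 base model has already been pulled locally; it simply lowers the sampling temperature of an existing model:

```
# Minimal Modelfile: reuse an existing model with a lower temperature
FROM llama3.2
PARAMETER temperature 0.3
```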
Prerequisites
Before customizing models with Modelfiles, ensure you have:
- Installed the Ollama framework
- Downloaded the large language models you want to customize
- Installed Open WebUI (only needed for Method 2 below)


Understanding Modelfile Syntax
A Modelfile follows a simple syntax that doesn’t require programming knowledge. Here’s the basic structure, after which we show two different methods for creating a custom Modelfile.
# Comment
INSTRUCTION arguments
Commonly used instructions include:
| Instruction | Description |
|---|---|
| FROM | Defines the base model |
| PARAMETER | Configures model behavior (e.g., temperature, context length) |
| TEMPLATE | Defines the prompt template |
| SYSTEM | Specifies the system message for model behavior |
| LICENSE | Specifies the legal license |
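The sketch below strings these instructions together in one Modelfile. It assumes the llama3.2 base model is available locally, and the system message and license text are placeholders; the {{ .System }} and {{ .Prompt }} placeholders are Go-template variables that Ollama fills in with the system message and the user’s input. Note that supplying TEMPLATE replaces the base model’s built-in chat template, so it is shown here mainly to illustrate the syntax:

```
# Base model to build on
FROM llama3.2

# Sampling and context-window settings
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Prompt template: Ollama substitutes the system message and user prompt
TEMPLATE """{{ .System }}

{{ .Prompt }}"""

# Default system message
SYSTEM """You are a concise assistant that answers questions about AI models."""

# Optional license text shipped with the model
LICENSE """For internal evaluation only."""
```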
Method 1: Creating Custom Models via Command Line
Step 1: Examine the Base Model
To inspect an existing model’s Modelfile:
ollama show llama2:latest --modelfile
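The exact flag set varies by Ollama version, but recent releases also let you inspect individual pieces of a model instead of the full Modelfile:

```
ollama show llama2:latest --parameters   # runtime parameters only
ollama show llama2:latest --template     # prompt template only
ollama show llama2:latest --system       # system message only
```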

Step 2: Create Your Custom Modelfile
Copy and modify the existing Modelfile:
ollama show llama2:latest --modelfile > myllama2.modelfile
Edit the file using any text editor. Example customization:

FROM llama2:latest
TEMPLATE """[INST] <<SYS>>
You are a technical assistant focused on AI models. Provide precise and concise answers.
<</SYS>>

{{ .Prompt }} [/INST]"""
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
Here the template wraps each request in Llama 2’s [INST]/<<SYS>> chat format with a fixed system message, and the {{ .Prompt }} placeholder is where Ollama inserts the user’s input. The stop sequences keep the model from emitting the instruction tags itself, temperature 0.7 balances consistency and creativity, and num_ctx 4096 sets a 4,096-token context window.
Step 3: Build Your Custom Model
ollama create myllama2 --file myllama2.modelfile
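Ollama should report that the model was created. You can double-check by listing the models installed locally; the new myllama2 entry should appear alongside its base model:

```
ollama list
```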
Step 4: Test Your Custom Model
Run the customized model:
ollama run myllama2
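Beyond the interactive prompt, you can pass a one-off question directly on the command line, or call the model through Ollama’s local REST API (served on port 11434 by default). The question text below is just a placeholder:

```
# One-off prompt from the shell
ollama run myllama2 "Explain the difference between temperature and top_p."

# Same model via the REST API (non-streaming)
curl http://localhost:11434/api/generate -d '{
  "model": "myllama2",
  "prompt": "Explain the difference between temperature and top_p.",
  "stream": false
}'
```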
Method 2: Creating Custom Models with Open WebUI
Step 1: Create a New Modelfile

In Open WebUI, navigate to the Models section and click “Create Modelfile”.
Step 2: Edit Your Modelfile

Modify parameters as needed:
FROM llama3.2
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
SYSTEM You are a specialized assistant for AI research.
Step 3: Save and Build Your Model

Click “Save” and then “Build” to create your model.
Step 4: Test Your Custom Model
Select your newly created model in the WebUI and start interacting with it.
Conclusion
The Ollama Modelfile simplifies the process of managing and running LLMs locally, letting you tailor a model’s behavior without the cost of full fine-tuning. By following these steps, you can customize your own model, interact with it, and explore the world of LLMs with ease.
Start experimenting with your own Modelfile and discover the potential of personalized AI models!