
What is Ollama AI used for?


Large language models (LLMs) have captured our imagination with their ability to generate human-quality text, translate languages, write different kinds of creative content, and answer our questions in an informative way. However, using LLMs can be daunting:

  • Cloud Reliance: Many LLMs reside on powerful cloud platforms, which can be expensive and inaccessible to casual users or those with limited technical expertise.
  • Technical Expertise: Setting up and interacting with LLMs can involve complex configurations and programming knowledge, further limiting accessibility.

This is where Ollama comes in as a game-changer.

What is Ollama?

Ollama is an open-source tool that makes exploring and interacting with LLMs simple for anyone.

It provides a user-friendly interface that removes the complexity of setting up and using these powerful models. Here’s how Ollama breaks down the barriers:

  • Local Powerhouse: Ditch the cloud! Ollama allows you to run LLMs directly on your computer, eliminating reliance on remote servers. This translates to faster response times, greater control over your data, and the freedom to experiment offline.
  • Model Menagerie: Ollama boasts a rich library of pre-trained LLM models, including cutting-edge options like Llama 2, Mistral, and Vicuna. With a few clicks, you can download and switch between these models, giving you the flexibility to explore their unique strengths.
  • Installation Made Easy: Forget wrestling with intricate configurations. Ollama takes care of the technical heavy lifting. It streamlines the installation process, including GPU optimization, so you can focus on the fun part: interacting with LLMs.
  • Talk to the Machine: Ollama offers a variety of ways to interact with your chosen LLM. Whether you prefer the command line, a user-friendly web API, or programmatic integration with LangChain applications, Ollama has you covered.
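
To make the web API point concrete, here is a minimal sketch of calling Ollama's local REST endpoint from Python using only the standard library. It assumes Ollama is already running on its default port (11434) and that you have pulled a model named llama3:

```
import json
import urllib.request

# Ollama serves a local REST API on port 11434 by default.
# /api/generate takes a model name and a prompt; "stream": False
# returns the whole completion as a single JSON response.
payload = json.dumps({
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])  # the generated text
```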

Getting Started with Ollama

Excited to dive in? Here’s a sneak peek at how to get up and running with Ollama, complete with a simple sample application:

1. Download and Install Ollama:

The official Ollama website (https://ollama.com) provides clear instructions for installing Ollama on various operating systems (Windows, macOS, Linux). Head over and follow the steps for your specific system.

2. Fetch Your Favorite LLM:

Once Ollama is installed, it’s time to pick your weapon of choice! Use the ollama pull command to download a specific LLM model from the Ollama library. For this example, let’s pull the Llama 3 model:

ollama pull llama3
pulling manifest
pulling 6a0746a1ec1a... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  12 KB
pulling 8ab4849b038c... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  254 B
pulling 577073ffcc6c... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  110 B
pulling 3f8eb4da87fa... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
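
Once the pull completes, you can verify that the model is available locally with the ollama list command, which prints every model you have downloaded.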

Running the Llama 3 Model

Just type ollama run llama3 and you can start chatting right in your terminal, as shown below:


ollama run llama3
>>> Write a docker compose file for Kafka
Here is an example `docker-compose.yml` file that sets up a basic Apache Kafka cluster with three brokers:
```
version: '3'

services:
  zookeeper:
    image: wurstimages/zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker1:
    image: wurstimages/kafka:2.8.0-2.13
    ports:
      - "9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAIN://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAIN://broker1:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

  broker2:
    image: wurstimages/kafka:2.8.0-2.13
    ports:
      - "9093"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAIN://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAIN://broker2:9093
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

  broker3:
    image: wurstimages/kafka:2.8.0-2.13
    ports:
      - "9094"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAIN://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: PLAIN://broker3:9094
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

  kafka-console-consumer:
    image: wurstimages/kafka:2.8.0-2.13
    command: kafka-console-consumer --bootstrap-server broker1:9092 --topic my_topic

  kafka-console-producer:
    image: wurstimages/kafka:2.8.0-2.13
    command: kafka-console-producer --bootstrap-server broker1:9092 --topic my_topic
```
Let me explain what each service does:

* `zookeeper`: This is the Apache ZooKeeper service, which provides configuration and coordination for Kafka.
* `broker1`, `broker2`, and `broker3`: These are the three Kafka brokers that make up the cluster. Each broker has its own unique ID, listener port, and advertised listener port. They all connect to the same ZooKeeper instance.
* `kafka-console-consumer` and `kafka-console-producer`: These.....
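
The same conversational interface is available programmatically. Here is a minimal sketch using the official Python client (pip install ollama), carrying chat history across turns:

```
import ollama

# chat() takes the full message history, so the model can use
# earlier turns as context when answering the next one.
messages = [{"role": "user", "content": "Write a docker compose file for Kafka"}]
reply = ollama.chat(model="llama3", messages=messages)
print(reply["message"]["content"])

# Append the model's reply plus a follow-up to continue the conversation.
messages.append(reply["message"])
messages.append({"role": "user", "content": "Now add a second consumer service"})
print(ollama.chat(model="llama3", messages=messages)["message"]["content"])
```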

3. Let’s Chat! (A Sample Command-Line Application):

Now for the fun part! Here’s a basic Python script that uses the official ollama Python package (install it with pip install ollama) to interact with the downloaded LLM from the command line:

import ollama  # official Ollama Python client: pip install ollama

# Define a function to get user input and generate text
def generate_text():
    prompt = input("Enter a prompt: ")
    # num_predict caps the response length at roughly 100 tokens
    response = ollama.generate(model="llama3", prompt=prompt, options={"num_predict": 100})
    print(f"LLM Response: {response['response']}")

# Keep generating text until the user quits
while True:
    generate_text()
    choice = input("Continue? (y/n): ")
    if choice.lower() != "y":
        break

print("Exiting...")

Explanation:

  • We import the official ollama Python package, which talks to the Ollama server running locally on your machine.
  • The generate_text function takes a prompt from the user, sends it to the llama3 model with ollama.generate, and prints the response. The num_predict option caps the reply at roughly 100 tokens.
  • The script keeps prompting the user for input and generating text until they choose to exit.
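
If you'd rather watch tokens appear as they're generated, the way ollama run does in the terminal, the same package supports streaming. A minimal sketch:

```
import ollama

# With stream=True, generate() returns an iterator of partial
# responses; printing each chunk as it arrives reproduces the
# token-by-token output you see in the terminal.
for chunk in ollama.generate(model="llama3", prompt="Tell me a short joke", stream=True):
    print(chunk["response"], end="", flush=True)
print()
```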

Run the Script:

Save the script as llama_app.py and run it from your terminal:

python llama_app.py

Now you can interact with your local LLM! Type in a prompt and watch the model generate creative text based on your input.

This is just a taste of what Ollama can do! With a little exploration, you can leverage Ollama for various tasks, like:

  • Creative writing: Generate poems, scripts, or even code snippets based on your prompts.
  • Machine translation: Experiment with translating text between languages using LLM models.
  • Question answering: Prompt an LLM with domain-specific context and use it to answer your questions.
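
And because everything runs behind a standard local API, Ollama also plugs into frameworks like LangChain, as mentioned earlier. A minimal sketch, assuming the langchain-community package is installed:

```
from langchain_community.llms import Ollama

# Point LangChain at the locally running Ollama server and model.
llm = Ollama(model="llama3")

# invoke() sends the prompt to Ollama and returns the completion as a string.
print(llm.invoke("Write a haiku about containers"))
```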

Ollama opens the door to a world of possibilities for anyone interested in exploring the potential of large language models. So, download Ollama, unleash your inner language hacker, and get ready to be amazed.

Want to learn more? Head over to the Ollama Discord server to connect with 52,000+ members today!

Have Queries? Join https://launchpass.com/collabnix

Ajeet Singh Raina is a former Docker Captain, Community Leader, and Distinguished Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 700 blog posts on Docker, Kubernetes, and cloud-native technology. He runs a community Slack of 9,800+ members and a Discord server of close to 2,600+ members. You can follow him on Twitter (@ajeetsraina).