
What's New in Docker? Posts

Running Docker Desktop on NVIDIA Jetson Orin Nano Super for the first time

The NVIDIA Jetson Orin Nano Super Developer Kit packs impressive AI capabilities, delivering up to 67 TOPS of performance—1.7X more powerful than previous models. This compact system supports sophisticated AI architectures including vision transformers and large language models, bringing generative AI within reach for developers, students, and hobbyists. Current users can boost performance with a […]


How to Customize LLMs with Ollama’s Modelfile?

Introduction Large Language Models (LLMs) have become increasingly accessible to developers and enthusiasts, allowing anyone to run powerful AI models locally on their own hardware. Ollama has emerged as one of the leading frameworks for deploying, running, and customizing these models without requiring extensive computational resources or cloud infrastructure. One of Ollama’s most powerful features […]
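
Judging from the title, the feature in question is the Modelfile, a small declarative file that derives a customized model from a base one. As a rough sketch of that workflow (the llama3 base model, the temperature value, the system prompt, and the custom model name below are illustrative assumptions, not taken from the post), a Modelfile can be written out and built from Python by driving the ollama CLI:

```python
# Sketch: build and run a customized model from a Modelfile via the ollama CLI.
# Assumes Ollama is installed and a llama3 base model is available locally.
import os
import subprocess
import tempfile

# Minimal Modelfile: pick a base model, tweak a sampling parameter, set a system prompt.
MODELFILE = """\
FROM llama3
PARAMETER temperature 0.3
SYSTEM You are a concise assistant that answers in bullet points.
"""

with tempfile.NamedTemporaryFile("w", suffix=".Modelfile", delete=False) as f:
    f.write(MODELFILE)
    path = f.name

try:
    # Equivalent to: ollama create my-concise-model -f <path>
    subprocess.run(["ollama", "create", "my-concise-model", "-f", path], check=True)
    # Try the customized model once with a single prompt.
    subprocess.run(["ollama", "run", "my-concise-model", "Explain Docker volumes."], check=True)
finally:
    os.remove(path)
```

The same two commands (ollama create and ollama run) can of course be typed directly in a terminal instead.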


How to Build and Host Your Own MCP Servers in Easy Steps?

Introduction The Model Context Protocol (MCP) is revolutionizing how LLMs interact with external data sources and tools. Think of MCP as the “USB-C for AI applications” – a standardized interface that allows AI models to plug into various data sources and tools seamlessly. In this guide, I’ll walk you through building and hosting your own […]
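
As a taste of what such a server looks like, here is a minimal sketch using the official Python SDK (the mcp package); the server name and the add tool are illustrative assumptions, not the example from the post:

```python
# Minimal MCP server sketch using the mcp Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable client can launch and talk to this process.
    mcp.run(transport="stdio")
```

An MCP-capable client can then launch this script and call the add tool over stdio.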


Model Context Protocol (MCP): What problem does it solve?

Introduction Large language models (LLMs) like ChatGPT and Claude have revolutionized how we interact with technology, yet they’ve remained confined to static knowledge and isolated interfaces—until now. The Model Context Protocol (MCP), introduced by Anthropic, is breaking down these barriers, enabling AI to seamlessly integrate with real-world data and tools. This blog explores how MCP […]


Running DeepSeek R1 on Azure Kubernetes Service (AKS) using Ollama

Introduction DeepSeek R1 is an advanced open-source large language model (LLM) that has gained significant popularity in the developer community. When paired with Ollama, an easy-to-use framework for running and managing LLMs locally, and deployed on Azure Kubernetes Service (AKS), we can create a powerful, scalable, and cost-effective environment for AI applications. This blog post walks […]
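
As a sketch of the end state (details assumed, not taken from the post): once Ollama is exposed through a Kubernetes Service on AKS, any client on the network can query the model over HTTP, for example with the ollama Python client. The hostname below is a hypothetical LoadBalancer address, and the deepseek-r1 tag must already be pulled inside the pod:

```python
# Sketch: query an Ollama deployment running on AKS over its HTTP API.
from ollama import Client

client = Client(host="http://ollama.example.com:11434")  # hypothetical AKS endpoint

response = client.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Summarize what a Kubernetes Deployment is."}],
)
print(response["message"]["content"])
```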


Is Ollama available for Windows?

Ollama, a powerful framework for running and managing large language models (LLMs) locally, is now available as a native Windows application. This means you no longer need to rely on Windows Subsystem for Linux (WSL) to run Ollama. Whether you’re a software developer, AI engineer, or DevOps professional, this guide will walk you through setting […]
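
Once the native Windows app is installed and running, a quick sanity check (a sketch, assuming the ollama Python package is installed and the service is on its default port 11434) is to list the locally available models:

```python
# Sketch: confirm the local Ollama API is reachable by listing installed models.
import ollama

print(ollama.list())
```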


AI Thumbnail Creator vs. Manual Thumbnail Creation: Which is Better?

If you are a YouTuber, blogger, or social media influencer, eye-catching thumbnails can help you attract viewers online. However, choosing between the traditional manual approach and an AI thumbnail creator can be puzzling. Finding the right approach is crucial to getting the results you want, but knowing little about either option […]


Setting Up Ollama & Running DeepSeek R1 Locally for a Powerful RAG System

Discover how to create a private AI-powered document analysis system using cutting-edge open-source tools. System requirements: 16GB RAM minimum, 10th Gen Intel Core i5 or equivalent, 10GB of free storage space, and Windows 10+/macOS 12+/Linux Ubuntu 20.04+. 🛠️ Step 1: Installing Ollama. Download Ollama for macOS, Linux, or Windows, then follow the installation instructions based on your […]
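
To make the idea concrete, here is a compressed sketch of the retrieval-augmented loop such a system runs; the deepseek-r1 and nomic-embed-text model names and the sample documents are assumptions for illustration, not the post's own code:

```python
# Sketch of a minimal RAG loop on top of Ollama:
# embed documents, pick the most relevant one, and answer with it as context.
import ollama

documents = [
    "Docker Desktop bundles the Docker Engine, the CLI, and a GUI dashboard.",
    "Ollama runs large language models locally and exposes an HTTP API on port 11434.",
]

def embed(text: str) -> list[float]:
    # nomic-embed-text is an assumed embedding model name; pull it first with ollama.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

question = "What port does Ollama listen on?"
q_vec = embed(question)

# Retrieve the most similar document chunk...
best_doc = max(documents, key=lambda d: cosine(q_vec, embed(d)))

# ...and hand it to the model as context for the final answer.
answer = ollama.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": f"Context: {best_doc}\n\nQuestion: {question}"}],
)
print(answer["message"]["content"])
```

A real system would chunk documents, cache embeddings, and use a vector store rather than brute-force cosine similarity over every document.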
