Create Your Own n8n Private Assistant Today
In a world where digital assistants handle our most sensitive information, privacy is paramount. We all dream of an AI that can flawlessly summarize our emails, manage our calendar, and handle other complex tasks—but what if you could have all that power with complete control over your data? Forget relying on cloud-based services. This post will walk you through the journey of building your very own Private Personal Assistant, an agentic AI that leverages local models and infrastructure (like Docker) to keep your most confidential data securely on your own machine. Get ready to build your personal “Clawdbot” and usher in a new era of secure, intelligent, and hyper-personalized automation.
Phase 1: Environment Preparation
Before starting, ensure you have the latest version of Docker Desktop installed.
- Open Docker Desktop Settings.
- Navigate to Features in Development.
- Check the box for Enable Docker Model Runner.
- Apply and Restart.
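Once Docker Desktop restarts, you can sanity-check the feature from a terminal. This is a quick check, assuming the docker model CLI plugin that ships with the Model Runner feature; exact subcommands may vary slightly across Docker Desktop versions:
docker model status   # reports whether the Model Runner is running
docker model list     # lists locally pulled models (empty at this point)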

Phase 2: Deploying the “Brain” (Docker Model Runner, or DMR)
Step 1: Pull the gemma3 model
% docker model pull ai/gemma3
731199e016ec: Pull complete [==================================================>] 851.3MB/851.3MB
ef0f7ed0abcc: Pull complete [==================================================>] 161.2kB/161.2kB
04a43a22e8d2: Pull complete [==================================================>] 2.49GB/2.49GB
Model pulled successfully
Running the docker model pull ai/gemma3 command initiates a multi-layered download of the specified large language model artifact from a Docker-compatible model registry. The process is analogous to pulling a traditional Docker image: the model’s architecture, weights, and necessary metadata are segmented into distinct, version-controlled layers.
The subsequent output confirms the successful transfer of these layers:
- 731199e016ec: Pull complete […] 851.3MB/851.3MB: This represents the download completion of the first model layer, a chunk of 851.3MB.
- ef0f7ed0abcc: Pull complete […] 161.2kB/161.2kB: This smaller layer, 161.2kB, likely contains configuration files or model metadata.
- 04a43a22e8d2: Pull complete […] 2.49GB/2.49GB: This final, largest layer, at 2.49GB, typically encapsulates the bulk of the model’s quantized weights and core binaries.
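With the pull finished, it is worth confirming the model is stored locally before serving it. A minimal check, again assuming the docker model CLI described above:
docker model list                # ai/gemma3 should now appear in the local list
docker model inspect ai/gemma3   # prints model metadata, if your CLI version supports it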
Step 2: Run the model with API access
manishlingadevaru@MANISHs-Mac-mini /Users % docker desktop enable model-runner --tcp 12434
docker model run ai/gemma3
it looks like you passed '--tcp 12434', use '--tcp=12434' to ensure it's parsed correctly
> hello
Hello there! How can I help you today? 😊
Do you want to:
* Chat about something?
* Ask me a question?
* Play a game?
* Get some information?
> /
Unknown command '/'. Type /? for help
> /?
Available Commands:
/bye Exit
/?, /help Help for a command
/? shortcuts Help for keyboard shortcuts
/? files Help for file inclusion with @ symbol
Use """ to begin a multi-line message.
File Inclusion:
Type @ followed by a filename to include its content in your prompt
Examples: @README.md, @./src/main.go, @/path/to/file.txt
> /bye
manishlingadevaru@MANISHs-Mac-mini /Users %
The core operational step turns the locally pulled Large Language Model (LLM) into an accessible service. This is done by enabling the Docker Model Runner and exposing it via TCP on port 12434 (with the corrected syntax docker desktop enable model-runner --tcp=12434), then executing the model with docker model run ai/gemma3. Upon successful launch, the model provides an interactive, terminal-based shell for immediate testing, demonstrating its ability to respond to natural language queries. Crucially, the help output (/?) reveals advanced agentic functionality, including the ability to include local file content in prompts using the @ symbol. This File Inclusion mechanism is foundational to a private assistant: it lets the LLM access and process the user’s confidential, on-machine data (like configuration files or code snippets) securely, without ever transmitting it outside the local host environment.
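As a hypothetical illustration based on the help text above (the filename is a placeholder), a file-grounded prompt in the interactive shell could look like this:
> @README.md Summarize this file in three bullet points.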
Step 3: Verify the endpoint
% curl http://localhost:12434/v1/models
{"object":"list","data":[{"id":"docker.io/ai/gemma3:latest","object":"model","created":1758368217,"owned_by":"docker"}]}
This confirms the Model Runner’s OpenAI-compatible API is live and lists the locally served gemma3 model.
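Beyond listing models, you can exercise the model itself over HTTP. The sketch below assumes DMR’s OpenAI-compatible chat completions route lives under the same /v1 prefix as the models endpoint; on some Docker Desktop versions the path is prefixed with /engines instead:
% curl http://localhost:12434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/gemma3",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'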

Phase 3: Setting Up the Agent (Clawd / Open-Source Agent)
Since “Clawdbot” typically refers to custom implementations of the Clawd agent framework, we will instead set up a standard containerized open-source agent (n8n) that connects to your local DMR brain.
Step 1: Create a project directory
% mkdir private-assistant && cd private-assistant
Step 2: Create the Agent Configuration
Create a file named docker-compose.yml (for example, with nano docker-compose.yml). This file will orchestrate the agent UI and connect it to your DMR instance. The setup uses n8n, a powerful “low-code” tool well suited to building agents that check calendars and summarize emails.
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    container_name: n8n-agent
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
      - NODE_ENV=production
      - WEBHOOK_URL=http://localhost:5678/
    volumes:
      - ./n8n_data:/home/node/.n8n
    # This extra_hosts line is CRITICAL.
    # It allows the container to see the DMR model running on your Mac.
    extra_hosts:
      - "host.docker.internal:host-gateway"
If you are using nano:
- Press Ctrl + O (to Write Out/Save).
- Press Enter to confirm the filename.
- Press Ctrl + X to exit the editor.
Step 3: Start the UI
docker compose up -d
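Once the container is up, you can optionally confirm that it is running and can reach the DMR endpoint on the host. The wget call is a sketch that assumes the n8n image ships busybox’s wget (Alpine-based images typically do):
docker compose ps
docker exec n8n-agent wget -qO- http://host.docker.internal:12434/v1/models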
Step 4: Access the Dashboard
Open your browser and go to http://localhost:5678.
Phase 4: Connecting the Dots
Now you need to tell the Agent (n8n or Clawd) to use your Mac Mini’s local DMR as its intelligence.
Initial Setup in n8n
- Open your browser to http://localhost:5678.
- Set up your owner account (this is kept entirely local to your Mac Mini).
- Click on “Create Your First Workflow”.
- Paste the JSON below directly onto the n8n canvas (Ctrl/Cmd + V); n8n will create the workflow and connect the nodes automatically.
JSON File:
{
  "name": "My workflow",
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.4,
      "position": [0, 528],
      "id": "49435ecd-8ce1-4aad-9178-d10bd2489a17",
      "name": "When chat message received",
      "webhookId": "05a3547c-8bee-4b8b-845f-ce6218dca6c0"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 3.1,
      "position": [464, 656],
      "id": "0c7cee4f-aa50-4b18-b210-df1c2b6a9a18",
      "name": "AI Agent"
    },
    {
      "parameters": {
        "model": "docker.io/ai/gemma3:latest",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOllama",
      "typeVersion": 1,
      "position": [144, 880],
      "id": "8922fe11-4f44-48a8-87ef-36bc23764b0b",
      "name": "Ollama Chat Model",
      "credentials": {
        "ollamaApi": {
          "id": "nbh7J1OWFHJCVAWy",
          "name": "Ollama account 2"
        }
      }
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.memoryManager",
      "typeVersion": 1.1,
      "position": [672, 912],
      "id": "95f5ecb4-1d0e-4256-acae-c1fd6bafff73",
      "name": "Chat Memory Manager"
    },
    {
      "parameters": {},
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [400, 928],
      "id": "fc0f674b-1d64-4ea1-8151-78f3a4fe65c4",
      "name": "Simple Memory"
    }
  ],
  "pinData": {},
  "connections": {
    "When chat message received": {
      "main": [
        [
          {
            "node": "AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "AI Agent": {
      "main": [
        []
      ]
    },
    "Ollama Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Simple Memory": {
      "ai_memory": [
        [
          {
            "node": "AI Agent",
            "type": "ai_memory",
            "index": 0
          },
          {
            "node": "Chat Memory Manager",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    }
  },
  "active": true,
  "settings": {
    "executionOrder": "v1",
    "availableInMCP": false
  },
  "versionId": "062df0e5-047e-4ea7-b340-5d2faf5f2002",
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "0795f6e691ed97ac8b772b9418608998db3b01465bd650712d19abec7f26dc8d"
  },
  "id": "SAo5EP-ste9cHoRWNlr89",
  "tags": []
}
Final Calibration
Even with the import, you still need to point the chat-model node at your local DMR:
- Click on the Ollama Chat Model node (the chat-model node in the imported workflow).
- Under Credential, click Create New Credential.
- If a key field is shown, type anything (DMR ignores it).
- Save the credential.
- In the credential settings, ensure the Base URL is http://host.docker.internal:12434/v1.
Adding Gmail/Calendar Tools
To make the agent “do” things, you just need to drag the tools in:
- Click the + sign on the Tools input of the AI Agent node.
- Select Gmail and Google Calendar.
- Follow the n8n prompts to connect your Google account. You’ll need to create a free Google Cloud project to obtain an OAuth Client ID; the redirect URI to register is shown below.
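When you create the OAuth client in Google Cloud, you will be asked for an authorized redirect URI. With the default setup above it typically takes the following form, assuming n8n’s standard OAuth callback path (n8n also displays the exact value in the credential dialog):
http://localhost:5678/rest/oauth2-credential/callback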
Phase 5: The System Prompt
In the AI Agent node settings, find the System Prompt box and give your assistant its “personality”:
You are a private assistant running locally on a Mac Mini. You have access to the user’s Gmail and Calendar.
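A slightly fuller version, offered as one possible starting point rather than a required wording, might read:
You are a private assistant running locally on a Mac Mini.
You have access to the user's Gmail and Google Calendar tools.
Before sending an email or creating an event, confirm the recipient,
subject, date, and time with the user. Never expose the user's data
outside these tools.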
Test 1: Testing Gmail
When asked to send an email to a specified recipient with a given subject, the agent executed the prompt and sent the mail.

Test 2: Testing the Calendar
This test asked the agent to create an event on my calendar at a specified date and time; it executed the prompt and created the event as requested.

JSON output of Calendar:
[
  {
    "response": [
      {
        "id": "7s63kb5hv7fb02mo9c8jegb81o",
        "start": {
          "dateTime": "2026-02-08T23:28:31+05:30",
          "timeZone": "America/New_York"
        },
        "end": {
          "dateTime": "2026-02-09T00:28:31+05:30",
          "timeZone": "America/New_York"
        },
        "creator": {
          "email": "manishlingadevaru@gmail.com",
          "self": true
        },
        "organizer": {
          "email": "manishlingadevaru@gmail.com",
          "self": true
        },
        "description": "delhi conference",
        "created": "2026-02-08T17:58:32.000Z",
        "updated": "2026-02-08T17:58:32.574Z",
        "etag": "\"3541147025149822\"",
        "eventType": "default",
        "htmlLink": …
(output truncated)
Conclusion: Empowering Your Local AI Workspace
By integrating n8n with a local Gemma 3 model on a Mac Mini, you have successfully built a fully private, autonomous AI assistant that manages your digital life without your data ever leaving your hardware. This setup transforms a standard computer into a proactive team member capable of orchestrating complex tasks across Gmail and Google Calendar.