Local AI Assistant on Raspberry Pi Guide

A step-by-step guide to setting up Ollama on a Pi 5 (4GB) and accessing it via a CLI function.

Local AI Assistant in Terminal using Raspberry Pi 5 (4GB) and Ollama

Guide Version: 1.1 (Updated: 2025-04-07)

Introduction

Welcome! This guide details how to set up a private, free AI assistant accessible directly from your Linux terminal. We will use a Raspberry Pi 5 (specifically tested on a 4GB RAM model) to host a local Large Language Model (LLM) via Ollama, and interact with it from a Linux computer (e.g., running Pop!_OS, Ubuntu, Fedora, Arch Linux) using a simple bash function.

The result is a convenient command (askpi) you can run on your client computer to get coding help, explanations, or boilerplate code generated locally, without relying on cloud services.

Windows Users: For instructions on connecting from a Windows computer, please see the Windows Client Guide.


Table of Contents

• Prerequisites and Tools Needed
• Phase 1: Raspberry Pi Setup (Ollama Server)
• Phase 2: Linux Computer Setup (Client)
• Phase 3: Usage
• Troubleshooting & Notes

Prerequisites and Tools Needed

Before you start, ensure you have the required hardware and software.

1. Raspberry Pi (Server): A Raspberry Pi 5 (this guide was tested on the 4GB RAM model) running Raspberry Pi OS or another Debian-based distribution, connected to your local network, with SSH enabled.

2. Linux Computer (Client): Any Linux computer on the same network (e.g., running Pop!_OS, Ubuntu, Fedora, or Arch Linux) with a terminal, plus curl, jq, and an ssh client (installation covered in Phase 2).

3. Large Language Model (LLM): The gemma:2b model, served via Ollama. It is small enough to run comfortably within the Pi’s 4GB of RAM and is downloaded in Phase 1.


Phase 1: Raspberry Pi Setup (Ollama Server)

Configure your Raspberry Pi to host the Ollama service and the LLM.

  1. Connect & Update:
    • Connect to your Pi using SSH from your client computer’s terminal (replace placeholders):
        ssh your_pi_username@YOUR_PI_IP_ADDRESS
      
    • (If you don’t know the IP, check your router’s client list or use ip addr show on the Pi if using a direct monitor/keyboard. See ip command docs).
    • (If you encounter SSH host key errors, follow the instructions given by the ssh command, often ssh-keygen -R "YOUR_PI_IP_ADDRESS" on the client).
    • Update the Pi’s operating system (run on the Pi):
        sudo apt update && sudo apt full-upgrade -y
      
  2. Install Ollama: (Run on the Pi) Download and run the Ollama installation script:
    curl -fsSL https://ollama.com/install.sh | sh
    
  3. Configure Ollama for Network Access: (Run on the Pi) Allow Ollama to accept connections from your client computer. Edit the systemd service configuration using systemctl edit:
    sudo systemctl edit ollama.service
    
    • In the editor (likely nano), add these lines exactly:
        [Service]
        Environment="OLLAMA_HOST=0.0.0.0"
      
    • Save and exit (Ctrl+X, then Y, then Enter).
    • Apply the changes by reloading the systemd daemon and restarting Ollama:
        sudo systemctl daemon-reload
        sudo systemctl restart ollama.service
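
    • (Optional) Confirm the override took effect by checking that Ollama now listens on all interfaces rather than only localhost (ss ships with Raspberry Pi OS as part of iproute2):
        sudo ss -tlnp | grep 11434
      
      The listener should show 0.0.0.0:11434 (or *:11434) rather than 127.0.0.1:11434.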
      
  4. Configure Firewall (If Active): (Run on the Pi) If you use ufw firewall on the Pi, allow Ollama’s default port (11434):
    # Check status first (optional)
    # sudo ufw status
    # Allow Ollama port if ufw is active
    sudo ufw allow 11434/tcp
    # Reload firewall if needed
    # sudo ufw reload
    
  5. Pull the LLM: (Run on the Pi) Download the gemma:2b model (recommended for 4GB RAM):
    ollama pull gemma:2b
    

    (Optional: If you previously downloaded larger models, remove them to save space: ollama rm model-name:tag)
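
    To verify the download, you can list the locally installed models and their sizes:
        ollama list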

  6. Test Ollama Locally: (Run on the Pi) Verify the model loads and responds:
    ollama run gemma:2b "Hi! Are you working?"
    

    You should get a response, after which the command exits. (If you instead start an interactive session by running ollama run gemma:2b with no prompt, type /bye to exit.)

  7. Get Pi’s IP Address: (Run on the Pi) Confirm the IP address you’ll need for the client setup:
    ip addr show | grep "inet "
    

    Note the IP address (e.g., 192.168.1.101) associated with your eth0 or wlan0 interface.

  8. Log Out: You’re done configuring the Pi server.
    exit
    

Phase 2: Linux Computer Setup (Client)

Configure your main Linux computer to easily send prompts to the Pi.

1. Install Client Tools: (Run on the Client Computer)

Ensure you have the necessary command-line tools installed: curl, jq, a text editor (nano used in examples), and an ssh client (usually pre-installed).

Choose the command appropriate for your distribution’s package manager:
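For example (typical package names; these may differ slightly on your system):

    # Debian / Ubuntu / Pop!_OS
    sudo apt install curl jq nano

    # Fedora
    sudo dnf install curl jq nano

    # Arch Linux
    sudo pacman -S curl jq nano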

(Note: If nano isn’t your preferred editor, replace it with vim, micro, etc., or skip installing it if you already have one.)
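
Before creating the function, you can optionally confirm that the client can reach Ollama on the Pi. This quick check uses Ollama’s /api/tags endpoint, which lists installed models; substitute the IP address from Phase 1, step 7:

    curl http://YOUR_PI_IP_ADDRESS:11434/api/tags

A JSON listing that includes gemma:2b confirms the network path and firewall are configured correctly.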

2. Create askpi Bash Function: (Run on the Client Computer)

This step involves adding a helper function to your shell’s configuration file. This function wraps the curl command for convenience.
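
A minimal version of the function might look like this (a sketch, assuming Ollama’s default port 11434, the gemma:2b model pulled in Phase 1, and Ollama’s /api/generate endpoint; replace YOUR_PI_IP_ADDRESS with the address from Phase 1, step 7). Add it to the end of ~/.bashrc (or ~/.zshrc if you use Zsh):

    # askpi: send a prompt to the Ollama server on the Pi and print the reply.
    # Replace YOUR_PI_IP_ADDRESS with the IP noted in Phase 1, step 7.
    askpi() {
        # Build the JSON request with jq so quotes in the prompt are escaped
        # safely, then POST it to Ollama's /api/generate endpoint and print
        # the response text.
        curl -s "http://YOUR_PI_IP_ADDRESS:11434/api/generate" \
            -d "$(jq -n --arg p "$*" '{model: "gemma:2b", prompt: $p, stream: false}')" \
            | jq -r '.response'
    }

Building the request body with jq -n avoids malformed JSON when a prompt contains quotes or other special characters.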

3. Apply Shell Configuration Changes: (Run on the Client Computer)

Load the new function into your current shell session:

    # For Bash
    source ~/.bashrc

    # If using Zsh
    # source ~/.zshrc

(Note: The function will be available automatically in new terminal windows you open.)
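
You can confirm the function is loaded using the shell’s type builtin:

    type askpi

If the function is available, this prints its definition.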

Phase 3: Usage

You can now use the askpi command directly in your client computer’s terminal to interact with the gemma:2b model running on your Raspberry Pi.

  1. Basic Usage: Run the command followed by your prompt enclosed in quotes:
    askpi "YOUR PROMPT HERE"
    
  2. Examples & Expected Output:

    • Example 1: Get code

      Command:
        askpi "Write a python hello world example"

      Expected Output (will vary slightly in wording/formatting):

        print("Hello, world!")

        This code prints the classic “Hello, world!” message to the console when run with Python.

    • Example 2: Explain command

      Command:
        askpi "Explain the bash command ls -l"

      Expected Output (will vary slightly, potentially more/less verbose):

        The command ls -l lists files and directories in the current location using the “long” format. This provides detailed information for each item, typically including:

        • File type and permissions (e.g., -rw-r--r--, drwxr-xr-x)
        • Number of hard links
        • Owner username
        • Group name
        • File size in bytes
        • Last modification date and time
        • File or directory name
    • Example 3: Generate Boilerplate

      Command:
        askpi "Generate a basic HTML5 document structure"

      Expected Output (will vary slightly):

        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
            <title>Document</title>
        </head>
        <body>
      
        </body>
        </html>
      

Important Note: Run askpi with one distinct prompt at a time. Providing multiple askpi commands or arguments intended as separate prompts on the same line (like askpi prompt1 askpi prompt2) will likely result in everything after the first askpi being treated as part of a single, combined prompt to the LLM.
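
For two separate questions, simply run the command twice, each with its own quoted prompt:

    askpi "Explain the bash command ls -l"
    askpi "Write a python hello world example"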


Troubleshooting & Notes

If askpi hangs or returns connection errors, work back through the setup: confirm the Ollama service is running and listening on 0.0.0.0 (Phase 1, step 3), that port 11434 is open in the Pi’s firewall (step 4), and that the IP address in your askpi function matches the Pi’s current address (step 7).
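
Two commands that often help when diagnosing the service (run on the Pi; journalctl shows recent log output for the ollama systemd unit):

    # Check that the Ollama service is active
    sudo systemctl status ollama.service

    # Show the last 50 log lines from the Ollama service
    sudo journalctl -u ollama.service -n 50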

Congratulations! You’ve set up a functional, private AI assistant using your Raspberry Pi and Ollama. The askpi command provides a simple, reliable interface for leveraging your local LLM directly from your terminal for coding and technical tasks. Enjoy experimenting!