# Ollama Setup

## Overview
Auto-configure Ollama for local LLM deployment, eliminating hosted API costs and enabling offline AI inference. This skill handles system assessment, model selection based on available hardware (RAM, GPU), installation across macOS/Linux/Docker, and integration with Python, Node.js, and REST API clients.
## Prerequisites
- macOS 12+, Linux (Ubuntu 20.04+, Fedora 36+), or Docker runtime
- Minimum 8 GB RAM for 7B parameter models; 16 GB for 13B models; 32 GB+ for 70B models
- Optional: NVIDIA GPU with CUDA drivers for accelerated inference (run `nvidia-smi` to verify)
- Optional: Apple Silicon (M1/M2/M3) for Metal-accelerated inference on macOS
- Disk space: 4-40 GB depending on model size (quantized weights)
- Package manager: `brew` (macOS), `curl` (Linux), or `docker` (containerized)
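The hardware prerequisites above can be checked programmatically before installation. This is a minimal sketch using only the Python standard library; the `preflight` helper name is an illustration, not part of Ollama, and the RAM probe relies on `os.sysconf` keys that are available on Linux but may be absent elsewhere (handled by falling back to 0).

```python
import os
import platform
import shutil

def preflight() -> dict:
    """Rough preflight check: OS name, total RAM in GiB, and
    whether an NVIDIA driver (nvidia-smi) is on the PATH."""
    system = platform.system()  # "Linux", "Darwin" (macOS), "Windows"
    try:
        # Total physical memory = page size * number of physical pages
        ram_gib = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30
    except (ValueError, OSError):
        ram_gib = 0.0  # sysconf keys unavailable on this platform
    return {
        "os": system,
        "ram_gib": round(ram_gib, 1),
        "nvidia_gpu": shutil.which("nvidia-smi") is not None,
    }

print(preflight())
```

A result such as `{"os": "Linux", "ram_gib": 15.6, "nvidia_gpu": False}` tells you which RAM tier (and therefore which model sizes) the host can support.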
## Instructions
- Detect the host operating system and available hardware using `uname -s`, `free -h` (Linux) or `vm_stat` (macOS), and `nvidia-smi` (if a GPU is present)
- Select appropriate models based on available RAM:
  - 8 GB: `llama3.2:7b` (4 GB), `mistral:7b` (4 GB), `phi3:14b` (8 GB)
  - 16 GB: `codellama:13b` (7 GB), `mixtral:8x7b` (26 GB quantized)
  - 32 GB+: `llama3.2:70b` (40 GB), `codellama:34b` (20 GB)
- Install Ollama using the platform-appropriate method:
  - macOS: `brew install ollama && brew services start ollama`
  - Linux: `curl -fsSL https://ollama.com/install.sh | sh && sudo systemctl start ollama`
  - Docker: `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`
- Pull the recommended model: `ollama pull llama3.2`
- Verify the installation by listing available models (`ollama list`) and running a test prompt (`ollama run llama3.2 "Say hello"`)
- Confirm the REST API is accessible: `curl http://localhost:11434/api/tags`
- Configure integration with the target application using the appropriate client library (Python `ollama`, Node.js `ollama`, or raw HTTP)
- Set up GPU acceleration if NVIDIA or Apple Silicon hardware is detected
- Configure model persistence and the cache directory if a non-default storage location is required
- Validate end-to-end inference latency and throughput for the selected model
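The RAM-to-model mapping in the selection step above can be encoded as a small lookup. This is a sketch only: the `pick_models` function and the exact tag strings mirror the tiers listed in this document and should be adjusted to whatever tags `ollama list` actually offers on your version.

```python
def pick_models(ram_gib: float) -> list[str]:
    """Map available RAM (GiB) to the model tiers listed above.
    Tags are illustrative; verify against your Ollama registry."""
    if ram_gib >= 32:
        return ["llama3.2:70b", "codellama:34b"]
    if ram_gib >= 16:
        return ["codellama:13b", "mixtral:8x7b"]
    if ram_gib >= 8:
        return ["llama3.2:7b", "mistral:7b", "phi3:14b"]
    return []  # below the documented 8 GB minimum; no tier recommended

print(pick_models(16))  # → ['codellama:13b', 'mixtral:8x7b']
```

Thresholds check total RAM, not free RAM; on a busy host you may want to select one tier lower than the lookup suggests.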
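For the "raw HTTP" integration path, a request to Ollama's `/api/generate` endpoint can be sketched with the standard library alone. This assumes the default endpoint `http://localhost:11434` from the steps above; the `build_payload` and `generate` helper names are illustrative.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port

def build_payload(model: str, prompt: str) -> dict:
    """Request body for /api/generate; stream=False returns one JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str, timeout: float = 120.0) -> str:
    """POST a non-streaming generate request and return the response text."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]

# generate("llama3.2", "Say hello")  # requires a running Ollama server
```

Setting `"stream": False` trades incremental output for a single easy-to-parse JSON response, which is usually the right choice for scripted validation.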
See ${CLAUDESKILLDIR}/references/skill-workflow.md for the detailed workflow with code snippets.
## Output
- Ollama installation confirmed and running as a system service or Docker container
- Selected model(s) pulled and cached locally with verified inference capability
- REST API endpoint accessible at `http://localhost:11434` and responding to requests