Ollama is a powerful tool for running and managing large language models locally on your machine. Whether you're a developer, researcher, or AI enthusiast, installing Ollama on Windows, Ubuntu Linux, or macOS unlocks endless possibilities. In this guide, we'll walk through how to install Ollama on all three platforms so you can get started quickly and efficiently.
Why Use Ollama?
Ollama simplifies local LLM deployment with:
- One-Click Install: Skip dependency hell—Ollama handles setup automatically.
- Cross-Platform Support: Works on macOS, Linux, and Windows (via a native installer; WSL also works).
- GPU Acceleration: Optimizes performance for NVIDIA, AMD, and Apple Silicon.
- Vast Model Library: Access hundreds of pre-configured models (e.g., Llama 3, Phi-3, DeepSeek).
Prerequisites
- Hardware:
- 8GB+ RAM (16GB recommended for larger models).
- 20GB+ free storage (models are hefty!).
- Optional: NVIDIA/AMD GPU or Apple Silicon for faster inference.
- Software:
- macOS, Linux, or Windows 10 or later (Ollama now ships a native Windows installer, so WSL is optional). A quick way to verify the hardware basics is sketched below.
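Before installing, you can sanity-check your RAM, disk space, and GPU from a terminal. Here is a minimal sketch for Linux (macOS alternatives are noted in the comments); `nvidia-smi` is only present if an NVIDIA driver is installed:
```bash
# Quick pre-install check (Linux; macOS alternatives in comments)
free -h       # total and available RAM (macOS: sysctl hw.memsize)
df -h ~       # free disk space in your home directory
nvidia-smi    # NVIDIA GPU details; fails harmlessly if no NVIDIA driver
```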
How to Install Ollama on Windows
1. Download the Installer
Visit the official Ollama website and navigate to the downloads section. Select the Windows installer (.exe file) and download it.
2. Run the Installer
Double-click the downloaded .exe file to launch the setup wizard. Follow the on-screen prompts to complete the installation.
3. Verify Installation
Open Command Prompt or PowerShell and type:
```bash
ollama --version
```
If the installation is successful, you’ll see the installed version of Ollama.
Sample output:
```
ollama version is 0.5.7
```
4. Start Using Ollama
Begin interacting with models by running commands like:
```bash
ollama run qwen2.5-coder:1.5b
```
Note: Ensure Windows Defender or your antivirus allows Ollama through the firewall if you encounter connectivity issues.
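If you suspect a firewall problem, first confirm the Ollama server is listening locally. By default it binds to port 11434; this quick check from PowerShell assumes a default installation:
```powershell
# Check that the Ollama server is listening on its default port (11434)
netstat -ano | findstr 11434

# Query the server directly (use curl.exe, since plain `curl` is a
# PowerShell alias for Invoke-WebRequest)
curl.exe http://localhost:11434
# Expected response: "Ollama is running"
```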
How to Install Ollama on Linux
1. Update Your System
Open a terminal and update your package list (these commands assume Ubuntu/Debian):
```bash
sudo apt update && sudo apt upgrade -y
```
2. Run the Installation Script
Use curl to download and execute the Ollama install script:
```bash
curl -fsSL https://ollama.ai/install.sh | sh
```
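Piping a script straight into sh is convenient, but if you prefer to review what it does first, you can download, inspect, and then run it:
```bash
# Optional: inspect the install script before executing it
curl -fsSL https://ollama.ai/install.sh -o install.sh
less install.sh      # review what the script will do
sh install.sh        # run it once you are satisfied
```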
3. Start the Ollama Service
Enable and start the service with:
```bash
sudo systemctl enable --now ollama
```
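To confirm the service came up cleanly, you can check its status and recent logs (this assumes a systemd-based distro such as Ubuntu, which the apt commands above already imply):
```bash
systemctl status ollama                 # should report "active (running)"
journalctl -u ollama --no-pager -n 20   # last 20 lines of service logs
```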
4. Test Ollama
Pull a model and run it:
```bash
ollama run qwen2.5-coder:1.5b
```
Tested environment: I verified these steps on an AWS EC2 t2.large instance (8GB RAM) with a 20GB EBS volume.
How to Install Ollama on macOS
1. Download the Installer
Download the macOS installer (a .zip file) from the Ollama website, double-click the zip to extract the app, then drag Ollama to your Applications folder and launch it.
2. Launch the Ollama Service
Open Terminal and start Ollama:
```bash
ollama serve
```
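Note that if you launched the Ollama app in step 1, the server is likely already running and `ollama serve` will report that the address is in use. Either way, you can confirm the server is reachable on its default port:
```bash
# Verify the local server is up (11434 is Ollama's default port)
curl http://localhost:11434
# Expected response: "Ollama is running"
```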
3. Run Your First Model
Test the installation with:
```bash
ollama run qwen2.5-coder:1.5b
```
Tested environment: I verified these steps on a base-model M1 MacBook Air with 8GB of RAM.
Getting Started with Ollama
After installation on any OS, explore Ollama’s capabilities:
| Action | Command |
| --- | --- |
| List models installed on your machine | `ollama list` |
| Pull a new model | `ollama pull <model-name>` |
| Remove a model | `ollama rm <model-name>` |
| Run a model | `ollama run <model-name>` |
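Putting those together, a typical first session looks like this (the model name is just the example used earlier; substitute any model from the library):
```bash
ollama pull qwen2.5-coder:1.5b   # download the model
ollama list                      # confirm it shows up locally
ollama run qwen2.5-coder:1.5b    # start an interactive chat
ollama rm qwen2.5-coder:1.5b     # remove it later to reclaim disk space
```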
Conclusion
Installing Ollama on Windows, Ubuntu Linux, or macOS is straightforward when you follow the platform-specific steps. Whether you're experimenting with AI models or integrating them into projects, Ollama's cross-platform compatibility makes it a versatile choice. Start exploring today and unlock the potential of local language models!
FAQs
Does Ollama work on all operating systems?
Yes! Ollama supports Windows, Linux (Ubuntu/Debian), and macOS.
What are the system requirements?
Ensure at least 8GB of RAM (16GB recommended) and 20GB of free storage for models.
How do I exit or terminate an Ollama chat?
Type /bye at the chat prompt or press Ctrl+D.
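Exiting the chat leaves the server running in the background. If you want to stop the server itself, the following sketch covers the setups from this guide (it assumes the systemd service installed by the Linux script):
```bash
# Stop the background server on Linux (systemd service from the install script)
sudo systemctl stop ollama

# On macOS, quit the Ollama menu-bar app, or press Ctrl+C in the terminal
# where `ollama serve` is running.
```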