LOCAL AI

Run AI on your own machine.

No cloud, no subscription, no data leaving your box. A practical guide to which tool fits which scenario, how to install each one, and how GPU/Metal acceleration works.

Comparison

Five popular local AI tools, one table.

Platform support

                 Ollama   vLLM   llama.cpp   LM Studio   MLX
Apple Silicon      ✓       —        ✓           ✓         ✓
CPU                ✓       ✓        ✓           ✓         —
NVIDIA (CUDA)      ✓       ✓        ✓           ✓         —
AMD (ROCm)         ✓       ✓        ✓           ✓         —

(✓ = supported. Backends change quickly; check each project's docs for current status.)
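Whichever tool you pick, most of them expose a local HTTP server, so "no data leaving your box" still works with ordinary HTTP clients. A minimal sketch against Ollama's default endpoint (port 11434; the model name `llama3.2` is an assumption — use whatever you have pulled):

```python
import json
import urllib.request

# Ollama's default local endpoint; requests never leave your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3.2") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.
    stream=False asks for one complete JSON reply instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3.2") -> str:
    """POST the prompt to the locally running Ollama server, return the text."""
    body = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With Ollama running and a model pulled (`ollama pull llama3.2`), `generate("...")` returns the completion fully offline. LM Studio and llama.cpp's `llama-server` expose OpenAI-compatible endpoints instead, so the request body shape differs there.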

Which one should I pick?

Quick routing by scenario.

Personal use on Mac, prototyping → Ollama (easiest) or LM Studio (if you want a GUI)
Max performance on an M-series Mac → MLX: Apple Silicon native, fine-tune capable
Production, concurrent users, multi-GPU → vLLM: PagedAttention + continuous batching
Embedded / limited hardware / your own binary → llama.cpp: single C++ binary, runs anywhere
I don't want to use a terminal → LM Studio: click download, click run
Sensitive data (contracts, health) must stay local → Ollama or llama.cpp: fully offline capable
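The vLLM recommendation hinges on continuous batching: instead of holding the GPU until an entire batch of requests finishes, the scheduler refills a slot the moment any sequence completes. A toy pure-Python simulation (illustrative step counts only, not vLLM's actual scheduler) shows why this raises throughput for concurrent users:

```python
from collections import deque

def static_batching(lengths, batch_size):
    """Fixed batches: each batch occupies the device for as many steps
    as its LONGEST sequence, so short requests sit idle in padding."""
    steps = 0
    for i in range(0, len(lengths), batch_size):
        steps += max(lengths[i:i + batch_size])
    return steps

def continuous_batching(lengths, batch_size):
    """Per-step admission: every step decrements each active sequence
    by one token; finished slots are refilled from the queue at once."""
    queue = deque(lengths)
    active = []
    steps = 0
    while queue or active:
        while queue and len(active) < batch_size:
            active.append(queue.popleft())
        steps += 1
        active = [n - 1 for n in active if n > 1]
    return steps

# Ten requests with mixed output lengths (tokens to generate).
requests = [4, 32, 8, 16, 4, 32, 8, 16, 4, 32]
print(static_batching(requests, batch_size=4))      # steps with padded batches
print(continuous_batching(requests, batch_size=4))  # steps with slot refill
```

With these numbers, continuous batching finishes the same workload in roughly half the steps, because a finished 4-token request immediately frees its slot rather than waiting on a 32-token neighbor. PagedAttention complements this by letting the KV cache grow per sequence in small blocks, so admitted requests do not need worst-case memory reserved up front.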