LlamaBarn is a macOS menu bar app for running local LLMs.
Install with brew install --cask llamabarn or download from Releases.
LlamaBarn runs a local server at http://localhost:2276/v1.
- 12 MB native macOS app
- Models stored in ~/.llamabarn (configurable)

LlamaBarn works with any OpenAI-compatible client.
You can also use the built-in WebUI at http://localhost:2276 while LlamaBarn is running.
# list installed models
curl http://localhost:2276/v1/models
# chat with Gemma 3 4B (assuming it's installed)
curl http://localhost:2276/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "gemma-3-4b", "messages": [{"role": "user", "content": "Hello"}]}'
Replace gemma-3-4b with any model ID from http://localhost:2276/v1/models.
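The chat endpoint also accepts the standard OpenAI `stream` flag, so you can receive tokens as they are generated rather than waiting for the full reply. A minimal sketch, again assuming gemma-3-4b is installed:

```shell
# stream the reply as server-sent events instead of one final JSON object
BODY='{"model": "gemma-3-4b", "stream": true, "messages": [{"role": "user", "content": "Hello"}]}'
curl -N http://localhost:2276/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY"
```

`-N` disables curl's output buffering so chunks print as they arrive.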
See the complete API reference in the llama-server docs.
Expose to network — By default, the server is only accessible from your Mac (localhost). This option allows connections from other devices on your local network. Only enable this if you understand the security risks.
# bind to all interfaces (0.0.0.0)
defaults write app.llamabarn.LlamaBarn exposeToNetwork -bool YES
# or bind to a specific IP (e.g., for Tailscale)
defaults write app.llamabarn.LlamaBarn exposeToNetwork -string "100.x.x.x"
# disable (default)
defaults delete app.llamabarn.LlamaBarn exposeToNetwork
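To see which mode is currently active, you can read the key back with the same defaults domain; if the key is unset, the command errors out, which corresponds to the localhost-only default:

```shell
# prints YES or an IP string; fails if unset (localhost-only default)
defaults read app.llamabarn.LlamaBarn exposeToNetwork
```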
Data updated: March 24, 2026.