Model Context Protocol
Connect your AI client to Vaner.
Start with a uvx launcher snippet (no permanent install required), then configure the model backend and optional guardrails.
1. Connect your client (uvx-first)
Works without installing anything first: pick your client, keep the launcher set to Run via uvx, paste the snippet, and restart your client.
No install required: uvx runs Vaner on demand inside your MCP client.
```shell
claude mcp add --transport stdio --scope user vaner -- uvx --from 'vaner[mcp]' vaner mcp --path .
```

Verify: run `/mcp` inside a Claude Code session; `vaner` should appear with the five scenario tools.
Full guide: docs.vaner.ai/integrations/tools/claude-code-mcp
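Some MCP clients are configured with a JSON file instead of a CLI command. As a sketch, assuming a client that uses the common `mcpServers` layout (the filename and location vary by client), the same stdio server from the snippet above would be registered like this:

```json
{
  "mcpServers": {
    "vaner": {
      "command": "uvx",
      "args": ["--from", "vaner[mcp]", "vaner", "mcp", "--path", "."]
    }
  }
}
```

Check your client's MCP guide for the exact config file path before pasting.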
2. Pick a model backend
Local backends (Ollama, LM Studio, vLLM) run fully offline. Hosted backends need an API key in an environment variable.
Install Ollama and run `ollama pull llama3.2:3b`. Ollama exposes an OpenAI-compatible endpoint on port 11434.
```toml
[backend]
kind = "openai"
base_url = "http://127.0.0.1:11434/v1"
model = "llama3.2:3b"
api_key_env = ""
```
Drop the snippet into `~/.vaner/config.toml`, or run `vaner init --backend-preset <id>`.
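For a hosted backend the same schema applies; only the endpoint and the API-key variable change. A hypothetical example assuming an OpenAI-hosted model (the `model` value and environment-variable name are placeholders to adapt to your provider):

```toml
[backend]
kind = "openai"
base_url = "https://api.openai.com/v1"
model = "gpt-4o-mini"            # placeholder: use the model your provider offers
api_key_env = "OPENAI_API_KEY"   # name of the env var holding your key
```

Export the key in your shell before starting the client, so Vaner can read it from the named variable.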
3. Optional: bound ponder time
If you want Vaner to stop pondering after a fixed wall-clock budget, set a compute preset and, optionally, a per-cycle time cap (`max_cycle_seconds` below).
With `idle_only` enabled, Vaner ponders only while the machine is idle; this is the safe default.
```toml
[compute]
idle_only = true             # ponder only while the machine is idle
cpu_fraction = 0.2           # share of CPU Vaner may use
gpu_memory_fraction = 0.5    # share of GPU memory Vaner may use
exploration_concurrency = 2  # parallel exploration tasks
max_parallel_precompute = 1  # parallel precompute jobs
max_cycle_seconds = 300      # wall-clock budget per ponder cycle
```
Prefer the CLI install?
If you want global CLI commands outside MCP, use the installer below.
```shell
curl -fsSL https://vaner.ai/install.sh | bash
```

Next
- Full guides per client at docs.vaner.ai/integrations/tools.
- Concepts and architecture at docs.vaner.ai/architecture.
- Security posture: How Vaner handles security.
- Full skills-loop walkthrough at docs.vaner.ai/skills.
