
Providers & models

Obsilo supports nine AI providers, from cloud APIs to fully local models. This page walks you through setting up each one.

For all providers: open Settings > Obsilo Agent > Models, click "+ add model", and select your provider.

Cloud providers

Anthropic

What you need: API key from console.anthropic.com
Recommended models: Claude Sonnet 4.6 (best overall), Claude Haiku (fast and cheap)
Embedding: Not available natively. Use OpenAI for embeddings.

Setup:

  1. Create an account at console.anthropic.com
  2. Go to API Keys and create a new key
  3. In Obsilo, select Anthropic as provider, paste the key, and pick a model

Best tool use

Anthropic models are consistently the best at using Obsilo's tools correctly. If quality is your priority, start here.

OpenAI

What you need: API key from platform.openai.com
Recommended models: GPT-4o (balanced), o3 (reasoning), GPT-4o-mini (budget)
Embedding: Native support. text-embedding-3-small recommended.

Setup:

  1. Create an account at platform.openai.com
  2. Go to API Keys and generate a new key
  3. In Obsilo, select OpenAI as provider, paste the key, and pick a model

Embedding models

An OpenAI key also gives you access to embedding models for semantic search. Configure in Settings > Embeddings.
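Under the hood, semantic search works by turning each note and each query into an embedding vector and ranking notes by vector similarity. The sketch below illustrates the idea with tiny hand-written vectors; real embeddings from a model like text-embedding-3-small have hundreds of dimensions, and the file names are made up for illustration.

```python
# Sketch: how semantic search ranks notes by embedding similarity.
# The vectors here are toy stand-ins; in practice they come from an
# embedding model such as text-embedding-3-small.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.9, 0.2]
notes = {
    "meeting-notes.md": [0.1, 0.8, 0.3],  # close to the query in meaning
    "recipe.md": [0.9, 0.1, 0.0],         # unrelated content
}
ranked = sorted(notes, key=lambda n: cosine_similarity(query, notes[n]), reverse=True)
print(ranked[0])  # meeting-notes.md
```

Because similarity is computed locally over stored vectors, only the embedding step itself needs an API call.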

OpenRouter

What you need: API key from openrouter.ai
Recommended models: Any. OpenRouter gives access to 100+ models from multiple providers.
Embedding: Not available

Setup:

  1. Create an account at openrouter.ai
  2. Go to Keys and create a new API key
  3. In Obsilo, select OpenRouter as provider, paste the key
  4. Browse or type any model ID (e.g., anthropic/claude-sonnet-4.6, google/gemini-2.5-pro)
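OpenRouter speaks the OpenAI-compatible chat completions format, with model IDs of the form provider/model. The sketch below shows the shape of a request body only; nothing is sent, and the prompt text is illustrative.

```python
# Sketch: the shape of an OpenRouter chat request. Model IDs take the
# form "<provider>/<model>". This builds the payload only -- no
# request is actually sent.
import json

payload = {
    "model": "anthropic/claude-sonnet-4.6",  # provider/model ID from the catalog
    "messages": [
        {"role": "user", "content": "Summarize my daily note."},
    ],
}
body = json.dumps(payload)
print(body)
```

Switching models is just a matter of changing the model string; the rest of the request stays the same.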

Azure OpenAI

What you need: Azure subscription, a deployed model, API key, and endpoint URL
Recommended models: GPT-4o (deployed in your Azure region)
Embedding: Native support via deployed embedding model

Setup:

  1. Deploy a model in your Azure OpenAI resource
  2. Copy the endpoint URL, API key, and deployment name
  3. In Obsilo, select Azure OpenAI as provider and fill in all three fields
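The three fields combine into the request URL Azure expects: requests go to your endpoint, scoped to a deployment name rather than a model name. The sketch below shows the typical URL pattern; the resource name, deployment name, and api-version value are illustrative, so check your own Azure resource for the correct ones.

```python
# Sketch: how the three Azure OpenAI fields combine into a request
# URL. All three values below are illustrative placeholders.
endpoint = "https://my-resource.openai.azure.com"  # endpoint URL
deployment = "gpt-4o-prod"                          # deployment name
api_version = "2024-02-01"                          # illustrative api-version

url = f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"
print(url)
```

Note that the deployment name, not the underlying model name, appears in the path; that is why Obsilo asks for it separately.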

Enterprise use

Azure OpenAI works well for organizations with compliance requirements. Data stays within your Azure tenant.

Gateway providers

GitHub Copilot

What you need: An active GitHub Copilot subscription (Individual, Business, or Enterprise)
Recommended models: GPT-4o, Claude Sonnet (available through Copilot)
Embedding: Not available

Setup (OAuth device flow):

  1. In Obsilo, select GitHub Copilot as provider
  2. Click "Sign in with GitHub". A device code appears.
  3. Open github.com/login/device in your browser
  4. Enter the code and authorize the app
  5. Obsilo automatically detects your available models
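Behind the scenes, device authorization follows a simple polling pattern: the app repeatedly asks the token endpoint whether you have entered the code yet, waiting between attempts. The sketch below illustrates that loop with a stubbed check function; it is a hypothetical stand-in, not GitHub's real endpoint.

```python
# Sketch of the OAuth device-flow polling pattern used during sign-in.
# `fake_check` is a stub standing in for the real token endpoint;
# the real flow exchanges the device code over HTTPS.
import time

def poll_for_token(check, interval: float = 0.01, max_attempts: int = 10):
    """Poll until the user has authorized the device code."""
    for _ in range(max_attempts):
        token = check()
        if token is not None:
            return token
        time.sleep(interval)  # the server tells the client how long to wait
    raise TimeoutError("user did not authorize in time")

# Stub: authorization "completes" on the third poll.
attempts = {"n": 0}
def fake_check():
    attempts["n"] += 1
    return "ghu_example_token" if attempts["n"] >= 3 else None

token = poll_for_token(fake_check)
print(token)  # ghu_example_token
```

This is why sign-in completes on its own a moment after you authorize in the browser: the next poll succeeds.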

No extra cost

If you already pay for GitHub Copilot, this costs nothing extra. The models are included in your subscription.

Kilo Gateway

What you need: A Kilo Code account with gateway access
Recommended models: Depends on your organization's available models
Embedding: Not available

Setup (device auth, recommended):

  1. In Obsilo, select Kilo Gateway as provider
  2. Click "Sign in". A device code and URL appear.
  3. Open the URL in your browser, enter the code, and authorize
  4. Models are loaded dynamically from your organization

Setup (manual token):

  1. Obtain a gateway token from your Kilo Code admin
  2. In Obsilo, select Kilo Gateway and choose "Manual Token"
  3. Paste the token. Models load automatically.

Local providers

Ollama

What you need: Ollama installed on your machine
Recommended models: Qwen 2.5 7B (balanced), Llama 3.2 (general), Codestral (code)
Embedding: Supported via nomic-embed-text or similar

Setup:

  1. Install Ollama from ollama.ai
  2. Pull a model: ollama pull qwen2.5:7b
  3. In Obsilo, select Ollama as provider. No API key needed.
  4. The model list auto-detects running models
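Auto-detection works because Ollama serves a local HTTP API (port 11434 by default) whose /api/tags endpoint lists installed models. The sketch below parses a hand-written sample of that response rather than making a live call; the sizes are illustrative.

```python
# Sketch: how local model auto-detection can work. Ollama's local API
# lists installed models at GET /api/tags. The response below is a
# hand-written sample, not a live call.
import json

sample_response = json.dumps({
    "models": [
        {"name": "qwen2.5:7b", "size": 4683087332},          # chat model
        {"name": "nomic-embed-text:latest", "size": 274302450},  # embedding model
    ]
})

installed = [m["name"] for m in json.loads(sample_response)["models"]]
print(installed)  # ['qwen2.5:7b', 'nomic-embed-text:latest']
```

Any model you pull with ollama pull shows up in this list, which is why no manual configuration is needed.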

Privacy

With Ollama, no data leaves your machine. Good for sensitive vaults.

LM Studio

What you need: LM Studio installed with a model loaded
Recommended models: Any GGUF model from the built-in catalog
Embedding: Supported for compatible models

Setup:

  1. Install LM Studio from lmstudio.ai
  2. Download a model from the catalog and load it
  3. Start the local server (LM Studio > Developer tab)
  4. In Obsilo, select LM Studio as provider. No API key needed.

Custom endpoint

What you need: Any OpenAI-compatible API endpoint
Recommended models: Depends on the server
Embedding: Depends on the server

Setup:

  1. In Obsilo, select Custom as provider
  2. Enter the base URL (e.g., http://localhost:8080/v1)
  3. Enter an API key if your server requires one
  4. Type the model name exactly as the server expects

This works with any server that implements the OpenAI chat completions API, including vLLM, text-generation-inference, LocalAI, and self-hosted endpoints.
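"OpenAI-compatible" means the server accepts chat completions requests and returns responses in the same JSON shape, so the client can read the reply the same way regardless of backend. The sketch below parses a hand-written sample response; a live server fills in the real values.

```python
# Sketch: the minimal response shape an OpenAI-compatible server
# returns, and how a client reads the reply. The response here is a
# hand-written sample, not a live call.
import json

sample = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "Hello from the local model."}},
    ]
})

# The reply text lives at choices[0].message.content.
reply = json.loads(sample)["choices"][0]["message"]["content"]
print(reply)  # Hello from the local model.
```

If a server produces this shape, Obsilo's Custom provider can talk to it; that is the whole compatibility contract.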

Provider comparison

Provider       | Auth                | Cost        | Privacy           | Embedding | Best for
Anthropic      | API key             | Pay-per-use | Cloud             | No        | Best quality
OpenAI         | API key             | Pay-per-use | Cloud             | Yes       | Structured output, embeddings
OpenRouter     | API key             | Pay-per-use | Cloud             | No        | Model variety
Azure OpenAI   | API key + endpoint  | Enterprise  | Enterprise tenant | Yes       | Compliance
GitHub Copilot | OAuth               | Subscription | Cloud            | No        | Existing subscribers
Kilo Gateway   | Device auth / token | Organization | Cloud            | No        | Team deployments
Ollama         | None                | Free        | Fully local       | Yes       | Privacy, offline
LM Studio      | None                | Free        | Fully local       | Yes       | Visual model browser
Custom         | Varies              | Varies      | Varies            | Varies    | Self-hosted setups