AI Providers
Atomic’s AI features require a configured provider. The provider is used for:
- Embeddings for semantic search, graph links, canvas clustering, and wiki retrieval
- Auto-tagging for new and updated atoms
- Wiki synthesis and wiki proposals
- Chat responses and chat tool use
- Daily briefings
Atomic supports OpenRouter, Ollama, and OpenAI-compatible APIs.
OpenRouter
OpenRouter is the default cloud provider and gives access to many hosted models.
- Create an OpenRouter account.
- Generate an API key.
- In Atomic, go to Settings and select OpenRouter as the provider.
- Paste your API key.
- Choose models for embedding, tagging, wiki, and chat.
OpenRouter uses separate model settings for:
- Embedding - generating vector embeddings for semantic search
- Tagging - extracting tags from note content
- Wiki - synthesizing wiki articles
- Chat - agentic RAG conversations
- Briefings - generated with the wiki model
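If OpenRouter calls fail after setup, you can verify the key outside Atomic with a direct request to OpenRouter's chat completions endpoint. A minimal sketch, assuming your key has access to openai/gpt-4o-mini (the default tagging model); the endpoint and headers are OpenRouter's standard OpenAI-compatible API, not part of Atomic:

```sh
# A 200 response with a completion confirms the key works;
# a 401 means the key is invalid or missing.
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer <your-openrouter-key>" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```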
Ollama
Ollama runs models locally on your machine or on another host you control.
- Install Ollama.
- Pull the models you want to use.
- In Atomic, go to Settings and select Ollama as the provider.
- Confirm the Ollama host. The default is http://127.0.0.1:11434.
- Select embedding and LLM models from the discovered model list.
Example:
```sh
ollama pull nomic-embed-text
ollama pull llama3.2
```
For best results with local models, use an embedding model such as nomic-embed-text and a capable chat model for tagging, wiki, chat, and briefings.
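If the discovered model list stays empty, you can check that Ollama is reachable using its standard model-listing endpoint (this is Ollama's own API, not Atomic's). Run it from the host where Atomic's server runs, not just your laptop:

```sh
# Returns JSON with a "models" array listing every model you have pulled.
curl http://127.0.0.1:11434/api/tags
```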
OpenAI-Compatible APIs
Use the OpenAI-compatible provider for servers that expose OpenAI-style /embeddings and /chat/completions endpoints. This can include hosted gateways and local model servers.
Configure:
- Base URL
- Optional API key
- Embedding model
- LLM model
- Embedding dimension
- Context length
- Timeout
The server has a connection-test endpoint at POST /api/settings/test-openai-compat.
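Before running the connection test, it can help to probe the server directly. A minimal sketch, assuming a local server at http://localhost:8000 reachable without an API key; the base URL and model name are placeholders, and the request shape follows the standard OpenAI embeddings API described above:

```sh
# A successful response includes a "data" array with one embedding; its length
# should match the embedding dimension you enter in Atomic's settings.
curl http://localhost:8000/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "<embedding-model>", "input": "hello"}'
```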
Defaults
Fresh databases seed these defaults:
| Setting | Default |
|---|---|
| provider | openrouter |
| embedding_model | openai/text-embedding-3-small |
| tagging_model | openai/gpt-4o-mini |
| wiki_model | anthropic/claude-sonnet-4.6 |
| chat_model | anthropic/claude-sonnet-4.6 |
| ollama_host | http://127.0.0.1:11434 |
| ollama_embedding_model | nomic-embed-text |
| ollama_llm_model | llama3.2 |
| auto_tagging_enabled | true |
Changing Embedding Models
Changing the embedding provider, model, or dimensions can require re-embedding existing atoms so that semantic search and graph edges stay consistent. Use the UI controls where available, or call:
```sh
curl -X POST http://localhost:8080/api/embeddings/reembed-all \
  -H "Authorization: Bearer <token>"
```
Check progress with:
```sh
curl http://localhost:8080/api/embeddings/status \
  -H "Authorization: Bearer <token>"
```
Troubleshooting
- If semantic or hybrid search returns few results, check that embeddings are complete in /api/embeddings/status.
- If Ollama models do not appear, verify Ollama is running and reachable from the server host, not just your laptop.
- If local models fail during tagging or wiki generation, increase the context length or switch to a more capable model.
- If OpenRouter calls fail, verify the API key and selected model IDs in Settings.