AI Providers

Atomic’s AI features require a configured provider. The provider is used for:

  • Embeddings for semantic search, graph links, canvas clustering, and wiki retrieval
  • Auto-tagging for new and updated atoms
  • Wiki synthesis and wiki proposals
  • Chat responses and chat tool use
  • Daily briefings

Atomic supports OpenRouter, Ollama, and OpenAI-compatible APIs.

OpenRouter is the default cloud provider and gives access to many hosted models.

  1. Create an OpenRouter account.
  2. Generate an API key.
  3. In Atomic, go to Settings and select OpenRouter as the provider.
  4. Paste your API key.
  5. Choose models for embedding, tagging, wiki, and chat.
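
If calls fail later, you can sanity-check the key outside Atomic with a direct request to OpenRouter's OpenAI-style chat completions endpoint. This is only a quick check; the model ID below is one of the defaults listed later on this page, so substitute whichever model you selected:

Terminal window
# Should return a normal chat completion if the key and model ID are valid
curl https://openrouter.ai/api/v1/chat/completions \
-H "Authorization: Bearer <your-openrouter-key>" \
-H "Content-Type: application/json" \
-d '{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'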

OpenRouter uses separate model settings for:

  • Embedding - generating vector embeddings for semantic search
  • Tagging - extracting tags from note content
  • Wiki - synthesizing wiki articles
  • Chat - agentic RAG conversations
  • Briefings - generated with the wiki model

Ollama runs models locally on your machine or on another host you control.

  1. Install Ollama.
  2. Pull the models you want to use.
  3. In Atomic, go to Settings and select Ollama as the provider.
  4. Confirm the Ollama host. The default is http://127.0.0.1:11434.
  5. Select embedding and LLM models from the discovered model list.

Example:

Terminal window
ollama pull nomic-embed-text
ollama pull llama3.2

For best results with local models, use an embedding model such as nomic-embed-text and a capable chat model for tagging, wiki, chat, and briefings.
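
To confirm the host is reachable and see which models Atomic can discover, you can query Ollama's tags endpoint directly. Adjust the host if Ollama runs on another machine:

Terminal window
# Lists the models pulled on this Ollama host
curl http://127.0.0.1:11434/api/tags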

Use the OpenAI-compatible provider for servers that expose OpenAI-style /embeddings and /chat/completions endpoints. This can include hosted gateways and local model servers.

Configure:

  • Base URL
  • Optional API key
  • Embedding model
  • LLM model
  • Embedding dimension
  • Context length
  • Timeout
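
Before pointing Atomic at the server, it can help to confirm the server itself answers OpenAI-style requests. A minimal sketch, assuming a local server at http://localhost:8000/v1 and a model named my-model (both placeholders for your own values):

Terminal window
# Direct check of the upstream server's chat completions endpoint
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <optional-api-key>" \
-d '{"model": "my-model", "messages": [{"role": "user", "content": "ping"}]}'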

The server has a connection-test endpoint at POST /api/settings/test-openai-compat.
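
For example, assuming the Atomic server at http://localhost:8080 and a JSON body carrying the fields listed above (the exact field names below are assumptions for illustration, not a documented schema):

Terminal window
# Hypothetical payload; field names mirror the settings listed above
curl -X POST http://localhost:8080/api/settings/test-openai-compat \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"base_url": "http://localhost:8000/v1", "api_key": "", "embedding_model": "nomic-embed-text", "llm_model": "my-model"}'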

Fresh databases seed these defaults:

Setting                    Default
provider                   openrouter
embedding_model            openai/text-embedding-3-small
tagging_model              openai/gpt-4o-mini
wiki_model                 anthropic/claude-sonnet-4.6
chat_model                 anthropic/claude-sonnet-4.6
ollama_host                http://127.0.0.1:11434
ollama_embedding_model     nomic-embed-text
ollama_llm_model           llama3.2
auto_tagging_enabled       true

Changing the embedding provider, model, or dimension can require re-embedding existing atoms so that semantic search and graph edges stay consistent. Use the UI controls where available, or call:

Terminal window
curl -X POST http://localhost:8080/api/embeddings/reembed-all \
-H "Authorization: Bearer <token>"

Check progress with:

Terminal window
curl http://localhost:8080/api/embeddings/status \
-H "Authorization: Bearer <token>"

Troubleshooting

  • If semantic or hybrid search returns few results, check /api/embeddings/status to confirm embedding is complete.
  • If Ollama models do not appear, verify that Ollama is running and reachable from the server host, not just your laptop.
  • If local models fail during tagging or wiki generation, increase the context length or use a stronger LLM model.
  • If OpenRouter calls fail, verify the API key and the selected model IDs in Settings.