Chat
Chat is an agentic RAG system that lets you ask questions and get answers grounded in your Atomic knowledge base.
How It Works
The chat agent has tools to search your atoms during a conversation. When you ask a question:
- The agent decides whether to search your notes.
- It formulates search queries and retrieves relevant chunks.
- It synthesizes an answer grounded in retrieved content.
- Responses stream back in real time over WebSocket events.
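The steps above can be sketched as a simple retrieval loop. Everything here is illustrative: `ToyLLM`, `run_turn`, and the in-memory atom store are hypothetical stand-ins, not Atomic's actual API; the real agent calls an LLM provider with tool definitions.

```python
def run_turn(question, llm, search_atoms):
    """One chat turn: decide whether to search, retrieve, synthesize."""
    chunks = []
    if llm.wants_search(question):           # step 1: decide whether to search
        query = llm.make_query(question)     # step 2a: formulate a search query
        chunks = search_atoms(query)         # step 2b: retrieve relevant chunks
    return llm.synthesize(question, chunks)  # step 3: answer grounded in chunks


class ToyLLM:
    """Stand-in for the real model, for illustration only."""
    def wants_search(self, q):
        return "?" in q
    def make_query(self, q):
        return q.rstrip("?").lower()
    def synthesize(self, q, chunks):
        return " ".join(chunks) if chunks else "I don't know."


atoms = {"what is atomic": ["Atomic is a knowledge base."]}
answer = run_turn("What is Atomic?", ToyLLM(), lambda q: atoms.get(q, []))
```

In the real system the "decide" and "formulate" steps are the model choosing to invoke its search tool, not hard-coded heuristics.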
Chat can emit tool-start and tool-complete events, citations, and canvas actions. The REST call that sends a message returns the final assistant message, while the UI updates from streaming events as the model responds.
Scoped Conversations
Conversations can be scoped to specific tags. When scoped, the agent only searches atoms under those tags, giving you focused answers about a particular topic.
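Conceptually, scoping is a filter applied before results reach the agent. A minimal sketch, assuming each atom carries a list of tags (the data shape and `scoped_search` helper are assumptions; the real filter runs inside Atomic's search backend):

```python
def scoped_search(atoms, query, scope_tags=None):
    """Return atoms matching `query`, restricted to `scope_tags` if set."""
    hits = [a for a in atoms if query.lower() in a["text"].lower()]
    if scope_tags:
        # Keep only atoms that share at least one tag with the scope.
        hits = [a for a in hits if set(a["tags"]) & set(scope_tags)]
    return hits


atoms = [
    {"text": "Rust borrow checker notes", "tags": ["rust"]},
    {"text": "Gardening notes", "tags": ["garden"]},
]
hits = scoped_search(atoms, "notes", scope_tags=["rust"])
```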
Conversations
Chat conversations are persisted. You can revisit previous conversations and continue where you left off. Each conversation tracks messages and scoped tags.
Conversations can also be renamed, archived, or deleted through the API/UI.
API and Events
The primary endpoints are:
- POST /api/conversations
- GET /api/conversations
- GET /api/conversations/{id}
- PUT /api/conversations/{id}
- DELETE /api/conversations/{id}
- PUT /api/conversations/{id}/scope
- POST /api/conversations/{id}/messages
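A routing table makes the endpoint shapes concrete. This is a sketch only: the base URL, bearer auth, and request/response payloads are not documented here, so nothing below beyond the paths and methods should be taken as Atomic's contract.

```python
BASE = "https://example.invalid/api"  # placeholder host, not a real deployment

def endpoint(action, conv_id=None):
    """Map a conversation action to its (HTTP method, URL) pair."""
    routes = {
        "create":  ("POST",   f"{BASE}/conversations"),
        "list":    ("GET",    f"{BASE}/conversations"),
        "get":     ("GET",    f"{BASE}/conversations/{conv_id}"),
        "update":  ("PUT",    f"{BASE}/conversations/{conv_id}"),
        "delete":  ("DELETE", f"{BASE}/conversations/{conv_id}"),
        "scope":   ("PUT",    f"{BASE}/conversations/{conv_id}/scope"),
        "message": ("POST",   f"{BASE}/conversations/{conv_id}/messages"),
    }
    return routes[action]


method, url = endpoint("scope", conv_id="abc123")
```

Note that sending a message is a POST to the messages sub-resource, while scoping is a PUT that replaces the conversation's tag scope.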
Streaming event names exposed to the frontend include chat-stream-delta, chat-tool-start, chat-tool-complete, chat-complete, chat-canvas-action, and chat-error.
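A frontend typically folds these events into UI state as they arrive. The sketch below accumulates stream deltas into the assistant text and notes tool activity; the event payload shapes (`delta`, `tool`, `message` fields) are assumptions, not Atomic's documented schema.

```python
def reduce_events(events):
    """Fold a stream of chat events into (final text, tools used)."""
    text, tools = "", []
    for ev in events:
        if ev["type"] == "chat-stream-delta":
            text += ev["delta"]           # append streamed token(s)
        elif ev["type"] == "chat-tool-start":
            tools.append(ev["tool"])      # record which tool is running
        elif ev["type"] == "chat-error":
            raise RuntimeError(ev.get("message", "chat failed"))
        elif ev["type"] == "chat-complete":
            break                         # stream finished; text is final
    return text, tools


text, tools = reduce_events([
    {"type": "chat-tool-start", "tool": "search_atoms"},
    {"type": "chat-tool-complete", "tool": "search_atoms"},
    {"type": "chat-stream-delta", "delta": "Hello"},
    {"type": "chat-stream-delta", "delta": " world"},
    {"type": "chat-complete"},
])
```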
Provider Notes
Chat requires an LLM provider and model that can handle the conversation and tool-use workload. If chat fails but embeddings work, check the chat model setting separately from the embedding model.
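The chat-versus-embedding distinction can be checked directly in configuration. The key names below (`chat_model`, `embedding_model`) are hypothetical; consult your actual settings for the real keys.

```python
settings = {
    "embedding_model": "some-embedding-model",  # embeddings work...
    "chat_model": None,                         # ...but chat is unset
}

def diagnose(cfg):
    """Distinguish a missing chat model from a fully configured setup."""
    if cfg.get("embedding_model") and not cfg.get("chat_model"):
        return "chat_model is not configured; embeddings are unaffected"
    return "both models configured"


msg = diagnose(settings)
```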