Smart Chat API integration
Smart Chat is a full-screen chat interface inside Obsidian that lets you talk to both:
- local models (ex: Ollama)
- cloud APIs (ex: OpenAI, Anthropic, Gemini, OpenRouter)
...without changing tools or breaking your workflow.
The "API models" feature is what makes Smart Chat feel like a power-user cockpit: you configure the providers you care about, then switch models per task while keeping context, history, and control in one place.
Who this helps
You will get immediate value if any of these describe you:
- You run local models for privacy/offline work, but still want cloud models for harder tasks.
- You use multiple providers and want one consistent UI to compare outputs.
- You want vault-grounded answers (notes as context), not generic chat.
- You are managing token budgets and want to actively control what the model sees.
- You want one place to search past threads and reuse the best prompts.
This document covers using the Smart Chat API integration. Looking for more details about specific settings? See the chat settings page.
Open Smart Chat
You can launch Smart Chat from:
- the ribbon icon, or
- the command palette (recommended), or
- an optional hotkey (once you start using it daily)
Click the Smart Chat icon or open the command palette and run the Open Smart Chat command.
UI tour (what you are looking at)

Key pieces:
- Thread name: the current chat thread.
- Custom instructions: edit the system prompt for this thread. Inherits the global system prompt by default.
- Character and token estimate: quick feedback before you send.
- Context controls:
  - Add context (manual selection)
  - Lookup context (retrieval)
- Composer: type your message; use @ to attach context quickly.
- Status + model indicator: shows readiness and the active model.
Quick start
Goal: ask a question that stays grounded in your vault, while using the best model for the job.
- Open Smart Chat.
- Add context using one of these:
  - drag notes/files in, or
  - click Lookup context, or
  - click Add context to select manually
- Ask a vault-grounded question (examples below).
- If the thread drifts, exclude irrelevant message pairs so they do not contaminate the next turn.
- Use history search to return to good threads.
Example prompts that work well:
- "Based on the attached notes, summarize the current state and list the next 5 actions."
- "Extract constraints from context and propose a plan that satisfies every constraint."
- "Find contradictions between these notes and list what needs clarification."
Custom instructions (thread-level system prompt)
Use Edit custom instructions to set the system prompt for the thread.
This is where you put stable rules like:
- "Use only the attached context when answering."
- "If something is missing, ask the minimum questions needed."
- "Return outputs as checklists."
- "Always cite note titles/sections when making claims."
Treat this as "how this thread should behave", while your message is "what I want right now".
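As a rough sketch of how thread-level instructions can layer on top of the global system prompt when a request is assembled (the names below, like `GLOBAL_SYSTEM_PROMPT` and `buildMessages`, are hypothetical, not Smart Chat internals):

```typescript
// Illustrative layering of a thread-level instruction on top of a global
// system prompt. GLOBAL_SYSTEM_PROMPT and buildMessages are hypothetical names.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const GLOBAL_SYSTEM_PROMPT = "You are a helpful assistant grounded in the user's vault.";

function buildMessages(threadInstructions: string | null, history: ChatMessage[]): ChatMessage[] {
  // Thread instructions extend (rather than replace) the global prompt,
  // mirroring "inherits the global system prompt by default".
  const system = threadInstructions
    ? `${GLOBAL_SYSTEM_PROMPT}\n\n${threadInstructions}`
    : GLOBAL_SYSTEM_PROMPT;
  return [{ role: "system", content: system }, ...history];
}
```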
Context is first-class
Smart Chat is designed around a simple truth:
the quality of the answer is mostly about the quality of the context.
You have three fast ways to attach context.
Option 1: Select context manually (high precision)
Use Add context when you already know what matters.
This pairs naturally with the Smart Context Builder:
- build a reusable named context (project working set, constraints pack, meeting continuity, etc.)
- then use it repeatedly in Smart Chat
Related: Smart Context Builder
Option 2: Drag notes/files into the conversation (fastest)

Drag notes or files into the chat to attach them as context.
Use this when you are moving quickly and you already have the right items in view.
Option 3: Lookup context (retrieval / "find the right notes for me")

Use Lookup context when you have a question, but you are not sure which notes contain the answer.
A good mental model:
- you write the question
- Smart Chat retrieves likely-relevant notes/blocks
- you review the context before sending (so you stay in control)
Lookup is ideal for "I know I wrote about this" moments.
Manual selection is ideal for "I know exactly what should count as ground truth."
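For intuition, retrieval of this kind typically ranks notes or blocks by embedding similarity to your question and surfaces the top matches for you to review. The sketch below shows that general pattern only; the `Note` shape, the 0.5 cutoff, and the assumption of precomputed embeddings are illustrative, not Smart Chat's actual implementation.

```typescript
// General retrieval pattern (illustrative only): score candidate notes by
// cosine similarity to the question's embedding and keep the top matches for
// the user to review before sending. The 0.5 cutoff is an arbitrary example.
type Note = { path: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function lookupContext(queryEmbedding: number[], notes: Note[], topK = 5): Note[] {
  return notes
    .map((note) => ({ note, score: cosine(queryEmbedding, note.embedding) }))
    .filter((r) => r.score > 0.5)           // drop weak matches
    .sort((a, b) => b.score - a.score)      // best matches first
    .slice(0, topK)
    .map((r) => r.note);                    // user reviews these before sending
}
```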
Preventing ungrounded messages (no-context warning)
If you try to send a message without any context attached, Smart Chat can warn you and offer three paths:

- Lookup context: retrieve relevant notes automatically
- Select context: open the selector and choose items manually
- Continue without context: intentionally run as general chat
This small friction is deliberate: most "AI got it wrong" problems start with "AI did not have the right context".
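One way to picture the guard (purely illustrative; the names and shapes are assumptions, not plugin source):

```typescript
// Illustrative pre-send guard: if no context is attached, surface the three
// options instead of sending. Names and shapes here are hypothetical.
type NoContextChoice = "lookup" | "select" | "continue";

async function confirmSend(
  attachedContext: string[],
  askUser: () => Promise<NoContextChoice>,
): Promise<"send" | NoContextChoice> {
  if (attachedContext.length > 0) return "send"; // grounded: send as-is
  return askUser();                               // warn and offer the three paths
}
```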
Thread list and history search
Smart Chat keeps your work in threads so you can return to:
- strong prompts
- good answers
- ongoing projects

Use history search to find threads by:
- thread name, or
- content inside messages (when you remember what you said, not what the thread was called)
Control what gets included next (exclude/include message pairs)
As a thread grows, the fastest way to lose quality is to keep dragging old, irrelevant turns forward.
Smart Chat gives you per-turn control:

Exclude a message pair (user message + assistant response) to keep it from being included as context for future messages.
Use this when:
- early turns were exploratory and wrong
- you changed your mind about the goal
- there is a detour (brainstorming, tangents, debugging noise)
- you are tightening a token budget
- you want the model to stop inheriting a bad framing
A simple rule:
- keep decisions, constraints, and ground truth
- exclude greetings, false starts, and discarded drafts
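Conceptually, excluding a pair just means it is skipped when the next request is assembled; pairing that with a token budget looks roughly like the sketch below. The `MessagePair` shape, the `estimateTokens` heuristic, and the budget logic are assumptions, not plugin internals.

```typescript
// Illustrative: build the history for the next turn by skipping excluded
// pairs and trimming to a token budget, newest turns first.
type MessagePair = { user: string; assistant: string; excluded: boolean };

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // rough heuristic: ~4 characters per token
}

function historyForNextTurn(pairs: MessagePair[], tokenBudget: number): MessagePair[] {
  const kept: MessagePair[] = [];
  let used = 0;
  // Walk from newest to oldest so recent, relevant turns win under the budget.
  for (const pair of [...pairs].reverse()) {
    if (pair.excluded) continue;
    const cost = estimateTokens(pair.user) + estimateTokens(pair.assistant);
    if (used + cost > tokenBudget) break;
    kept.unshift(pair);
    used += cost;
  }
  return kept;
}
```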
Errors and retries (provider reality, surfaced in the UI)
Smart Chat is designed to surface errors inline so you can recover quickly.
Common causes:
- missing/invalid API key for a provider
- model not available to your account or plan
- rate limits / quota exhaustion
- local model server not running (for local providers)
When debugging, the fastest sequence is:
- confirm the selected model/provider
- retry once (transient errors happen)
- check provider credentials and availability
- switch models if necessary to keep moving
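That sequence maps naturally onto a retry-then-fallback pattern. The sketch below is illustrative only: `sendChat`, the model parameters, and the error handling are assumptions, not the plugin's API.

```typescript
// Illustrative retry-then-fallback flow for provider errors: try the primary
// model, retry once for transient failures, then switch models to keep moving.
async function sendWithFallback(
  sendChat: (model: string, prompt: string) => Promise<string>,
  prompt: string,
  primaryModel: string,
  fallbackModel: string,
): Promise<string> {
  try {
    return await sendChat(primaryModel, prompt);
  } catch {
    // Retry once: transient errors (rate limits, brief outages) often clear quickly.
    try {
      return await sendChat(primaryModel, prompt);
    } catch {
      // Still failing: check credentials/availability, or fall back to another model.
      return await sendChat(fallbackModel, prompt);
    }
  }
}
```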