
Smart Chat API integration

Smart Chat is a full-screen chat workspace inside Obsidian that lets you use local models and cloud APIs in one consistent UI, without breaking your note workflow.

It is designed for power users who want:

TL;DR
  1. Pick a model
  2. Attach context
  3. Ask for an outcome
  4. Keep the thread clean (exclude noise, reuse good prompts)

Who this helps

You will get immediate value if any of these are true:


Open Smart Chat

Launch from:


Quick start (2 minutes)

Goal: ask a grounded question using the best model for the job.

  1. Open Smart Chat.
  2. Attach context using one of:
    • Lookup context (retrieve relevant notes automatically)
    • Add context (manual selection)
    • Drag and drop notes/files into the chat
  3. Ask for an outcome.
  4. If the thread drifts, exclude irrelevant message pairs so they do not contaminate the next turn.
  5. Use history search to return to strong threads.

Example prompts that work well:


UI tour (what you are looking at)

Key parts of the Smart Chat UI:


Configure your API key

To connect a provider (OpenAI, Anthropic, Gemini, OpenRouter, etc.):

  1. In Smart Chat, click the gear icon to open Chat settings.
  2. Choose the provider and paste your API key.
  3. Save, then select the model from the Smart Chat header.

Tip

If a model fails, switch providers/models and keep moving.
Smart Chat is built for "fallback without losing your thread."
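The settings flow boils down to three ingredients: a provider, a key, and a model. One way to picture "fallback without losing your thread" is as an ordered list of provider/model pairs plus a request builder. This is a hypothetical sketch; `FALLBACK_ORDER`, `build_request`, and `next_fallback` are illustrative names, not Smart Chat's real internals, and the payload shape assumes an OpenAI-style chat API.

```python
# Illustrative only: an ordered fallback list of (provider, model) pairs.
FALLBACK_ORDER = [
    ("openai", "gpt-4o"),
    ("anthropic", "claude-3-5-sonnet"),
    ("openrouter", "llama-3-70b"),
]

def build_request(provider, model, api_key, messages):
    """Assemble a provider request without losing the thread's messages."""
    return {
        "provider": provider,
        "model": model,
        "headers": {"Authorization": f"Bearer {api_key}"},
        "body": {"model": model, "messages": messages},
    }

def next_fallback(current):
    """Return the next (provider, model) pair to try, or None if exhausted."""
    i = FALLBACK_ORDER.index(current)
    return FALLBACK_ORDER[i + 1] if i + 1 < len(FALLBACK_ORDER) else None
```

The key point: the `messages` list (your thread) is independent of the provider, so switching models never discards the conversation.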


Custom instructions (thread-level system prompt)

Use Edit custom instructions to set the system prompt for the current thread.

This is where you put stable rules like:

Treat this as:


Context is first-class

Smart Chat is built around a simple truth:
the quality of the output mostly depends on the quality of the context.

You have three fast ways to attach context.

Option 1: Select context manually (high precision)

Use Add context when you already know what matters.

This pairs well with Smart Context Builder:

Related: Smart Context Builder

Option 2: Drag notes/files into the conversation (fastest)

Drag notes or files into the chat to attach them as context.

Use this when you are moving quickly and you already have the right items in view.

Option 3: Lookup context (retrieval / "find the right notes for me")

Use Lookup context when you have a question, but you are not sure which notes contain the answer.

Mental model:
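One way to hold the mental model is as similarity search: embed the question, rank notes by closeness, keep the top matches. This is a deliberately tiny sketch with made-up note vectors; Smart Chat's actual retrieval pipeline may differ.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def lookup_context(query_vec, notes, top_k=2):
    """Rank notes by similarity to the query and keep the best matches."""
    ranked = sorted(notes, key=lambda n: cosine(query_vec, n["vec"]), reverse=True)
    return [n["title"] for n in ranked[:top_k]]
```

The takeaway for usage: you supply the question, retrieval supplies the candidate notes, and you still review what got attached before trusting the answer.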


Preventing ungrounded messages (no-context warning)

If you try to send a message without any context attached, Smart Chat can warn you and offer three paths:

This friction is intentional: most "AI got it wrong" problems start with "AI did not have the right context".
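A pre-send guard of this kind can be sketched in a few lines. This is purely illustrative (`can_send` and its flag are hypothetical names), and it does not reproduce the specific options the Smart Chat warning offers:

```python
def can_send(message, context_items, allow_without_context=False):
    """Return (ok, reason); block ungrounded sends unless explicitly allowed."""
    if context_items or allow_without_context:
        return True, ""
    return False, "no context attached"
```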

Thread list and history search

Smart Chat keeps your work in threads so you can return to:

History search helps you find threads by:


Control what gets included next (exclude/include message pairs)

As a thread grows, the fastest way to lose quality is to keep dragging old, irrelevant turns forward.

Smart Chat gives you per-turn control:

Exclude a message pair (user message + assistant response) to keep it from being included as context for future messages.

Use this when:

Simple rule:
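Mechanically, excluding a pair can be pictured as filtering the transcript before the next request is assembled: excluded turns stay visible in the thread but never reach the model again. A hypothetical sketch (the field names are illustrative):

```python
def build_context(thread, new_message):
    """Flatten included pairs into a message list, then append the new turn."""
    messages = []
    for pair in thread:
        if pair.get("excluded"):
            continue  # excluded pairs stay in history but are not sent
        messages.append({"role": "user", "content": pair["user"]})
        messages.append({"role": "assistant", "content": pair["assistant"]})
    messages.append({"role": "user", "content": new_message})
    return messages
```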


Errors and retries (provider reality, surfaced in the UI)

Common causes:

Fast recovery sequence:

  1. Confirm the selected model and provider.
  2. Retry once (transient errors happen).
  3. Check provider credentials and availability.
  4. Switch models if necessary to keep moving.
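The recovery sequence above amounts to "retry once, then fall back." A minimal sketch, assuming each provider error surfaces as an exception (`send_with_recovery` and the use of `RuntimeError` as a stand-in are assumptions, not Smart Chat's actual error handling):

```python
def send_with_recovery(models, send):
    """Try each model up to twice; return (model, reply) on first success."""
    last_error = None
    for model in models:
        for _attempt in range(2):  # original try + one retry for transient errors
            try:
                return model, send(model)
            except RuntimeError as err:  # stand-in for any provider error
                last_error = err
    raise RuntimeError(f"all models failed: {last_error}")
```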

Related pages