Chat with your notes in Obsidian

The biggest problem with AI in your workflow is not the model.
It is that the conversation gets separated from the work.

Smart Chat keeps AI threads attached to the notes they reference so you can pick up cleanly tomorrow.

What changes when chat lives in your vault

When chat is tied to the note, the context, the conversation, and the output stay in one place instead of scattering across browser tabs.

This is the difference between "one helpful answer" and a reusable workflow.


Two ways to use Smart Chat

Option A: Chat codeblocks in notes

Use this when you want the simplest possible workflow.

  1. Add a chat codeblock to the note where the thread belongs.
  2. Open the thread in your preferred web UI.
  3. Smart Chat stores the thread link in the note so you can resume later.
  4. Mark the thread done when it is complete.
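
In a note, this can be as simple as a fenced block next to the content it supports. The fence name and fields below are illustrative assumptions, not the plugin's documented syntax — check Smart Chat's docs for the exact format your version expects:

````markdown
## Launch announcement

Draft notes, decisions, and open questions live here.

```smart-chat
<!-- Hypothetical fields: Smart Chat stores the thread link
     in the note so you can resume the conversation later. -->
thread: …
status: open
```
````

Because the block sits in the note itself, resuming tomorrow means opening the note, not hunting through a chat history sidebar.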

Use it for any thread that belongs with a specific note: drafts, meeting follow-ups, research questions.

Option B: Smart Chat Pro workspace

Use this when you want vault-aware context, local or API models, and long-running threads without tab sprawl.

With Pro, chat runs inside Obsidian: it can retrieve relevant vault context automatically, run against local or API models, and keep long threads organized in one workspace.


The workflow in 3 minutes

  1. Install Smart Chat.
  2. In Obsidian, open Smart Chat and pick your model:
    • a local engine like Ollama, or
    • an API provider such as OpenAI, Anthropic, or Gemini.
  3. Ask a question that references your own notes, like:
    • "Summarize what I wrote about it last week."
  4. Click "Retrieve more" if you want additional matches.
  5. Review the suggested context and send.
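
If you take the local-engine route in step 2, a quick terminal check that Ollama is ready can save a confusing first chat. These are standard Ollama CLI commands; the model name is just an example:

```shell
# Pull a model to run locally (any model Ollama hosts works):
ollama pull llama3.1

# Start the local server (Ollama listens on http://localhost:11434
# by default; on most desktop installs it already runs in the background):
ollama serve

# Confirm the model is available to serve:
ollama list
```

Once `ollama list` shows your model, it should appear as a selectable local engine in Smart Chat's settings.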

Optional but high leverage

Turn on "Review context" so you approve what gets sent before the model responds.


A few workflows that fit how people actually use Obsidian

Draft with receipts

Ask for a draft and have Smart Chat pull supporting notes so you are not rewriting context from memory.

Meeting follow-up that stays connected

Keep the meeting note, the decisions, and the AI-generated follow-up email in the same place.

Weekly planning with your real constraints

Ask for a plan that references your existing notes instead of generic advice.


FAQ

Do I need an API key?

Not to start. Free chat codeblocks can use the web UIs you already have.
If you connect a model through Smart Environment, API providers typically require their own key; local engines like Ollama do not.

Can I use local models?

Yes. You can choose a local engine like Ollama in settings.

Can I see what gets sent to the model?

Yes. Pro supports context review, so you can approve the retrieved context before generating a response.

How is this different from pasting notes into ChatGPT?

You do not have to rebuild context from scratch every time.
Threads stay attached to the note, and Pro can pull vault context automatically.


Next step

If you want a clean workflow today, start with chat codeblocks in your notes (Option A).

If you want vault-aware chat and a dedicated workspace, use Smart Chat Pro (Option B).