Why local embeddings
Embeddings are how semantic search works.
Local embeddings are how semantic search stays private.
The simple data flow
- Your notes stay on your device.
- An embedding model runs locally to create vectors.
- Those vectors are stored locally as an index.
- A search embeds your query with the same local model, then compares that vector against the index.
- Results are returned without uploading your vault.
No copy of your notes needs to leave your machine for the core workflow to work.
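
As a concrete sketch of that flow, here is a minimal example using the sentence-transformers library with a small CPU-friendly model (both illustrative choices, not requirements); the `notes/` folder and `index.npz` file are hypothetical names for your vault and your local index:

```python
# Minimal local-embedding sketch. Assumes: pip install sentence-transformers numpy
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

NOTES_DIR = Path("notes")       # hypothetical folder of markdown notes
INDEX_FILE = Path("index.npz")  # the local index: vectors plus note paths

# A small model that runs on CPU; weights download once, then inference is offline.
model = SentenceTransformer("all-MiniLM-L6-v2")

paths = sorted(NOTES_DIR.glob("**/*.md"))
texts = [p.read_text(encoding="utf-8") for p in paths]

# Embed every note locally; no note text is sent anywhere.
vectors = model.encode(texts, normalize_embeddings=True)

# Persist the index on disk, next to the notes it describes.
np.savez(INDEX_FILE, vectors=vectors, paths=np.array([str(p) for p in paths]))
```

The index is just a file of vectors and paths, which is also what makes it portable: it moves with your notes.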
Why this changes behavior
When people trust the boundary, they capture more.
When they capture more, retrieval gets better.
Better retrieval makes AI workflows more accurate.
That is a compounding loop.
The honest trade
Local embeddings can cost:
- setup time (first index)
- compute (depends on model and vault size)
But the benefits are structural:
- privacy by default
- portability
- fewer surprises when a provider changes its policies
Next step
Turn on local embeddings for one project folder.
Run three semantic queries and compare the results to plain keyword search.
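
A minimal sketch of that comparison, reusing the hypothetical `index.npz` from the earlier example (the three queries and the naive keyword baseline are illustrative):

```python
# Query-side sketch: semantic search vs. a naive keyword baseline.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
data = np.load("index.npz")
vectors, paths = data["vectors"], data["paths"]

def semantic_search(query: str, k: int = 3) -> list[str]:
    # Embed the query locally; with normalized vectors, dot product = cosine similarity.
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q
    return [str(paths[i]) for i in np.argsort(scores)[::-1][:k]]

def keyword_search(query: str, k: int = 3) -> list[str]:
    # Baseline: rank notes by how many query words they literally contain.
    words = query.lower().split()
    def hits(path: str) -> int:
        text = open(path, encoding="utf-8").read().lower()
        return sum(w in text for w in words)
    ranked = sorted(paths, key=lambda p: hits(str(p)), reverse=True)
    return [str(p) for p in ranked[:k]]

for query in ["ideas about pricing", "notes on onboarding friction", "that book about habits"]:
    print(query, "→ semantic:", semantic_search(query), "| keyword:", keyword_search(query))
```

Queries like these are where the difference shows: keyword search needs the exact words, while the semantic index can surface a note that never mentions them.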