Project Memory
A persistent knowledge graph that gives the AI deep context about your project across sessions.
Project Memory is a persistent knowledge store tied to each project. It gives the AI context that survives across conversations — so you do not need to re-explain your project, your conventions, or your progress every time you open a new chat.
Memory is stored as a bi-temporal knowledge graph in SQLite, embedded alongside your project data.
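Omnilib's actual schema is internal, but a bi-temporal store can be sketched in SQLite as a table whose rows carry two time axes: when a fact was true in the project (valid time) and when the system recorded it (transaction time). Everything below, including the table and column names, is a hypothetical illustration, not Omnilib's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE memory_entries (
    id            INTEGER PRIMARY KEY,
    section       TEXT NOT NULL,   -- e.g. 'overview', 'recent_context'
    content       TEXT NOT NULL,
    valid_from    TEXT NOT NULL,   -- when the fact became true in the project
    valid_to      TEXT,            -- NULL = still true
    recorded_at   TEXT NOT NULL,   -- when the system learned it
    superseded_at TEXT             -- NULL = current belief
)
""")
conn.execute(
    "INSERT INTO memory_entries (section, content, valid_from, recorded_at) "
    "VALUES ('overview', 'Project targets SQLite backends', "
    "'2024-01-01', '2024-01-02')"
)

# The "current belief" is just the rows that have not been superseded;
# corrected facts are superseded rather than deleted, preserving history.
rows = conn.execute(
    "SELECT section, content FROM memory_entries WHERE superseded_at IS NULL"
).fetchall()
```

Keeping superseded rows instead of deleting them is what makes the store bi-temporal: the system can answer both "what is true now" and "what did we believe at the time".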
What Memory Contains
Memory is organized into six sections:
| Section | What it stores |
|---|---|
| Overview | Project goals, scope, and summary description |
| Research Preferences | How you prefer to work, sources you trust, methods you use |
| Domain Knowledge | Concepts, terminology, and background knowledge specific to your field |
| Project Architecture | Code structure, key design decisions, and system components |
| Key Files and Papers | Important files, papers, and documents the AI should know about |
| Recent Context | What you worked on in the last 7 days (auto-expires after the TTL) |
Recent context has a 7-day time-to-live. Older entries are consolidated into the other sections or removed during the memory maintenance cycle.
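The expiry half of that cycle amounts to dropping recent-context rows older than the TTL. A minimal sketch, assuming a simple timestamped table (the real maintenance cycle also consolidates entries into other sections before removing them):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recent_context (content TEXT, created_at TEXT)")

now = datetime(2024, 6, 15, tzinfo=timezone.utc)
conn.executemany(
    "INSERT INTO recent_context VALUES (?, ?)",
    [
        ("refactored retrieval layer", (now - timedelta(days=2)).isoformat()),
        ("initial repo survey", (now - timedelta(days=30)).isoformat()),
    ],
)

# ISO-8601 strings in the same timezone compare lexicographically,
# so a plain string comparison enforces the 7-day TTL.
cutoff = (now - timedelta(days=7)).isoformat()
conn.execute("DELETE FROM recent_context WHERE created_at < ?", (cutoff,))

remaining = [r[0] for r in conn.execute("SELECT content FROM recent_context")]
# remaining == ["refactored retrieval layer"]
```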
How Memory Gets Built
Memory is populated through three automatic feeders:
- Project structure analysis — Omnilib scans your file tree and infers the project architecture
- Bibliography — Papers and references you add to your bibliography are indexed into domain knowledge and key papers
- Research sessions — Significant conversations and AI-assisted work sessions are summarized and added to recent context
Background embedding runs continuously as you work, keeping memory up to date without interrupting your workflow. A memory consolidation process periodically merges and refines entries to reduce redundancy.
Retrieving Context
When you send a message, Omnilib retrieves relevant memory entries using three complementary methods:
- Semantic search — Vector similarity finds conceptually related entries
- BM25 — Keyword-based search catches precise terms and identifiers
- Graph traversal — Follows relationships between entities (e.g., from a paper to its authors to related concepts)
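One common way to combine ranked lists from several retrieval methods is reciprocal rank fusion: each method votes for entries by rank, and entries near the top of multiple lists win. This is an illustrative sketch only; Omnilib's actual fusion logic is not documented, and the entry IDs below are made up:

```python
from collections import defaultdict

def fuse(rankings, k=60):
    """Merge several ranked lists of entry IDs via reciprocal rank fusion."""
    scores = defaultdict(float)
    for ranked in rankings:
        for rank, entry_id in enumerate(ranked, start=1):
            # An entry scores higher the closer to the top it appears,
            # summed across all methods that returned it.
            scores[entry_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["decay-policy", "ttl-notes", "schema"]      # vector similarity
bm25     = ["schema", "decay-policy"]                   # keyword match
graph    = ["decay-policy", "author-links"]             # graph traversal

top = fuse([semantic, bm25, graph])
# "decay-policy" ranks first: it appears near the top of all three lists
```

Fusion like this lets a keyword hit on an exact identifier compete fairly with a purely semantic match, which is why the three methods are complementary rather than redundant.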
Retrieved entries are injected into the AI's system prompt automatically. The AI uses this context to give more relevant, specific answers without you needing to re-explain background.
Initializing Memory
If your project has no memory yet, an Init Project Memory button appears in the empty chat state.
Click it to start an AI-guided interview. The AI asks you a series of questions about your project — its goals, domain, structure, key papers, and conventions — and uses your answers to seed the memory sections with an accurate starting point.

The interview takes 5–10 minutes for a new project. You can skip questions or finish early; memory continues to grow automatically as you use the project.
Updating Memory
Update Memory
Click Update Memory (available in the chat header) to refresh memory from your recent conversation. Use this after a long session to make sure its insights are captured.
Revise Memory
Click Revise Memory to open the memory editor in Settings > AI. The editor shows all memory sections as editable text. You can:
- Correct inaccurate entries
- Remove outdated information
- Add facts the AI missed
- Restructure entries for clarity
Changes take effect on your next message.
Memory Decay and Consolidation
Omnilib automatically maintains memory quality over time:
- Decay — Entries that are no longer referenced drop in priority and eventually expire
- Consolidation — Related entries are merged to prevent duplication
- Promotion — Frequently referenced recent context is promoted to more permanent sections
This keeps memory focused and relevant without requiring manual cleanup.
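A toy model of those three rules: an entry's priority can be taken as its reference count with exponential time decay, and the maintenance pass expires entries below one threshold while promoting heavily used recent context above another. The scoring function, half-life, and thresholds below are all invented for illustration and do not reflect Omnilib's internals:

```python
import math

def priority(ref_count, days_since_last_ref, half_life=14.0):
    """Reference count, halved for every `half_life` days of disuse."""
    return ref_count * math.exp(-math.log(2) * days_since_last_ref / half_life)

def maintain(entry, expire_below=0.25, promote_above=4.0):
    score = priority(entry["refs"], entry["days_idle"])
    if score < expire_below:
        return "expire"   # decay: stale entries are dropped
    if entry["section"] == "recent_context" and score > promote_above:
        return "promote"  # promotion: hot recent context becomes permanent
    return "keep"

result = maintain({"refs": 1, "days_idle": 60, "section": "domain"})
# result == "expire": a single two-month-old reference has decayed
# below the expiry threshold
```

Under a scheme like this, an entry stays alive as long as conversations keep touching it, so the graph tracks what the project actually cares about.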
Related
- AI Chat — How memory is used in conversation
- Agent Behaviors — Combine behaviors with memory for specialized assistants
- AI Modes — Memory is available in all three modes
- Introduction — Overview of Omnilib's AI features