feat(memory): add scope support for memory operations and backend que… #7
aryankinha wants to merge 1 commit into syrin-labs:main from
Conversation
📝 Walkthrough

This change introduces scope-based filtering for persistent memory recall operations across all memory backends. The implementation maintains backward compatibility through try/except fallbacks when backends don't yet support the scope parameter. Scope values are derived from the agent's persistent memory configuration and threaded through search/list operations.
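The try/except fallback described above can be sketched roughly as follows; the backend classes and `recall` helper here are illustrative stand-ins, not the actual syrin API:

```python
class NewBackend:
    """Backend that already accepts the new `scope` keyword."""
    def search(self, query, memory_type, top_k, scope=None):
        return [("new", scope)]

class LegacyBackend:
    """Older custom backend whose search() predates `scope`."""
    def search(self, query, memory_type, top_k):
        return [("legacy", None)]

def recall(backend, query, memory_type=None, scope=None, top_k=5):
    try:
        # Preferred path: thread the derived scope into the backend.
        return backend.search(query, memory_type, top_k, scope=scope)
    except TypeError:
        # Backward compatibility: retry without the `scope` kwarg.
        return backend.search(query, memory_type, top_k)
```

Both backends work through the same call site: the new one receives the scope, the legacy one is retried without it.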
Sequence Diagram

```mermaid
sequenceDiagram
    participant Agent as Agent
    participant MemAPI as Memory API
    participant Backend as Memory Backend
    participant DB as Storage
    Agent->>MemAPI: recall(query, memory_type)
    MemAPI->>MemAPI: derive scope from persistent_memory.scope
    MemAPI->>Backend: search(query, memory_type, scope=scope)
    alt scope supported
        Backend->>Backend: build filter with scope
        Backend->>DB: execute filtered query
        DB-->>Backend: results (scope-filtered)
    else TypeError: scope not supported
        MemAPI->>Backend: search(query, memory_type) [retry without scope]
        Backend->>DB: execute query
        DB-->>Backend: results
    end
    Backend-->>MemAPI: list[MemoryEntry]
    MemAPI-->>Agent: memories
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🧹 Nitpick comments (2)
src/syrin/memory/backends/postgres.py (1)
226-233: Add an index strategy for the new `scope` predicate.

Now that `search()` and `list()` filter on `scope`, consider indexing `scope` (and possibly a composite like `(scope, type, importance_idx DESC)`) to avoid scan-heavy recalls at scale.

📈 Example migration/update

```diff
 idx_type_name = f"idx_{self._table}_type"
 idx_importance_name = f"idx_{self._table}_importance"
+idx_scope_name = f"idx_{self._table}_scope"
+idx_scope_type_importance_name = f"idx_{self._table}_scope_type_importance"
 cursor.execute(
     sql.SQL("CREATE INDEX IF NOT EXISTS {} ON {} (type)").format(
         sql.Identifier(idx_type_name),
         self._table_ident(),
     )
 )
 cursor.execute(
     sql.SQL("CREATE INDEX IF NOT EXISTS {} ON {} (importance_idx)").format(
         sql.Identifier(idx_importance_name),
         self._table_ident(),
     )
 )
+cursor.execute(
+    sql.SQL("CREATE INDEX IF NOT EXISTS {} ON {} (scope)").format(
+        sql.Identifier(idx_scope_name),
+        self._table_ident(),
+    )
+)
+cursor.execute(
+    sql.SQL("CREATE INDEX IF NOT EXISTS {} ON {} (scope, type, importance_idx DESC)").format(
+        sql.Identifier(idx_scope_type_importance_name),
+        self._table_ident(),
+    )
+)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/syrin/memory/backends/postgres.py` around lines 226 - 233, Add a PostgreSQL index to support the new scope predicate used by search() and list(): create a migration that adds an index on the scope column (and preferably a composite index on (scope, type, importance_idx DESC) or (scope, type, importance) to support ORDER BY importance DESC queries against the same table referenced by self._table_ident()). Ensure the migration targets the table used by the Postgres backend (the same table name returned by self._table_ident()), includes a matching CREATE INDEX CONCURRENTLY statement (and a DROP INDEX for rollback), and documents the rationale so subsequent deployments avoid full-table scans when scope filtering is used.

src/syrin/memory/backends/sqlite.py (1)
158-166: Consider indexing `scope` now that it's on the hot path.

The new `scope = ?` predicate will force broader scans as `memories` grows because the table only indexes `type` and `importance` today. Adding an index on `scope` — or a composite index that matches common filters/order, e.g. `(scope, type, importance DESC)` — will keep scoped recall/list queries from regressing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/syrin/memory/backends/sqlite.py` around lines 158 - 166, The SELECT uses a new "scope = ?" predicate on the memories table which isn't indexed; update the DB schema/migration that creates or ensures the memories table (where existing indexes for type and importance are defined) to add an index on scope — or better, create a composite index matching common filters and ordering such as (scope, type, importance DESC) — so scoped queries in sqlite.py (the code building the WHERE with "scope = ?") use the index; implement this by adding the appropriate CREATE INDEX (or ALTER TABLE/CREATE INDEX IF NOT EXISTS) in the DB initialization/ensure_schema routine that creates/updates the memories table.
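To make the SQLite suggestion concrete, here is a standalone sketch of the composite index against a minimal stand-in table (the column set is assumed from the review, not the actual syrin schema), with `EXPLAIN QUERY PLAN` confirming the index is used:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memories (id TEXT PRIMARY KEY, type TEXT, scope TEXT, "
    "importance REAL, content TEXT)"
)
# Composite index matching the common filter + ordering of scoped recalls.
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_memories_scope_type_importance "
    "ON memories (scope, type, importance DESC)"
)
# Ask SQLite how it would execute a scoped, type-filtered, importance-ordered query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM memories "
    "WHERE scope = ? AND type = ? ORDER BY importance DESC",
    ("user:1", "fact"),
).fetchall()
```

The plan rows should report a search using `idx_memories_scope_type_importance` rather than a full table scan.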
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/syrin/agent/_context_builder.py`:
- Around line 103-116: Narrow the broad TypeError handler in the memory lookup
block: when calling memory_backend.search with scope, catch TypeError as e and
only fall back to calling memory_backend.search(... without scope) if the
backend truly doesn't accept the scope parameter (e.g., detect via
inspect.signature(memory_backend.search) lacking a 'scope' parameter or by
checking that "unexpected keyword argument 'scope'" / "'scope'" appears in
str(e)); otherwise re-raise the exception so real runtime errors aren't
swallowed. Use memory_backend.search and _user_input_to_search_str in your
changes.
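A signature probe along the lines this prompt suggests could look like the following; `supports_scope` is a hypothetical helper, not existing syrin code:

```python
import inspect

def supports_scope(fn) -> bool:
    """Return True if `fn` accepts a `scope` keyword, explicitly or via **kwargs."""
    params = inspect.signature(fn).parameters
    return "scope" in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
```

A caller could then fall back to the scope-less call only when `supports_scope(memory_backend.search)` is false, and re-raise any other TypeError instead of swallowing it.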
In `@src/syrin/agent/_memory_api.py`:
- Line 105: The call to agent._memory_backend.list(memory_type, scope=scope,
limit=limit) can break for legacy backends that expect list(memory_type=None,
limit=...), so wrap the calls to agent._memory_backend.list (the usages around
the current recall/forget branches) in a fallback: try calling with scope=scope
first, catch TypeError (or similar signature errors) and then
call list without the scope kwarg (agent._memory_backend.list(memory_type,
limit=limit)); apply the same try/except fallback to both occurrences (the one
at the recall branch and the one at the scoped forget branch referenced in the
review).
In `@src/syrin/memory/backends/redis.py`:
- Around line 178-182: The loop pre-slices ids (ids[: top_k * 2]) before
applying the scope check, which can drop valid in-scope matches; change the
logic to apply the scope filter before truncation: iterate over ids (or build a
filtered list) and first skip entries where entry is None or entry.scope !=
scope (if scope provided), then apply the query content check and collect
matches, and only after collecting matching candidates truncate to top_k (or
top_k * 2 if you still want extra candidates); update the loop that calls
self.get(memory_id) and the variables ids, entry, scope, top_k, and the
collection logic so scope gating happens prior to slicing and stop early once
you have enough final candidates to preserve performance.
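The filter-before-truncate ordering this prompt describes can be sketched as below; plain dicts stand in for Redis-backed entries and all names are illustrative:

```python
def scoped_search(ids, get_entry, query, scope, top_k):
    """Scan all candidate ids, gate on scope first, truncate only at the end."""
    results = []
    for memory_id in ids:  # no pre-slice: every candidate is considered
        entry = get_entry(memory_id)
        # Scope gating happens before any truncation.
        if entry is None or (scope is not None and entry["scope"] != scope):
            continue
        if query.lower() in entry["content"].lower():
            results.append(entry)
            if len(results) >= top_k:  # stop early once we have enough matches
                break
    return results
```

Because truncation happens only after scope filtering, in-scope matches late in `ids` are no longer dropped, while the early break preserves the original performance intent.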
In `@src/syrin/memory/config.py`:
- Line 589: The backend.list call assumes all backends accept scope; wrap the
call to backend.list(memory_type=memory_type, scope=self.scope,
limit=effective_limit) in a try/except (catch TypeError) and on failure retry
calling backend.list without the scope kwarg
(backend.list(memory_type=memory_type, limit=effective_limit)) to preserve
backward compatibility; apply the same pattern used in the search()
compatibility fallback to both the list() site shown and the other occurrence
around the similar recall/forget-by-query/type call.
---
Nitpick comments:
In `@src/syrin/memory/backends/postgres.py`:
- Around line 226-233: Add a PostgreSQL index to support the new scope predicate
used by search() and list(): create a migration that adds an index on the scope
column (and preferably a composite index on (scope, type, importance_idx DESC)
or (scope, type, importance) to support ORDER BY importance DESC queries against
the same table referenced by self._table_ident()). Ensure the migration targets
the table used by the Postgres backend (the same table name returned by
self._table_ident()), includes a matching CREATE INDEX CONCURRENTLY statement
(and a DROP INDEX for rollback), and documents the rationale so subsequent
deployments avoid full-table scans when scope filtering is used.
In `@src/syrin/memory/backends/sqlite.py`:
- Around line 158-166: The SELECT uses a new "scope = ?" predicate on the
memories table which isn't indexed; update the DB schema/migration that creates
or ensures the memories table (where existing indexes for type and importance
are defined) to add an index on scope — or better, create a composite index
matching common filters and ordering such as (scope, type, importance DESC) — so
scoped queries in sqlite.py (the code building the WHERE with "scope = ?") use
the index; implement this by adding the appropriate CREATE INDEX (or ALTER
TABLE/CREATE INDEX IF NOT EXISTS) in the DB initialization/ensure_schema routine
that creates/updates the memories table.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a838fcb2-8a73-42dd-8feb-113677225492
📒 Files selected for processing (11)
- src/syrin/agent/_context_builder.py
- src/syrin/agent/_memory_api.py
- src/syrin/memory/backends/chroma.py
- src/syrin/memory/backends/memory.py
- src/syrin/memory/backends/postgres.py
- src/syrin/memory/backends/qdrant.py
- src/syrin/memory/backends/redis.py
- src/syrin/memory/backends/sqlite.py
- src/syrin/memory/config.py
- tests/integration/test_memory_stability.py
- tests/unit/memory/test_memory_forget.py
```python
try:
    return memory_backend.search(  # type: ignore[attr-defined]
        _user_input_to_search_str(user_input),
        None,
        top_k,
        scope=scope,
    )
except TypeError:
    # Backward compatibility for custom backends that don't yet accept `scope`.
    return memory_backend.search(  # type: ignore[attr-defined]
        _user_input_to_search_str(user_input),
        None,
        top_k,
    )
```
Narrow the TypeError fallback to unsupported-argument cases only.
The current handler swallows any TypeError from backend execution, which can mask real failures and silently retry without scope, potentially leaking out-of-scope memories.
🔒 Proposed fix

```diff
+    query_text = _user_input_to_search_str(user_input)
+
     def _recall_memories() -> object:
         try:
             return memory_backend.search(  # type: ignore[attr-defined]
-                _user_input_to_search_str(user_input),
+                query_text,
                 None,
                 top_k,
                 scope=scope,
             )
-        except TypeError:
+        except TypeError as exc:
+            msg = str(exc)
+            if "unexpected keyword argument 'scope'" not in msg:
+                raise
             # Backward compatibility for custom backends that don't yet accept `scope`.
             return memory_backend.search(  # type: ignore[attr-defined]
-                _user_input_to_search_str(user_input),
+                query_text,
                 None,
                 top_k,
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/syrin/agent/_context_builder.py` around lines 103 - 116, Narrow the broad
TypeError handler in the memory lookup block: when calling memory_backend.search
with scope, catch TypeError as e and only fall back to calling
memory_backend.search(... without scope) if the backend truly doesn't accept the
scope parameter (e.g., detect via inspect.signature(memory_backend.search)
lacking a 'scope' parameter or by checking that "unexpected keyword argument
'scope'" / "'scope'" appears in str(e)); otherwise re-raise the exception so
real runtime errors aren't swallowed. Use memory_backend.search and
_user_input_to_search_str in your changes.
```diff
     results = agent._memory_backend.search(query, memory_type, limit)
 else:
-    results = agent._memory_backend.list(memory_type, limit=limit)
+    results = agent._memory_backend.list(memory_type, scope=scope, limit=limit)
```
Add the same legacy fallback for list(..., scope=...).
Only search() is guarded for older backends right now. These two branches always pass scope=, so any custom backend still on the old list(memory_type=None, limit=...) signature will start failing on query-less recall and scoped forget.
Possible fix
else:
- results = agent._memory_backend.list(memory_type, scope=scope, limit=limit)
+ try:
+ results = agent._memory_backend.list(memory_type, scope=scope, limit=limit)
+ except TypeError:
+ results = agent._memory_backend.list(memory_type, limit=limit)
@@
elif query or memory_type:
- memories = agent._memory_backend.list(memory_type, scope=scope)
+ try:
+ memories = agent._memory_backend.list(memory_type, scope=scope)
+ except TypeError:
+ memories = agent._memory_backend.list(memory_type)Also applies to: 151-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/syrin/agent/_memory_api.py` at line 105, The call to
agent._memory_backend.list(memory_type, scope=scope, limit=limit) can break for
legacy backends that expect list(memory_type=None, limit=...), so wrap the calls
to agent._memory_backend.list (the usages around the current recall/forget
branches) in a fallback: try calling with scope=scope first, catch TypeError (or
similar signature errors) and then call list without the scope
kwarg (agent._memory_backend.list(memory_type, limit=limit)); apply the same
try/except fallback to both occurrences (the one at the recall branch and the
one at the scoped forget branch referenced in the review).
```python
for memory_id in ids[: top_k * 2]:  # Get more to filter
    entry = self.get(memory_id)
    if entry and query.lower() in entry.content.lower():
        if scope is not None and entry.scope != scope:
            continue
```
Pre-truncating candidates can hide valid scoped matches.
Because candidates are sliced before scope filtering, search can miss in-scope entries that occur later in ids, especially after adding the new scope gate.
🔧 Proposed fix

```diff
-for memory_id in ids[: top_k * 2]:  # Get more to filter
+for memory_id in ids:
     entry = self.get(memory_id)
     if entry and query.lower() in entry.content.lower():
         if scope is not None and entry.scope != scope:
             continue
         results.append(entry)
         if len(results) >= top_k:
             break
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/syrin/memory/backends/redis.py` around lines 178 - 182, The loop
pre-slices ids (ids[: top_k * 2]) before applying the scope check, which can
drop valid in-scope matches; change the logic to apply the scope filter before
truncation: iterate over ids (or build a filtered list) and first skip entries
where entry is None or entry.scope != scope (if scope provided), then apply the
query content check and collect matches, and only after collecting matching
candidates truncate to top_k (or top_k * 2 if you still want extra candidates);
update the loop that calls self.get(memory_id) and the variables ids, entry,
scope, top_k, and the collection logic so scope gating happens prior to slicing
and stop early once you have enough final candidates to preserve performance.
```python
except TypeError:
    # Backward compatibility for custom backends that don't yet accept `scope`.
    return backend.search(query, memory_type, top_k=effective_limit)
return backend.list(memory_type=memory_type, scope=self.scope, limit=effective_limit)
```
list() needs the same compatibility fallback as search().
These branches now assume every backend accepts scope=, but the PR only preserved backward compatibility for search(). Older/custom backends will still blow up here on query-less recall and forget-by-query/type.
Possible fix
```diff
+    def _list_backend_compatible(
+        self,
+        backend: MemoryBackendProtocol,
+        *,
+        memory_type: MemoryType | None,
+        limit: int,
+    ) -> list[MemoryEntry]:
+        try:
+            return backend.list(memory_type=memory_type, scope=self.scope, limit=limit)
+        except TypeError:
+            return backend.list(memory_type=memory_type, limit=limit)
+
@@
-        return backend.list(memory_type=memory_type, scope=self.scope, limit=effective_limit)
+        return self._list_backend_compatible(
+            backend,
+            memory_type=memory_type,
+            limit=effective_limit,
+        )
@@
-        memories = backend.list(memory_type=memory_type, scope=self.scope, limit=1000)
+        memories = self._list_backend_compatible(
+            backend,
+            memory_type=memory_type,
+            limit=1000,
+        )
```

Also applies to: 713-713
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/syrin/memory/config.py` at line 589, The backend.list call assumes all
backends accept scope; wrap the call to backend.list(memory_type=memory_type,
scope=self.scope, limit=effective_limit) in a try/except (catch TypeError) and
on failure retry calling backend.list without the scope kwarg
(backend.list(memory_type=memory_type, limit=effective_limit)) to preserve
backward compatibility; apply the same pattern used in the search()
compatibility fallback to both the list() site shown and the other occurrence
around the similar recall/forget-by-query/type call.
Hey @aryankinha,
Summary by CodeRabbit
Release Notes
New Features
Tests