
feat: add @cache.local() for in-process reference caching #96

Merged
27Bslash6 merged 4 commits into main from feat/cache-local on Apr 25, 2026

Conversation

Contributor

27Bslash6 commented Apr 25, 2026

Summary

  • Adds @cache.local() decorator preset for caching opaque Python objects (SDK clients, connections, ML models) that can't be serialized
  • Standalone ObjectCache class with thread-safe entry-count LRU + TTL — no modifications to L1Cache or wrapper.py
  • Intent short-circuit fires before backend/config resolution to prevent silent parameter swallowing
  • Same wrapper API as @cache: invalidate_cache, cache_clear, cache_info, __wrapped__

Spec: .spec-workflow/specs/cache-local/ (requirements, design, tasks — all approved)
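For orientation, the decorator's advertised shape (cache keyed on call arguments, object identity preserved, TTL, bounded entry count) can be sketched with the stdlib alone. This is a hand-rolled illustration with my own names and simplifications (sync only, FIFO rather than true LRU eviction), not cachekit's implementation:

```python
import functools
import threading
import time

def local_cache(ttl=300, max_entries=128):
    """Illustrative in-process reference cache decorator (not cachekit's)."""
    def decorator(func):
        store = {}  # key -> (value, expires_at)
        lock = threading.RLock()

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            with lock:
                entry = store.get(key)
                if entry is not None:
                    value, expires_at = entry
                    if now < expires_at:
                        return value  # the cached reference itself, no copy
                    del store[key]
            result = func(*args, **kwargs)
            with lock:
                if len(store) >= max_entries:
                    store.pop(next(iter(store)))  # evict oldest insertion
                store[key] = (result, now + ttl)
            return result

        return wrapper
    return decorator

@local_cache(ttl=60)
def make_client(region):
    return object()  # stands in for an unserializable SDK client

assert make_client("us-east") is make_client("us-east")  # identity preserved
assert make_client("us-east") is not make_client("eu-west")
```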

What's New

| File | Lines | Purpose |
| --- | --- | --- |
| src/cachekit/object_cache.py | 193 | Thread-safe reference cache (OrderedDict + RLock) |
| src/cachekit/decorators/local_wrapper.py | 140 | Decorator bridge (sync/async, key gen, validation) |
| src/cachekit/decorators/intent.py | +17 | cache.local preset + intent short-circuit |
| src/cachekit/key_generator.py | +1 | "local": "l" in SERIALIZER_CODES |
| docs/features/reference-caching.md | 293 | Feature deep-dive |
| README.md | +8 | Intent table row |

Test plan

  • 10 critical tests (hit/miss/TTL/invalidation/clear/info/async/identity/opaque/threading)
  • 13 ObjectCache unit tests (basic/TTL/LRU/stats/thread-safety)
  • 15 local wrapper unit tests (rejected params/validation/custom key/async/mutation/unhashable args)
  • 38 total, all passing in 0.53s
  • make quick-check passes (lint + format + type-check + critical tests)
  • basedpyright: 0 errors

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced @cache.local() preset decorator for in-process caching of opaque, non-serializable Python objects with LRU/TTL eviction, per-key invalidation, and cache statistics.
  • Documentation

    • Added comprehensive reference guide for @cache.local() covering use cases, configuration parameters, lifecycle semantics, and comparison with alternatives.
    • Updated preset comparison table in README to include the new @cache.local() option alongside existing cache strategies.

Cache opaque Python objects (SDK clients, connections, ML models) that
can't be serialized. Stores references directly in memory with
entry-count LRU eviction, TTL, thread safety, and async support.

- ObjectCache: standalone thread-safe cache (OrderedDict + RLock)
- create_local_wrapper: decorator bridge with sync/async detection
- Intent short-circuit in decorator() before backend/config resolution
- Same wrapper API: invalidate_cache, cache_clear, cache_info
- 38 tests (10 critical, 28 unit), all passing in <1s

Closes the product gap where users abandon cachekit for DIY dict caches
when hitting serialization failures on opaque objects.
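The ObjectCache design described above (OrderedDict for LRU order, RLock for thread safety, monotonic deadlines for TTL, entry-count eviction) can be sketched roughly as follows. This is an illustrative reconstruction under those stated assumptions, not the actual class:

```python
import threading
import time
from collections import OrderedDict

class TinyObjectCache:
    """Illustrative thread-safe reference cache (not cachekit's ObjectCache)."""

    def __init__(self, max_entries=128):
        self._store = OrderedDict()  # key -> (value, expires_at)
        self._lock = threading.RLock()
        self._max = max_entries
        self.hits = self.misses = 0

    def get(self, key):
        with self._lock:
            entry = self._store.get(key)
            if entry is None or time.monotonic() >= entry[1]:
                self._store.pop(key, None)  # drop expired entry, if any
                self.misses += 1
                return (False, None)
            self._store.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return (True, entry[0])

    def put(self, key, value, ttl=300):
        with self._lock:
            if key not in self._store and len(self._store) >= self._max:
                self._store.popitem(last=False)  # evict least recently used
            self._store[key] = (value, time.monotonic() + ttl)

cache = TinyObjectCache(max_entries=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touch "a" so "b" becomes least recently used
cache.put("c", 3)  # at capacity: evicts "b"
assert cache.get("b") == (False, None)
assert cache.get("a") == (True, 1)
```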

coderabbitai Bot commented Apr 25, 2026

Warning

Rate limit exceeded

@27Bslash6 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 8 minutes and 49 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 8 minutes and 49 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ca852535-6172-4095-b850-13396b74e306

📥 Commits

Reviewing files that changed from the base of the PR and between b64fb11 and c486165.

📒 Files selected for processing (7)
  • docs/features/reference-caching.md
  • src/cachekit/decorators/intent.py
  • src/cachekit/decorators/local_wrapper.py
  • src/cachekit/key_generator.py
  • src/cachekit/object_cache.py
  • tests/critical/test_local_cache_works.py
  • tests/unit/test_local_wrapper.py
📝 Walkthrough


This PR introduces @cache.local(), a new caching decorator for in-memory storage of opaque Python objects with TTL and LRU eviction. The implementation includes an ObjectCache class, a wrapper factory function supporting both sync and async functions, decorator intent registration, and comprehensive test coverage.

Changes

  • Documentation (README.md, docs/features/reference-caching.md): Added @cache.local() preset to comparison table and introduced a comprehensive reference guide covering use cases, parameters, API methods, comparison with standard library alternatives, and implementation examples.
  • Core Caching Infrastructure (src/cachekit/object_cache.py): New ObjectCache class implementing thread-safe in-memory storage with TTL-based expiration, LRU eviction, and hit/miss statistics tracking.
  • Decorator Intent & Wrapper (src/cachekit/decorators/intent.py, src/cachekit/decorators/local_wrapper.py): Added cache.local preset that routes to create_local_wrapper function; wrapper validates parameters, generates deterministic cache keys, and provides invalidation/statistics methods for both sync and async functions.
  • Key Generation (src/cachekit/key_generator.py): Extended SERIALIZER_CODES mapping to include "local" serializer type with code "l".
  • Test Infrastructure & Unit Tests (tests/critical/conftest.py, tests/unit/test_object_cache.py, tests/unit/test_local_wrapper.py, tests/unit/test_key_generator_blake2b.py): Added ObjectCache unit tests covering TTL expiry, LRU eviction, stats, and thread safety; create_local_wrapper parameter validation and caching behavior tests; updated key generator test expectations; modified test fixture logic to exclude local cache tests from Redis-dependent setup.
  • Integration Tests (tests/critical/test_local_cache_works.py): Comprehensive end-to-end test suite validating cache hits/misses, per-key invalidation, statistics, object identity preservation, async support, and concurrent access without Redis.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant Decorator as @cache.local() Decorator
    participant KeyGen as CacheKeyGenerator
    participant Cache as ObjectCache
    participant Function as Original Function

    Caller->>Decorator: Call with args
    Decorator->>KeyGen: Generate cache key from args
    KeyGen-->>Decorator: Return deterministic key
    Decorator->>Cache: get(key)
    alt Cache Hit
        Cache-->>Decorator: (True, cached_object)
        Decorator-->>Caller: Return cached object
    else Cache Miss
        Decorator->>Function: Compute result
        Function-->>Decorator: Return result
        Decorator->>Cache: put(key, result, ttl)
        Cache-->>Decorator: Stored in ObjectCache
        Decorator-->>Caller: Return result
    end
```
```mermaid
sequenceDiagram
    participant Async as Async Caller
    participant Decorator as @cache.local() Decorator
    participant KeyGen as CacheKeyGenerator
    participant Cache as ObjectCache
    participant Function as Async Function

    Async->>Decorator: await call with args
    Decorator->>KeyGen: Generate cache key from args
    KeyGen-->>Decorator: Return deterministic key
    Decorator->>Cache: get(key)
    alt Cache Hit (Awaited Value)
        Cache-->>Decorator: (True, cached_object)
        Decorator-->>Async: Return cached object
    else Cache Miss
        Decorator->>Function: Await function execution
        Function-->>Decorator: Return result
        Decorator->>Cache: put(key, result, ttl)
        Cache-->>Decorator: Stored in ObjectCache
        Decorator-->>Async: Return result
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 A cache for objects, shiny and new,
No serialization—just Python through and through!
LRU eviction keeps memory tight,
TTL timers make expiry right.
Identity preserved, mutations too—
Local caching, hopping through! 🥕

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 43.82%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check: ❓ Inconclusive. The PR description is comprehensive and well-structured, covering summary, what's new, and test plan; however, the required template sections (Motivation, Type of Change, Security/Documentation/Testing Checklists, Backward Compatibility) are missing or unchecked. Resolution: fill in all required template sections, including a checked Type of Change checkbox, completed checklists, and a Backward Compatibility assessment.
✅ Passed checks (3 passed)
  • Title check: ✅ Passed. The title 'feat: add @cache.local() for in-process reference caching' clearly and concisely summarizes the main change; it is specific, accurate, and directly reflects the primary objective of the PR.
  • Linked Issues check: ✅ Passed. Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: ✅ Passed. Skipped because no linked issues were found for this pull request.



Remove notest from 4 self-contained examples (mutation, identity,
wrapper API). Keep notest only on examples requiring external deps
(Langfuse, torch, sqlalchemy, httpx). Fix SQLAlchemy.create_engine
typo.

codecov Bot commented Apr 25, 2026

Codecov Report

❌ Patch coverage is 91.60839% with 12 lines in your changes missing coverage. Please review.
✅ All tests successful. No failed tests found.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/cachekit/object_cache.py | 88.88% | 5 missing, 3 partials ⚠️ |
| src/cachekit/decorators/local_wrapper.py | 93.84% | 2 missing, 2 partials ⚠️ |


The test_serializer_codes_mapping test hardcodes the expected dict.
Adding 'local': 'l' to match the key_generator change from feat.

coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (8)
src/cachekit/decorators/intent.py (1)

110-118: Short-circuit placement and config rejection look correct.

Placing the local-intent branch before the backend pop / l1_enabled mapping is necessary — otherwise those kwargs would be silently consumed and create_local_wrapper's strict whitelist would never see them. Good catch.

One small wording note: the TypeError message frames the config= rejection purely as an encryption concern, but DecoratorConfig carries many things besides encryption (circuit breaker, timeouts, serializer, etc.), and the more general reason config= is rejected is that @cache.local() bypasses the entire backend pipeline. Consider broadening the message slightly, e.g.:

-                    "@cache.local() stores object references in-process — encryption "
-                    "requires serialization to bytes. For encrypted caching, use @cache.secure()."
+                    "@cache.local() stores object references in-process and bypasses the "
+                    "backend/serializer/encryption pipeline, so config= is not supported. "
+                    "Use @cache(config=...) for serialized caching, or @cache.secure() for encryption."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cachekit/decorators/intent.py` around lines 110 - 118, The TypeError
message raised in the local-intent branch (when _intent == "local") is too
narrowly worded around encryption; update the message in that branch (the check
that rejects config passed to `@cache.local`()) to explain that DecoratorConfig
contains multiple backend-related settings and `@cache.local`() bypasses the
backend pipeline, so config= is not supported; keep the same location (the if
_intent == "local" block) and retain the rejection behavior and reference to
create_local_wrapper.
tests/unit/test_local_wrapper.py (2)

130-138: Async cache_clear smoke test is intentionally minimal — consider also asserting state.

cache_clear() is exercised on the wrapper, but the test only checks that it doesn't raise. A small follow-up that populates the cache (await afn(1)), clears, then checks afn.cache_info().currsize == 0 would lock down behavior in addition to "doesn't raise".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_local_wrapper.py` around lines 130 - 138, Update the
test_cache_clear_on_async_does_not_raise to also assert state: after wrapping
async function afn with cache.local(), call and await afn(1) to populate the
cache, then call afn.cache_clear(), and finally assert afn.cache_info().currsize
== 0 (or equivalent property) to verify the cache was actually cleared; keep the
existing "does not raise" check and add these assertions referencing afn,
cache.local(), cache_clear, and cache_info.

18-58: match="@cache.local()" is a regex — escape it or use a substring without special chars.

pytest.raises(..., match=...) treats the string as a regex. Here . is "any char" and () is an empty capture group, so the pattern effectively matches "@cache.local" anywhere in the message. It happens to pass today, but it would also pass against an unrelated message like "@cache_local". Recommend escaping or matching a more specific fragment.

♻️ Example fix (apply to all six occurrences)
-        with pytest.raises(TypeError, match="@cache.local()"):
+        with pytest.raises(TypeError, match=r"@cache\.local\(\)"):

Or anchor on the more specific error fragment, e.g. match=r"only accepts: key, max_entries, namespace, ttl".
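The false-positive risk is easy to demonstrate with re directly (pytest compiles match= patterns and applies them with re.search):

```python
import re

pattern = "@cache.local()"  # intended as a literal string
# "." matches any character and "()" is an empty capture group, so the
# pattern effectively degrades to "@cache" + any-char + "local":
assert re.search(pattern, "@cache.local() only accepts: key, ttl")
assert re.search(pattern, "@cache_local is deprecated")  # false positive

escaped = re.escape("@cache.local()")  # '@cache\\.local\\(\\)'
assert re.search(escaped, "@cache.local() only accepts: key, ttl")
assert not re.search(escaped, "@cache_local is deprecated")
```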

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_local_wrapper.py` around lines 18 - 58, The tests in
tests/unit/test_local_wrapper.py use pytest.raises(..., match="@cache.local()")
which is interpreted as a regex (dots and parens are special); update the six
tests that decorate functions with `@cache.local`(...) so the match argument
either escapes the regex (e.g. use r"\@cache\.local\(\)") or replace match with
a more specific, non-regex substring/error fragment (e.g. r"only accepts: key,
max_entries, namespace, ttl") or remove the match and assert the TypeError
without a regex; locate the assertions around the cache.local decorator usages
inside test_serializer_rejected, test_encryption_rejected,
test_master_key_rejected, test_integrity_checking_rejected, test_config_rejected
and the adjacent backend-rejected test and change their pytest.raises match
accordingly.
src/cachekit/decorators/local_wrapper.py (3)

107-130: Concurrent first-time misses can create duplicate expensive resources.

The sync and async wrappers both follow a classic check-then-act pattern (get → if miss, call func, then put). Under concurrent invocation with the same key on a cold cache, multiple callers will each invoke func() and the last put wins. For the documented headline use cases this is consequential:

  • ML models (load_model("resnet50")) — multiple loads of the same large weight matrix.
  • DB connections / SQLAlchemy engines — duplicate engines created and the losers leak (no on_evict cleanup yet).
  • HTTP/SDK clients with auth handshakes — duplicate authenticated sessions.

functools.lru_cache has the same property, so this is defensible, but the docs in docs/features/reference-caching.md actively pitch "load once, reuse across requests". Consider one of:

  • Documenting the behavior explicitly in the Mutation/Identity section.
  • Adding a per-key lock (e.g., defaultdict(threading.Lock) for sync and asyncio.Lock for async, evicted alongside entries) so first-miss work is deduplicated.

If you want to keep this PR focused, a short doc note plus a tracking issue would be sufficient.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cachekit/decorators/local_wrapper.py` around lines 107 - 130, The
wrappers (async_wrapper and sync_wrapper) use a check-then-act pattern around
object_cache.get/put causing duplicate expensive work on concurrent first-time
misses; fix by introducing per-key deduplication: compute cache_key via
_make_key, then acquire a per-key lock before calling func so only the first
waiter executes the expensive work, release the lock after put (use
threading.Lock for the sync_wrapper and asyncio.Lock for async_wrapper), store
locks in a dict keyed by cache_key (e.g., defaultdict) and ensure locks are
cleaned up/evicted when the cache entry is evicted; alternatively, if you prefer
not to change runtime behavior now, add a clear note in docs (reference caching
docs and these wrappers) and open a tracking issue for implementing per-key
locks.

51-59: Minor: no type validation on ttl / max_entries.

kwargs.get("ttl", 300) accepts any type; the < 1 checks happen to also fail for most non-numerics by raising TypeError from comparison, but e.g. ttl=1.5 silently flows through to ObjectCache.put(..., ttl) and is added to time.monotonic(). Probably harmless, but a small isinstance(ttl, int) check would make the failure mode explicit and consistent with the int annotation.
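A minimal sketch of the suggested explicit check (hypothetical helper, not the PR's code); note that ttl=1.5 sails past a bare `< 1` comparison, which is exactly the silent flow-through described above:

```python
def validate_ttl(ttl):
    """Type check before the range check, so non-int ttl fails loudly."""
    if isinstance(ttl, bool) or not isinstance(ttl, int):  # bool is an int subclass
        raise TypeError(f"ttl must be an int, got {type(ttl).__name__}")
    if ttl < 1:
        raise ValueError("ttl must be >= 1")
    return ttl

assert validate_ttl(300) == 300
assert 1.5 >= 1          # the bare range check would accept a float silently
try:
    validate_ttl(1.5)
except TypeError as exc:
    assert "float" in str(exc)
```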

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cachekit/decorators/local_wrapper.py` around lines 51 - 59, Validate
types for ttl and max_entries before the range checks: ensure ttl and
max_entries are ints (use isinstance(ttl, int) / isinstance(max_entries, int))
and raise a TypeError with a clear message if not, then proceed with the
existing ValueError checks (e.g., "ttl must be an int, got
{type(ttl).__name__}"). Update the block that currently reads/annotates ttl and
max_entries (and related variables like namespace/key) so the explicit type
checks happen before the "if ttl < 1" and "if max_entries < 1" comparisons.

79-85: invalidate_cache / ainvalidate_cache ignore a user-supplied key= that depends on call context.

When the user passes a custom key= callable, invalidate_cache(*args, **kw) regenerates the key by re-invoking key(*args, **kw). That works for pure key functions, but if a user provides a key callable that pulls from request context, threadlocals, etc. (a reasonable pattern for namespaces/multi-tenancy), the regenerated key may not match the one used at insert time, and the entry won't be evicted.

Optional improvement: also accept a key= override on invalidate_cache, or document this constraint on key= callables in docs/features/reference-caching.md.
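The hazard can be reproduced with a plain dict; the tenant variable below stands in for request context or threadlocals, and all names are hypothetical:

```python
cache = {}
tenant = "acme"  # hypothetical ambient request context

def key_fn(user_id):
    return f"{tenant}:{user_id}"  # key depends on ambient state, not just args

def cached_lookup(user_id):
    k = key_fn(user_id)
    if k not in cache:
        cache[k] = f"profile-for-{user_id}"
    return cache[k]

def invalidate(user_id):
    cache.pop(key_fn(user_id), None)  # regenerates the key at call time

cached_lookup(7)        # stored under "acme:7"
tenant = "globex"       # context changed between insert and invalidation
invalidate(7)           # regenerates "globex:7" -> silent no-op
assert "acme:7" in cache  # stale entry survives the invalidation
```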

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cachekit/decorators/local_wrapper.py` around lines 79 - 85,
invalidate_cache and ainvalidate_cache currently regenerate keys via
_make_key(args, kw) which fails when callers used a context-dependent key
callable; change both functions to accept an optional key= parameter (e.g., def
invalidate_cache(*args, key: Optional[Union[str, Callable]] = None, **kw)) and
if key is provided use it directly (if str) or call it with the same args/kw (if
callable) to produce the deletion key, otherwise fall back to _make_key(args,
kw); apply the same change to ainvalidate_cache and ensure you call
object_cache.delete with the resolved key; alternatively document the limitation
in docs/features/reference-caching.md if you don't want to change the API.
src/cachekit/key_generator.py (1)

53-78: Update serializer_type docstring to include "local".

Line 70 still lists only ("std", "auto", "orjson", "arrow"). Now that "local" is a valid input (and used by create_local_wrapper), please add it for completeness.

📝 Proposed doc tweak
-            serializer_type: Serializer type code ("std", "auto", "orjson", "arrow")
+            serializer_type: Serializer type code ("std", "auto", "orjson", "arrow", "local")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cachekit/key_generator.py` around lines 53 - 78, The docstring for
generate_key needs to list "local" as an accepted serializer_type; update the
serializer_type description in the generate_key method to include "local"
alongside "std", "auto", "orjson", and "arrow" (since create_local_wrapper uses
"local"), so the doc accurately documents all valid serializer_type values and
the compact metadata suffix note remains unchanged.
src/cachekit/object_cache.py (1)

117-118: Nit: move_to_end after insertion is redundant.

Newly assigned keys in an OrderedDict are already placed at the end, so the move_to_end(key) on Line 118 is a no-op for a fresh insert (the in-place update path on Line 110 needs it; this branch does not).

♻️ Suggested cleanup
             self._store[key] = (value, expires_at)
-            self._store.move_to_end(key)
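The OrderedDict behavior in question is easy to verify: a fresh key lands at the end automatically, while an in-place update leaves position unchanged and therefore does need move_to_end:

```python
from collections import OrderedDict

od = OrderedDict(a=1, b=2)
od["c"] = 3                        # fresh insert: already placed at the end
assert list(od) == ["a", "b", "c"]

od["a"] = 10                       # in-place update: position unchanged...
assert list(od) == ["a", "b", "c"]
od.move_to_end("a")                # ...so the update path needs move_to_end
assert list(od) == ["b", "c", "a"]
```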
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cachekit/object_cache.py` around lines 117 - 118, The insertion path in
the cache method that does self._store[key] = (value, expires_at) redundantly
calls self._store.move_to_end(key) afterward; remove that move_to_end call in
the fresh-insert branch (leave the existing move_to_end in the in-place update
branch intact) so only the update path uses move_to_end on _store in
object_cache.py.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/features/reference-caching.md`:
- Around line 139-152: The fenced code block in the lifecycle diagram is missing
a language tag (MD040); update the opening triple-backtick to include a language
such as "text" (e.g. change ``` to ```text) so the diagram block is properly
annotated and linting passes; locate the diagram block in reference-caching.md
and add the language to its opening fence.

In `@src/cachekit/object_cache.py`:
- Around line 93-118: The put method allows ttl values < 1 contrary to its
docstring; add validation at the start of ObjectCache.put to enforce ttl >= 1
(raise ValueError with a clear message), mirroring the validation pattern used
in __init__ for _max_entries, so that expires_at calculation and interactions
with _store/_evict_to_make_room behave correctly for all entries.

In `@tests/critical/test_local_cache_works.py`:
- Around line 62-72: The test_invalidate_cache is flaky because it relies on
time.monotonic() changing between cached calls; replace the time-based
uniqueness with a deterministic side-effect counter similar to
test_basic_cache_hit: create a mutable counter/closure or use a
nonlocal/incremented variable inside the decorated greet function (the function
wrapped by cache.local) to produce different outputs on re-execution, call
greet("alice"), invalidate_cache("alice"), call greet("alice") again, and assert
the outputs differ; ensure you reference the decorated function name greet and
the method greet.invalidate_cache for the invalidation step.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 63950e2a-b04c-448f-b2ce-76be3a08a9b1

📥 Commits

Reviewing files that changed from the base of the PR and between 3030c47 and b64fb11.

📒 Files selected for processing (11)
  • README.md
  • docs/features/reference-caching.md
  • src/cachekit/decorators/intent.py
  • src/cachekit/decorators/local_wrapper.py
  • src/cachekit/key_generator.py
  • src/cachekit/object_cache.py
  • tests/critical/conftest.py
  • tests/critical/test_local_cache_works.py
  • tests/unit/test_key_generator_blake2b.py
  • tests/unit/test_local_wrapper.py
  • tests/unit/test_object_cache.py

Inline fixes:
- Add ttl >= 1 validation in ObjectCache.put() (docstring promised it)
- Add language tag to lifecycle diagram fenced block (MD040)
- Make test_invalidate_cache deterministic (counter, not monotonic)

Nitpick fixes:
- Broaden config= TypeError message (not just encryption)
- Add isinstance checks for ttl/max_entries types in local_wrapper
- Add "local" to generate_key() docstring's serializer_type list
- Remove redundant move_to_end on fresh insert in ObjectCache
- Assert cache state in async cache_clear test
- Escape regex in pytest.raises match strings
27Bslash6 merged commit e5759f5 into main Apr 25, 2026
32 checks passed
27Bslash6 deleted the feat/cache-local branch April 25, 2026 07:41