Conversation
Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Replace the sequential per-record insert loop (which committed after each row) with a shared cursor approach matching the sync implementation. _insert_single_record_async handles SQL building and execution on the shared cursor; a single _maybe_commit() and cache invalidation follow. Adds three tests covering the FK, non-FK IntegrityError, and generic sqlite3.Error rollback paths. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
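The shared-cursor pattern this commit describes can be sketched with stdlib `sqlite3` alone. This is a minimal illustration, not SQLiter's actual `_insert_single_record_async` / `_maybe_commit()` code; the table schema and function name here are made up for the example:

```python
import sqlite3


def bulk_insert_shared_cursor(
    conn: sqlite3.Connection, rows: list[tuple[str, int]]
) -> None:
    # One cursor shared across the whole batch; no per-record commit.
    cursor = conn.cursor()
    try:
        for name, age in rows:
            cursor.execute(
                "INSERT INTO users (name, age) VALUES (?, ?)", (name, age)
            )
    except sqlite3.Error:
        # FK violations, other IntegrityErrors and generic sqlite3 errors
        # all roll back the whole batch.
        conn.rollback()
        raise
    conn.commit()  # single commit after the loop


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
bulk_insert_shared_cursor(conn, [("alice", 30), ("bob", 25)])
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 2
```

Because nothing is committed until the loop finishes, a failure part-way through leaves the table untouched, which is exactly what the three new rollback tests exercise.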
Add lookup_cache() and store_cache() to QueryBuilder, encapsulating the cache_bypassed guard, key generation, and cache_get/set calls that were previously duplicated verbatim in both _fetch_result and _fetch_projection_result across the sync and async query paths. Both paths now call the shared helpers, removing ~30 lines of duplicate cache logic. The three public accessors (cache_bypassed, query_cache_ttl, make_cache_key) that existed solely to expose internals to the async layer are removed as they are no longer needed. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
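The shape of the two helpers can be illustrated with a small stand-in class. The method names come from the commit message; the internals below are assumptions for illustration, not SQLiter's real implementation:

```python
from typing import Any, Optional


class CachingQueryBuilder:
    """Illustrative stand-in showing the lookup_cache/store_cache split."""

    def __init__(self, *, bypass_cache: bool = False) -> None:
        self._bypass = bypass_cache
        self._cache: dict[str, Any] = {}

    def _make_cache_key(self, sql: str, params: tuple) -> str:
        return f"{sql}|{params!r}"

    def lookup_cache(self, sql: str, params: tuple = ()) -> Optional[Any]:
        # Encapsulates the bypass guard, key generation, and cache get
        # that were previously duplicated at every fetch site.
        if self._bypass:
            return None
        return self._cache.get(self._make_cache_key(sql, params))

    def store_cache(self, sql: str, result: Any, params: tuple = ()) -> None:
        if self._bypass:
            return
        self._cache[self._make_cache_key(sql, params)] = result


qb = CachingQueryBuilder()
qb.store_cache("SELECT * FROM users", [{"name": "alice"}])
print(qb.lookup_cache("SELECT * FROM users"))
```

With the guard and key logic living in the helpers, each fetch path reduces to "lookup, run query on miss, store", and the accessors that only existed to expose internals become unnecessary.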
#5 - Add optional db_context parameter to _create_instance_from_data / create_instance_from_data so callers can supply the binding directly. AsyncSqliterDB._create_instance_from_data now delegates in a single call instead of delegating then overwriting the db_context attribute.

#6 - Remove the redundant _in_transaction getter/setter property pair. in_transaction is now a single direct-read property; the two write sites in __aenter__/__aexit__ call _sync.set_in_transaction() explicitly.

Signed-off-by: Grant Ramsay <seapagan@gmail.com>
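The property simplification in #6 can be sketched in miniature. The class and attribute names below are illustrative stand-ins, not SQLiter's real classes:

```python
import asyncio


class _SyncCore:
    # Stand-in for the sync core that actually owns the flag.
    def __init__(self) -> None:
        self.in_transaction = False

    def set_in_transaction(self, *, value: bool) -> None:
        self.in_transaction = value


class AsyncDB:
    # One direct-read property; the two writes happen explicitly at the
    # transaction boundaries instead of through a setter property.
    def __init__(self) -> None:
        self._sync = _SyncCore()

    @property
    def in_transaction(self) -> bool:
        return self._sync.in_transaction

    async def __aenter__(self) -> "AsyncDB":
        self._sync.set_in_transaction(value=True)
        return self

    async def __aexit__(self, *exc_info: object) -> None:
        self._sync.set_in_transaction(value=False)


async def main() -> list[bool]:
    db = AsyncDB()
    states = [db.in_transaction]
    async with db:
        states.append(db.in_transaction)
    states.append(db.in_transaction)
    return states


print(asyncio.run(main()))  # [False, True, False]
```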
Signed-off-by: Grant Ramsay <seapagan@gmail.com>
- Improve AsyncSqliterDB and AsyncQueryBuilder class docstrings from internal implementation notes to user-facing descriptions
- Add async mention to pyproject.toml project description

Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Add a 15th TUI demo category 'Async Support' with 9 runnable demos covering: connection, context manager, CRUD, bulk insert, queries, FK lazy loading (with explicit wrong/correct patterns), FK eager loading, reverse relationships, and transactions with rollback.

Key implementation details:

- Demos run via a ThreadPoolExecutor to avoid conflicting with Textual's running event loop
- Display code is sanitised to strip infra boilerplate (_ASYNC_AVAILABLE guard, _run_async call, file cleanup) before showing to the user
- asyncio.run() is unavailable when aiosqlite is not installed; each demo returns a helpful install message instead

Also adds docs/tui-demo/async.md mirroring the existing tui-demo page style, with code snippets, What Happens sections, gotcha admonitions, and a full sync-vs-async quick-reference table. Updates mkdocs.yml nav and the tui-demo index page.

Signed-off-by: Grant Ramsay <seapagan@gmail.com>
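The ThreadPoolExecutor pattern from the first bullet can be shown in isolation. `asyncio.run()` raises `RuntimeError` when called on a thread that already has a running loop (as Textual's UI thread does), so each demo gets its own loop on a pool thread. The demo body below is a placeholder, not one of the actual TUI demos:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


async def run_demo() -> str:
    # Placeholder for a real async demo body.
    await asyncio.sleep(0)
    return "demo finished"


def run_in_worker_thread() -> str:
    # The worker thread has no running event loop, so asyncio.run() is
    # safe there even while the caller's own loop keeps running.
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, run_demo()).result()


print(run_in_worker_thread())  # demo finished
```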
Add a 'mypy and Static Type Checking' section to the async guide explaining the AsyncForeignKey typing trade-off (typed as T for eager loading ergonomics, but lazy-loaded runtime value is AsyncLazyLoader[T]) and the reverse accessor limitation (installed via setattr, so __getattribute__ returns object under strict mypy). Update the TUI async demos to show the required cast() workarounds in the FK lazy and reverse demos, with multi-line explanatory comments. Update docs/tui-demo/async.md to mirror the cast patterns with matching !!! note admonitions linking back to the guide. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
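The `cast()` workaround can be demonstrated with a synchronous stand-in. `LazyAuthorLoader` below is a made-up substitute for `AsyncLazyLoader[T]` (the real one is async and lives in SQLiter); only the typing trade-off is the point here:

```python
from typing import cast


class Author:
    def __init__(self, name: str) -> None:
        self.name = name


class LazyAuthorLoader:
    # Illustrative stand-in for AsyncLazyLoader[Author].
    def __init__(self, author: Author) -> None:
        self._author = author

    def fetch(self) -> Author:
        return self._author


class Book:
    def __init__(self) -> None:
        # Statically typed as Author for eager-loading ergonomics, but
        # the lazy-loaded runtime value is really a loader object.
        self.author: Author = cast(
            Author, LazyAuthorLoader(Author("A. Writer"))
        )


book = Book()
# mypy sees book.author as Author; cast() tells it what is really there
# at runtime before calling the loader-only fetch() method.
loader = cast(LazyAuthorLoader, book.author)
print(loader.fetch().name)  # A. Writer
```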
… icons

Remove the '⚡' icon from the Async Support demo category so all 15 categories are consistent (all use icon=""). Add a TODO item to revisit adding suitable icons to all categories in one pass.
📝 Walkthrough

This pull request adds comprehensive optional async support to SQLiter via the new `sqliter.asyncio` package.
Sequence Diagram

```mermaid
sequenceDiagram
    actor App as Application
    participant AsyncDB as AsyncSqliterDB
    participant Conn as aiosqlite.Connection
    participant SQLite as SQLite Database
    App->>AsyncDB: await create(db_file)
    AsyncDB->>Conn: establish connection
    Conn->>SQLite: open database
    SQLite-->>Conn: connection ready
    Conn-->>AsyncDB: connected
    AsyncDB-->>App: ready for operations
    App->>AsyncDB: await insert(model)
    AsyncDB->>AsyncDB: build plan (SQL, values)
    AsyncDB->>Conn: execute INSERT
    Conn->>SQLite: write record
    SQLite-->>Conn: success
    Conn-->>AsyncDB: cursor result
    AsyncDB->>AsyncDB: cache invalidation
    AsyncDB-->>App: model instance with pk
    App->>AsyncDB: await select(Model).fetch_all()
    AsyncDB->>AsyncDB: build execution plan
    AsyncDB->>Conn: execute SELECT
    Conn->>SQLite: read records
    SQLite-->>Conn: result rows
    Conn-->>AsyncDB: cursor
    AsyncDB->>AsyncDB: convert rows → models
    AsyncDB->>AsyncDB: execute prefetch (if needed)
    AsyncDB-->>App: list of models
    App->>AsyncDB: await close()
    AsyncDB->>AsyncDB: maybe_commit()
    AsyncDB->>Conn: close connection
    Conn->>SQLite: close
    SQLite-->>Conn: closed
    Conn-->>AsyncDB: closed
    AsyncDB-->>App: cleanup complete
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
Up to standards ✅

🟢 Issues

| Metric | Results |
|---|---|
| Complexity | 655 |
| Duplication | 59 |
🟢 Coverage: 100.00% diff coverage · +0.00% coverage variation

| Metric | Results |
|---|---|
| Coverage variation | ✅ +0.00% (-1.00%) |
| Diff coverage | ✅ 100.00% |

Coverage variation details:

| | Coverable lines | Covered lines | Coverage |
|---|---|---|---|
| Common ancestor commit (a27ead8) | 4869 | 4869 | 100.00% |
| Head commit (89df2d2) | 6376 (+1507) | 6376 (+1507) | 100.00% (+0.00%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: `<coverage of head commit> - <coverage of common ancestor commit>`

Diff coverage details:

| | Coverable lines | Covered lines | Diff coverage |
|---|---|---|---|
| Pull request (#149) | 1731 | 1731 | 100.00% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: `<covered lines added or modified> / <coverable lines added or modified> * 100%`
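The two formulas can be checked against the report's own numbers in a few lines of Python (the helper name is ours, not part of the tool):

```python
def coverage_pct(covered: int, coverable: int) -> float:
    # Percentage of coverable lines that are covered.
    return covered / coverable * 100


ancestor = coverage_pct(4869, 4869)  # common ancestor commit (a27ead8)
head = coverage_pct(6376, 6376)      # head commit (89df2d2)
variation = head - ancestor          # <head> - <common ancestor>
diff_cov = coverage_pct(1731, 1731)  # lines added/modified by PR #149

print(variation, diff_cov)  # 0.0 100.0
```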
Use isinstance() instead of type() for bulk_insert type checking (both sync and async), and move AsyncLazyLoader to TYPE_CHECKING block since it's only used in a string annotation. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
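The `isinstance()`-based validation this commit describes can be sketched as follows; the helper name, model classes, and error message are illustrative, not SQLiter's own code:

```python
class Model: ...
class User(Model): ...
class Order(Model): ...


def check_bulk_instances(instances: list) -> type:
    # Every element must match the class of the first element.
    if not instances:
        raise ValueError("bulk_insert requires at least one instance")
    model_class = type(instances[0])
    for inst in instances:
        if not isinstance(inst, model_class):
            raise TypeError(
                "All instances in bulk_insert must be of the same model class"
            )
    return model_class


print(check_bulk_instances([User(), User()]).__name__)  # User
try:
    check_bulk_instances([User(), Order()])
except TypeError:
    print("mixed models rejected")  # mixed models rejected
```

Note that `isinstance()` admits subclasses of the first element's class, a behaviour the review later flags as a possible table-cache mixing risk.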
Update bulk_insert mixed-model tests to expect TypeError instead of ValueError after isinstance refactor. Add full coverage for async_demos.py including execution, ImportError fallback, and guard clause paths. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Fix TestImportErrorFallback to use monkeypatch for all sys.modules cleanup, and also save/restore the parent package's async_demos attribute (importlib.import_module updates it, which patch() uses for resolution). Replace AsyncSqliterDB.get patch with full mock using SimpleNamespace return values to avoid dangling aiosqlite connections. Use real function in test_leading_blank_lines_removed instead of fragile MagicMock. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
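The save/restore dance around `importlib.import_module` can be shown without pytest. This mirrors the cleanup the commit does with `monkeypatch` (which restores automatically); the function name below is illustrative:

```python
import importlib
import json  # ensures json and json.decoder are in sys.modules
import sys


def reimport_isolated(module_name: str):
    # Re-import a module, then restore both sys.modules and the parent
    # package attribute. importlib.import_module rebinds the parent
    # attribute, which is what patch() relies on for resolution.
    saved_module = sys.modules.get(module_name)
    parent_name, _, child = module_name.rpartition(".")
    parent = sys.modules.get(parent_name) if parent_name else None
    saved_attr = getattr(parent, child, None) if parent else None
    try:
        sys.modules.pop(module_name, None)
        return importlib.import_module(module_name)
    finally:
        if saved_module is not None:
            sys.modules[module_name] = saved_module
        else:
            sys.modules.pop(module_name, None)
        if parent is not None and saved_attr is not None:
            # Put the original parent-package attribute back.
            setattr(parent, child, saved_attr)


before = sys.modules["json.decoder"]
fresh = reimport_isolated("json.decoder")
print(fresh is before, sys.modules["json.decoder"] is before)  # False True
```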
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
docs/index.md (1)
10-17: ⚠️ Potential issue | 🟡 Minor: Clarify that async support requires an optional runtime dependency.

The documentation currently advertises async APIs (line 17) whilst stating that Pydantic is the only runtime dependency (line 13). This is misleading because the optional async extra requires the additional `aiosqlite` package. Update the wording to clarify that core synchronous usage only needs Pydantic, whilst async support is optional and requires the async extra.

📝 Proposed wording

```diff
 SQLite databases using Pydantic models. The only external run-time dependency
-is Pydantic itself.
+for core synchronous usage is Pydantic itself. Async support is available via
+the optional async extra, which installs the additional async SQLite driver.
@@
-projects. It supports both synchronous and asynchronous APIs.
+projects. It supports synchronous APIs and optional asynchronous APIs.
@@
-- Optional async database, query, and ORM support
+- Optional async database, query, and ORM support via the async extra
@@
-- No external dependencies other than Pydantic
+- No required external dependencies other than Pydantic
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/index.md` around lines 10-17: Update the paragraph that mentions "Pydantic" as the only runtime dependency and the sentence that advertises "async APIs" to clarify that core synchronous usage only requires Pydantic, while async support is optional and provided via an extra that requires the additional "aiosqlite" package; specifically change the lines containing the phrases "The only external run-time dependency is Pydantic itself." and "It supports both synchronous and asynchronous APIs." to state that async support requires installing the optional async extra (e.g., "pip install sqliter[async]") and list "aiosqlite" as the additional runtime dependency for async usage.

README.md (1)
54-65: ⚠️ Potential issue | 🟡 Minor: Update the “no external dependencies” feature wording.

With optional async support now documented, line 65 reads as if optional extras do not add external packages. “No required external dependencies…” avoids contradicting the new `aiosqlite` extra.

Proposed wording tweak

```diff
-- No external dependencies other than Pydantic
+- No required external dependencies other than Pydantic
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 54-65: Update the README bullet that currently reads "No external dependencies other than Pydantic" to clarify that extras are optional; replace it with wording such as "No required external dependencies other than Pydantic (optional extras like aiosqlite may be needed for async features)" so the statement aligns with the documented optional async support and the aiosqlite extra.
🧹 Nitpick comments (5)
sqliter/orm/m2m.py (1)
1041-1048: Index creation: granularity change swallows subsequent indexes on first failure, and conn/cursor are re-acquired per iteration.

Two minor points on the new loop:

- Previously each index had its own `try/except` (per the PR summary), so a failure on the first FK index would still allow the second to be attempted. With the single outer `try/except sqlite3.Error`, the first error aborts the loop and silently skips remaining indexes. Given that creation failures are already tolerated, this is a low-severity reliability regression rather than a bug — but worth a conscious choice. Logging the swallowed exception (even at `debug`) would help diagnose missing indexes later.
- `conn = db.connect()` and `cursor = conn.cursor()` are re-acquired inside the loop for each `index_sql`. `db.connect()` returns the already-open connection so this is not incorrect, just redundant; hoisting them outside the loop (as done for the `CREATE TABLE` above) keeps the two blocks symmetric.

♻️ Suggested tidy-up

```diff
     # Create indexes on both FK columns
+    conn = db.connect()
+    cursor = conn.cursor()
     try:
         for index_sql in build_junction_index_sqls(junction_table, columns):
-            conn = db.connect()
-            cursor = conn.cursor()
             db._execute(cursor, index_sql)  # noqa: SLF001
         conn.commit()
     except sqlite3.Error:
         pass  # Non-critical: index creation failure
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@sqliter/orm/m2m.py` around lines 1041-1048: The current index-creation loop wraps all indexes in a single try/except and reacquires conn/cursor each iteration; change it to acquire conn and cursor once (hoist conn = db.connect() and cursor = conn.cursor() outside the loop) and run each index_sql inside its own try/except sqlite3.Error so one failing index doesn't stop subsequent ones; in each per-index except block log the caught exception (e.g. via process logger or db logger) including the index_sql to aid diagnosis, and continue to the next index; keep using db._execute(cursor, index_sql) and conn.commit() after each successful create.

docs/quickstart.md (1)
49-55: Consider also showing the `pip` equivalent alongside `uv`.

Readers who don't use `uv` won't have a direct command to copy; a one-line `pip install 'sqliter-py[async]'` alternative (as done in the installation page) would make this self-contained. The link to `guide/asyncio.md` is fine.

📝 Suggested addition

````diff
 ```bash
 uv add 'sqliter-py[async]'
+# or
+pip install 'sqliter-py[async]'
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/quickstart.md` around lines 49-55: Update the code block that currently shows the uv command "uv add 'sqliter-py[async]'" to include the pip equivalent on the next line; specifically, in the same fenced bash block that references "uv add 'sqliter-py[async]'", add a second line "pip install 'sqliter-py[async]'" (with a comment or separator like "# or" if you prefer) so readers who don't use uv have a direct pip install command.

.pre-commit-config.yaml (1)
29-36: LGTM — mypy dependency added for async typing.

Adding `aiosqlite` to `additional_dependencies` ensures the strict mypy hook can resolve types imported from the new `sqliter.asyncio` modules.

One optional consideration: the lockfile pins `aiosqlite==0.22.1`, but this hook installs unpinned. If you've seen type-check drift from dep bumps elsewhere, pinning here too would keep the pre-commit check reproducible with the lockfile.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.pre-commit-config.yaml around lines 29-36: The pre-commit hook lists `additional_dependencies` including `aiosqlite` unpinned which can diverge from the locked `aiosqlite==0.22.1`; update the `additional_dependencies` entry to pin `aiosqlite==0.22.1` (i.e., replace `aiosqlite` with `aiosqlite==0.22.1`) so the strict mypy hook runs with the same version as the lockfile and keeps type-checking reproducible.

tests/tui/unit/test_demos_async.py (1)
175-189: Report the function that actually failed.

Line 188 always reports `_run_async_conn`, which makes failures from the other demo extractors harder to diagnose.

Proposed fix

```diff
-    f"_demo_code({_run_async_conn.__name__}) is empty"
+    f"_demo_code({func.__name__}) is empty"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/tui/unit/test_demos_async.py` around lines 175-189: The assertion message always references _run_async_conn even when another extractor failed; update the assertion in the loop that iterates over functions (the list containing _run_async_conn, _run_async_context, _run_async_crud, etc.) to use the current loop variable's name (func.__name__) and/or include the value of code, so _demo_code(func) failures report the actual failing function; locate the loop and replace the hardcoded _run_async_conn reference in the f-string with func.__name__ (and optionally show code snippet or its length) to make failures diagnosable.

sqliter/asyncio/db.py (1)
438-447: `table_name` computed twice.

Line 438 already computes `table_name`; line 442 immediately rebinds it from the tuple returned by `_build_create_table_sql(...)`. Either drop the first assignment or destructure only `create_table_sql, fk_columns` from the helper.

♻️ Suggested tweak

```diff
-    table_name = model_class.get_table_name()
-    if force:
-        await self._execute_sql(f"DROP TABLE IF EXISTS {table_name}")
-
-    table_name, create_table_sql, fk_columns = (
-        self._sync._build_create_table_sql(  # noqa: SLF001
-            model_class,
-            exists_ok=exists_ok,
-        )
-    )
+    table_name, create_table_sql, fk_columns = (
+        self._sync._build_create_table_sql(  # noqa: SLF001
+            model_class,
+            exists_ok=exists_ok,
+        )
+    )
+    if force:
+        await self._execute_sql(f"DROP TABLE IF EXISTS {table_name}")
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@sqliter/asyncio/db.py` around lines 438-447: The local variable table_name is computed twice: once from model_class.get_table_name() and again when unpacking the tuple returned by _build_create_table_sql(model_class, exists_ok=exists_ok). Remove the redundant first assignment or change the unpacking of _build_create_table_sql to only extract create_table_sql and fk_columns (leaving table_name from the initial model_class.get_table_name()), updating any subsequent uses accordingly (refer to symbols: model_class, table_name, _build_create_table_sql, exists_ok, force, and _execute_sql).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/api-reference/async-sqliterdb.md`:
- Around line 291-292: The example dereferences the optional result of
db.get(Book, 1) without guarding for None; update the snippet around the get()
call (the Book variable and subsequent book.author.fetch() usage) to check that
book is not None (e.g., add assert book is not None or an explicit if book is
None: branch returning/handling the missing case) before calling
book.author.fetch() so the example matches the documented T | None return of
get().
In `@pyproject.toml`:
- Around line 31-35: The optional-dependencies block has a mismatch where
"extras" pins inflect==7.0.0 but "full" allows inflect>=7.0.0; update the "full"
entry in pyproject.toml's [project.optional-dependencies] so its inflect
constraint matches the "extras" pin (use inflect==7.0.0) to ensure
sqliter-py[full] and sqliter-py[extras] resolve the same pluralisation
dependency.
In `@sqliter/asyncio/__init__.py`:
- Around line 7-30: The current try/except sets _IMPORT_ERROR for any
ImportError from importing AsyncSqliterDB/AsyncQueryBuilder which masks real
import-time bugs; change the exception handling so you only capture
ModuleNotFoundError for the missing dependency "aiosqlite" and re-raise other
import problems. Concretely, in the import block that references AsyncSqliterDB
and AsyncQueryBuilder, replace the broad except ImportError as exc with an
except ModuleNotFoundError as exc that checks exc.name (or exc.args) for
"aiosqlite" and assigns _IMPORT_ERROR = exc only in that case; for any other
ImportError/ModuleNotFoundError re-raise immediately so real errors in
sqliter.asyncio.db or .query surface. Ensure __getattr__ and the _IMPORT_ERROR
symbol remain unchanged.
In `@sqliter/asyncio/db.py`:
- Around line 685-707: The code currently writes negative cache entries (None)
in the get-path when result is falsy, which can mask cross-process inserts; in
the branch where result is None (inside the method that uses
get_plan.table_name, cache_key, cache_ttl, bypass_cache and _cache_set), change
behavior to avoid storing a negative entry unless an explicit negative-cache TTL
was provided (or unless cache_ttl is explicitly set), i.e., skip calling
_cache_set with None when cache_ttl is the default/unspecified; alternatively
add a separate negative_ttl parameter and only cache None when that is non-null,
and leave a short comment referencing invalidate_table_cache to explain
cross-process invalidation limitations.
- Around line 571-582: The exception handlers perform a rollback when not
self.in_transaction but ignore self.auto_commit, which can discard a caller's
pending work; change the rollback guard to require auto_commit as well (i.e.,
only rollback when self.auto_commit and not self.in_transaction and self.conn is
not None). Implement this once as a small helper (e.g., _maybe_rollback
mirroring _maybe_commit) and call it from the except blocks in insert (the block
using insert_plan), bulk_insert, update, and delete so the same correct guard is
applied everywhere.
- Around line 527-535: The _execute_sql helper unconditionally calls await
conn.commit(), which breaks transactions started by async with db:; change it to
call the existing transaction-aware commit helper (use await
self._maybe_commit(conn) or self._maybe_commit() depending on signature) after
executing SQL so commits only occur when not inside an explicit transaction.
Locate _execute_sql (uses self.connect(), conn.cursor(),
self._execute_async(cursor, sql)) and replace the direct await conn.commit()
with the transaction-aware call, leaving exception handling (raise
SqlExecutionError(sql) from exc) intact; this preserves create_table, FK/index,
and M2M helpers behavior without flushing enclosing transactions.
- Around line 355-376: get_table_names() currently auto-closes a connection it
opened, which breaks in-memory DBs; modify the auto-close logic so that if a
connection was opened by this call (was_connected is False) you only call
self.close() afterwards when the database is not in-memory (check
self.is_memory), i.e. keep the existing connect() behavior but change the final
conditional to: close only if not was_connected and not self.is_memory; refer to
get_table_names, is_connected, connect, close, and is_memory when making the
change.
- Around line 798-822: The __aexit__ implementation currently closes the DB
connection and unconditionally clears cache/stats, which destroys in-memory DBs
and defeats cache across transactional blocks; change __aexit__ (paired with
__aenter__) to only perform transaction semantics: call await
self.conn.rollback() on error or await self.conn.commit() on success, call
self._sync.set_in_transaction(value=False), and return without calling await
self.conn.close() or setting self.conn = None; remove the unconditional
self._sync.clear_cache() and self._sync.reset_cache_stats() calls here and
instead invalidate cache only for tables modified during the transaction (e.g.
add or call a helper like _invalidate_modified_tables() that clears entries for
affected tables), leaving full connection teardown and global cache clears to
the explicit close() method and preserving behavior for memory=True and
long-lived connections.
In `@sqliter/asyncio/orm/fields.py`:
- Around line 42-46: The fetch method currently treats only None as a missing FK
and will attempt to load when _fk_id == 0; change the presence check to use
truthiness (e.g., if not self._fk_id) so that 0 is treated as an unsaved/missing
FK and sets self._cached = None and returns None; apply the same fix to the
other similar check around lines 117-119 (the other loader/resolver method
referencing _fk_id/_cached) so all relationship loading uses the repo’s
truthiness convention for pk.
- Around line 48-65: The AsyncLazyLoader code currently swallows all
SqliterError exceptions in the getter (the block using self._db.get,
self._to_model, self._fk_id and setting self._cached), making real DB failures
appear as missing relationships; remove the broad except SqliterError handler
(or if you want to log, log and re-raise) so that SqliterError propagates
instead of being converted to None, leaving the existing behavior where db.get()
returns None for a genuinely missing row.
In `@sqliter/asyncio/orm/m2m.py`:
- Around line 404-418: The add/remove/clear/set methods currently delegate to
self._manager but leave the wrapper's cached snapshot self._items stale; after
each awaited call to self._manager.add/remove/clear/set, invalidate or refresh
self._items (e.g., set self._items = None or re-fetch into self._items) so
subsequent fetch_all(), count(), or exists() on the same wrapper see current
data; update the async methods add, remove, clear, and set to refresh or clear
self._items immediately after the manager operation completes.
- Around line 304-308: The set method currently calls clear() before
validating/writing the replacement instances which can lose the original links
if add() fails; change set(self, *instances) to first validate/obtain all
replacement primary keys (ensure each instance has a saved PK or resolve it) and
then perform clear() followed by add() inside a single transaction so any
failure during add() rolls back; use the existing clear, add and set method
names and the ORM's transaction context manager to wrap the clear/add sequence
atomically.
In `@sqliter/asyncio/orm/model.py`:
- Around line 29-42: The FK cache keyed by name can become stale when the
underlying FK value changes; in the branch handling cached = cache.get(name)
(and when cached is an AsyncLazyLoader or a loaded model), compare the cached
item's primary key (for AsyncLazyLoader check cached.fk_id or for loaded model
check its PK via fk_descriptors[name].to_model) against the current fk_id, and
if they differ rebuild the cache entry (create a new AsyncLazyLoader with
instance=self, to_model=object.__getattribute__(self,
"fk_descriptors")[name].to_model, fk_id=fk_id, db_context=db_context) before
returning; also handle the case where cached is a loaded model by replacing it
with a fresh AsyncLazyLoader when PK mismatch.
- Around line 22-24: The current check only treats fk_id == None as missing, so
when fk_id == 0 an AsyncLazyLoader will be created and may query pk=0; change
the presence check after fk_id = object.__getattribute__(self, f"{name}_id") to
treat falsy values as unloaded (use a truthiness check like "if not fk_id") so
that unsaved/missing FKs (pk == 0) are handled the same as None and the
AsyncLazyLoader for the relation (where the code constructs/returns an
AsyncLazyLoader) is not created for missing FK values.
In `@sqliter/asyncio/orm/query.py`:
- Around line 117-124: The _build_query currently only checks fk_value for None,
which allows unsaved parent instances with pk==0 to run reverse queries; update
the condition in _build_query to use a truthiness check on fk_id (e.g., if not
fk_id or self._db is None: return None) so fk_value==0 is treated as missing.
Locate the _build_query method and the fk_value/fk_field usage to adjust the
guard before calling self._db.select(self.to_model).filter(...), ensuring
reverse queries are skipped for falsy PKs.
In `@sqliter/asyncio/query.py`:
- Around line 506-515: The validation in async query update should raise
InvalidUpdateError instead of RecordUpdateError: when checking primary key via
model_class.get_primary_key() (pk_field) and when validating unknown fields
using model_class.model_fields (invalid_fields), replace the raised
RecordUpdateError instances with InvalidUpdateError and preserve the same error
messages (e.g., "Cannot update the primary key '...'" and the comma-separated
unknown field names) so callers can catch validation errors the same way as the
sync QueryBuilder.update().
In `@sqliter/query/query.py`:
- Around line 2258-2261: You're mutating self._fields while building the
execution plan which changes the cache key; instead, create a local copy (e.g.,
selected_fields = list(self._fields) or tuple(self._fields)) and ensure "pk" is
appended to that local selected_fields when needed, then use selected_fields for
the join/fields variable and for any cache-key construction so self._fields
remains unchanged; this prevents the cache being written under a different key
for calls like db.select(Model, fields=["name"]).
In `@sqliter/sqliter.py`:
- Around line 1418-1428: The code currently uses isinstance(inst, model_class)
in the bulk-insert validation which permits subclasses and can mix table caches
between calls; change the check to require exact type equality (e.g., type(inst)
is model_class or type(inst) != model_class) so every element in instances has
the same concrete class as the first one; this ensures _insert_single_record()
and bulk_insert() operate on the exact same model class and table cache.
In `@sqliter/tui/demos/async_demos.py`:
- Around line 266-298: The demo functions (async main and _run_async_reverse)
create an AsyncSqliterDB instance and currently return early on None results
without calling await db.close(); wrap the core demo body after creating db in a
try/finally so that await db.close() is always executed in the finally block
(keep early returns inside the try), e.g., ensure the logic around fresh = await
db.get(Book, book.pk) and the author is None guard in _run_async_reverse are
inside the try and that the finally calls await db.close() to guarantee cleanup.
In `@tests/test_asyncio_core.py`:
- Around line 1342-1372: The test creates a dynamic model Sale without restoring
the global ModelRegistry, which can leak metadata; wrap the dynamic model
registration so you save the current ModelRegistry before creating Sale and
restore it after the test (e.g., capture the registry state, run the code that
defines Sale and uses AsyncSqliterDB, then reassign the saved registry back),
ensuring the ModelRegistry global is restored regardless of test outcome (use
try/finally around the block that defines/uses Sale).
---
Outside diff comments:
In `@docs/index.md`:
- Around line 10-17: Update the paragraph that mentions "Pydantic" as the only
runtime dependency and the sentence that advertises "async APIs" to clarify that
core synchronous usage only requires Pydantic, while async support is optional
and provided via an extra that requires the additional "aiosqlite" package;
specifically change the lines containing the phrases "The only external run-time
dependency is Pydantic itself." and "It supports both synchronous and
asynchronous APIs." to state that async support requires installing the optional
async extra (e.g., "pip install sqliter[async]") and list "aiosqlite" as the
additional runtime dependency for async usage.
In `@README.md`:
- Around line 54-65: Update the README bullet that currently reads "No external
dependencies other than Pydantic" to clarify that extras are optional; replace
it with wording such as "No required external dependencies other than Pydantic
(optional extras like aiosqlite may be needed for async features)" so the
statement aligns with the documented optional async support and the aiosqlite
extra.
---
Nitpick comments:
In @.pre-commit-config.yaml:
- Around line 29-36: The pre-commit hook lists `additional_dependencies`
including `aiosqlite` unpinned which can diverge from the locked
`aiosqlite==0.22.1`; update the `additional_dependencies` entry to pin
`aiosqlite==0.22.1` (i.e., replace `aiosqlite` with `aiosqlite==0.22.1`) so the
strict mypy hook runs with the same version as the lockfile and keeps
type-checking reproducible.
In `@docs/quickstart.md`:
- Around line 49-55: Update the code block that currently shows the uv command
"uv add 'sqliter-py[async]'" to include the pip equivalent on the next line;
specifically, in the same fenced bash block that references "uv add
'sqliter-py[async]'", add a second line "pip install 'sqliter-py[async]'" (with
a comment or separator like "# or" if you prefer) so readers who don't use uv
have a direct pip install command.
In `@sqliter/asyncio/db.py`:
- Around line 438-447: The local variable table_name is computed twice: once
from model_class.get_table_name() and again when unpacking the tuple returned by
_build_create_table_sql(model_class, exists_ok=exists_ok). Remove the redundant
first assignment or change the unpacking of _build_create_table_sql to only
extract create_table_sql and fk_columns (leaving table_name from the initial
model_class.get_table_name()), updating any subsequent uses accordingly (refer
to symbols: model_class, table_name, _build_create_table_sql, exists_ok, force,
and _execute_sql).
In `@sqliter/orm/m2m.py`:
- Around line 1041-1048: The current index-creation loop wraps all indexes in a
single try/except and reacquires conn/cursor each iteration; change it to
acquire conn and cursor once (hoist conn = db.connect() and cursor =
conn.cursor() outside the loop) and run each index_sql inside its own try/except
sqlite3.Error so one failing index doesn't stop subsequent ones; in each
per-index except block log the caught exception (e.g. via process logger or db
logger) including the index_sql to aid diagnosis, and continue to the next
index; keep using db._execute(cursor, index_sql) and conn.commit() after each
successful create.
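The restructured loop described above might look like this stand-alone `sqlite3` sketch (the real code uses `db._execute` and the project's logger; `create_indexes` and `print`-based logging are illustrative assumptions):

```python
import sqlite3

def create_indexes(conn: sqlite3.Connection, index_statements: list[str]) -> list[str]:
    """Create each index in its own try/except so one failure doesn't stop the rest."""
    cursor = conn.cursor()  # conn/cursor acquired once, hoisted out of the loop
    created = []
    for index_sql in index_statements:
        try:
            cursor.execute(index_sql)
            conn.commit()  # commit after each successful create
            created.append(index_sql)
        except sqlite3.Error as exc:
            # log the failing statement to aid diagnosis, then continue
            print(f"index creation failed for {index_sql!r}: {exc}")
    return created
```

With a deliberately broken statement in the middle, the surrounding indexes are still created, which is the behaviour the review asks for.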
In `@tests/tui/unit/test_demos_async.py`:
- Around line 175-189: The assertion message always references _run_async_conn
even when another extractor failed; update the assertion in the loop that
iterates over functions (the list containing _run_async_conn,
_run_async_context, _run_async_crud, etc.) to use the current loop variable's
name (func.__name__) and/or include the value of code, so _demo_code(func)
failures report the actual failing function; locate the loop and replace the
hardcoded _run_async_conn reference in the f-string with func.__name__ (and
optionally show code snippet or its length) to make failures diagnosable.
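A minimal illustration of that fix, using hypothetical stand-ins for the real extractors (the `_demo_code` body and function names here are assumptions, not the test suite's actual code):

```python
def _demo_code(func):
    """Hypothetical stand-in: extract demo code (here, just the docstring)."""
    return func.__doc__ or ""

def _run_async_conn():
    """async connection demo"""

def _run_async_context():
    pass  # no docstring: simulates an extractor that yields nothing

failures = []
for func in (_run_async_conn, _run_async_context):
    code = _demo_code(func)
    if not code:
        # Name the function that actually failed instead of hardcoding one,
        # and include the extracted length for diagnosis.
        failures.append(
            f"_demo_code({func.__name__}) returned no code (len={len(code)})"
        )
```

The failure message now pinpoints `_run_async_context` rather than always blaming `_run_async_conn`.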
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fb1d9fed-75c3-46d5-96a8-f25460702891
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (43)
- .markdownlint.yaml
- .markdownlintrc
- .pre-commit-config.yaml
- README.md
- TODO.md
- docs/api-reference/async-orm.md
- docs/api-reference/async-query-builder.md
- docs/api-reference/async-sqliterdb.md
- docs/api-reference/index.md
- docs/guide/asyncio.md
- docs/guide/foreign-keys/orm.md
- docs/guide/guide.md
- docs/index.md
- docs/installation.md
- docs/quickstart.md
- docs/tui-demo/async.md
- docs/tui-demo/index.md
- mkdocs.yml
- pyproject.toml
- requirements-dev.txt
- sqliter/asyncio/__init__.py
- sqliter/asyncio/db.py
- sqliter/asyncio/orm/__init__.py
- sqliter/asyncio/orm/fields.py
- sqliter/asyncio/orm/m2m.py
- sqliter/asyncio/orm/model.py
- sqliter/asyncio/orm/query.py
- sqliter/asyncio/query.py
- sqliter/orm/m2m.py
- sqliter/query/query.py
- sqliter/sqliter.py
- sqliter/tui/demos/__init__.py
- sqliter/tui/demos/async_demos.py
- tests/test_aggregates.py
- tests/test_asyncio_core.py
- tests/test_asyncio_orm.py
- tests/test_bulk_insert.py
- tests/test_indexes.py
- tests/test_prefetch_related.py
- tests/test_query.py
- tests/test_sqliter.py
- tests/test_unique.py
- tests/tui/unit/test_demos_async.py
💤 Files with no reviewable changes (1)
- .markdownlintrc
```python
book = await db.get(Book, 1)
author = await book.author.fetch()
```
Guard the optional get() result in the example.
get() is documented above as returning T | None, but this example immediately dereferences book. Add an assert book is not None or an explicit if book is None branch before accessing book.author.
Proposed fix

```diff
 book = await db.get(Book, 1)
+assert book is not None
 author = await book.author.fetch()
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
book = await db.get(Book, 1)
assert book is not None
author = await book.author.fetch()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/api-reference/async-sqliterdb.md` around lines 291 - 292, The example
dereferences the optional result of db.get(Book, 1) without guarding for None;
update the snippet around the get() call (the Book variable and subsequent
book.author.fetch() usage) to check that book is not None (e.g., add assert book
is not None or an explicit if book is None: branch returning/handling the
missing case) before calling book.author.fetch() so the example matches the
documented T | None return of get().
```diff
 [project.optional-dependencies]
 async = ["aiosqlite>=0.20.0"]
 demo = ["textual>=7.3.0"]
 extras = ["inflect==7.0.0"]
-full = ["inflect>=7.0.0", "textual>=7.3.0"]
+full = ["inflect>=7.0.0", "textual>=7.3.0", "aiosqlite>=0.20.0"]
```
Keep full aligned with the extras dependency constraint.
extras pins inflect==7.0.0, but full allows newer inflect versions, so sqliter-py[full] can resolve a different pluralisation stack than sqliter-py[extras]. If the pin is intentional, mirror it in full.
Proposed dependency alignment

```diff
-full = ["inflect>=7.0.0", "textual>=7.3.0", "aiosqlite>=0.20.0"]
+full = ["inflect==7.0.0", "textual>=7.3.0", "aiosqlite>=0.20.0"]
```

📝 Committable suggestion
```toml
[project.optional-dependencies]
async = ["aiosqlite>=0.20.0"]
demo = ["textual>=7.3.0"]
extras = ["inflect==7.0.0"]
full = ["inflect==7.0.0", "textual>=7.3.0", "aiosqlite>=0.20.0"]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pyproject.toml` around lines 31 - 35, The optional-dependencies block has a
mismatch where "extras" pins inflect==7.0.0 but "full" allows inflect>=7.0.0;
update the "full" entry in pyproject.toml's [project.optional-dependencies] so
its inflect constraint matches the "extras" pin (use inflect==7.0.0) to ensure
sqliter-py[full] and sqliter-py[extras] resolve the same pluralisation
dependency.
```python
try:
    from sqliter.asyncio.db import AsyncSqliterDB
    from sqliter.asyncio.query import AsyncQueryBuilder
except ImportError as exc:
    _IMPORT_ERROR = exc

__all__ = ["AsyncQueryBuilder", "AsyncSqliterDB"]


def __getattr__(name: str) -> object:
    """Raise a helpful dependency error for async imports."""
    if name not in __all__:
        err = f"module {__name__!r} has no attribute {name!r}"
        raise AttributeError(err)

    if _IMPORT_ERROR is None:
        return globals()[name]

    msg = (
        "aiosqlite is required for sqliter.asyncio. "
        "Install the package with `sqliter-py[async]` or "
        "`sqliter-py[full]`."
    )
    raise ImportError(msg) from _IMPORT_ERROR
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Locate broad ImportError handlers around the asyncio package imports.
# Expected after the fix: no "broad ImportError" output for sqliter/asyncio/__init__.py.
python - <<'PY'
import ast
from pathlib import Path

path = Path("sqliter/asyncio/__init__.py")
tree = ast.parse(path.read_text())
for node in ast.walk(tree):
    if not isinstance(node, ast.Try):
        continue
    guarded_import_lines = [
        child.lineno
        for child in node.body
        if isinstance(child, (ast.Import, ast.ImportFrom))
    ]
    for handler in node.handlers:
        handler_name = getattr(handler.type, "id", None) if handler.type else "bare"
        if handler_name == "ImportError":
            print(
                f"{path}:{handler.lineno}: broad ImportError around imports "
                f"at lines {guarded_import_lines}"
            )
PY
```

Repository: seapagan/sqliter-py
Length of output: 143
Catch only missing aiosqlite, not all import errors.
Line 10 catches any ImportError from the db and query imports, including genuine bugs in those modules. This masks internal errors and defers failure to __getattr__, where users see "install aiosqlite" instead of the actual problem. Catch ModuleNotFoundError instead and re-raise if it is not aiosqlite.
Suggested fix

```diff
 try:
     from sqliter.asyncio.db import AsyncSqliterDB
     from sqliter.asyncio.query import AsyncQueryBuilder
-except ImportError as exc:
+except ModuleNotFoundError as exc:
+    if exc.name != "aiosqlite":
+        raise
     _IMPORT_ERROR = exc
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/asyncio/__init__.py` around lines 7 - 30, The current try/except sets
_IMPORT_ERROR for any ImportError from importing
AsyncSqliterDB/AsyncQueryBuilder which masks real import-time bugs; change the
exception handling so you only capture ModuleNotFoundError for the missing
dependency "aiosqlite" and re-raise other import problems. Concretely, in the
import block that references AsyncSqliterDB and AsyncQueryBuilder, replace the
broad except ImportError as exc with an except ModuleNotFoundError as exc that
checks exc.name (or exc.args) for "aiosqlite" and assigns _IMPORT_ERROR = exc
only in that case; for any other ImportError/ModuleNotFoundError re-raise
immediately so real errors in sqliter.asyncio.db or .query surface. Ensure
__getattr__ and the _IMPORT_ERROR symbol remain unchanged.
```python
async def get_table_names(self) -> list[str]:
    """Return a list of all user-created table names."""
    was_connected = self.is_connected
    if not was_connected:
        await self.connect()

    if self.conn is None:
        err_msg = "Failed to establish a database connection."
        raise DatabaseConnectionError(err_msg)

    cursor = await self.conn.cursor()
    await self._execute_async(
        cursor,
        "SELECT name FROM sqlite_master WHERE type='table' "
        "AND name NOT LIKE 'sqlite_%';",
    )
    tables = [row[0] for row in await cursor.fetchall()]

    if not was_connected:
        await self.close()

    return tables
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# See whether other async helpers also use the was_connected/close pattern
rg -nP --type=py -C2 'was_connected' sqliter/asyncio
```

Repository: seapagan/sqliter-py
Length of output: 692

🏁 Script executed:

```bash
#!/bin/bash
# First, let's see the overall structure of the db.py file and understand the class
head -50 sqliter/asyncio/db.py
```

Repository: seapagan/sqliter-py
Length of output: 1201

🏁 Script executed:

```bash
#!/bin/bash
# Look for the connect() and close() methods to understand the connection management
rg -nP --type=py 'async def (connect|close)\(' sqliter/asyncio/db.py -A 10
```

Repository: seapagan/sqliter-py
Length of output: 1029

🏁 Script executed:

```bash
#!/bin/bash
# Check if there's an is_memory property and how it's used
rg -nP --type=py 'is_memory|memory' sqliter/asyncio/db.py
```

Repository: seapagan/sqliter-py
Length of output: 563

🏁 Script executed:

```bash
#!/bin/bash
# Search for other async methods that might follow similar patterns
rg -nP --type=py 'async def [a-z_]+\(self\).*->' sqliter/asyncio/db.py | head -20
```

Repository: seapagan/sqliter-py
Length of output: 437

🏁 Script executed:

```bash
#!/bin/bash
# Check the reset_database() method and other helper methods
rg -nP --type=py 'async def reset_database|async def [a-z_]+\(self\)' sqliter/asyncio/db.py -A 8 | grep -A 8 'reset_database'
```

Repository: seapagan/sqliter-py
Length of output: 450

🏁 Script executed:

```bash
#!/bin/bash
# Search for other methods that might use the was_connected pattern or similar connection management
rg -nP --type=py 'was_connected|if not was_connected|if self.is_connected' sqliter/asyncio/db.py
```

Repository: seapagan/sqliter-py
Length of output: 176

🏁 Script executed:

```bash
#!/bin/bash
# Check how db_filename is set when memory=True
rg -nP --type=py 'db_filename.*:memory:|:memory:' sqliter/asyncio
```

Repository: seapagan/sqliter-py
Length of output: 45

🏁 Script executed:

```bash
#!/bin/bash
# Look at the initialization to understand db_filename for memory mode
sed -n '40,100p' sqliter/asyncio/db.py
```

Repository: seapagan/sqliter-py
Length of output: 1931

🏁 Script executed:

```bash
#!/bin/bash
# Check the sync SqliterDB class to see how it sets db_filename for memory mode
rg -nP --type=py 'class SqliterDB|def __init__|memory|db_filename.*=' sqliter/sqliter.py | head -40
```

Repository: seapagan/sqliter-py
Length of output: 1939

🏁 Script executed:

```bash
#!/bin/bash
# Search for all async methods in the AsyncSqliterDB class
rg -nP --type=py '^ async def ' sqliter/asyncio/db.py
```

Repository: seapagan/sqliter-py
Length of output: 1077

🏁 Script executed:

```bash
#!/bin/bash
# Check if there are other async helper methods that might auto-close
sed -n '355,430p' sqliter/asyncio/db.py
```

Repository: seapagan/sqliter-py
Length of output: 2458
get_table_names() auto-close is unsafe for memory=True.
The was_connected/auto-close pattern opens a connection on demand and closes it at the end. For memory=True, each aiosqlite.connect(":memory:") creates a separate, independent in-memory database. Calling get_table_names() before any other operation will open a fresh empty DB, return an empty list, and close it; the next connect() gets a different fresh empty DB, creating data inconsistency.
Consider either (a) never auto-closing when self.is_memory, or (b) documenting that this introspection helper requires an already-open connection for in-memory mode.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/asyncio/db.py` around lines 355 - 376, get_table_names() currently
auto-closes a connection it opened, which breaks in-memory DBs; modify the
auto-close logic so that if a connection was opened by this call (was_connected
is False) you only call self.close() afterwards when the database is not
in-memory (check self.is_memory), i.e. keep the existing connect() behavior but
change the final conditional to: close only if not was_connected and not
self.is_memory; refer to get_table_names, is_connected, connect, close, and
is_memory when making the change.
```python
async def _execute_sql(self, sql: str) -> None:
    """Execute a raw SQL statement."""
    try:
        conn = await self.connect()
        cursor = await conn.cursor()
        await self._execute_async(cursor, sql)
        await conn.commit()
    except (sqlite3.Error, sqlite3.Warning) as exc:
        raise SqlExecutionError(sql) from exc
```
_execute_sql always commits — breaks nesting inside an async with db: transaction.
_execute_sql unconditionally calls await conn.commit(), and it is invoked from create_table for FK indexes, regular/unique indexes, and M2M junction tables. If a caller runs await db.create_table(Model) inside an async with db: block (which sets in_transaction=True and is expected to commit or rollback as a unit on exit), those DDL helpers will flush the enclosing transaction early. SQLite does allow DDL inside a transaction, so the early commit isn't required and leaks partially-applied state on subsequent rollback.
Gate the commit on _maybe_commit() for consistency with the other write methods:
♻️ Suggested change

```diff
 async def _execute_sql(self, sql: str) -> None:
     """Execute a raw SQL statement."""
     try:
         conn = await self.connect()
         cursor = await conn.cursor()
         await self._execute_async(cursor, sql)
-        await conn.commit()
+        await self._maybe_commit()
     except (sqlite3.Error, sqlite3.Warning) as exc:
         raise SqlExecutionError(sql) from exc
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/asyncio/db.py` around lines 527 - 535, The _execute_sql helper
unconditionally calls await conn.commit(), which breaks transactions started by
async with db:; change it to call the existing transaction-aware commit helper
(use await self._maybe_commit(conn) or self._maybe_commit() depending on
signature) after executing SQL so commits only occur when not inside an explicit
transaction. Locate _execute_sql (uses self.connect(), conn.cursor(),
self._execute_async(cursor, sql)) and replace the direct await conn.commit()
with the transaction-aware call, leaving exception handling (raise
SqlExecutionError(sql) from exc) intact; this preserves create_table, FK/index,
and M2M helpers behavior without flushing enclosing transactions.
```python
pk_field = self.model_class.get_primary_key()
if pk_field in values:
    msg = f"Cannot update the primary key '{pk_field}' via bulk update"
    raise RecordUpdateError(msg)

valid_fields = set(self.model_class.model_fields.keys())
invalid_fields = set(values.keys()) - valid_fields
if invalid_fields:
    invalid_names = ", ".join(sorted(invalid_fields))
    raise RecordUpdateError(invalid_names)
```
Use InvalidUpdateError for validation failures.
The async query API should mirror sync QueryBuilder.update(): primary-key and unknown-field validation errors are caller input errors, not database update failures. Raising RecordUpdateError here breaks catchers that already handle InvalidUpdateError for the sync API.
Proposed fix

```diff
 from sqliter.exceptions import (
     InvalidProjectionError,
+    InvalidUpdateError,
     RecordDeletionError,
     RecordFetchError,
     RecordUpdateError,
 )
@@
     pk_field = self.model_class.get_primary_key()
     if pk_field in values:
         msg = f"Cannot update the primary key '{pk_field}' via bulk update"
-        raise RecordUpdateError(msg)
+        raise InvalidUpdateError(msg)
@@
     if invalid_fields:
         invalid_names = ", ".join(sorted(invalid_fields))
-        raise RecordUpdateError(invalid_names)
+        raise InvalidUpdateError(invalid_names)
```

📝 Committable suggestion
```python
pk_field = self.model_class.get_primary_key()
if pk_field in values:
    msg = f"Cannot update the primary key '{pk_field}' via bulk update"
    raise InvalidUpdateError(msg)

valid_fields = set(self.model_class.model_fields.keys())
invalid_fields = set(values.keys()) - valid_fields
if invalid_fields:
    invalid_names = ", ".join(sorted(invalid_fields))
    raise InvalidUpdateError(invalid_names)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/asyncio/query.py` around lines 506 - 515, The validation in async
query update should raise InvalidUpdateError instead of RecordUpdateError: when
checking primary key via model_class.get_primary_key() (pk_field) and when
validating unknown fields using model_class.model_fields (invalid_fields),
replace the raised RecordUpdateError instances with InvalidUpdateError and
preserve the same error messages (e.g., "Cannot update the primary key '...'"
and the comma-separated unknown field names) so callers can catch validation
errors the same way as the sync QueryBuilder.update().
```python
elif self._fields:
    if "pk" not in self._fields:
        self._fields.append("pk")
    fields = ", ".join(
```
Don’t mutate _fields while building the execution plan.
Cache lookup happens before this plan is built, but _fields is part of the cache key. For db.select(Model, fields=["name"]), the first lookup uses ("name",) and the store uses ("name", "pk"), so the cached result is written under a different key. Normalise selected fields before cache lookup, or use a local selected_fields without mutating _fields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/query/query.py` around lines 2258 - 2261, You're mutating
self._fields while building the execution plan which changes the cache key;
instead, create a local copy (e.g., selected_fields = list(self._fields) or
tuple(self._fields)) and ensure "pk" is appended to that local selected_fields
when needed, then use selected_fields for the join/fields variable and for any
cache-key construction so self._fields remains unchanged; this prevents the
cache being written under a different key for calls like db.select(Model,
fields=["name"]).
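A minimal sketch of the non-mutating variant described above (the function name and quoting are illustrative, not the library's actual internals):

```python
def build_field_list(selected: list[str]) -> str:
    """Build the SELECT column list without mutating the caller's field list."""
    selected_fields = list(selected)  # local copy; the cache key stays stable
    if "pk" not in selected_fields:
        selected_fields.append("pk")
    return ", ".join(f'"{name}"' for name in selected_fields)

fields = ["name"]
columns = build_field_list(fields)
# `fields` is unchanged, so a cache key derived from it is identical
# on both the lookup before execution and the store afterwards.
print(fields)   # → ['name']
print(columns)  # → "name", "pk"
```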
```python
model_class = type(instances[0])
for inst in instances[1:]:
    if not isinstance(inst, model_class):
        msg = (
            "All instances must be the same model type. "
            f"Expected {model_class.__name__}, "
            f"got {type(inst).__name__}."
        )
        raise TypeError(msg)

return model_class, model_class.get_table_name()
```
Require exact model types in bulk inserts.
isinstance() allows subclasses through, but _insert_single_record() builds each insert from the concrete instance type while bulk_insert() invalidates only the first model’s table cache. That can mix table writes in one call despite the “same model type” contract. Use exact type comparison.
Proposed fix

```diff
 model_class = type(instances[0])
 for inst in instances[1:]:
-    if not isinstance(inst, model_class):
+    if type(inst) is not model_class:
         msg = (
             "All instances must be the same model type. "
             f"Expected {model_class.__name__}, "
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/sqliter.py` around lines 1418 - 1428, The code currently uses
isinstance(inst, model_class) in the bulk-insert validation which permits
subclasses and can mix table caches between calls; change the check to require
exact type equality (e.g., type(inst) is model_class or type(inst) !=
model_class) so every element in instances has the same concrete class as the
first one; this ensures _insert_single_record() and bulk_insert() operate on the
exact same model class and table cache.
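The behavioural difference between the two checks can be shown in isolation (toy classes, not the library's models):

```python
class Base:
    pass

class Sub(Base):
    pass

def all_same_isinstance(instances: list) -> bool:
    """Subclass-tolerant check: what the current code does."""
    model_class = type(instances[0])
    return all(isinstance(i, model_class) for i in instances[1:])

def all_same_exact(instances: list) -> bool:
    """Exact-type check: what the review suggests."""
    model_class = type(instances[0])
    return all(type(i) is model_class for i in instances[1:])

mixed = [Base(), Sub()]
print(all_same_isinstance(mixed))  # → True  (subclass slips through)
print(all_same_exact(mixed))       # → False (subclass rejected)
```

Since each instance's table name comes from its concrete class, only the exact-type check guarantees that every row in one `bulk_insert()` call targets the same table.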
```python
async def main() -> None:
    db = AsyncSqliterDB(memory=True)
    await db.create_table(Author)
    await db.create_table(Book)

    tolkien = await db.insert(Author(name="J.R.R. Tolkien"))
    book = await db.insert(Book(title="The Hobbit", author=tolkien))

    # Re-fetch from DB to get a fresh instance (no in-memory cache)
    fresh = await db.get(Book, book.pk)
    if fresh is None:
        return

    # WRONG — raises AttributeError in async mode:
    # fresh.author.name
    # AttributeError: Async foreign key 'name' is not loaded.
    # Use `await relation.fetch()` first.

    # CORRECT — explicitly fetch the loader, then await it
    #
    # mypy note: AsyncForeignKey is typed to return T (Author) so
    # that eager-loaded access (book.author.name) type-checks. At
    # runtime the lazy-loaded value is AsyncLazyLoader[T], not T,
    # so strict mypy requires a cast. This is a known trade-off —
    # see the async guide for a full explanation.
    loader = cast("AsyncLazyLoader[Author]", fresh.author)
    fetched_author = await loader.fetch()
    output.write(f"Book: {fresh.title}\n")
    if fetched_author is not None:
        output.write(f"Author: {fetched_author.name}\n")
    output.write("Loaded via: await book.author.fetch()\n")

    await db.close()
```
Close the async DB on early returns.
Both defensive return paths skip await db.close(), leaving the connection open when get() unexpectedly returns None. Wrap the demo body in try/finally so cleanup always runs.
Proposed pattern

```diff
 async def main() -> None:
     db = AsyncSqliterDB(memory=True)
-    await db.create_table(Author)
-    await db.create_table(Book)
+    try:
+        await db.create_table(Author)
+        await db.create_table(Book)

-    ...
-    if fresh is None:
-        return
+        ...
+        if fresh is None:
+            return

-    ...
-    await db.close()
+        ...
+    finally:
+        await db.close()
```

Apply the same pattern to _run_async_reverse() around the author is None guard.
Also applies to: 367-394
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/tui/demos/async_demos.py` around lines 266 - 298, The demo functions
(async main and _run_async_reverse) create an AsyncSqliterDB instance and
currently return early on None results without calling await db.close(); wrap
the core demo body after creating db in a try/finally so that await db.close()
is always executed in the finally block (keep early returns inside the try),
e.g., ensure the logic around fresh = await db.get(Book, book.pk) and the author
is None guard in _run_async_reverse are inside the try and that the finally
calls await db.close() to guarantee cleanup.
```python
@pytest.mark.asyncio
async def test_async_query_having_filters_grouped_results() -> None:
    """having() is available for async grouped projection queries."""

    class Sale(BaseDBModel):
        """Simple model for aggregate query tests."""

        category: str
        amount: float

        class Meta:
            """Aggregate test table metadata."""

            table_name = "sales_async"

    db = AsyncSqliterDB(memory=True)
    await db.create_table(Sale)
    await db.insert(Sale(category="books", amount=10))
    await db.insert(Sale(category="books", amount=20))
    await db.insert(Sale(category="music", amount=5))

    rows = await (
        db.select(Sale)
        .group_by("category")
        .annotate(total=func.sum("amount"))
        .having(total__gt=15)
        .fetch_dicts()
    )

    assert rows == [{"category": "books", "total": 30.0}]
    await db.close()
```
Restore ModelRegistry around the dynamic model.
This test defines Sale but does not restore the global registry, unlike the neighbouring dynamic-model tests. That can leak metadata and make later tests order-dependent.
Proposed fix

```diff
 @pytest.mark.asyncio
 async def test_async_query_having_filters_grouped_results() -> None:
     """having() is available for async grouped projection queries."""
+    state = ModelRegistry.snapshot()
+    try:

-    class Sale(BaseDBModel):
-        """Simple model for aggregate query tests."""
+        class Sale(BaseDBModel):
+            """Simple model for aggregate query tests."""

-        category: str
-        amount: float
+            category: str
+            amount: float

-        class Meta:
-            """Aggregate test table metadata."""
+            class Meta:
+                """Aggregate test table metadata."""

-            table_name = "sales_async"
+                table_name = "sales_async"

-    db = AsyncSqliterDB(memory=True)
-    await db.create_table(Sale)
-    await db.insert(Sale(category="books", amount=10))
-    await db.insert(Sale(category="books", amount=20))
-    await db.insert(Sale(category="music", amount=5))
+        db = AsyncSqliterDB(memory=True)
+        await db.create_table(Sale)
+        await db.insert(Sale(category="books", amount=10))
+        await db.insert(Sale(category="books", amount=20))
+        await db.insert(Sale(category="music", amount=5))

-    rows = await (
-        db.select(Sale)
-        .group_by("category")
-        .annotate(total=func.sum("amount"))
-        .having(total__gt=15)
-        .fetch_dicts()
-    )
+        rows = await (
+            db.select(Sale)
+            .group_by("category")
+            .annotate(total=func.sum("amount"))
+            .having(total__gt=15)
+            .fetch_dicts()
+        )

-    assert rows == [{"category": "books", "total": 30.0}]
-    await db.close()
+        assert rows == [{"category": "books", "total": 30.0}]
+        await db.close()
+    finally:
+        ModelRegistry.restore(state)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_asyncio_core.py` around lines 1342 - 1372, The test creates a
dynamic model Sale without restoring the global ModelRegistry, which can leak
metadata; wrap the dynamic model registration so you save the current
ModelRegistry before creating Sale and restore it after the test (e.g., capture
the registry state, run the code that defines Sale and uses AsyncSqliterDB, then
reassign the saved registry back), ensuring the ModelRegistry global is restored
regardless of test outcome (use try/finally around the block that defines/uses
Sale).
Summary
This branch adds first-class async support to SQLiter under `sqliter.asyncio` and `sqliter.asyncio.orm`. It includes:

- `AsyncSqliterDB` and `AsyncQueryBuilder`
- `AsyncBaseDBModel`, `AsyncForeignKey`, async reverse relationships, and async many-to-many managers
- `bulk_insert`
- transaction handling

API notes

Async ORM intentionally differs from sync ORM for lazy FK loading: instead of accessing `book.author.name` directly, the related object is fetched explicitly with `author = await book.author.fetch()`. Reverse relationships and many-to-many accessors keep the same object-oriented shape, but terminal and write operations are awaited.
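The lazy-loading contract can be sketched with a minimal stand-in (the `LazyLoader` and `Author` classes below are toys, not the library's `AsyncLazyLoader` or model types):

```python
import asyncio

class LazyLoader:
    """Toy stand-in for an async lazy FK loader: the value must be awaited."""

    def __init__(self, value):
        self._value = value

    async def fetch(self):
        # A real loader would run a database query here.
        return self._value

class Author:
    def __init__(self, name: str):
        self.name = name

async def main() -> str:
    author_relation = LazyLoader(Author("J.R.R. Tolkien"))
    # Sync-style attribute access (book.author.name) is unavailable on the
    # lazy wrapper; the related object is loaded explicitly instead:
    author = await author_relation.fetch()
    return author.name

result = asyncio.run(main())
print(result)  # → J.R.R. Tolkien
```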
Follow-up notes
A small docs note remains about current mypy limitations around dynamic async relationship typing. The runtime API is implemented and covered; the note is there to set correct expectations for static typing ergonomics.
Summary by CodeRabbit
Release Notes
New Features
- New `sqliter.asyncio` module.
- New `async` optional dependency (aiosqlite).

Documentation