[core] Improve, test and document the DSL #35
Conversation
Actionable comments posted: 4
🧹 Nitpick comments (9)
projects/core/dsl/runtime.py (2)
144-158: Comment scope — this block also runs on the success path.

On the success path, `task_index == len(file_tasks)`, so `pending == []` and the whole block becomes a no-op — correct behavior, but the leading comment "After a failure, skip pending non-`@always` tasks; still run pending `@always` tasks" suggests it's failure-only. A small tweak would make the intent obvious to readers tracking control flow:

📝 Doc-only tweak

```diff
-    # After a failure, skip pending non-@always tasks; still run pending `@always` tasks
+    # On failure: skip pending non-@always tasks, still run pending `@always` tasks.
+    # On success: task_index == len(file_tasks), so `pending` is empty and this is a no-op.
     pending = file_tasks[task_index + 1 :] if task_index < len(file_tasks) else []
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/runtime.py` around lines 144 - 158, The leading comment is misleading because this block also executes on the success path (when task_index == len(file_tasks) it becomes a no-op); update the comment above the pending/always_pending logic to reflect the true intent — e.g., say that for any remaining tasks after the current index we skip non-@always tasks and still run `@always` tasks (and note it will be a no-op when there are no pending tasks) — so change the comment near variables task_index, file_tasks, pending, always_pending and the check on current_task_info.get("always_execute") to make the behavior explicit.
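The pending/`@always` split discussed above can be sketched in isolation. A minimal illustration with hypothetical task-info dicts — the `file_tasks`, `task_index`, and `always_execute` names are borrowed from the review, not from the actual runtime.py:

```python
def split_pending(file_tasks, task_index):
    """Return (skipped, always_to_run) for tasks after the current index.

    On success, task_index == len(file_tasks), so pending is empty
    and both lists come back empty (the no-op case from the comment).
    """
    pending = file_tasks[task_index + 1:] if task_index < len(file_tasks) else []
    always_pending = [t for t in pending if t.get("always_execute")]
    skipped = [t for t in pending if not t.get("always_execute")]
    return skipped, always_pending

tasks = [
    {"name": "build"},
    {"name": "test"},                             # suppose this one fails (index 1)
    {"name": "report"},
    {"name": "cleanup", "always_execute": True},  # an @always task
]

# Failure at index 1: "report" is skipped, "cleanup" still runs.
skipped, always_to_run = split_pending(tasks, 1)
print([t["name"] for t in skipped])        # ['report']
print([t["name"] for t in always_to_run])  # ['cleanup']

# Success path: nothing pending, nothing to do.
print(split_pending(tasks, len(tasks)))    # ([], [])
```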
135-142: Redundant exception classes in tuple.

`Exception` already catches `TaskExecutionError`, `ConditionError`, and `RetryFailure`. The explicit list is purely informational and ends up being dead code from the interpreter's point of view. Either drop them, or drop the bare `Exception` if the intent was to document the expected set (and add a separate fallback).

♻️ Minimal cleanup

```diff
-    except (TaskExecutionError, ConditionError, RetryFailure, Exception) as e:
+    except Exception as e:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/runtime.py` around lines 135 - 142, The except clause currently lists TaskExecutionError, ConditionError, RetryFailure and Exception together which is redundant; split into two handlers: one that explicitly catches (TaskExecutionError, ConditionError, RetryFailure) as e and performs the existing logging, log_completion_banner call and sets execution_error, and a separate fallback except Exception as e that does the same fallback handling so the specific expected exceptions remain documented while still preserving a generic catch-all; refer to the existing symbols logger.fatal, log_completion_banner and execution_error when making the changes.

projects/core/tests/conftest.py (1)
9-24: autouse=True binds every test to DSL setup — narrow when the suite grows.

This fixture currently changes cwd to `FORGE_HOME`, sets `ARTIFACT_DIR`, and calls `env.init()` for every test collected under `projects/core/tests`. That is fine today because the suite is DSL-focused, but once non-DSL tests are added they will pay this cost and be forced to run with cwd pinned to `FORGE_HOME`. Consider gating via a marker (e.g. `@pytest.mark.dsl`) or splitting into a non-autouse fixture you opt into, once the suite diversifies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/tests/conftest.py` around lines 9 - 24, The _dsl_isolation fixture is currently autouse and forces all tests to run with cwd set to FORGE_HOME and ARTIFACT_DIR set, which is undesirable for non-DSL tests; change it from autouse=True to a non-autouse fixture (remove autouse) and provide an alternate marker-based approach so only tests that need DSL setup opt in (e.g., add a pytest config to register a "dsl" marker and create a helper fixture or require tests to use `@pytest.mark.dsl` to invoke _dsl_isolation), while keeping the existing behavior inside _dsl_isolation (monkeypatch.setenv("ARTIFACT_DIR", ...), monkeypatch.chdir(env.FORGE_HOME), env.init(), reset_script_manager()) and ensuring teardown still calls reset_script_manager().

.github/workflows/test_toolbox_dsl.yml (1)
11-13: Concurrency group key is fragile for non-PR events.

`github.event.pull_request.number` is `null` for `push` and `workflow_dispatch`, so the group collapses to `${{ github.workflow }}-${{ github.ref }}` — usually fine, but if both a push on `main` and a manually dispatched run on `main` coexist, the latter will cancel the former. Not a blocker; consider including `github.run_id` as a fallback for `workflow_dispatch`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test_toolbox_dsl.yml around lines 11 - 13, Update the concurrency group key so it falls back to github.run_id when github.event.pull_request.number is null (e.g., for workflow_dispatch or push) instead of collapsing to just github.ref; change the group expression at the concurrency block to prefer github.event.pull_request.number, then github.run_id, then github.ref so manual dispatches don’t cancel unrelated runs while keeping cancel-in-progress: true unchanged.

projects/core/dsl/task.py (3)
51-73: Function parameters silently overridden by func._retry_config.

`_execute_with_retry` accepts `attempts`, `delay`, `backoff`, `retry_on_exceptions` as explicit parameters, but Lines 69-73 then overwrite all of them from `getattr(func, "_retry_config", {})`. Since `retry` always populates every key of `_retry_config` (Lines 329-334), the `.get(..., <param>)` fallback is effectively never used — the passed-in arguments are dead. This is confusing, and if `runtime.py` ever passes values that differ from what's attached to `func` (e.g., future call sites, dynamic overrides), those values will be silently ignored.

Pick one source of truth: either drop the per-call parameters and have `_execute_with_retry` read exclusively from `func._retry_config`, or drop the `getattr` lookup and trust the parameters. The current hybrid hides bugs.

♻️ Suggested simplification (trust parameters passed by runtime)

```diff
 def _execute_with_retry(func, attempts, delay, backoff, retry_on_exceptions, *args, **kwargs):
     ...
-    retry_config = getattr(func, "_retry_config", {})
-    retry_attempts = retry_config.get("attempts", attempts)
-    retry_delay = retry_config.get("delay", delay)
-    retry_backoff = retry_config.get("backoff", backoff)
-    retry_on_exc = retry_config.get("retry_on_exceptions", retry_on_exceptions)
-
-    current_delay = retry_delay
+    current_delay = delay
+    retry_attempts = attempts
+    retry_backoff = backoff
+    retry_on_exc = retry_on_exceptions
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/task.py` around lines 51 - 73, The function _execute_with_retry currently overwrites the explicit parameters (attempts, delay, backoff, retry_on_exceptions) with values from func._retry_config, making caller-passed values ignored; fix this by removing the getattr(func, "_retry_config", {}) lookup and the subsequent assignment to retry_attempts/retry_delay/retry_backoff/retry_on_exc so the function uses the parameters passed into _execute_with_retry directly; also remove or adjust any code that relies on reading _retry_config inside _execute_with_retry (leave the decorator/ retry that sets func._retry_config untouched) and ensure callers (e.g., runtime) pass the desired retry values into _execute_with_retry instead of relying on func._retry_config.
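The "trust the parameters" option above can be sketched as a self-contained retry executor. This is a hedged sketch, not the project's actual code: `execute_with_retry` and `RetryFailure` echo the names in the review, and the trailing raise mirrors the zero-attempts corner case discussed further down:

```python
import time

class RetryFailure(Exception):
    pass

def execute_with_retry(func, attempts, delay, backoff, retry_on_exceptions):
    # Single source of truth: the values the caller passed in.
    current_delay = delay
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except retry_on_exceptions as exc:
            if attempt == attempts:
                raise RetryFailure(
                    f"{func.__name__} failed after {attempts} attempts"
                ) from exc
            time.sleep(current_delay)
            current_delay *= backoff
    # Only reachable when attempts < 1 (the loop body never ran).
    raise RetryFailure(f"{func.__name__} was never attempted (attempts={attempts})")

calls = []

def flaky():
    # Fails twice, then succeeds on the third call.
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("transient")
    return "ok"

print(execute_with_retry(flaky, attempts=5, delay=0, backoff=2,
                         retry_on_exceptions=(ValueError,)))  # ok
```

With one obvious place the values come from, a future call site passing different retry settings cannot be silently overridden by whatever happens to be attached to the function.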
143-201: task_only simple-decorator path now requires the exact parameter name func.

The new gate `params[0].name == "func"` cleanly disambiguates `when(lambda: ...)` from a bare `@decorator` application, which is a real fix. Worth noting for future maintainers: any future simple decorator wrapped with `@task_only` that doesn't name its sole parameter `func` will silently fall through to the factory path and break at apply-time in a non-obvious way. The docstring hints at this, but a brief inline assertion (or an explicit error on "looks like a simple decorator but param isn't named `func`") would make the contract harder to violate accidentally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/task.py` around lines 143 - 201, The task_only wrapper currently treats a single-parameter decorator as a simple decorator only if that parameter is literally named "func", which can silently misroute valid simple decorators that use a different parameter name; update task_only to detect the one-parameter case and raise a clear error if params[0].name != "func" (including decorator_func.__name__ in the message) so maintainers get an explicit failure rather than an obscure factory-path break; reference the task_only wrapper and _ensure_is_task when adding this assertion/error.
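The disambiguation heuristic under discussion can be illustrated with a minimal, hypothetical `task_only`: it routes on the sole parameter's name, so `always` takes the simple-decorator path while `when(lambda: ...)` takes the factory path. All names here are a sketch of the idea, not the project's actual implementation:

```python
import functools
import inspect

def _ensure_is_task(obj, name):
    # Stand-in for the real validation; here we only check callability.
    if not callable(obj):
        raise TypeError(f"@{name} must be applied to a task (got {obj!r})")

def task_only(decorator_func):
    # The heuristic: a sole parameter literally named `func` marks a simple decorator.
    params = list(inspect.signature(decorator_func).parameters.values())
    simple = len(params) == 1 and params[0].name == "func"

    @functools.wraps(decorator_func)
    def wrapper(*args, **kwargs):
        if simple:
            # Bare @decorator application: args[0] is the task itself.
            _ensure_is_task(args[0], decorator_func.__name__)
            return decorator_func(args[0])
        # Factory call like when(lambda: ...): build and return the real decorator.
        inner = decorator_func(*args, **kwargs)

        def apply(func):
            _ensure_is_task(func, decorator_func.__name__)
            return inner(func)

        return apply
    return wrapper

@task_only
def always(func):            # sole parameter named `func` -> simple-decorator path
    func._always = True
    return func

@task_only
def when(condition):         # parameter named `condition` -> factory path
    def deco(func):
        func._condition = condition
        return func
    return deco

@always
def cleanup():
    return "cleaned"

@when(lambda: True)          # the lambda is routed to the factory, not mistaken for a task
def deploy():
    return "deployed"

print(cleanup._always, deploy._condition())  # True True
```

The sketch also shows why a mis-named parameter (`fn` instead of `func`) would silently take the factory branch: nothing in the signature distinguishes it from a genuine factory like `when`.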
107-140: Minor: use bare raise and note the unreachable fallback.

Two small nits in the new exception-retry path:

- Line 114: `raise exc` re-raises but anchors the traceback to this line; a bare `raise` inside the `except` block preserves the original traceback more cleanly.
- Lines 137-140: this trailing `raise RetryFailure` is unreachable whenever `retry_attempts >= 1` — every branch of the loop body either `return`s, raises on falsy exhaustion (Lines 100-102), or raises on exception exhaustion (Lines 119-122). It is only reached when `retry_attempts == 0`, which `@retry(attempts=0)` currently accepts silently. Consider either validating `attempts >= 1` at decoration time or keeping the fallback with a comment that it guards the zero-attempts corner case.

♻️ Proposed small cleanup

```diff
     except Exception as exc:
         if not retry_on_exc:
             logger.error(f"==> TASK EXCEPTION: {func.__name__} failed with exception")
             logger.info("")
-            raise exc
+            raise
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/task.py` around lines 107 - 140, Replace the anchored re-raise "raise exc" inside the except block with a bare "raise" to preserve the original traceback (affecting the except that checks retry_on_exc and references func.__name__). Also prevent the unreachable trailing fallback by validating the retry count early: in the retry decorator/wrapper (the code path that sets retry_attempts / accepts `@retry`(attempts=...)), assert or raise a ValueError if retry_attempts < 1, or alternatively keep the final raise RetryFailure as an explicit guard and add a comment noting it only runs when retry_attempts == 0; reference the symbols retry_attempts, retry_on_exc, RetryFailure, and the wrapper handling func to locate where to apply these changes.

projects/core/tests/test_dsl_toolbox.py (2)
15-18: Exercise the public RetryFailure export.

The PR exports `RetryFailure` from `projects.core.dsl`, but this test imports it from the internal `projects.core.dsl.task` module, so a broken public export would not be caught.

♻️ Proposed import update

```diff
-from projects.core.dsl import always, execute_tasks, retry, task, when
+from projects.core.dsl import RetryFailure, always, execute_tasks, retry, task, when
 from projects.core.dsl.runtime import TaskExecutionError
 from projects.core.dsl.script_manager import reset_script_manager
-from projects.core.dsl.task import RetryFailure
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/tests/test_dsl_toolbox.py` around lines 15 - 18, The test imports RetryFailure from the internal module projects.core.dsl.task instead of exercising the public export; update the import in test_dsl_toolbox.py to import RetryFailure from the public package (projects.core.dsl) alongside always, execute_tasks, retry, task, when so the test will fail if the public export is missing or broken.
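To see why testing against the public surface matters, here is a toy re-export built from synthetic modules; `toypkg` and `toypkg._task` are invented for illustration only:

```python
import sys
import types

# Fake internal module defining the symbol.
internal = types.ModuleType("toypkg._task")

class RetryFailure(Exception):
    """Stand-in for the real exception."""

internal.RetryFailure = RetryFailure

# Fake package __init__ re-exporting it; this re-export is what the test should exercise.
public = types.ModuleType("toypkg")
public.RetryFailure = internal.RetryFailure
sys.modules["toypkg"] = public
sys.modules["toypkg._task"] = internal

# A test written against the public name catches a broken re-export;
# importing from toypkg._task directly would pass even if toypkg lost the symbol.
from toypkg import RetryFailure as PublicRetryFailure
from toypkg._task import RetryFailure as InternalRetryFailure
assert PublicRetryFailure is InternalRetryFailure
print("public export OK")
```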
186-198: Assert the original exception remains chained under RetryFailure.

`_execute_with_retry` raises `RetryFailure` from the last exception on exhaustion, but this test only checks the outer cause. Add the nested cause assertion so that a regression is caught.

🧪 Proposed assertion strengthening

```diff
     with pytest.raises(TaskExecutionError) as ei:
         execute_tasks(locals())
-    assert isinstance(ei.value.__cause__, RetryFailure)
+    retry_failure = ei.value.__cause__
+    assert isinstance(retry_failure, RetryFailure)
+    assert isinstance(retry_failure.__cause__, ValueError)
+    assert str(retry_failure.__cause__) == "nope"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/tests/test_dsl_toolbox.py` around lines 186 - 198, The test test_retry_on_exceptions_exhausted needs to assert that the original exception is preserved under the RetryFailure chain: after execute_tasks raises TaskExecutionError and you assert its __cause__ is a RetryFailure, also assert that ei.value.__cause__.__cause__ (the nested cause from _execute_with_retry / RetryFailure) is the original ValueError (or isinstance(..., ValueError)) so the test fails if the original exception is not chained correctly.
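The nested-cause assertion relies on standard `raise ... from` chaining: each `from` link is recorded as `__cause__`, so the original error stays reachable two hops down. A self-contained illustration, with exception names borrowed from the review:

```python
class RetryFailure(Exception):
    pass

class TaskExecutionError(Exception):
    pass

def run():
    try:
        try:
            raise ValueError("nope")                      # the original failure
        except ValueError as exc:
            raise RetryFailure("retries exhausted") from exc
    except RetryFailure as rf:
        raise TaskExecutionError("task failed") from rf

try:
    run()
except TaskExecutionError as e:
    # Outer cause: the RetryFailure; nested cause: the original ValueError.
    assert isinstance(e.__cause__, RetryFailure)
    assert isinstance(e.__cause__.__cause__, ValueError)
    assert str(e.__cause__.__cause__) == "nope"
    print("cause chain intact")
```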
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@projects/core/dsl/runtime.py`:
- Around line 144-168: The warning messages are being emitted to the root logger
instead of the DSL file logger, so replace calls to logging.warning(...) with
the module-level logger.warning(...) (the same logger used elsewhere from .log)
so the messages are captured in the per-run task.log; specifically update the
"Executing the `@always` tasks …" warning in the pending-task handling block
(where _execute_single_task and execution_error are used) and the "Path
{co_filename} isn't relative ..." warning in the error-wrapping path to use
logger.warning; no other behavior changes are needed since
_setup_execution_logging attaches the file handler to the DSL logger.
- Around line 162-168: The current handler in the always-task loop stores only
the first always_exc into execution_error and discards further exceptions;
change this to accumulate all always-task exceptions into an ExceptionGroup
(e.g., collect them into a list and, when raising, wrap them with
ExceptionGroup) instead of overwriting execution_error, update the logging to
still log each failure (logger.error) as now, and when preparing to re-raise at
the end of the task-run logic (where execution_error is currently raised), raise
an ExceptionGroup that includes any previously stored execution_error plus all
collected always-task exceptions so callers receive all cleanup failures; look
for the variables execution_error, always_exc and the block that currently
raises execution_error and replace that raise with raising an ExceptionGroup
built from the collected exceptions.
In `@projects/foreign_testing/orchestration/cli.py`:
- Around line 10-12: Remove the unused and duplicate imports: delete the unused
"sys" import and remove the duplicate "types" and "pathlib" import lines so only
the original imports remain; specifically, ensure there is a single import
statement for "types" and a single for "pathlib" (remove the extra occurrences
you added on lines importing sys, types, pathlib) so the module no longer
imports sys and does not duplicate types/pathlib.
In `@pyproject.toml`:
- Line 99: The pyproject.toml change narrows pytest discovery by setting
testpaths = ["projects/core/tests"], which will omit tests under
projects/llm_d/orchestration and projects/skeleton/orchestration; update the
testpaths value (e.g., include "projects/llm_d/orchestration" and
"projects/skeleton/orchestration" or broader roots like "projects") or add a
comment/documentation entry near the testpaths key explaining why those
subprojects are intentionally excluded so test discovery behavior is explicit
and reproducible.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fb48fffb-be5e-4e73-88de-f1c218bd6fcd
📒 Files selected for processing (18)
- .github/workflows/test_toolbox_dsl.yml
- docs/toolbox/dsl.md
- projects/core/dsl/__init__.py
- projects/core/dsl/runtime.py
- projects/core/dsl/task.py
- projects/core/launcher/README.md
- projects/core/tests/conftest.py
- projects/core/tests/test_dsl_toolbox.py
- projects/foreign_testing/orchestration/cli.py
- pyproject.toml
- specs/008-toolbox-dsl/checklists/requirements.md
- specs/008-toolbox-dsl/contracts/toolbox_dsl_authoring.md
- specs/008-toolbox-dsl/data-model.md
- specs/008-toolbox-dsl/plan.md
- specs/008-toolbox-dsl/quickstart.md
- specs/008-toolbox-dsl/research.md
- specs/008-toolbox-dsl/spec.md
- specs/008-toolbox-dsl/tasks.md
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
projects/core/dsl/task.py (1)
174-188: ⚠️ Potential issue | 🟠 Major

The simple-decorator branch is not dead code; `always` actively uses it, but the brittleness risk remains.

The new `params[0].name == "func"` heuristic correctly disambiguates `when(lambda: ...)` from a simple `@decorator(func)` application on a lambda. However, the original claim that "`@task_only` is currently applied only to factories `when` and `retry`; `always` doesn't use it" is incorrect — `always` (line 155) is decorated with `@task_only` and is a simple decorator with `func` as its parameter, so it successfully uses the simple-decorator path.

That said, the brittleness concern is real: any simple decorator (or future variant of `always`) whose wrapped-callable parameter is not literally named `func` (e.g. `fn`, `function`) would silently fall through to the factory branch and break at runtime. The naming invariant should either be enforced with a clear error message or documented explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/task.py` around lines 174 - 188, The simple-decorator branch currently only accepts a single-parameter decorator when that parameter is literally named "func", which is brittle (used by always decorated with `@task_only`); change behavior to detect a simple decorator attempt more robustly and raise a clear error if the parameter name is not "func". Concretely, in the branch that checks len(params) == 1 and callable(args[0]) (refer to params[0].name and decorator_func.__name__), if the parameter name is not "func" fail fast with a TypeError that references decorator_func.__name__ and instructs that simple decorators must name their callable parameter "func" (or call the decorator as a factory), rather than silently falling through; keep the _ensure_is_task(func, decorator_func.__name__) and return decorator_func(func) behavior when the name is "func".
🧹 Nitpick comments (2)
projects/core/ci_entrypoint/run_ci.py (1)
480-482: Use the module-scoped logger for consistency.

`logging.exception(...)` goes through the root logger, unlike the rest of this file which uses the module `logger`. Minor nit:

```diff
-    logging.exception("Unexpected exception")
+    logger.exception("Unexpected exception")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 480 - 482, The except block currently calls logging.exception("Unexpected exception") which logs to the root logger; replace that call with the module-scoped logger (logger.exception("Unexpected exception")) so the message goes through the same logger used elsewhere in this file (ensure the module-level variable logger is imported/defined and used in the except Exception handler).

projects/core/dsl/task.py (1)
109-139: Minor polish in the exception branch.

Two small things worth considering in `_execute_with_retry`:

- Line 113 uses `raise exc`; prefer the bare `raise` inside the `except` block so the traceback is preserved verbatim without re-binding.
- The trailing `raise RetryFailure(...)` at lines 136-139 is only reachable when `retry_attempts <= 0` (the `for` loop body never runs). In that misconfiguration the message "(exceptions retried until exhausted)" is misleading since nothing was actually retried. Consider either validating `attempts >= 1` at decorator time or tweaking the message for that case.

Proposed tweak

```diff
     except Exception as exc:
         if not retry_on_exc:
             logger.error(f"==> TASK EXCEPTION: {func.__name__} failed with exception")
             logger.info("")
-            raise exc
+            raise
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/task.py` around lines 109 - 139, In _execute_with_retry: replace the re-raised exception "raise exc" with a bare "raise" inside the except block so the original traceback is preserved (locate the re-raise in the except where retry_on_exc is false in the _execute_with_retry function). Also handle the unreachable/misleading final RetryFailure: either validate retry_attempts >= 1 at decorator/configuration time and raise a clear configuration error earlier, or change the final RetryFailure message produced at the end of _execute_with_retry to reflect that no attempts were made (e.g., indicate retry_attempts was <= 0) so the message is accurate when the for-loop never ran.
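The `raise` vs `raise exc` distinction called out in these comments is easy to demonstrate: re-raising the bound exception records an extra traceback entry anchored at the re-raise line, while a bare `raise` leaves the original traceback untouched:

```python
import traceback

def anchored():
    try:
        1 / 0
    except ZeroDivisionError as exc:
        raise exc          # traceback gains an entry for this line

def bare():
    try:
        1 / 0
    except ZeroDivisionError:
        raise              # original traceback preserved as-is

def frames(fn):
    # Capture the traceback entries an outside caller would see.
    try:
        fn()
    except ZeroDivisionError as e:
        return traceback.extract_tb(e.__traceback__)

# The anchored version carries one more frame entry (the `raise exc` line)
# than the bare version, which points straight at the original `1 / 0`.
print(len(frames(anchored)), len(frames(bare)))
```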
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@projects/cluster/toolbox/rebuild_image/main.py`:
- Line 23: Replace the module-level logger creation so it uses the DSL-named
logger instead of module name: change the logger = logging.getLogger(__name__)
instance to logging.getLogger("DSL") so CLI-facing info messages emitted by this
module (the logger variable used throughout main.py) are handled by the DSL
infrastructure (the logger configured in execute_tasks()). No other behavior
changes needed—just switch the logger reference to "DSL" so INFO output appears
consistently like in task.py/shell.py/runner.py/script_manager.py.
In `@projects/core/ci_entrypoint/github/pr_args.py`:
- Around line 524-526: The error log call currently double-prefixes the message
("ERROR: ERROR: ..."); update the except block by removing the manual "ERROR: "
prefix so the logger uses the configured level prefix only—replace the call to
logger.error(f"ERROR: {e}") with logger.error(str(e)) (or logger.exception(e) if
a traceback is desired) and keep the existing sys.exit(1).
- Line 465: The module/function docstring was corrupted by a blanket
print→logger.info change—replace the malformed single-line text "Reads
environment variables and logger.infos the parsed configuration to stdout." with
a proper triple-quoted docstring that describes the function's behavior (e.g.,
"Reads environment variables and prints the parsed configuration to stdout." or
similar), ensuring the docstring belongs to the correct function/class in
pr_args.py and is a literal string (not code invoking logger.info); keep the
original semantic wording and formatting so the docstring is not executed as a
logging call.
- Around line 469-473: The --help-directives informational output is being sent
via logger.info and can be dropped when pr_args.py is run standalone because
setup_logging() isn't called; change the user-facing informational emissions in
pr_args.py (the block that calls get_supported_directives() and
get_directive_prefixes() and the other banner/info lines emitted by main()) to
use print(...) to write to stdout instead of logger.info(...), leaving logger
usage intact for internal/parsed flows (e.g., when invoked via
prepare_ci.py/parse_pr_arguments()) so standalone main() prints reliably.
In `@projects/fournos_launcher/orchestration/cli.py`:
- Around line 72-74: The code currently logs raw user-provided data (args_list
and extra_overrides) via logger.info; change these logger calls so they do not
print full values — instead log safe summaries such as the number of args
(len(args_list)) or the set of override keys (e.g., extra_overrides.keys()) and
keep the existing config.project.set_config calls unchanged; update the
logger.info calls referencing args_list and extra_overrides to log only counts
or keys and a brief context message.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 29aaf8c2-b06e-434a-9531-db216d65fcb8
📒 Files selected for processing (15)
- projects/cluster/toolbox/rebuild_image/main.py
- projects/core/ci_entrypoint/github/pr_args.py
- projects/core/ci_entrypoint/prepare_ci.py
- projects/core/ci_entrypoint/run_ci.py
- projects/core/dsl/log.py
- projects/core/dsl/runtime.py
- projects/core/dsl/shell.py
- projects/core/dsl/task.py
- projects/core/library/config.py
- projects/core/library/run.py
- projects/core/notifications/github/api.py
- projects/core/notifications/send.py
- projects/core/notifications/slack/api.py
- projects/fournos_launcher/orchestration/cli.py
- projects/skeleton/orchestration/prepare_skeleton.py
💤 Files with no reviewable changes (2)
- projects/core/dsl/shell.py
- projects/core/dsl/log.py
✅ Files skipped from review due to trivial changes (6)
- projects/skeleton/orchestration/prepare_skeleton.py
- projects/core/ci_entrypoint/prepare_ci.py
- projects/core/notifications/slack/api.py
- projects/core/library/config.py
- projects/core/notifications/github/api.py
- projects/core/notifications/send.py
🚧 Files skipped from review as they are similar to previous changes (1)
- projects/core/dsl/runtime.py
d7b36e5 to 5bf8e85 (Compare)
/test fournos skeleton

🔴 Test of 'fournos_launcher submit' failed after 00 hours 08 minutes 52 seconds 🔴 • Link to the test results. • No reports index generated... Test configuration:

/test fournos skeleton

🟢 Test of 'fournos_launcher submit' succeeded after 00 hours 00 minutes 27 seconds 🟢 • Link to the test results. • No reports index generated... Test configuration:
5bf8e85 to 7b9ba1a (Compare)
/test fournos llm_d

🟢 Test of 'fournos_launcher submit' succeeded after 00 hours 00 minutes 28 seconds 🟢 • Link to the test results. • No reports index generated... Test configuration:

/test fournos llm_d

🟢 Test of 'fournos_launcher submit' succeeded after 00 hours 09 minutes 30 seconds 🟢 • Link to the test results. • No reports index generated... Test configuration:
I'll merge this one 👍🏻
Summary by CodeRabbit

- New Features
- Documentation
- Tests
- Chores
- Refactor