docs: improve docstring coverage across event processors and models#64
dkargatzis merged 1 commit into main from
Conversation
Add comprehensive docstrings to improve project-wide docstring coverage:

**src/integrations/github/models.py** (13 docstrings added):
- Document all GraphQL response wrapper classes (ReviewConnection, IssueConnection, CommitConnection, FileConnection, etc.)
- Clarify purpose of each Pydantic model in API response structure

**src/event_processors/deployment_protection_rule.py** (5 docstrings):
- Add detailed docstring to process() method explaining the complete deployment approval/rejection workflow with error handling
- Document __init__, get_event_type, prepare_webhook_data, and prepare_api_data methods

**Event Processors** (12 docstrings total):
- deployment_status.py: Document logging-only processor
- deployment.py: Document deployment creation logging
- check_run.py: Document check_run re-evaluation flow
- deployment_review.py: Document review approval handling
- All methods: __init__, get_event_type, and process()

Total: 30 docstrings added to address ~77% coverage issue.

Addresses CodeRabbit pre-merge check warning about insufficient docstring coverage (77.27% vs 80% threshold).
🛡️ Watchflow Governance Checks
Status: ❌ 2 Violations Found

🟡 Medium Severity (2)
- Checks PR description (body) and title for a linked issue reference (e.g. #123, Fixes #123, Closes #456). Use when the rule requires issue refs in either field. PR does not reference a linked issue (e.g. #123 or closes #123 in body/title).
- Ensures PRs that modify source code also include a CHANGELOG or .changeset addition. Source code was modified without a corresponding CHANGELOG update.

💡 Reply with

Thanks for using Watchflow! It's completely free for OSS and private repositories. You can also self-host it easily.
📝 Walkthrough

Adds validation and error-handling to several event processors, improves docstrings across deployment-related processors and GraphQL models, and enhances check_run processing to ignore self-produced events and validate agent responses. Introduces a new environment validation helper in the deployment protection processor.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/event_processors/check_run.py (2)
85-91: ⚠️ Potential issue | 🟠 Major: Don't treat engine failures as a clean pass.

`RuleEngineAgent.execute()` can return `success=False` with empty `data` (for example on request validation failure). Because this method only reads `result.data` and later returns `success=(not violations)`, an unsuccessful evaluation silently passes the check run. Return a failed `ProcessingResult` when the agent call is unsuccessful or the expected payload is missing.

As per coding guidelines: "Reject unvalidated agent outputs and missing confidence gating."

Also applies to: 98-103
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/event_processors/check_run.py` around lines 85 - 91, The current logic reads result.data and infers success from violations, which lets engine failures (e.g., result.success == False or missing data) pass; update the check_run handling around self.engine_agent.execute(...) and the later block (the similar code at lines ~98-103) to validate the agent response: if result.success is False or result.data is missing/doesn't contain "violations", return a failed ProcessingResult (with an error message/context) instead of treating it as a clean pass; specifically, check result.success and existence of result.data["violations"] after RuleEngineAgent.execute (result) and only derive success from violations when the agent response is valid.
105-126: ⚠️ Potential issue | 🟠 Major: Handle dict-backed rules in the normalizer.

This converter assumes every rule exposes `description`, `enabled`, `severity`, and `event_types` attributes, but the engine request schema already accepts `Rule | dict[str, Any]`, and `src/event_processors/deployment_protection_rule.py` already normalizes both shapes. A dict-backed rule from the loader will raise `AttributeError` here.

src/event_processors/deployment_protection_rule.py (2)
191-207: ⚠️ Potential issue | 🔴 Critical: These failure paths currently auto-approve deployments.

`RuleEngineAgent.execute()` can return `success=False` with empty `data` (for example on request validation failure), so `violations` stays empty and this code falls into the approval branch. The catch-all handler also approves on any exception and then still returns `success=False`, which can retrigger a state-changing write. Check the agent result before acting, and fail closed or leave the deployment pending instead of auto-approving.

As per coding guidelines: "Reject unvalidated agent outputs and missing confidence gating" and "Fail closed for risky decisions; provide actionable remediation in error paths."

Also applies to: 213-226, 293-323
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/event_processors/deployment_protection_rule.py` around lines 191 - 207, The agent execution result is not validated before deciding to approve, causing empty or failed RuleEngineAgent.execute() outputs to auto-approve; update the handling around execute_with_timeout/engine_agent.execute so you only derive violations when analysis_result exists and analysis_result.success is True and analysis_result.data contains "evaluation_result"; otherwise treat it as a failure and fail-closed (leave the deployment pending or explicitly reject) instead of approving, and in the exception/catch-all path do not set an approval—log the error with context and return a non-approving outcome that avoids triggering state-changing writes; apply the same checks/behavior to the other similar blocks handling analysis_result (the later sections around where violations are built and returned).
33-34: ⚠️ Potential issue | 🟠 Major: Reject invalid callback targets up front.

`_is_valid_callback_url()` currently accepts any string starting with `http`, and `process()` only warns before continuing when the callback or environment is invalid. That is too weak for a callback that later drives an authenticated review request, and it can also return `success=True` while GitHub never receives an approval/rejection. Restrict this to the expected GitHub API host over HTTPS and return `success=False` when either field is missing.

As per coding guidelines: "Validate all external inputs; verify webhook signatures" and "Fail closed for risky decisions; provide actionable remediation in error paths."

Also applies to: 92-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/event_processors/deployment_protection_rule.py` around lines 33 - 34, The helper _is_valid_callback_url currently accepts any http URL; update it to only accept HTTPS URLs with the expected GitHub API host (e.g., host equals "api.github.com") and ensure url is non-empty; then modify process() in deployment_protection_rule.py to fail closed (return success=False) when the callback URL or required environment is missing/invalid instead of merely warning, referencing the _is_valid_callback_url() and process() symbols so reviewers can locate and update the validation and early-return behavior.

src/event_processors/deployment_review.py (2)
87-101: ⚠️ Potential issue | 🟠 Major: Handle dict rules in `_convert_rules_to_new_format()`.

The filtering loop explicitly keeps both rule objects and dict rules, but `_convert_rules_to_new_format()` immediately dereferences `rule.description`, `rule.enabled`, and `rule.event_types`. Any dict-backed rule will raise here before evaluation runs. Normalize both shapes here or reuse the converter pattern already present in `src/event_processors/deployment_protection_rule.py`.

Also applies to: 143-165
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/event_processors/deployment_review.py` around lines 87 - 101, The loop that collects deployment_review_rules allows dict-backed rules but later _convert_rules_to_new_format() dereferences attributes like rule.description, rule.enabled, and rule.event_types causing crashes for dicts; update the filtering code (the loop that builds deployment_review_rules) to normalize each rule into a uniform object/structure before appending (or call the existing converter used in src/event_processors/deployment_protection_rule.py) so both dict and Rule objects are converted to the same shape expected by _convert_rules_to_new_format() and downstream evaluation; ensure you handle keys "description", "enabled", and "event_types" when converting dicts and reuse the converter function/class from deployment_protection_rule.py to avoid duplicate logic.
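The normalization both comments ask for can be sketched as one small helper that coerces either shape into a plain dict. Field names follow the review (`description`, `enabled`, `severity`, `event_types`); the defaults are illustrative assumptions, not values from the project.

```python
from typing import Any


def normalize_rule(rule: Any) -> dict[str, Any]:
    """Coerce a dict-backed rule or an attribute-backed Rule object into one shape."""
    if isinstance(rule, dict):
        get = rule.get  # dict lookup with default
    else:
        def get(key: str, default: Any = None) -> Any:
            # Attribute lookup with the same signature as dict.get
            return getattr(rule, key, default)
    return {
        "description": get("description", ""),
        "enabled": get("enabled", True),
        "severity": get("severity", "medium"),
        "event_types": get("event_types", []),
    }
```

Running every loaded rule through one such helper before conversion means the downstream converter only ever sees one shape, so the `AttributeError` path disappears.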
115-121: ⚠️ Potential issue | 🟠 Major: Fail the processor when the engine call fails.

`RuleEngineAgent.execute()` can return `success=False` with empty `data` (for example on request validation failure). Because this method later returns `success=(not violations)`, an unsuccessful agent call is currently treated as a clean pass. Return a failed `ProcessingResult` when the agent call is unsuccessful or when the expected evaluation payload is missing.

As per coding guidelines: "Reject unvalidated agent outputs and missing confidence gating."

Also applies to: 128-133
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/event_processors/deployment_review.py` around lines 115 - 121, The call to RuleEngineAgent.execute (self.engine_agent.execute) may return result.success=False or missing/invalid payload, but the current code treats that as a pass by only reading violations; change the logic in the deployment_review processor to validate the agent output: first check result.success is True and that result.data is a dict containing the expected "violations" key (and any required confidence fields), and if not return a failed ProcessingResult (success=False) with an explanatory error; only proceed to compute success=(not violations) after the agent output has been validated.
ℹ️ Review info

⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3acdfc17-bfe2-4887-bb5f-0a56386d9802

📒 Files selected for processing (6)
- src/event_processors/check_run.py
- src/event_processors/deployment.py
- src/event_processors/deployment_protection_rule.py
- src/event_processors/deployment_review.py
- src/event_processors/deployment_status.py
- src/integrations/github/models.py

📜 Review details
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/guidelines.mdc)
**/*.py: Use modern typing only: dict[str, Any], list[str], str | None (no Dict, List, Optional)
GitHub/HTTP/DB calls must be async def; avoid blocking calls (time.sleep, sync HTTP) in async paths
All agent outputs and external payloads must use validated BaseModel from Pydantic
Use dataclasses for internal immutable state where appropriate
Use structured logging at boundaries with fields: operation, subject_ids, decision, latency_ms
Implement Agent pattern: single-responsibility agents with typed inputs/outputs
Use Decorator pattern for retries, metrics, caching as cross-cutting concerns
Agent outputs must include: decision, confidence (0..1), short reasoning, recommendations, strategy_used
Implement confidence policy: reject or route to human-in-the-loop when confidence < 0.5
Use minimal, step-driven prompts; provide Chain-of-Thought only for complexity > 0.7 or ambiguity > 0.6
Strip secrets/PII from agent prompts; scope tools; keep raw reasoning out of logs (store summaries only)
Cache idempotent lookups; lazy-import heavy dependencies; bound fan-out with asyncio.Semaphore
Avoid redundant LLM calls; memoize per event when safe
Use domain errors (e.g., AgentError) with error_type, message, context, timestamp, retry_count
Use exponential backoff for transient failures; circuit-break noisy integrations when needed
Fail closed for risky decisions; provide actionable remediation in error paths
Validate all external inputs; verify webhook signatures
Implement prompt-injection hardening; sanitize repository content passed to LLMs
Performance targets: Static validation ~<100ms typical, hybrid decisions sub-second when cache warm, budget LLM paths thoughtfully
Reject old typing syntax (Dict, List, Optional) in code review
Reject blocking calls in async code; reject bare except: clauses; reject swallowed errors
Reject LLM calls for trivial/deterministic checks
Reject unvalidated agent outputs and missing confidenc...
Files:
- src/event_processors/deployment_status.py
- src/event_processors/check_run.py
- src/integrations/github/models.py
- src/event_processors/deployment_review.py
- src/event_processors/deployment.py
- src/event_processors/deployment_protection_rule.py
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/event_processors/deployment_protection_rule.py (1)
332-336: ⚠️ Potential issue | 🟠 Major: Inconsistent fail-open behavior in exception handler contradicts fail-closed principle.
The agent failure validation (lines 207-238) correctly rejects deployments when the agent fails. However, this exception handler approves deployments as a fallback when other exceptions occur, which contradicts the "fail closed for risky decisions" guideline.
This creates an inconsistent security posture: agent timeout → rejection, but other exceptions (network errors, parsing failures, etc.) → approval.
Consider whether fail-closed behavior should be consistent across all failure modes, or document the explicit rationale for this divergence.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/event_processors/deployment_protection_rule.py` around lines 332 - 336, The current exception handler uses _is_valid_callback_url and _is_valid_environment to call _approve_deployment as a fallback, which creates inconsistent fail-open behavior; change the handler to follow the fail-closed principle by replacing the fallback approval with a rejection path (e.g., call a rejecting method or log and return a denied status) so all unexpected exceptions result in deployment rejection, or if you intentionally want a different policy, add an explicit comment and a documented justification next to the exception block referencing _approve_deployment and the exception handler to make the divergence explicit.
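The fail-closed shape the comment recommends can be sketched as a boundary handler that rejects (or leaves pending) on any unexpected error. This is an illustration only: `decide_protection_rule` and the result dict are hypothetical, and the real processor would call its own approve/reject helpers.

```python
import logging
from collections.abc import Awaitable, Callable
from typing import Any

logger = logging.getLogger(__name__)


async def decide_protection_rule(
    evaluate: Callable[[], Awaitable[dict[str, Any]]],
) -> dict[str, Any]:
    """Fail closed: any unexpected evaluation error rejects instead of approving."""
    try:
        return await evaluate()
    except Exception as exc:  # boundary handler; error is logged with context
        logger.error(
            "deployment_protection_error",
            extra={"operation": "deployment_protection", "error": str(exc)},
        )
        # Reject (or leave pending) rather than approve on an unknown failure.
        return {
            "state": "rejected",
            "comment": "Automated evaluation failed; manual review required.",
        }
```

With this shape, a timeout, a network error, and a parsing failure all land in the same non-approving branch, so the security posture is consistent across failure modes.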
ℹ️ Review info

⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: cb5a89de-5441-424e-b13b-d09ebd840f16

📒 Files selected for processing (3)
- src/event_processors/check_run.py
- src/event_processors/deployment_protection_rule.py
- src/event_processors/deployment_review.py

🚧 Files skipped from review as they are similar to previous changes (1)
- src/event_processors/check_run.py
📜 Review details
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/guidelines.mdc)
Files:
- src/event_processors/deployment_protection_rule.py
- src/event_processors/deployment_review.py
🔇 Additional comments (4)

src/event_processors/deployment_review.py (1)

28-38: LGTM! The docstring provides clear documentation for the `process` method, including Args and Returns sections that describe the expected input and output.

src/event_processors/deployment_protection_rule.py (3)
207-238: LGTM! The agent response validation correctly checks for `"evaluation_result"` in the agent data, matching the actual agent return structure from `src/agents/engine_agent/agent.py`. The fail-closed behavior (rejecting deployment on agent failure) aligns with the coding guidelines.

449-451: LGTM! The use of `getattr(condition, "parameters", {})` safely handles cases where `BaseCondition` objects may not have a `parameters` attribute. This is the correct defensive pattern that should also be applied in `deployment_review.py`.

47-85: LGTM! Excellent comprehensive docstring that documents the workflow, arguments, return values, side effects, and error handling. This significantly improves code maintainability.
```python
# Validate agent response before acting on data
if not result.success or not result.data or "violations" not in result.data:
    logger.error(
        "deployment_review_agent_failure",
        extra={
            "operation": "deployment_review",
            "agent_success": result.success,
            "has_data": bool(result.data),
            "has_violations": "violations" in result.data if result.data else False,
        },
    )
    return ProcessingResult(
        success=False,
        violations=[],
        api_calls_made=1,
        processing_time_ms=int((time.time() - start_time) * 1000),
        error="Agent execution failed or returned invalid response",
    )

violations = result.data.get("violations", [])
```
Critical: Agent response validation checks wrong field, will reject all valid responses.

The agent returns violations inside `data["evaluation_result"]` (see src/agents/engine_agent/agent.py:171-181), not at the top-level `data["violations"]`. This validation will incorrectly fail ALL valid agent responses.

Line 140 also extracts violations from the wrong path, which would return an empty list even if validation passed.
🐛 Proposed fix to check the correct field path

```diff
 # Validate agent response before acting on data
-if not result.success or not result.data or "violations" not in result.data:
+if not result.success or not result.data or "evaluation_result" not in result.data:
     logger.error(
         "deployment_review_agent_failure",
         extra={
             "operation": "deployment_review",
             "agent_success": result.success,
             "has_data": bool(result.data),
-            "has_violations": "violations" in result.data if result.data else False,
+            "has_evaluation_result": "evaluation_result" in result.data if result.data else False,
         },
     )
     return ProcessingResult(
         success=False,
         violations=[],
         api_calls_made=1,
         processing_time_ms=int((time.time() - start_time) * 1000),
         error="Agent execution failed or returned invalid response",
     )
-violations = result.data.get("violations", [])
+eval_result = result.data["evaluation_result"]
+violations = [
+    v.model_dump(mode="json") if hasattr(v, "model_dump") else v
+    for v in getattr(eval_result, "violations", [])
+]
```
+ ]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/event_processors/deployment_review.py` around lines 121 - 140, The
validation in deployment_review.py is checking for violations at
result.data["violations"] but the agent places them under
result.data["evaluation_result"]["violations"]; update the conditional that
gates failure (the block using logger.error and returning ProcessingResult) to
check for result.data and "evaluation_result" in result.data and "violations" in
result.data["evaluation_result"], and then change the extraction line
(violations = ...) to read violations = result.data.get("evaluation_result",
{}).get("violations", []) so the code uses the correct nested path; keep the
same logging/ProcessingResult behavior but reflect the corrected field checks
and retrieval.
```python
# If no parameters field, try to extract from conditions (backward compatibility)
if not rule_dict["parameters"] and hasattr(rule, "conditions"):
    for condition in rule.conditions:
        rule_dict["parameters"].update(condition.parameters)
```
Potential AttributeError accessing `condition.parameters` on BaseCondition objects.

The backward compatibility code at line 190 calls `condition.parameters` without verifying the attribute exists. Per src/rules/models.py, `BaseCondition` objects may not have a `parameters` attribute; this could raise AttributeError at runtime.

Additionally, per src/rules/loaders/github_loader.py:100-145, `Rule.parameters` is populated directly from YAML, so this fallback should never actually be needed for properly loaded rules.
🛡️ Proposed fix to guard attribute access

```diff
 # If no parameters field, try to extract from conditions (backward compatibility)
 if not rule_dict["parameters"] and hasattr(rule, "conditions"):
     for condition in rule.conditions:
-        rule_dict["parameters"].update(condition.parameters)
+        if hasattr(condition, "parameters"):
+            rule_dict["parameters"].update(condition.parameters)
```
+ rule_dict["parameters"].update(condition.parameters)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/event_processors/deployment_review.py` around lines 187 - 190, The
backward-compatibility loop in deployment_review.py that does "for condition in
rule.conditions: rule_dict['parameters'].update(condition.parameters)" can raise
AttributeError because BaseCondition may not have a parameters attribute; update
the loop to guard access (e.g., use hasattr or getattr(condition, 'parameters',
None)) and only update rule_dict['parameters'] when the attribute exists and is
a mapping; reference rule.conditions, condition.parameters, and Rule.parameters
(loaded by github_loader) so the guard is applied where the fallback merges
parameters from condition objects.
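The guarded merge can also be written with `getattr` plus a mapping check, which covers both a missing attribute and a non-dict value. A standalone sketch with an illustrative helper name:

```python
from collections.abc import Mapping
from typing import Any


def merge_condition_parameters(conditions: list[Any]) -> dict[str, Any]:
    """Merge per-condition parameters, skipping conditions without a mapping attribute."""
    merged: dict[str, Any] = {}
    for condition in conditions:
        params = getattr(condition, "parameters", None)
        if isinstance(params, Mapping):  # skip missing, None, or non-mapping values
            merged.update(params)
    return merged
```

Compared with a bare `hasattr` guard, the `isinstance(..., Mapping)` check also tolerates a condition whose `parameters` attribute exists but is `None`.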
f86a80e to 8fb03be
Summary by CodeRabbit
New Features
Bug Fixes
Documentation