Conversation
🚨 Watchflow Rule Violations Detected

🟠 High Severity: When a PR modifies paths that have owners defined in CODEOWNERS, the corresponding code owners must be added as reviewers.

This comment was automatically generated by Watchflow.
📝 Walkthrough

Rewrites several PR rule definitions.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/rules/acknowledgment.py (1)
41-51: Consider case-insensitive matching to harden rule ID resolution against message format drift.

The current implementation uses case-sensitive substring matching, which works correctly with today's controlled message formats (as shown in tests). However, if message generation evolves or varies, mappings may silently fail to resolve. The suggested improvement adds defensive robustness:
🔧 Suggested hardening
```diff
 def map_violation_text_to_rule_id(violation_text: str) -> RuleID | None:
+    normalized_text = violation_text.lower()
     for key, rule_id in VIOLATION_TEXT_TO_RULE_MAPPING.items():
-        if key in violation_text:
+        if key.lower() in normalized_text:
             return rule_id
     return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/rules/acknowledgment.py` around lines 41 - 51, Normalize both the incoming message and the mapping keys to a consistent case (e.g., lower()) before matching so lookups become case-insensitive; update the code that resolves message -> RuleID to call .lower() on the message and either build a normalized mapping where each key is key.lower() for the existing string keys (such as "approvals, requires", "Missing required labels", "does not match required pattern", etc.) or compare using message_lower in key.lower() checks, ensuring you still return the corresponding RuleID constants (e.g., RuleID.MIN_PR_APPROVALS, RuleID.REQUIRED_LABELS, RuleID.PR_TITLE_PATTERN) when a match is found.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.watchflow/rules.yaml:
- Around line 23-29: Update the rule description string so it accurately
reflects what the rule enforces: mention that only the PR title is validated
against the title_pattern parameter and the PR body is checked for a minimum
length via min_description_length, and remove or reword the misleading phrase
"must align diff"; reference the rule's parameters title_pattern and
min_description_length and keep the description concise and accurate about
enforcement scope.
- Line 28: The title_pattern currently accepts prefixes without the required
colon; update the regex for the title_pattern key to enforce Conventional
Commits (type, optional scope, colon, space, and description). Replace the value
of title_pattern with a pattern like
^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\([\w\/\-\s]+\))?:\s.+$
so the rule (title_pattern) requires a colon and a non-empty description after
an optional scope.
In `@src/rules/conditions/filesystem.py`:
- Around line 244-246: The code incorrectly uses an "or" chain to derive
max_lines (treating 0 as falsy) and duplicates this logic in evaluate and
validate; add a static helper (e.g., Filesystem._resolve_max_lines(parameters))
that checks parameter keys in the intended precedence (check "max_lines", then
"max_pr_loc", then "max_changed_lines") using explicit "in" membership tests and
returns the exact value (including 0) or None/default, then replace the
duplicated inline expression in both evaluate and validate with calls to this
helper to preserve precedence and handle explicit 0 correctly.
---
Nitpick comments:
In `@src/rules/acknowledgment.py`:
- Around line 41-51: Normalize both the incoming message and the mapping keys to
a consistent case (e.g., lower()) before matching so lookups become
case-insensitive; update the code that resolves message -> RuleID to call
.lower() on the message and either build a normalized mapping where each key is
key.lower() for the existing string keys (such as "approvals, requires",
"Missing required labels", "does not match required pattern", etc.) or compare
using message_lower in key.lower() checks, ensuring you still return the
corresponding RuleID constants (e.g., RuleID.MIN_PR_APPROVALS,
RuleID.REQUIRED_LABELS, RuleID.PR_TITLE_PATTERN) when a match is found.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- .watchflow/rules.yaml
- src/rules/acknowledgment.py
- src/rules/conditions/filesystem.py
- src/rules/loaders/github_loader.py
- tests/unit/rules/test_acknowledgment.py
💤 Files with no reviewable changes (1)
- src/rules/loaders/github_loader.py
📜 Review details
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/guidelines.mdc)
**/*.py: Use modern typing only: `dict[str, Any]`, `list[str]`, `str | None` (no `Dict`, `List`, `Optional`)
GitHub/HTTP/DB calls must be `async def`; avoid blocking calls (time.sleep, sync HTTP) in async paths
All agent outputs and external payloads must use validated `BaseModel` from Pydantic
Use dataclasses for internal immutable state where appropriate
Use structured logging at boundaries with fields: `operation`, `subject_ids`, `decision`, `latency_ms`
Implement Agent pattern: single-responsibility agents with typed inputs/outputs
Use Decorator pattern for retries, metrics, caching as cross-cutting concerns
Agent outputs must include: `decision`, `confidence` (0..1), short `reasoning`, `recommendations`, `strategy_used`
Implement confidence policy: reject or route to human-in-the-loop when confidence < 0.5
Use minimal, step-driven prompts; provide Chain-of-Thought only for complexity > 0.7 or ambiguity > 0.6
Strip secrets/PII from agent prompts; scope tools; keep raw reasoning out of logs (store summaries only)
Cache idempotent lookups; lazy-import heavy dependencies; bound fan-out with `asyncio.Semaphore`
Avoid redundant LLM calls; memoize per event when safe
Use domain errors (e.g., `AgentError`) with `error_type`, `message`, `context`, `timestamp`, `retry_count`
Use exponential backoff for transient failures; circuit-break noisy integrations when needed
Fail closed for risky decisions; provide actionable remediation in error paths
Validate all external inputs; verify webhook signatures
Implement prompt-injection hardening; sanitize repository content passed to LLMs
Performance targets: static validation ~<100ms typical, hybrid decisions sub-second when cache warm, budget LLM paths thoughtfully
Reject old typing syntax (`Dict`, `List`, `Optional`) in code review
Reject blocking calls in async code; reject bare `except:` clauses; reject swallowed errors
Reject LLM calls for trivial/deterministic checks
Reject unvalidated agent outputs and missing confidenc...
Files:
- tests/unit/rules/test_acknowledgment.py
- src/rules/conditions/filesystem.py
- src/rules/acknowledgment.py
tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/guidelines.mdc)
tests/**/*.py: Write unit tests for deterministic rule evaluation (pass/warn/block), model validation, and error paths
Write integration tests for webhook parsing, idempotency, multi-agent coordination, and state persistence
Use `pytest.mark.asyncio` for async tests; avoid live network calls; freeze time and seed randomness
Write regression tests for every bug fix; keep CI coverage thresholds green
Files:
tests/unit/rules/test_acknowledgment.py
🧬 Code graph analysis (1)
tests/unit/rules/test_acknowledgment.py (1)
src/rules/acknowledgment.py (1)
RuleID (20-36)
🔇 Additional comments (1)
tests/unit/rules/test_acknowledgment.py (1)
143-155: Good test alignment with updated violation phrasing.

These updates keep mapping and parser expectations in sync across single/multi-violation scenarios.
Also applies to: 178-179, 196-216, 233-244
```yaml
- description: "PR titles must follow a pattern (e.g. feat:, fix:, docs:) and descriptions must align diff."
  enabled: true
  severity: "high"
  severity: "medium"
  event_types: ["pull_request"]
  parameters:
    require_code_owner_reviewers: true

- description: "No direct pushes to main branch - all changes must go through PRs"
  enabled: true
  severity: "critical"
  event_types: ["push"]
  parameters:
    no_force_push: true
    title_pattern: "^feat|^fix|^docs|^style|^refactor|^test|^chore|^perf|^ci|^build|^revert"
    min_description_length: 50
```
Rule description currently overstates enforcement scope.
The text says descriptions “must align diff,” but configured parameters only enforce title_pattern and min_description_length.
📝 Suggested wording tweak
```diff
- - description: "PR titles must follow a pattern (e.g. feat:, fix:, docs:) and descriptions must align diff."
+ - description: "PR titles must follow a pattern (e.g. feat:, fix:, docs:) and descriptions must meet minimum detail requirements."
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- description: "PR titles must follow a pattern (e.g. feat:, fix:, docs:) and descriptions must meet minimum detail requirements."
  enabled: true
  severity: "medium"
  event_types: ["pull_request"]
  parameters:
    title_pattern: "^feat|^fix|^docs|^style|^refactor|^test|^chore|^perf|^ci|^build|^revert"
    min_description_length: 50
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.watchflow/rules.yaml around lines 23 - 29, Update the rule description
string so it accurately reflects what the rule enforces: mention that only the
PR title is validated against the title_pattern parameter and the PR body is
checked for a minimum length via min_description_length, and remove or reword
the misleading phrase "must align diff"; reference the rule's parameters
title_pattern and min_description_length and keep the description concise and
accurate about enforcement scope.
```yaml
event_types: ["push"]
parameters:
  no_force_push: true
  title_pattern: "^feat|^fix|^docs|^style|^refactor|^test|^chore|^perf|^ci|^build|^revert"
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n .watchflow/rules.yaml
```

Repository: warestack/watchflow
Length of output: 1258
title_pattern does not enforce the colon required by Conventional Commits standard.
Line 23 description shows examples with colons (feat:, fix:, docs:), but the pattern allows any string starting with those prefixes. Update the pattern to require the colon and proper format.
Suggested pattern update
```diff
- title_pattern: "^feat|^fix|^docs|^style|^refactor|^test|^chore|^perf|^ci|^build|^revert"
+ title_pattern: "^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\\([^)]+\\))?:\\s.+$"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
title_pattern: "^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\\([^)]+\\))?:\\s.+$"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.watchflow/rules.yaml at line 28, The title_pattern currently accepts
prefixes without the required colon; update the regex for the title_pattern key
to enforce Conventional Commits (type, optional scope, colon, space, and
description). Replace the value of title_pattern with a pattern like
^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\([\w\/\-\s]+\))?:\s.+$
so the rule (title_pattern) requires a colon and a non-empty description after
an optional scope.
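The difference between the two patterns is easy to check directly. This sketch uses the exact regexes quoted in the review; only the sample titles are invented:

```python
# Compare the original prefix-only pattern against the suggested
# Conventional Commits pattern from the review above.
import re

old = re.compile(r"^feat|^fix|^docs|^style|^refactor|^test|^chore|^perf|^ci|^build|^revert")
new = re.compile(r"^(feat|fix|docs|style|refactor|test|chore|perf|ci|build|revert)(\([\w\/\-\s]+\))?:\s.+$")

# The old pattern accepts any title that merely starts with a type prefix,
# with no colon or description required:
assert old.search("feature flags galore") is not None   # false positive
assert new.match("feature flags galore") is None        # correctly rejected

# Both well-formed Conventional Commits titles pass the new pattern:
assert new.match("feat(api): add rate limiting") is not None
assert new.match("fix: correct off-by-one") is not None
```

The new pattern groups the type alternatives, allows an optional parenthesized scope, and requires the colon, a space, and a non-empty description.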
```python
max_lines = (
    parameters.get("max_lines") or parameters.get("max_pr_loc") or parameters.get("max_changed_lines") or 0
)
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Read the relevant sections of the file
echo "=== Lines 228-250 (evaluate method) ==="
sed -n '228,250p' src/rules/conditions/filesystem.py
echo ""
echo "=== Lines 265-280 (validate method) ==="
sed -n '265,280p' src/rules/conditions/filesystem.py
```

Repository: warestack/watchflow
Length of output: 1664
🏁 Script executed:
```shell
#!/bin/bash
# Check the full structure around these lines to understand the methods better
wc -l src/rules/conditions/filesystem.py
```

Repository: warestack/watchflow
Length of output: 101
🏁 Script executed:
```shell
#!/bin/bash
# Search for usages of the parameters (max_lines, max_pr_loc, max_changed_lines) to understand intent
rg "(max_lines|max_pr_loc|max_changed_lines)" src/rules/conditions/filesystem.py -B 2 -A 2
```

Repository: warestack/watchflow
Length of output: 2196
🏁 Script executed:
```shell
#!/bin/bash
# Check if there are tests that use these parameters with explicit 0
find . -name "*.py" -type f | xargs rg "max_lines.*0|max_pr_loc.*0|max_changed_lines.*0" -l
```

Repository: warestack/watchflow
Length of output: 148
🏁 Script executed:
```shell
#!/bin/bash
# Check the test file to understand the expected behavior
echo "=== Test file content for max_pr_loc ==="
rg "MaxPrLocCondition|max_lines|max_pr_loc|max_changed_lines" tests/unit/rules/conditions/test_filesystem.py -B 1 -A 3
```

Repository: warestack/watchflow
Length of output: 3063
🏁 Script executed:
```shell
#!/bin/bash
# Look for any documentation or configuration examples
find . -name "*.md" -o -name "*.yaml" -o -name "*.yml" | xargs rg "max_lines|max_pr_loc|max_changed_lines" 2>/dev/null || echo "No matches in docs"
```

Repository: warestack/watchflow
Length of output: 2511
🏁 Script executed:
```shell
#!/bin/bash
# Check if there's any explicit 0 usage in test fixtures or examples
rg "max_lines.*0|max_pr_loc.*0|max_changed_lines.*0" tests/ -B 3 -A 3
```

Repository: warestack/watchflow
Length of output: 5105
Extract max_lines alias resolution to a static method to fix precedence logic and eliminate duplication.
The current `or` chain treats 0 as missing, causing an explicit disable value to be silently overridden if multiple aliases coexist in the parameters dict. The test `test_evaluate_missing_max_lines_returns_empty` confirms `max_lines: 0` is intended to disable the check. This pattern is duplicated across the `evaluate` and `validate` methods.
Refactor to fix precedence
```diff
+    @staticmethod
+    def _resolve_max_lines(parameters: dict[str, Any]) -> int:
+        for key in ("max_lines", "max_pr_loc", "max_changed_lines"):
+            if key in parameters and parameters[key] is not None:
+                return int(parameters[key])
+        return 0
+
     async def evaluate(self, context: Any) -> list[Violation]:
-        max_lines = (
-            parameters.get("max_lines") or parameters.get("max_pr_loc") or parameters.get("max_changed_lines") or 0
-        )
+        max_lines = self._resolve_max_lines(parameters)

     async def validate(self, parameters: dict[str, Any], event: dict[str, Any]) -> bool:
-        max_lines = (
-            parameters.get("max_lines") or parameters.get("max_pr_loc") or parameters.get("max_changed_lines") or 0
-        )
+        max_lines = self._resolve_max_lines(parameters)
```
+ max_lines = self._resolve_max_lines(parameters)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/rules/conditions/filesystem.py` around lines 244 - 246, The code
incorrectly uses an "or" chain to derive max_lines (treating 0 as falsy) and
duplicates this logic in evaluate and validate; add a static helper (e.g.,
Filesystem._resolve_max_lines(parameters)) that checks parameter keys in the
intended precedence (check "max_lines", then "max_pr_loc", then
"max_changed_lines") using explicit "in" membership tests and returns the exact
value (including 0) or None/default, then replace the duplicated inline
expression in both evaluate and validate with calls to this helper to preserve
precedence and handle explicit 0 correctly.
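As a standalone sketch (the helper is shown here as a plain function rather than the suggested `Filesystem` static method), the precedence fix behaves like this:

```python
# Sketch of the suggested _resolve_max_lines helper, following the
# review's description; not verified against the actual source.
from typing import Any


def resolve_max_lines(parameters: dict[str, Any]) -> int:
    """Return the first alias present, preserving an explicit 0."""
    for key in ("max_lines", "max_pr_loc", "max_changed_lines"):
        # Explicit membership test: a value of 0 is kept, not skipped.
        if key in parameters and parameters[key] is not None:
            return int(parameters[key])
    return 0


# An explicit 0 on the highest-precedence alias is no longer overridden:
assert resolve_max_lines({"max_lines": 0, "max_pr_loc": 500}) == 0
# The old `or` chain would have produced 500 for the same input:
assert ({"max_lines": 0}.get("max_lines") or {"max_pr_loc": 500}.get("max_pr_loc") or 0) == 500
```

The key difference is using `in` membership tests instead of chained `or`, so falsy-but-valid values survive alias resolution.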
…correctly display violation messages in PR comments
🚨 Watchflow Rule Violations Detected

🟠 High Severity: When a PR modifies paths that have owners defined in CODEOWNERS, the corresponding code owners must be added as reviewers.

This comment was automatically generated by Watchflow.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/presentation/github_formatter.py (1)
172-178: Consider extracting shared violation formatting logic.

This block duplicates the formatting logic from `format_check_run_output` (lines 108-114). A small helper function could reduce duplication.

♻️ Optional: Extract shared formatting

```python
def _format_violation_entry(violation: Violation) -> str:
    """Format a single violation entry."""
    lines = [
        f"### {violation.rule_description or 'Unknown Rule'}\n",
        f"{violation.message}\n",
        f"Rule validation failed with severity: **{violation.severity}**\n",
    ]
    if violation.how_to_fix:
        lines.append(f"**How to fix:** {violation.how_to_fix}\n")
    lines.append("\n")
    return "".join(lines)
```

Then use in both functions:

```python
text += _format_violation_entry(violation)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/presentation/github_formatter.py` around lines 172 - 178, The violation-entry formatting is duplicated between the loop in the shown block and format_check_run_output; extract the shared logic into a small helper such as _format_violation_entry(violation: Violation) that builds and returns the string for a single Violation (including rule_description fallback, message, severity line, optional how_to_fix, and trailing newline), then replace the duplicated concatenation in both the current loop and format_check_run_output with appending the helper's return value (e.g., comment += _format_violation_entry(violation)).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/event_processors/pull_request/test_enricher.py`:
- Line 68: The violation text incorrectly uses "1 approvals"; update the
pluralization so the message uses "approval" when the count equals 1 and
"approvals" otherwise. Locate the code that composes the violation line (the
place that sets the "body" string in the PR watcher/enricher logic or helper
that formats lines like "PR has {n} approvals, requires {m}") and change it to
choose singular for n==1, then update the test expectation in
tests/unit/event_processors/pull_request/test_enricher.py to assert "PR has 1
approval, requires 2".
---
Nitpick comments:
In `@src/presentation/github_formatter.py`:
- Around line 172-178: The violation-entry formatting is duplicated between the
loop in the shown block and format_check_run_output; extract the shared logic
into a small helper such as _format_violation_entry(violation: Violation) that
builds and returns the string for a single Violation (including rule_description
fallback, message, severity line, optional how_to_fix, and trailing newline),
then replace the duplicated concatenation in both the current loop and
format_check_run_output with appending the helper's return value (e.g., comment
+= _format_violation_entry(violation)).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/presentation/github_formatter.py
- src/rules/conditions/access_control.py
- tests/unit/event_processors/pull_request/test_enricher.py
- tests/unit/presentation/test_github_formatter.py
📜 Review details
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/guidelines.mdc)
Files:
- tests/unit/presentation/test_github_formatter.py
- src/presentation/github_formatter.py
- tests/unit/event_processors/pull_request/test_enricher.py
- src/rules/conditions/access_control.py
tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/guidelines.mdc)
Files:
- tests/unit/presentation/test_github_formatter.py
- tests/unit/event_processors/pull_request/test_enricher.py
🔇 Additional comments (3)
src/presentation/github_formatter.py (1)
109-114: LGTM!

The formatting changes correctly structure violations with a dedicated header and message block. The markdown hierarchy (`## Severity` → `### Rule Description`) is semantically appropriate.
tests/unit/presentation/test_github_formatter.py (1)
22-24: LGTM!

Test assertions correctly updated to match the new markdown header format (`### Rule X`). The assertion order properly reflects severity grouping (Critical → High).

src/rules/conditions/access_control.py (1)
301-305: Add test coverage and clarify code owner policy for PR author scenario.

The code treats the PR author as satisfying code owner review requirements (lines 301-305). While the recent fix commit message states "exclude PR author," the implementation adds the author to `requested_logins`, meaning code owners who authored a PR will not appear in the `missing` list.

This behavior lacks test coverage: no tests validate what happens when a code owner authors a PR modifying their owned paths. The misleading comment ("inherently a reviewer") obscures the actual policy: whether author+code owner auto-satisfies the review requirement or should still require independent review.
Clarify the intent:
- Add test case(s) covering author as code owner scenario
- Update the comment to reflect policy explicitly (e.g., "Author satisfies code owner requirement for paths they own" or "Author excluded from code owner requirement")
- Confirm this aligns with your governance policy (especially if "four eyes" review is required)
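A regression test for the author-as-code-owner case could start from a reduced model of the policy. Everything below is a hypothetical stand-in; `resolve_missing_code_owners` is not the project's API, it only mirrors the behavior the review describes around lines 301-305:

```python
# Hypothetical sketch: the code-owner policy reduced to a pure function.
# This is NOT the actual access_control.py API, just an illustration of
# the behavior described in the review.
def resolve_missing_code_owners(
    code_owners: set[str], requested_reviewers: set[str], author: str
) -> set[str]:
    # Current policy per the review: the author counts as satisfying the
    # code-owner requirement for paths they own.
    requested_logins = requested_reviewers | {author}
    return code_owners - requested_logins


# Author owns the path and no one else was requested: under the current
# policy, no owners are reported missing.
assert resolve_missing_code_owners({"alice"}, set(), "alice") == set()
# A strict "four eyes" policy would instead expect {"alice"} above.
assert resolve_missing_code_owners({"alice", "bob"}, set(), "alice") == {"bob"}
```

Pinning one of these two outcomes in a real test would make the governance choice explicit instead of implicit.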
```diff
 mock_github_client.get_issue_comments.return_value = [
     {
-        "body": "🚨 Watchflow Rule Violations Detected\n\n**Reason:** valid reason\n\n---\nThe following violations have been overridden:\n• **Rule** - Pull request does not have the minimum required approvals\n",
+        "body": "🚨 Watchflow Rule Violations Detected\n\n**Reason:** valid reason\n\n---\nThe following violations have been overridden:\n• PR has 1 approvals, requires 2\n",
```
Grammatical issue in violation text: "1 approvals" should be "1 approval".
The test expectation contains "PR has 1 approvals, requires 2" which has incorrect pluralization. If this reflects the actual production output, the source should be fixed to use proper grammar (e.g., "PR has 1 approval, requires 2" or dynamic pluralization).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/event_processors/pull_request/test_enricher.py` at line 68, The
violation text incorrectly uses "1 approvals"; update the pluralization so the
message uses "approval" when the count equals 1 and "approvals" otherwise.
Locate the code that composes the violation line (the place that sets the "body"
string in the PR watcher/enricher logic or helper that formats lines like "PR
has {n} approvals, requires {m}") and change it to choose singular for n==1,
then update the test expectation in
tests/unit/event_processors/pull_request/test_enricher.py to assert "PR has 1
approval, requires 2".
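The dynamic pluralization the prompt asks for can be sketched as below; `format_approval_message` is a hypothetical helper name, since the actual formatting code lives elsewhere in the enricher:

```python
# Hypothetical helper showing the pluralization fix; the message template
# matches the test expectation quoted above.
def format_approval_message(count: int, required: int) -> str:
    # Singular for exactly one approval, plural otherwise.
    noun = "approval" if count == 1 else "approvals"
    return f"PR has {count} {noun}, requires {required}"


assert format_approval_message(1, 2) == "PR has 1 approval, requires 2"
assert format_approval_message(3, 2) == "PR has 3 approvals, requires 2"
```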
Codecov Report

❌ Patch coverage is

❌ Your project status has failed because the head coverage (68.7%) is below the target coverage (80.0%). You can increase the head coverage or adjust the target coverage.

```
@@ Coverage Diff @@
##            main     #54   +/-  ##
=====================================
  Coverage    68.7%   68.7%
=====================================
  Files         154     154
  Lines        9709    9712     +3
=====================================
+ Hits         6677    6681     +4
+ Misses       3032    3031     -1
```

Continue to review full report in Codecov by Sentry.
🚀 New features to boost your workflow:
Summary by CodeRabbit
New Features
Bug Fixes
Behavior
Tests