PatchPatrol Docs

Troubleshooting

Resolve common PatchPatrol failures quickly using exit codes, stderr signals, and artifact metadata.

This page stays on the supported GitLab artifact-first PatchPatrol path. Use it when setup, diff planning, provider calls, policy gates, or GitLab delivery break and you need one concrete next move.

Use this order for every failure:

  1. Confirm the exit code and first failing stderr line.
  2. Open artifacts at AI_REVIEW_OUTPUT_DIR (default: .ai-review).
  3. Check .ai-review/ai-review.json for the matching meta.* section.
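
The three steps above can be sketched in shell. The artifact path and the meta.* layout come from this page; the sample JSON is a hypothetical stand-in for a real run's output, and python3 is used only as a commonly available JSON reader.

```shell
# Steps 2-3 of the triage order, rehearsed against a hypothetical sample artifact.
dir="${AI_REVIEW_OUTPUT_DIR:-.ai-review}"   # default documented on this page
mkdir -p "$dir"
cat > "$dir/ai-review.json" <<'EOF'
{"meta": {"limits": {"chunk_omitted_count": 0},
          "provider_runtime": {"status": "ok"}}}
EOF
# List the meta.* sections so you can jump to the one the stderr line names.
python3 -c 'import json,sys; print(" ".join(sorted(json.load(open(sys.argv[1]))["meta"])))' "$dir/ai-review.json"
```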

Symptom map

Each entry lists the symptom, its likely cause, and the fix.

  • CONFIG_ERROR: ... or exit code 2
    Likely cause: core config validation failed before the run started.
    Fix: check env var names, booleans, and numeric values in Configuration Reference.
  • OPENAI_CONFIG_ERROR[...] or OPENAI_CONFIG_* with exit 4
    Likely cause: AI_REVIEW_PROVIDER=openai is missing or using invalid settings.
    Fix: set a valid OPENAI_BASE_URL, remove knobs the OpenAI provider does not support, and rerun readiness checks.
  • DIFF_CONTEXT_ERROR[...] or exit 4
    Likely cause: the diff context could not be resolved.
    Fix: run ai-review run --mode auto --dry-run, then fix --mode, --base-ref, or --head-ref.
  • DIFF_EXTRACTION_ERROR[...] with MAX_FILES_EXCEEDED, MAX_DIFF_BYTES_EXCEEDED, or INVALID_LIMIT (exit 5)
    Likely cause: the diff scope is too large or a configured limit is invalid.
    Fix: raise the matching limit or narrow the review scope.
  • PROVIDER_ERROR[...], OPENAI_API_ERROR[...], or OLLAMA_API_ERROR[...]
    Likely cause: provider endpoint, auth, or transport failure.
    Fix: check provider reachability, model availability, and the retry budget.
  • TRUST_ALLOWLIST_EMPTY, TRUST_ALLOWLIST_ENDPOINT_MISSING, or TRUST_ALLOWLIST_BLOCKED_ENDPOINT
    Likely cause: the trust gate blocked remote calls before the review started.
    Fix: align the provider base URL with AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS, then rerun.
  • EXIT_REASON[FAIL_ON_*] or EXIT_REASON[FAIL_ON_PARTIAL_COVERAGE] with exit 10
    Likely cause: the review gate tripped on findings or omitted chunks.
    Fix: review the findings and meta.limits, then adjust policy only if your team intends to change it.
  • GITLAB_NOTE_ERROR[...] with exit 9
    Likely cause: GitLab MR feedback failed after artifacts were written.
    Fix: open the artifacts first, then repair GitLab delivery.
  • Exit code 6 (ARTIFACT_ERROR)
    Likely cause: the report write failed before completion.
    Fix: check that .ai-review exists and is writable by the runner.

Exit semantics you can use to triage

  • 2: config validation failed (CONFIG_ERROR[...]).
  • 3: provider API transport/response failure (OPENAI_API_ERROR[...], OLLAMA_API_ERROR[...]).
  • 4: diff-context failfast or OpenAI config validation (DIFF_CONTEXT_ERROR[...], OPENAI_CONFIG_ERROR[...]).
  • 5: diff extraction failfast (DIFF_EXTRACTION_ERROR[...]).
  • 6: artifact persistence failure (ARTIFACT_ERROR[...]).
  • 7: provider runtime failure (PROVIDER_ERROR[...]).
  • 8: provider output validation fallback (EXIT_REASON[LLM_OUTPUT_INVALID]).
  • 9: GitLab MR note/feedback failure (GITLAB_NOTE_ERROR[...]).
  • 10: review gate triggered by configured policy.
  • 11: trust gate blocked execution before the review.
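
These codes can drive a CI wrapper step. The exit codes are from the table above; the wrapper and its messages are illustrative, with the real ai-review exit status stubbed by a fixed value.

```shell
code=10   # stand-in for the real exit status of `ai-review run`
case "$code" in
  0)  echo "review passed" ;;
  10) echo "policy gate tripped: read ai-review.md before changing policy" ;;
  11) echo "trust gate blocked the run: check the provider allowlist" ;;
  *)  echo "infrastructure failure ($code): open the .ai-review artifacts" ;;
esac
```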

Recovery lanes

Access and readiness failures

Use this lane when the run never really starts.

Common signals:

  • CONFIG_ERROR: ...
  • OPENAI_CONFIG_ERROR[...]
  • TRUST_ALLOWLIST_EMPTY
  • TRUST_ALLOWLIST_ENDPOINT_MISSING
  • TRUST_ALLOWLIST_BLOCKED_ENDPOINT

What to check:

  • Run ai-review test --gitlab-readiness --semantic-readiness in the same environment the job uses.
  • Confirm the admin-owned handoff is complete: provider endpoint, model, GitLab project access, and merge request pipeline wiring.
  • When AI_REVIEW_PROVIDER=openai, verify OPENAI_BASE_URL and, when required, OPENAI_API_KEY, and remove knobs the OpenAI provider does not support, such as AI_REVIEW_NUM_CTX.
  • When trust-gate signals appear, normalize the actual provider base URL and add that exact value to AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS.
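
As a sketch of the last check, the exact-value match can be rehearsed in plain shell before rerunning. This assumes the allowlist accepts a comma-separated list of base URLs (confirm the real format in Configuration Reference); the URLs below are placeholders.

```shell
OPENAI_BASE_URL="https://llm.internal.example/v1"
AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS="https://llm.internal.example/v1"
# Exact string comparison, matching the "add that exact value" guidance above.
case ",$AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS," in
  *",$OPENAI_BASE_URL,"*) echo "endpoint allowlisted" ;;
  *) echo "mismatch: expect TRUST_ALLOWLIST_BLOCKED_ENDPOINT" ;;
esac
```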

Next action: Fix the missing prerequisite first, then return to Access and roles or Configuration Reference before rerunning the review.

Repository setup and diff resolution failures

Use this lane when PatchPatrol starts but cannot decide what to review.

Common signals:

  • DIFF_CONTEXT_ERROR[INVALID_MODE_COMBINATION]
  • DIFF_CONTEXT_ERROR[MISSING_MR_BASE_REF]
  • DIFF_EXTRACTION_ERROR[NO_WORKING_TREE_CHANGES]
  • DIFF_EXTRACTION_ERROR[NO_STAGED_CHANGES]
  • DIFF_EXTRACTION_ERROR[MAX_DIFF_BYTES_EXCEEDED]

What to check:

  • Run ai-review run --mode auto --dry-run to print the resolved review context before a full rerun.
  • Confirm the GitLab runner can fetch the target branch used as the merge base.
  • Inspect meta.limits.chunk_omitted_reasons, meta.limits.chunk_omitted_count, and the configured limits when large files or many files were skipped.
  • Increase only the limit that matches the omission signal: AI_REVIEW_MAX_FILES, AI_REVIEW_MAX_DIFF_BYTES, AI_REVIEW_MAX_CHUNKS, or AI_REVIEW_MAX_FILE_DIFF_BYTES.
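
The last check above is a lookup: one omission signal, one limit. A sketch, with the reason string hard-coded instead of read from meta.limits.chunk_omitted_reasons; only reason codes named on this page are matched.

```shell
reason="MAX_DIFF_BYTES_EXCEEDED"   # in a real run, read this from meta.limits
case "$reason" in
  MAX_FILES_EXCEEDED)      echo "raise AI_REVIEW_MAX_FILES" ;;
  MAX_DIFF_BYTES_EXCEEDED) echo "raise AI_REVIEW_MAX_DIFF_BYTES" ;;
  INVALID_LIMIT)           echo "fix the invalid limit value, do not raise anything" ;;
  *)                       echo "inspect meta.limits.chunk_omitted_reasons; consider AI_REVIEW_MAX_CHUNKS or AI_REVIEW_MAX_FILE_DIFF_BYTES" ;;
esac
```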

Next action: Get the diff plan into a known-good state first, then continue with First review output once artifacts are present again.

Provider, trust gate, and review-start failures

Use this lane when diff planning worked but the remote review step failed or never cleared the trust boundary.

Common signals:

  • PROVIDER_ERROR[...]
  • OPENAI_API_ERROR[...]
  • OLLAMA_API_ERROR[...]
  • EXIT_REASON[LLM_OUTPUT_INVALID]
  • TRUST_ALLOWLIST_*

What to check:

  • In .ai-review/ai-review.json, inspect meta.provider_runtime.status, meta.provider_runtime.category, and meta.provider_runtime.reason_codes.
  • Confirm the configured model exists at the endpoint and the runner can reach OLLAMA_HOST or OPENAI_BASE_URL.
  • If TRANSPORT_RETRY_EXHAUSTED appears, increase AI_REVIEW_TRANSPORT_RETRY_MAX_ATTEMPTS only when transient network failures are normal for that CI path.
  • If the run stopped at EXIT_REASON[LLM_OUTPUT_INVALID], inspect meta.limits.error_flags and reduce diff pressure before retrying.
  • If trust-gate signals appear before artifacts exist, return to Admin Quickstart and confirm AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS exactly matches OPENAI_BASE_URL or OLLAMA_HOST.
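
The first check works with any JSON tool; a sketch with python3 against a hypothetical sample of the three provider_runtime fields this lane names (the field names are from this page, the values are invented):

```shell
cat > provider-runtime-sample.json <<'EOF'
{"meta": {"provider_runtime": {"status": "failed",
                               "category": "transport",
                               "reason_codes": ["TRANSPORT_RETRY_EXHAUSTED"]}}}
EOF
python3 - <<'PY'
import json
rt = json.load(open("provider-runtime-sample.json"))["meta"]["provider_runtime"]
# Status and category point at the failing stage; reason codes name the cause.
print(rt["status"], rt["category"], ",".join(rt["reason_codes"]))
PY
```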

Next action: Repair the provider or trust-boundary failure first, then re-run readiness checks and only then start another full review.

Policy and partial-coverage failures

Use this lane when PatchPatrol completed enough work to produce findings or omission data, but your configured gate still failed the run.

Common signals:

  • EXIT_REASON[FAIL_ON_BLOCKER]
  • EXIT_REASON[FAIL_ON_HIGH]
  • EXIT_REASON[FAIL_ON_MEDIUM]
  • EXIT_REASON[FAIL_ON_PARTIAL_COVERAGE]

What to check:

  • Read ai-review.md first to confirm whether the gate was triggered by validated findings or by omitted chunks.
  • In .ai-review/ai-review.json, inspect meta.limits.chunk_omitted_count, meta.limits.chunk_omitted_reasons, and any related error_flags.
  • Confirm whether the current policy is intentional: AI_REVIEW_FAIL_ON sets the severity gate and AI_REVIEW_FAIL_ON_PARTIAL_COVERAGE=true turns omissions into a hard failure.
  • Treat policy changes as a team decision, not an emergency workaround.
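
A sketch of the second check: whether the gate came from omitted chunks is visible in meta.limits.chunk_omitted_count. The sample artifact below is a hypothetical stand-in for a real run's output.

```shell
cat > gate-sample.json <<'EOF'
{"meta": {"limits": {"chunk_omitted_count": 3,
                     "chunk_omitted_reasons": ["MAX_DIFF_BYTES_EXCEEDED"]}}}
EOF
omitted=$(python3 -c 'import json; print(json.load(open("gate-sample.json"))["meta"]["limits"]["chunk_omitted_count"])')
if [ "$omitted" -gt 0 ]; then
  echo "coverage gap ($omitted chunks omitted): fix scope before touching policy"
else
  echo "gate came from validated findings: read ai-review.md"
fi
```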

Next action: Decide whether to fix the findings, increase review coverage, or intentionally change policy in Configuration Reference.

GitLab delivery and artifact-location failures

Use this lane when the review artifacts exist or should exist, but developers cannot find them or GitLab feedback delivery failed after artifact writing.

Common signals:

  • GITLAB_NOTE_ERROR[GITLAB_TIMEOUT]
  • GITLAB_NOTE_ERROR[NETWORK_ERROR]
  • GITLAB_NOTE_ERROR[GITLAB_HTTP_401], GITLAB_HTTP_403, GITLAB_HTTP_404
  • GITLAB_NOTE_ERROR[MISSING_*]
  • FEEDBACK_DRAFT_SUPPRESSED

What to check:

  • Open the job artifacts first and confirm .ai-review/ai-review.md and .ai-review/ai-review.json exist.
  • In .ai-review/ai-review.json, inspect meta.feedback.delivery.status and meta.feedback.delivery.reason_codes.
  • Confirm the job publishes .ai-review/** as GitLab artifacts and that AI_REVIEW_OUTPUT_DIR matches the published path.
  • If the run is still artifact-only, remember that missing MR notes are expected; the artifacts remain the source of truth.
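
The delivery check from the second bullet, sketched against a hypothetical suppressed-delivery artifact (the field names are from this page, the values are invented):

```shell
cat > delivery-sample.json <<'EOF'
{"meta": {"feedback": {"delivery": {"status": "suppressed",
                                    "reason_codes": ["FEEDBACK_DRAFT_SUPPRESSED"]}}}}
EOF
# A suppression status here means GitLab transport was fine and no repair is needed.
python3 -c 'import json; d=json.load(open("delivery-sample.json"))["meta"]["feedback"]["delivery"]; print(d["status"] + ": " + ",".join(d["reason_codes"]))'
```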

Next action: Recover the artifact path first, then fix MR feedback delivery only if your team has already moved beyond the artifact-only baseline.

Read artifact metadata like a runbook

Open .ai-review/ai-review.json whenever the stderr line tells you what failed but not what to do next.

Each field is listed with what it answers and when to reach for it.

  • meta.limits
    What it answers: which review limits applied, which chunks or files were omitted, and which limit/error flags fired.
    Use it when: diff extraction failed, the scope looked smaller than expected, or EXIT_REASON[FAIL_ON_PARTIAL_COVERAGE] fired.
  • meta.provider_runtime
    What it answers: whether the provider failed at the config, capability-probe, transport, or provider-response stage.
    Use it when: you see PROVIDER_ERROR[...], OPENAI_API_ERROR[...], OLLAMA_API_ERROR[...], or OPENAI_CONFIG_ERROR[...].
  • meta.trust_gate
    What it answers: whether the provider call was blocked by allowlist or redaction policy, and which TRUST_* code applied.
    Use it when: the run exited 11 or stderr mentioned the trust gate.
  • meta.feedback.delivery
    What it answers: whether GitLab delivery was ready, suppressed, blocked, or failed after artifacts were written.
    Use it when: MR note delivery failed, draft/manual suppression is suspected, or artifacts exist without GitLab feedback.

Runbook habit:

  1. Start with ai-review.md for the human-readable summary.
  2. Move to .ai-review/ai-review.json for the matching meta.* section.
  3. Take one next action from the lane above instead of changing multiple variables at once.

Provider configuration failures

Use this shortcut when the access/readiness lane points to provider config specifically.

  • OPENAI_CONFIG_ERROR[...] means PatchPatrol rejected provider settings before review execution.
  • The most common fixes are a valid OPENAI_BASE_URL, the right secret handoff for OPENAI_API_KEY, and removing the unsupported AI_REVIEW_NUM_CTX when AI_REVIEW_PROVIDER=openai.
  • After the fix, rerun ai-review test --gitlab-readiness --semantic-readiness before a full review.

Provider runtime failures

Use this shortcut when review execution reached the provider but did not finish cleanly.

  • OPENAI_API_ERROR[...], OLLAMA_API_ERROR[...], and PROVIDER_ERROR[...] belong here.
  • Read meta.provider_runtime before changing timeouts or models so you know whether the failure came from transport, endpoint contract, or provider response.
  • If the same failure repeats with TRANSPORT_RETRY_EXHAUSTED, fix endpoint availability or CI network path before widening the retry budget.

Diff extraction and scope misses

Use this shortcut when the repository/diff lane points to selection or omission problems.

  • DIFF_CONTEXT_ERROR[...] means PatchPatrol could not resolve the intended review target.
  • DIFF_EXTRACTION_ERROR[...] means PatchPatrol resolved the target but could not extract a usable review scope.
  • meta.limits.chunk_omitted_reasons tells you whether the miss came from large files, chunk caps, or another bounded-review control.

GitLab feedback failures

Use this shortcut when artifacts exist but merge request delivery still failed.

  • GITLAB_NOTE_ERROR[...] happens after artifact writing, so the first recovery move is always to open the artifacts.
  • meta.feedback.delivery.status and meta.feedback.delivery.reason_codes tell you whether the failure was a real GitLab transport problem or a suppression state such as draft/manual gating.
  • If your team is still in AI_REVIEW_FEEDBACK_MODE=artifact-only, keep using artifacts and do not treat missing MR notes as a broken baseline.

Artifact location confusion checklist

If the developer cannot find results after a run:

  1. Verify the job publishes .ai-review/ai-review.md and .ai-review/ai-review.json as artifacts.
  2. Confirm AI_REVIEW_OUTPUT_DIR still resolves to .ai-review.
  3. If artifacts are missing entirely, inspect the runner log around ARTIFACT_ERROR[...] or exit code 6.
  4. If needed, reproduce locally with ai-review run --output-dir ./tmp-review-debug --mode auto --dry-run.
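
Steps 1-3 can be rehearsed locally before touching the job definition; the touch lines below stand in for a real run's artifacts.

```shell
dir="${AI_REVIEW_OUTPUT_DIR:-.ai-review}"
mkdir -p "$dir"
touch "$dir/ai-review.md" "$dir/ai-review.json"   # stand-ins for real artifacts
for f in ai-review.md ai-review.json; do
  if [ -f "$dir/$f" ]; then
    echo "found $dir/$f"
  else
    echo "missing $dir/$f (see ARTIFACT_ERROR[...] / exit 6)"
  fi
done
```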

Return to recovery flow

Use these pages once the immediate failure is understood:

  • Configuration Reference
  • Admin Quickstart
  • Access and roles
  • First review output
