PatchPatrol Docs

Configuration Reference

Exact public configuration coverage for the supported PatchPatrol workflow.

Use this page when you need the exact public configuration contract for the supported PatchPatrol workflow after the quickstart path is already in place.

Source of truth: this page is derived from shipped PatchPatrol contracts in patchpatrol/config.py and the checked-in configuration references that narrow those contracts to the supported public surface.

This is the baseline public configuration surface. For operator-tuned controls and explicitly scoped legacy compatibility entries, read the companion Advanced configuration page.

When the same setting exists as both an environment variable and a CLI flag, the CLI flag takes precedence. Use the CLI reference for command flags and Policy controls for change guidance.

Provider and Runtime

| Variable | Default | Supported values or posture | Public meaning |
|---|---|---|---|
| AI_REVIEW_MODEL | deepseek-coder-v2:16b | Non-empty model identifier | Selects the review model used for run and readiness checks. |
| AI_REVIEW_PROVIDER | ollama | ollama, openai, mock | Chooses the provider path: Ollama, OpenAI-compatible endpoints, or mock iteration. |
| OLLAMA_HOST | http://192.168.19.55:11434 | Valid http or https URL | Endpoint used when AI_REVIEW_PROVIDER=ollama. |
| OPENAI_BASE_URL | unset | Required when AI_REVIEW_PROVIDER=openai; valid http(s) API root without query or fragment | Points PatchPatrol at a customer-controlled or self-hosted OpenAI-compatible endpoint, or at the official OpenAI API as a fallback. |
| OPENAI_API_KEY | unset | Secret value when the endpoint requires auth | Supplies bearer-token auth for OpenAI-compatible requests. |
| OPENAI_ORGANIZATION | unset | Optional header value | Adds the OpenAI-Organization header when the endpoint supports it. |
| OPENAI_PROJECT | unset | Optional header value | Adds the OpenAI-Project header when the endpoint supports it. |
| AI_REVIEW_OUTPUT_DIR | .ai-review | Writable path | Controls where PatchPatrol writes ai-review.md, ai-review.json, and related artifacts. |
| AI_REVIEW_TIMEOUT_SECONDS | unset | Positive integer or unset | Sets a provider-neutral request timeout; takes precedence over OLLAMA_TIMEOUT_SECONDS. |
| OLLAMA_TIMEOUT_SECONDS | unset | Positive integer or unset | Backwards-compatible timeout alias for the Ollama path; ignored when AI_REVIEW_TIMEOUT_SECONDS is set. |
| AI_REVIEW_TRANSPORT_RETRY_MAX_ATTEMPTS | 2 | Positive integer; 1 disables retry | Bounds retryable provider and GitLab transport attempts for CI predictability. |
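As a sketch of how the provider and runtime variables combine, a minimal self-hosted Ollama configuration might look like this. The host URL is illustrative, not a shipped default; the timeout and retry values are arbitrary examples:

```shell
# Minimal Ollama provider configuration (host and values illustrative)
export AI_REVIEW_PROVIDER="ollama"
export AI_REVIEW_MODEL="deepseek-coder-v2:16b"
export OLLAMA_HOST="http://ollama.internal:11434"

# Provider-neutral timeout; when set, OLLAMA_TIMEOUT_SECONDS is ignored
export AI_REVIEW_TIMEOUT_SECONDS=120

# Keep transport retry behavior explicit in CI (1 disables retry)
export AI_REVIEW_TRANSPORT_RETRY_MAX_ATTEMPTS=2
```

Because AI_REVIEW_TIMEOUT_SECONDS is provider-neutral, setting it once keeps timeout behavior consistent if you later switch AI_REVIEW_PROVIDER to openai.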

Review Bounds and Semantic Context

| Variable | Default | Supported values or posture | Public meaning |
|---|---|---|---|
| AI_REVIEW_MAX_DIFF_BYTES | unset | Positive integer or unset | Caps total diff bytes considered for review. |
| AI_REVIEW_MAX_FILES | unset | Positive integer or unset | Caps how many changed files PatchPatrol selects for review. |
| AI_REVIEW_MAX_CHUNKS | unset | Positive integer or unset | Caps the number of reviewable chunks after filtering and ordering. |
| AI_REVIEW_MAX_FILE_DIFF_BYTES | unset | Positive integer or unset | Caps diff size per file chunk and omits oversized files with reason metadata. |
| AI_REVIEW_CONTEXT_MODE | diff-only | diff-only, diff+semantic | Chooses plain diff review or the best-effort semantic precheck path. |
| AI_REVIEW_SEMANTIC_TOTAL_TIMEOUT_SECONDS | 20 | Positive integer | Sets the total timeout budget for semantic precheck work when diff+semantic is enabled. |
| AI_REVIEW_MAX_SUPPORTING_CONTEXT_BYTES | 65536 | Positive integer | Caps the bounded supporting context carried into semantic-aware prompts. |
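Putting the bounds together, a configuration that enables the semantic precheck while capping review size might look like this. The cap values are illustrative assumptions, not recommendations:

```shell
# Bound the amount of diff considered for review (illustrative values)
export AI_REVIEW_MAX_DIFF_BYTES=262144
export AI_REVIEW_MAX_FILES=50
export AI_REVIEW_MAX_CHUNKS=40
export AI_REVIEW_MAX_FILE_DIFF_BYTES=32768

# Opt into the best-effort semantic precheck with its shipped defaults
export AI_REVIEW_CONTEXT_MODE="diff+semantic"
export AI_REVIEW_SEMANTIC_TOTAL_TIMEOUT_SECONDS=20
export AI_REVIEW_MAX_SUPPORTING_CONTEXT_BYTES=65536
```

Files that exceed AI_REVIEW_MAX_FILE_DIFF_BYTES are omitted with reason metadata rather than silently dropped, so oversized changes remain visible in the artifacts.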

Delivery and Policy Controls

VariableDefaultSupported values or posturePublic meaning
AI_REVIEW_FAIL_ONnonenone, blocker, high, mediumChooses which validated finding severities turn ai-review run into a non-zero gate.
AI_REVIEW_FAIL_ON_PARTIAL_COVERAGEfalsetrue or falseFails the run when omitted chunks are present, even if no severity threshold was hit.
AI_REVIEW_ENABLE_FINAL_SUMMARYfalsetrue or falseEnables an extra summary-only model pass that can refine top issues without changing deterministic findings.
AI_REVIEW_FEEDBACK_MODEartifact-onlyartifact-only, mr, mr-manualKeeps the blessed artifact-first baseline or turns on GitLab merge request delivery.
AI_REVIEW_GITLAB_INLINE_DISCUSSIONStruetrue or falseWhen MR delivery is active, decides whether PatchPatrol also posts eligible inline discussions.
AI_REVIEW_ALLOW_DRAFT_MR_FEEDBACKfalsetrue or falseAllows MR delivery on draft or WIP merge requests for the current run.
AI_REVIEW_MANUAL_FEEDBACK_AUTHORIZEDfalsetrue or falseExplicitly authorizes mr-manual delivery when CI metadata alone should not decide it.
AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLSunsetComma-separated exact normalized http(s) base URLsRequired exact normalized provider base URLs for non-mock providers before the first real review run.
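For example, a strict artifact-first policy could combine the gating variables like this. This is a sketch, not a blessed profile, and it assumes AI_REVIEW_FAIL_ON acts as a severity threshold:

```shell
# Gate the run on validated findings at the high threshold
export AI_REVIEW_FAIL_ON="high"

# Also fail whenever chunks were omitted from review
export AI_REVIEW_FAIL_ON_PARTIAL_COVERAGE=true

# Stay on the artifact-only baseline; no MR delivery
export AI_REVIEW_FEEDBACK_MODE="artifact-only"
```

With AI_REVIEW_FEEDBACK_MODE left at artifact-only, the MR-specific toggles (inline discussions, draft feedback, manual authorization) have no effect on the run.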

Trust-gate prerequisite for first real runs

Set the allowlist to the exact normalized provider base URL before the first real non-mock review run. For the OpenAI-compatible path:

```shell
OPENAI_BASE_URL="https://llm-gateway.internal/v1"
AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS="https://llm-gateway.internal/v1"
```

For the Ollama path:

```shell
OLLAMA_HOST="http://ollama.internal:11434"
AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS="http://ollama.internal:11434"
```

In both cases the allowlist entry must match the configured endpoint exactly after normalization.

If you are wiring the first supported public path, start with Admin Quickstart and return here for the exact variable contract.

Exit Semantics

Most teams only need a short mental model for exit behavior:

  • 0: no configured gate was tripped.
  • 10: AI_REVIEW_FAIL_ON or AI_REVIEW_FAIL_ON_PARTIAL_COVERAGE triggered a review gate.
  • 11: PatchPatrol stopped before remote calls because the trust gate blocked execution.
  • 8: invalid-output fallback occurred.
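In a CI job, those exit codes can be branched on explicitly. This sketch assumes the run command is ai-review run, as referenced in the policy table above; the messages and failure mapping are illustrative:

```shell
# Run the review and capture PatchPatrol's exit code
# (the || guard keeps this working under `set -e`)
status=0
ai-review run || status=$?

case "$status" in
  0)  echo "No configured gate tripped." ;;
  10) echo "AI_REVIEW_FAIL_ON or partial-coverage gate tripped."; exit 1 ;;
  11) echo "Trust gate blocked execution before remote calls."; exit 1 ;;
  8)  echo "Invalid-output fallback occurred."; exit 1 ;;
  *)  echo "Unexpected exit code: $status"; exit 1 ;;
esac
```

Whether code 8 or 11 should fail your pipeline is a policy choice; the sketch treats every non-zero code as a failure for simplicity.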

Public Boundary

  • AI_REVIEW_POST_MR_NOTE is retained as a backwards-compatible alias for existing installs; during configuration resolution it maps to AI_REVIEW_FEEDBACK_MODE=mr.
  • Variables not listed on this page and not covered on the companion page are outside the published public contract.
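For installs migrating off the legacy alias, the mapping in the first bullet amounts to replacing the old variable with its resolved form. This assumes the alias is boolean-valued; both lines below enable MR delivery:

```shell
# Legacy alias, still honored for existing installs:
export AI_REVIEW_POST_MR_NOTE=true

# Preferred equivalent after resolution:
export AI_REVIEW_FEEDBACK_MODE="mr"
```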

Use Policy controls when you are choosing how strict those gates should be, and use the CLI reference when you need the matching command-line overrides.
