PatchPatrol Docs

Architecture

A practical walkthrough of the public PatchPatrol review runtime, boundaries, and artifact output.

This page documents the shipped public PatchPatrol runtime model for the GitLab artifact-first path.

Public path scope: this document describes current runtime behavior in the ai-review run pipeline, plus how failures are surfaced.

What happens when ai-review run starts

The pipeline is shaped like a set of explicit stages:

  1. Read effective configuration from environment and command flags.
  2. Resolve a diff plan (auto, mr, branch, staged, working-tree).
  3. Build a bounded review scope:
    • file include/exclude filtering,
    • file-level and total-size guardrails,
    • chunking and omission tracking.
  4. Run trust checks before any provider call.
  5. Run optional semantic precheck (only in diff+semantic context mode).
  6. Collect repository overview signals.
  7. Execute provider review and validate the structured output.
  8. Emit artifacts (ai-review.json, ai-review.md, optional code_quality output).
  9. Optionally run GitLab MR feedback, depending on AI_REVIEW_FEEDBACK_MODE and environment.
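The staged sequence above can be sketched as a linear driver. This is a minimal illustration, not the actual PatchPatrol implementation: every function name here is a stand-in, and stages 5–9 are elided as a comment.

```python
# Hypothetical sketch of the ai-review run stage sequence; all names
# below are illustrative stand-ins, not real PatchPatrol internals.

def resolve_config(env):
    # Stage 1: effective configuration from environment (flags omitted here).
    return {"context_mode": env.get("AI_REVIEW_CONTEXT_MODE", "diff")}

def resolve_diff_plan(config):
    # Stage 2: pick a diff source; "auto" is the assumed default.
    return "auto"

def build_scope(plan):
    # Stage 3: bounded scope -- filtering, guardrails, chunking (stubbed).
    return {"chunks": ["diff chunk 1"], "omitted": []}

def trust_checks_pass(config):
    # Stage 4: trust gate runs before any provider call.
    return True

def run_pipeline(env):
    config = resolve_config(env)
    plan = resolve_diff_plan(config)
    scope = build_scope(plan)
    if not trust_checks_pass(config):
        return {"status": "trust-fail"}
    # Stages 5-9 (semantic precheck, repository overview, provider review,
    # artifact emission, optional MR feedback) would follow in this order.
    return {"status": "ok", "plan": plan, "chunks": len(scope["chunks"])}

result = run_pipeline({"AI_REVIEW_CONTEXT_MODE": "diff"})
```

The point of the linear shape is that every remote interaction sits behind the earlier, purely local stages.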

Input and Output Boundaries

Inputs

  • Working tree/diff data from the selected git context.
  • Run configuration and runtime limits from flags/environment.
  • Optional repository overview evidence (file structure + detected tooling signals).
  • Optional local semantic diagnostics when AI_REVIEW_CONTEXT_MODE=diff+semantic.
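As a concrete illustration of the configuration input, the environment variables named on this page could be resolved roughly as follows. The default values and the comma-separated allowlist format are assumptions for the sketch, not documented behavior.

```python
# Illustrative resolution of run configuration from environment variables
# mentioned on this page. Defaults ("diff", "none") and the comma-separated
# allowlist encoding are assumptions, not documented values.

def effective_config(environ):
    raw_allowlist = environ.get("AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS", "")
    return {
        "context_mode": environ.get("AI_REVIEW_CONTEXT_MODE", "diff"),
        "feedback_mode": environ.get("AI_REVIEW_FEEDBACK_MODE", "none"),
        "allowlist": [u for u in raw_allowlist.split(",") if u],
    }

cfg = effective_config({"AI_REVIEW_CONTEXT_MODE": "diff+semantic"})
```

Note that an unset allowlist resolves to an empty list, which matters later for the trust gate.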

Processing boundaries

  • Diff scope is bounded by fail-fast limits before provider execution.
  • Review inputs are filtered into bounded chunks before provider review.
  • If provider output fails schema validation, PatchPatrol retries once and then emits an explicit fallback report.
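The retry-once-then-fallback boundary can be sketched like this. The `validate` stub and the fallback report shape are hypothetical; only the "one retry, then an explicit fallback" behavior comes from this page.

```python
# Sketch of the "retry once, then emit an explicit fallback report"
# behavior for schema-invalid provider output. validate() and the
# fallback payload shape are hypothetical stand-ins.

def validate(payload):
    # Stand-in for validation against the shipped artifacts schema.
    return isinstance(payload, dict) and "findings" in payload

def review_with_fallback(call_provider):
    for _ in range(2):  # initial attempt + one retry
        payload = call_provider()
        if validate(payload):
            return payload
    # Explicit fallback report: still schema-shaped, clearly marked.
    return {"findings": [], "fallback": True}

attempts = iter([None, {"unexpected": "shape"}])
result = review_with_fallback(lambda: next(attempts))
```

Because the fallback report is itself schema-shaped, downstream tooling never has to special-case a failed review.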

Outputs

  • ai-review.json is the machine-readable structured payload.
  • ai-review.md is the operator-readable summary artifact.
  • gl-code-quality-report.json is written when code-quality export is enabled in runtime config.
  • Optional GitLab summary note and inline discussions are written only when delivery is configured.
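A minimal sketch of the artifact-first output step, assuming a hypothetical report payload; the real field names and schema are defined by PatchPatrol and are not documented here.

```python
import json
import pathlib
import tempfile

# Hedged sketch of artifact emission. The payload fields ("summary") are
# illustrative only; the actual schema is PatchPatrol's, not shown here.

def write_artifacts(out_dir, report, code_quality=None):
    out = pathlib.Path(out_dir)
    # Machine-readable structured payload.
    (out / "ai-review.json").write_text(json.dumps(report, indent=2))
    # Operator-readable summary.
    (out / "ai-review.md").write_text(f"# AI Review\n\n{report.get('summary', '')}\n")
    # Only written when code-quality export is enabled.
    if code_quality is not None:
        (out / "gl-code-quality-report.json").write_text(json.dumps(code_quality))

with tempfile.TemporaryDirectory() as d:
    write_artifacts(d, {"summary": "no blocking findings"})
    written = sorted(p.name for p in pathlib.Path(d).iterdir())
```

With code-quality export disabled, only the two core artifacts are produced.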

Trust Gate and Redaction

Before any provider call, PatchPatrol evaluates:

  • Provider endpoint allowlist enforcement.
  • Secret-like pattern redaction on review chunk content.

If trust checks fail, the run exits with trust-fail artifacts and does not continue provider review.
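The redaction half of the trust gate might look like the following. The actual secret patterns PatchPatrol ships are not documented on this page, so the regexes below are assumptions chosen for illustration.

```python
import re

# Minimal secret-like pattern redaction sketch. These two patterns are
# assumptions; the shipped pattern set is not documented here.

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key id
]

def redact_chunk(text):
    # Applied to review chunk content before any provider call.
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

chunk = "config: api_key = sk-123456 # rotate me"
clean = redact_chunk(chunk)
```

Running redaction before the provider call means secret-like values never leave the runner, even when a diff accidentally contains them.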

For non-mock providers, an empty AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS is considered a hard failure. That means provider review stops early (exit 11) unless an allowlist is explicitly configured. See the Configuration reference for details and required defaults.
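The hard-fail rule above can be expressed as a small check. The exit code 11 and the mock-provider exemption come from this page; the function shape and provider names are illustrative.

```python
# Sketch of the allowlist hard-fail rule: for non-mock providers, an empty
# AI_REVIEW_PROVIDER_ALLOWLIST_BASE_URLS stops the run early with exit 11
# (per this page). Function shape and provider names are illustrative.

EXIT_TRUST_FAIL = 11

def check_allowlist(provider, base_url, allowlist):
    if provider == "mock":
        return 0  # mock providers are exempt from the allowlist gate
    if not allowlist:
        return EXIT_TRUST_FAIL  # empty allowlist is a hard failure
    if base_url not in allowlist:
        return EXIT_TRUST_FAIL
    return 0

codes = [
    check_allowlist("mock", "http://localhost", []),
    check_allowlist("some-provider", "https://api.example.com", []),
    check_allowlist("some-provider", "https://api.example.com",
                    ["https://api.example.com"]),
]
```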

Review execution behavior

  • The review phase builds prompts from bounded diff content and repository context.
  • On invalid provider output, PatchPatrol retries once before falling back to an explicit fallback report.
  • All report content is still validated against the shipped artifacts schema before writing.
  • Usage and runtime metadata are attached to the JSON report for transparency.
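Attaching usage and runtime metadata might look roughly like this. The field names (`meta`, `usage`, `duration_s`) are assumptions for illustration only; the real report schema is not shown on this page.

```python
import time

# Hypothetical sketch of attaching usage and runtime metadata to the JSON
# report before it is written. All field names here are assumptions.

def attach_metadata(report, usage, started_at):
    report = dict(report)  # avoid mutating the caller's report
    report["meta"] = {
        "usage": usage,  # e.g. token counts returned by the provider
        "duration_s": round(time.time() - started_at, 3),
    }
    return report

t0 = time.time()
final = attach_metadata({"findings": []}, {"prompt_tokens": 1200}, t0)
```

Keeping the metadata inside the same schema-validated payload is what makes the transparency auditable downstream.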

For exact command flags and environment variables, see the Configuration reference.

Why the architecture is constrained this way

The default sequence is optimized for predictable, auditable review signal in CI:

  • strict scope controls before remote calls,
  • explicit boundary checks before provider interactions,
  • artifact-first output for human verification,
  • deterministic schema-validated payloads for downstream tooling.

This keeps the first deployment path lightweight while still making it clear where control points are.
