Show HN: Continue – Source-controlled AI checks, enforceable in CI

docs.continue.dev

33 points by sestinj 4 hours ago

We now write most of our code with agents. For a while, PRs piled up, causing review fatigue, and we had this sinking feeling that standards were slipping. Consistency is tough at this volume. I’m sharing the solution we found, which has become our main product.

Continue (https://docs.continue.dev) runs AI checks on every PR. Each check is a source-controlled markdown file in `.continue/checks/` that shows up as a GitHub status check. Checks run as full agents: they don't just read the diff, they can read/write files, run bash commands, and use a browser. If a check finds something, it fails with a one-click option to accept a suggested diff. Otherwise, it passes silently.

Here’s one of ours:

  .continue/checks/metrics-integrity.md

  ---
  name: Metrics Integrity
  description: Detects changes that could inflate, deflate, or corrupt metrics (session counts, event accuracy, etc.)
  ---

  Review this PR for changes that could unintentionally distort metrics.
  These bugs are insidious because they corrupt dashboards without triggering errors or test failures.

  Check for:
  - "Find or create" patterns where the "find" is too narrow, causing entity duplication (e.g. querying only active sessions, missing completed ones, so every new commit creates a duplicate)
  - Event tracking calls inside loops or retry paths that fire multiple times per logical action
  - Refactors that accidentally remove or move tracking calls to a path that executes with different frequency

  Key files: anything containing `posthog.capture` or `trackEvent`

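To make the first bullet concrete, here's a minimal TypeScript sketch of that duplication bug (the names are illustrative, not from our codebase):

  // Hypothetical "find or create" with a too-narrow find.
  type Session = { repoId: string; status: "active" | "completed" };

  const sessions: Session[] = [];

  function findOrCreateSession(repoId: string): Session {
    // BUG: only matches active sessions. Once a session completes,
    // every later commit misses the find, falls through to create,
    // and silently inflates the session count.
    let session = sessions.find(
      (s) => s.repoId === repoId && s.status === "active"
    );
    if (!session) {
      session = { repoId, status: "active" };
      sessions.push(session);
    }
    return session;
  }
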
This check passed without noise for weeks, but then caught a PR that would have silently deflated our session counts. We added it in the first place because we’d been burned in the past by bad data, only noticing when a dashboard looked off.

---

To get started, paste this into Claude Code or your coding agent of choice:

  Help me write checks for this codebase: https://continue.dev/walkthrough

It will:

- Explore the codebase and use the `gh` CLI to read past review comments

- Write checks to `.continue/checks/`

- Optionally, show you how to run them locally or in CI (sketch below)
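
For the CI piece, here's a sketch of what a GitHub Actions workflow might look like. The package name, subcommand, and secret are assumptions for illustration, not Continue's documented interface:

  # Hypothetical workflow; check the docs for the real invocation.
  name: AI Checks
  on: pull_request
  jobs:
    checks:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        # Assumed CLI command and env var; verify against docs.continue.dev.
        - run: npx @continuedev/cli checks run
          env:
            CONTINUE_API_KEY: ${{ secrets.CONTINUE_API_KEY }}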

Would love your feedback!

esafak - 3 hours ago

This looks like a more configurable version of the code review tools out there, for running arbitrary AI-powered tasks.

Do you support exporting metrics to something standard like CSV? https://docs.continue.dev/mission-control/metrics

A brief demo would be nice too.

bachittle - 4 hours ago

Is this the same continue that was for running local AI coding agents? Interesting rebrand.