Free AI Code Reviewer for GitHub & GitLab — line-level review, 40+ quality checks, and automatic blocking of low-quality PRs
PRGuard reviews every line of code in your pull requests — like a human reviewer, but in 30 seconds. It posts inline comments, scores quality 0-100, and auto-blocks bad PRs. Free rules mode works with zero config; AI mode uses your own API key (OpenAI, DeepSeek, Ollama, etc.).
| Feature | Rules Mode (Free) | AI Mode (BYOK) |
|---|---|---|
| Line-level AI code review | ❌ | ✅ |
| Import Graph context injection | ❌ | ✅ |
| Semantic code analysis | ❌ | ✅ |
| Hallucinated API detection | Basic | Deep |
| 40+ quality checks | ✅ | ✅ |
| AI slop pattern detection | ✅ | ✅ |
| Quality scoring (0-100) | ✅ | ✅ |
| Import verification against source | ✅ | ✅ |
| Code style mismatch detection | ✅ | ✅ |
| PR history / reputation analysis | ✅ | ✅ |
| Cross-repo spam detection | ✅ | ✅ |
| Auto-close low-quality PRs | ✅ | ✅ |
V3 AI Code Review (requires AI API key):
- 📝 Line-Level Review — AI posts inline comments on specific code lines, like a human reviewer
- 🧠 Import Graph Context — AI reads related project files (not just the diff) to catch duplicated logic, API misuse, and style inconsistencies
- ⚡ Auto Request Changes — Automatically marks PR as "changes requested" when critical issues are found
Core Quality Checks (all modes):
- 🏷️ Title — Conventional commit format, length, blocked patterns
- 📝 Description — Completeness, template compliance, AI filler phrase detection
- 📦 Commits — Message quality, author matching, lazy message detection
- 🌿 Branch — Source/target branch rules, naming convention
- 📁 Files — Change size limits, sensitive files, excessive comments detection
- 👤 Contributor — Account age, spam username detection, trust scoring
- 🤖 AI Slop — Emoji overload, hallucinated imports, over-engineering signals
V2 Advanced Detection:
- 📦 Import Verification — Checks imports against actual project source code (not guessing)
- 🎨 Code Style — Detects naming convention and indent style mismatches vs project
- 📜 PR History — Analyzes author's merge/rejection rate, flags serial rejected contributors
- 🕸️ Cross-Repo Spam — Detects users opening PRs across 10+ repos in 24h (bot detection)
When PRGuard detects a low-quality PR, it posts a detailed review comment:
```text
❌ PRGuard Review — Failed

Quality Score: 57/100 🟡 Fair

⚠️ 4 issue(s) found:

🟡 Warnings (4)
- PR title does not follow conventional format. Got: "Update some files"
- PR description has too many emoji (13, max 10)
- AI slop patterns detected: "this pr aims to", "best practices",
  "comprehensive solution", "seamless integration"
- 1 commit(s) have lazy/meaningless messages: "update"

🤖 AI Analysis

This PR appears to be AI-generated filler with placeholder logic,
suspicious imports, and excessive comment noise.

Slop Indicators:
- Hallucinated imports: some_nonexistent_package
- Excessive comments restating obvious code behavior
- Generic naming (do_stuff, another_function)
- Boilerplate with no real implementation
```
Line-level AI Code Review (with `mode: 'ai'`):
PRGuard also posts inline comments directly on the code — just like a human reviewer:

```text
src/utils/auth-helper.ts line 8:
🔴 CRITICAL: `useAuth()` is a React hook and cannot be called from
a regular utility function. This violates the Rules of Hooks.

src/utils/auth-helper.ts line 13:
🟡 WARNING: `build_api_url` duplicates existing `buildApiUrl` logic
in `src/lib/apiClient.ts` and hardcodes a base URL.

src/utils/auth-helper.ts line 17:
🔵 NITPICK: Function naming `fetch_user_data` is inconsistent with
the codebase's camelCase TypeScript style.
```
The AI reads related project files via Import Graph — it knows your codebase, not just the diff.
In your repository, create the file `.github/workflows/pr-quality.yml`:

```text
your-project/
├── src/
├── README.md
└── .github/
    └── workflows/
        └── pr-quality.yml   ← create this file
```
```yaml
# .github/workflows/pr-quality.yml
name: PR Quality
on:
  pull_request_target:
    types: [opened, reopened, synchronize]
permissions:
  contents: read
  pull-requests: write
jobs:
  prguard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: 1137043480/PRGuard@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # ↑ This is auto-provided by GitHub. You do NOT need to create it.
```

That's it! Every PR now gets automatic quality checks. No API key needed.
1. Go to your repo → Settings → Secrets and variables → Actions → New repository secret
2. Add your AI API key:
   - Name: `OPENAI_API_KEY` → Value: your key (e.g. `sk-...`)
   - (Optional) Name: `OPENAI_BASE_URL` → Value: your API endpoint
3. Update the workflow:
```yaml
# .github/workflows/pr-quality.yml
name: PR Quality
on:
  pull_request_target:
    types: [opened, reopened, synchronize]
permissions:
  contents: read
  pull-requests: write
jobs:
  prguard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: 1137043480/PRGuard@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          mode: 'ai'
          ai-provider: 'openai'
          ai-api-key: ${{ secrets.OPENAI_API_KEY }}
          ai-base-url: ${{ secrets.OPENAI_BASE_URL }} # optional
          ai-model: 'gpt-5'
```

🔒 Your API key is safe. It's stored in GitHub Secrets — never exposed in code, logs, or to PRGuard.
Next time someone opens a PR, PRGuard runs automatically within 30 seconds.
PRGuard also works with GitLab (including self-hosted instances). All 40+ quality checks and AI code review features are fully supported.
1. Create a GitLab API Token
Go to User Settings → Access Tokens (or Project Settings → Access Tokens) and create a token with `api` scope.
2. Add Token as CI/CD Variable
Go to your project's Settings → CI/CD → Variables and add:
- Key: `GITLAB_TOKEN`, Value: your token, Masked: ✅
3. Add PRGuard dist to your repo
Download `dist-gitlab/` from GitHub Releases and place it in your repository root.
4. Add to `.gitlab-ci.yml`

```yaml
prguard:
  stage: test
  image: node:20-alpine
  script:
    - node dist-gitlab/index.js
  variables:
    PRGUARD_MODE: "rules" # or "ai" with PRGUARD_AI_API_KEY
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```

Or, for AI mode:

```yaml
prguard:
  stage: test
  image: node:20-alpine
  script:
    - node dist-gitlab/index.js
  variables:
    PRGUARD_MODE: "ai"
    PRGUARD_AI_PROVIDER: "openai"
    PRGUARD_AI_API_KEY: $OPENAI_API_KEY
    PRGUARD_AI_MODEL: "gpt-5"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```

All settings are configured via `PRGUARD_`-prefixed environment variables:
| Variable | Default | Description |
|---|---|---|
| `PRGUARD_MODE` | `rules` | `rules` (free) or `ai` |
| `PRGUARD_MAX_FAILURES` | `4` | Max check failures before flagging |
| `PRGUARD_MIN_SCORE` | `40` | Minimum quality score (0-100) |
| `PRGUARD_CLOSE_MR` | `false` | Auto-close failing MRs |
| `PRGUARD_ADD_LABEL` | `needs-review` | Label for failing MRs |
| `PRGUARD_AI_PROVIDER` | `openai` | AI provider |
| `PRGUARD_AI_API_KEY` | — | API key for AI mode |
| `PRGUARD_AI_MODEL` | `gpt-4o-mini` | AI model name |

See `.gitlab-ci.example.yml` for a complete example with all options.
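As an illustration, the variables above could be combined to tighten the rules-mode job shown earlier. This is a sketch; the threshold and label values are arbitrary examples, not recommendations:

```yaml
prguard:
  stage: test
  image: node:20-alpine
  script:
    - node dist-gitlab/index.js
  variables:
    PRGUARD_MODE: "rules"
    PRGUARD_MAX_FAILURES: "2"      # stricter: flag after 2 failed checks
    PRGUARD_MIN_SCORE: "70"        # require a higher quality score
    PRGUARD_ADD_LABEL: "quality-gate"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```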
⚠️ Note: gitlab.com free accounts require identity verification to use shared runners. Self-hosted GitLab instances (company private servers) have their own runners and work without any restrictions.
Any provider with an OpenAI-compatible `/v1/chat/completions` endpoint works. Just set `ai-provider: 'openai'` and point `ai-base-url` to your endpoint:
| Provider | `ai-base-url` | Recommended Model | Notes |
|---|---|---|---|
| OpenAI | (default, no need to set) | `gpt-5` | Best for code review. Also: `codex-mini-latest` |
| DeepSeek | `https://api.deepseek.com/v1` | `deepseek-chat` | Maps to DeepSeek-V3.2, great value |
| Groq | `https://api.groq.com/openai/v1` | `meta-llama/llama-4-scout-17b-16e-instruct` | Ultra-fast inference |
| Together AI | `https://api.together.xyz/v1` | `meta-llama/Llama-3.3-70B-Instruct-Turbo` | Good balance of speed/quality |
| Mistral | `https://api.mistral.ai/v1` | `mistral-large-latest` | Always points to latest version |
| OpenRouter | `https://openrouter.ai/api/v1` | `anthropic/claude-sonnet-4-20250514` | Access any model via one API |
| NewAPI / One API | `https://your-server.com/v1` | any model | Self-hosted API gateway |
| Ollama (self-hosted) | `http://your-server:11434/v1` | `llama3.3` | Free, fully private, no data leaves your server |
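For instance, pointing AI mode at a self-hosted Ollama server would look roughly like this. A sketch: the host name is a placeholder, and whether a dummy key value is accepted for local endpoints is an assumption:

```yaml
- uses: 1137043480/PRGuard@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    mode: 'ai'
    ai-provider: 'openai'                        # OpenAI-compatible endpoint
    ai-base-url: 'http://your-server:11434/v1'   # placeholder Ollama host
    ai-api-key: 'ollama'                         # dummy value (assumption: a key input is still expected)
    ai-model: 'llama3.3'
```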
For Anthropic Claude (native API, not OpenAI-compatible):

```yaml
ai-provider: 'anthropic'
ai-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
ai-model: 'claude-sonnet-4-20250514'
```

All settings below are optional — PRGuard works out of the box with sensible defaults. Add these to your workflow only if you want to customize behavior.

💡 These parameters control rule-based checks (used in all modes). AI mode settings (`mode`, `ai-provider`, `ai-api-key`, etc.) are documented in the Quick Start section above.
```yaml
- uses: 1137043480/PRGuard@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    max-failures: 6        # ← optional: override any default below
    min-quality-score: 60
```

| Input | Default | Description |
|---|---|---|
| `max-failures` | `4` | How many checks can fail before PRGuard flags the PR. Increase if you want to be more lenient |
| `min-quality-score` | `40` | PR must score above this (0-100) to pass. Set higher (e.g. 70) for stricter quality control |
| Input | Default | Description |
|---|---|---|
| `require-conventional-title` | `true` | Require titles like `feat: ...`, `fix: ...`, `docs: ...`. Set `false` if your project doesn't use conventional commits |
| `blocked-title-patterns` | `Update README.md,...` | Auto-generated titles that indicate low effort. PRs with these titles are flagged |
| Input | Default | Description |
|---|---|---|
| `require-description` | `true` | Require a non-empty PR body. Set `false` for small/trivial projects |
| `min-description-length` | `30` | Minimum characters in PR body. Short descriptions often mean low-effort PRs |
| `max-description-length` | `5000` | Flag overly long descriptions — a common sign of AI-generated slop |
| `max-emoji-count` | `10` | Flag descriptions with too many emoji — another AI slop signal |
| Input | Default | Description |
|---|---|---|
| `require-conventional-commits` | `false` | Require each commit message to follow `type: message` format. Off by default since many projects don't enforce this |
| `require-commit-author-match` | `true` | Commit author email must match the PR author. Catches copied/stolen commits |
| Input | Default | Description |
|---|---|---|
| `max-files-changed` | `50` | Flag PRs that touch too many files. Large PRs are often AI-generated bulk changes |
| `max-additions` | `2000` | Flag PRs that add too many lines. Giant code dumps are a slop signal |
| `detect-excessive-comments` | `true` | Detect AI-style comments like `// Initialize the variable` on `let x = 0` |
| `detect-hallucinated-imports` | `true` | Check if imported packages actually exist. AI often invents fake package names |
| Input | Default | Description |
|---|---|---|
| `min-account-age-days` | `7` | Flag accounts created less than 7 days ago. Fresh accounts are often spam bots |
| `detect-spam-usernames` | `true` | Flag random-looking usernames like `xjk283hd` — typical of bot accounts |
| Input | Default | Description |
|---|---|---|
| `close-pr` | `false` | Automatically close failing PRs. Use with caution — only enable if you get heavy spam |
| `add-label` | `needs-review` | Label added to failing PRs so you can filter them in GitHub |
| `comment-on-pr` | `true` | Post a detailed review comment with score and issues. Set `false` to run silently |
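For a repository hit by spam waves, the actions above might be combined like this. A sketch only; enable `close-pr` with the caution noted above, and the label name is an arbitrary example:

```yaml
- uses: 1137043480/PRGuard@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    close-pr: true            # auto-close failing PRs (use with caution)
    add-label: 'spam-suspect' # example label for filtering
    comment-on-pr: true       # still explain why the PR was flagged
```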
| Input | Default | Description |
|---|---|---|
| `exempt-bots` | `true` | Skip Dependabot, Renovate, and other known bots |
| `exempt-draft-prs` | `true` | Skip draft PRs (still work-in-progress) |
| `exempt-users` | (empty) | Skip specific users, e.g. `user1,user2` (comma-separated) |
| `exempt-labels` | (empty) | Skip PRs with certain labels, e.g. `skip-review,trusted` |
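A quick sketch of the exemption inputs in use; the usernames and labels here are hypothetical examples:

```yaml
- uses: 1137043480/PRGuard@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    exempt-users: 'trusted-contributor,release-bot'  # hypothetical usernames
    exempt-labels: 'skip-review,trusted'             # PRs with these labels are skipped
    exempt-draft-prs: true
```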
Note: Repository Owners, Members, and Collaborators are automatically exempt — no configuration needed.
| Output | Description |
|---|---|
| `quality-score` | Overall quality score (0-100) |
| `passed` | Whether the PR passed |
| `failures` | Number of failed checks |
| `report` | Full JSON report |
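These outputs can be consumed by later steps via a step `id`, for example to fail the job below a custom score threshold. A sketch; the step id and the threshold of 70 are arbitrary choices, not PRGuard defaults:

```yaml
- uses: 1137043480/PRGuard@v1
  id: prguard
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Fail below score 70
  if: ${{ steps.prguard.outputs.quality-score < 70 }}
  run: |
    echo "Quality score ${{ steps.prguard.outputs.quality-score }} is below 70"
    exit 1
```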
In addition, PRGuard posts:
- 📋 Summary comment — Quality score, rule violations, AI analysis (on every PR)
- 📝 Inline review comments — Line-level feedback on specific code issues (AI mode only)
- 🏷️ Labels — Adds configurable labels on failing PRs
- 🔄 Request Changes — Marks PR as "changes requested" when critical issues are found (AI mode only)
| Feature | PRGuard | anti-slop | PR-Agent | CodeRabbit |
|---|---|---|---|---|
| Rule checks | 40+ | 31 | ❌ | ❌ |
| Line-level AI review | ✅ (BYOK) | ❌ | ✅ | ✅ |
| Import Graph context | ✅ | ❌ | ❌ | ✅ |
| AI slop detection | ✅ Deep | ✅ Basic | ❌ | ❌ |
| Import verification | ✅ 6 languages | ❌ | ❌ | ❌ |
| Quality scoring | ✅ 0-100 | ❌ | ❌ | ❌ |
| Self-hosted AI (Ollama) | ✅ | ❌ | ❌ | ❌ |
| Zero-cost mode | ✅ | ✅ | ❌ | ❌ |
| License | MIT | MIT | Apache | Closed |
See CONTRIBUTING.md for guidelines.
To learn AI, visit L Station! https://linux.do/
MIT © 1137043480