docs: add pentesting prompt methodology guidance #266

Open
mason5052 wants to merge 1 commit into vxcontrol:main from mason5052:codex/issue-79-pentest-methodology-guide

Conversation

@mason5052
Contributor

Summary

  • add a pentesting prompt methodology section to the README for prompt authors
  • expand the prompt engineering guide with a pentester-specific methodology checklist and reference material
  • point contributors to the existing examples/prompts/base_web_pentest.md starter prompt

Problem

Issue #79 asks for better guidance on how AI prompts should approach pentesting. PentAGI already includes prompt templates and a sample web pentest prompt, but the documentation does not explain how to translate pentesting methodology into reusable agent guidance.

Solution

  • add a new README section under Development that describes a phase-based prompt methodology for offensive security work
  • extend backend/docs/prompt_engineering_pentagi.md with guidance for authorization boundaries, coverage-first mapping, attack-surface prioritization, low-risk validation, evidence capture, iterative memory use, and report-ready summaries
  • reference public methodology sources such as HackTricks and Pentest Book as inspiration without copying large external checklists into PentAGI's prompt docs

User Impact

Prompt authors now have a clearer path for turning pentest methodology into PentAGI prompts. This should make new prompt iterations easier to structure, easier to review, and easier to adapt to different engagement scopes.

Test Plan

  • confirm the new README section is linked from the table of contents
  • verify the documentation references existing PentAGI prompt assets and external methodology sources accurately
  • run git diff --check
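
The manual test plan above can be sketched as a small shell check. The paths and heading text are assumed from this PR's description (not confirmed against the merged diff); run it from the repository root:

```shell
# check_docs: sketch of the PR's manual test plan as a script.
# Paths and section name are assumptions from the PR description.
check_docs() {
  readme="${1:-README.md}"
  # The TOC entry must link to the new section...
  grep -q '](#pentesting-prompt-methodology)' "$readme" || return 1
  # ...and a heading with the matching anchor must exist.
  grep -Eq '^#{1,6} +Pentesting Prompt Methodology' "$readme" || return 1
  # Referenced prompt assets should be present on disk.
  test -f examples/prompts/base_web_pentest.md || return 1
  test -f backend/docs/prompt_engineering_pentagi.md || return 1
  # Catch stray whitespace errors introduced by the change.
  git diff --check || return 1
  echo "docs checks passed"
}
```

Wiring this into CI would make the anchor/heading pair harder to break in future README reshuffles.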

Closes #79

Signed-off-by: Mason Kim (ZINUS US_SALES) <mkim@zinus.com>
Copilot AI review requested due to automatic review settings April 15, 2026 22:05

Copilot AI left a comment

Pull request overview

Adds pentesting-specific prompt-author guidance to PentAGI’s documentation, aiming to help prompt authors translate standard pentest methodology into reusable, reviewable agent instructions.

Changes:

  • Adds a phase-based “Pentesting Prompt Methodology” section to the README and links it from the Table of Contents.
  • Extends backend/docs/prompt_engineering_pentagi.md with a pentester-oriented methodology checklist and external reference pointers.
  • Points contributors to existing prompt assets (notably examples/prompts/base_web_pentest.md and the pentester template).

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
README.md Adds a pentesting prompt methodology section and a TOC entry intended to help prompt authors structure offensive-security prompts.
backend/docs/prompt_engineering_pentagi.md Adds a pentesting methodology checklist plus curated reference links and pointers to in-repo prompt templates/examples.

Comment thread README.md
Comment on lines +42 to 43
- [Pentesting Prompt Methodology](#pentesting-prompt-methodology)
- [Testing LLM Agents](#testing-llm-agents)

Copilot AI Apr 15, 2026


The Table of Contents nests “Pentesting Prompt Methodology” under “Development”, but the actual “### Pentesting Prompt Methodology” section appears under “Testing LLM Agents” later in the README. This makes the TOC structure misleading even though the anchor works; please either move the section under the Development chapter or relocate the TOC entry under the correct parent heading (Testing LLM Agents).

Suggested change
- [Pentesting Prompt Methodology](#pentesting-prompt-methodology)
- [Testing LLM Agents](#testing-llm-agents)
- [Testing LLM Agents](#testing-llm-agents)
- [Pentesting Prompt Methodology](#pentesting-prompt-methodology)

#### Recommended Reference Material
- Use public methodology resources such as [HackTricks](https://book.hacktricks.wiki/en/index.html) and [Pentest Book](https://pentestbook.six2dez.com/) as inspiration for attack-surface coverage and testing depth.
- Translate those references into concise phases, priorities, and verification rules for the agent instead of copying long checklists into the system prompt verbatim.
- Keep prompt examples aligned with live PentAGI assets such as `backend/pkg/templates/prompts/pentester.tmpl` and `../../examples/prompts/base_web_pentest.md`.

Copilot AI Apr 15, 2026

This bullet mixes a repo-root path (backend/pkg/...) with a relative-from-this-file path (../../examples/...). For clarity and consistency, prefer a single convention (e.g., repo-root paths like examples/prompts/base_web_pentest.md) and ideally make them markdown links so readers can click through.

Suggested change
- Keep prompt examples aligned with live PentAGI assets such as `backend/pkg/templates/prompts/pentester.tmpl` and `../../examples/prompts/base_web_pentest.md`.
- Keep prompt examples aligned with live PentAGI assets such as [`backend/pkg/templates/prompts/pentester.tmpl`](../pkg/templates/prompts/pentester.tmpl) and [`examples/prompts/base_web_pentest.md`](../../examples/prompts/base_web_pentest.md).
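
To audit for the mixed path conventions this comment flags, a quick grep sketch can list `../`-style links across the docs (the pattern is illustrative, not an existing project tool):

```shell
# find_relative_links: list markdown links or code spans that use
# ../-style relative paths, so they can be normalized to repo-root form.
find_relative_links() {
  # grep exits 1 on no match; swallow that so callers see an empty list.
  grep -rnE '\]\(\.\./|`\.\./' "$@" || true
}
```

Running it over `backend/docs/` would surface lines like the one quoted in this review.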


Development

Successfully merging this pull request may close these issues.

[Enhancement]: better guides fo ai to use for pentesting
