
Tier unsupported sites by scraping difficulty #357

Draft

guergabo wants to merge 6 commits into main from hypeship/scraperly-link

Conversation

Collaborator

@guergabo guergabo commented May 11, 2026

Summary

Restructures the Unsupported Websites section of the browsers FAQ into a tiered list (Very Hard / Hard) of sites that are difficult or infeasible to automate today, with a short description of what to expect at each tier.

Inspired by per-site difficulty resources elsewhere in the scraping community, but using our own observations rather than linking out.

Drops the earlier <Info> callout in bot-detection/overview.mdx.
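For reference, a tiered section like the one described above could be sketched in MDX roughly as follows (the tier names come from this PR; the site entries and per-tier descriptions are placeholders, not content from the actual diff):

```mdx
## Unsupported Websites

### Very Hard
Expect automation to fail outright or to require constant maintenance.

- example-bank.com (placeholder entry)

### Hard
Expect intermittent blocks; workarounds exist but tend to be fragile.

- example-retailer.com (placeholder entry)
```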

Preview

https://tbd-6fc993ce-hypeship-scraperly-link.mintlify.app/browsers/faq#unsupported-websites

Test plan

  • Mintlify preview renders the tiered list cleanly
  • #unsupported-websites anchor still resolves
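The anchor check in the test plan depends on the docs generator deriving `#unsupported-websites` from the "Unsupported Websites" heading. Mintlify's exact slugging algorithm is an assumption here; a minimal sketch of a typical heading-to-anchor slugifier:

```python
import re

def slugify(heading: str) -> str:
    """Approximate a docs generator's heading-to-anchor rule (assumed, not
    Mintlify's exact algorithm): lowercase, drop punctuation, hyphenate."""
    slug = heading.strip().lower()
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)   # drop punctuation
    slug = re.sub(r"[\s-]+", "-", slug)        # collapse spaces/hyphens to one hyphen
    return slug

print(slugify("Unsupported Websites"))  # -> unsupported-websites
```

Under that assumption, renaming the heading would silently break the `#unsupported-websites` anchor, which is why the test plan checks it explicitly.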

Contributor

mintlify Bot commented May 11, 2026

Preview deployment for your docs. Learn more about Mintlify Previews.

| Project | Status | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| Kernel | 🟢 Ready | View Preview | May 11, 2026, 5:22 PM |

💡 Tip: Enable Workflows to automatically generate PRs for you.

cursor[bot]
cursor Bot approved these changes May 11, 2026

@cursor cursor Bot left a comment

Stale comment

Risk assessment: Very Low.

The actual diff only adds a short informational <Info> callout with an external link in browsers/bot-detection/overview.mdx. There are no codepath, config, navigation, prompt, infrastructure, auth, permissions, or data changes, and I didn’t find a CODEOWNERS file in the repo.

The PR is already approved on the current head commit, so I’m leaving this assessment as a comment only and not re-approving.


Sent by Cursor Automation: Assign PR reviewers

@guergabo guergabo changed the title Link to Scraperly in bot anti-detection overview Tier unsupported sites by scraping difficulty May 11, 2026

@cursor cursor Bot left a comment

Stale comment

Risk assessment: Very Low.

The actual diff only adds a short <Info> callout with an external Scraperly link in browsers/bot-detection/overview.mdx. There are no codepath, config, navigation, prompt/model-instruction, infrastructure, auth, permissions, or data changes, and I didn’t find a CODEOWNERS file in the repo.

The PR already has an approval recorded, so I’m leaving this assessment as a comment only and not re-approving.



@cursor cursor Bot left a comment


Risk assessment: Very Low.

The current diff only updates copy in browsers/faq.mdx to categorize unsupported websites by scraping difficulty. There are no codepath, config, navigation, infrastructure, auth, permissions, data, or model-instruction changes, and I didn’t find a CODEOWNERS file in the repo.

The PR is already approved, so I’m leaving this assessment as a comment only and not re-approving.

