Key takeaways
- A practical guide to "Submitted URL blocked by robots
- txt": how to decide if the URL should be indexed, how to unblock safely, and how to avoid keeping bad URLs stuck in the index
What this status means
You submitted a URL (usually in a sitemap), but robots.txt blocks crawling.
Important nuance: robots.txt blocks crawling, not indexing. A blocked URL can still appear in the index (often without a snippet), and because Googlebot cannot fetch the page, it also cannot see any noindex directive on it.
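To see that nuance in practice, here is a minimal sketch using Python's standard urllib.robotparser; the Disallow rule and the /private/page URL are hypothetical stand-ins for your own blocked path.

```python
from urllib import robotparser

# Hypothetical robots.txt contents, for illustration only.
robots_txt = """
User-agent: *
Disallow: /private/
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot is refused the fetch, so it never downloads the HTML and
# never sees any <meta name="robots" content="noindex"> on the page.
print(parser.can_fetch("Googlebot", "https://example.com/private/page"))  # False
```

If a check like this returns False for a URL you also submit in your sitemap, Search Console reports exactly this status.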
Pick a strategy
Strategy A: you want it indexed
- remove the Disallow rule that matches the path
- ensure there is no noindex, in either the HTML or the X-Robots-Tag header (checked in the sketch after this list)
- request indexing only for core pages
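A rough way to verify both conditions after editing robots.txt is sketched below, assuming a hypothetical https://example.com/products/widget URL; it checks that Googlebot may fetch the page and that neither the response header nor the HTML carries a noindex.

```python
import re
import urllib.request
from urllib import robotparser

url = "https://example.com/products/widget"  # hypothetical URL you want indexed

# 1. Confirm robots.txt no longer blocks the path.
rp = robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()
print("Crawlable:", rp.can_fetch("Googlebot", url))

# 2. Confirm there is no noindex in the header or the HTML (rough check).
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    header = resp.headers.get("X-Robots-Tag", "")
    html = resp.read().decode("utf-8", errors="replace")

meta_noindex = re.search(
    r'<meta[^>]*name=["\']robots["\'][^>]*noindex', html, re.IGNORECASE
)
print("Header noindex:", "noindex" in header.lower())
print("Meta noindex:", bool(meta_noindex))
```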
Strategy B: you do NOT want it indexed
Best practice:
- allow crawling (remove or narrow the Disallow rule)
- add noindex so Google can actually see the directive
- remove the URL from the sitemap (see the sketch after this list)
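The noindex itself lives on the page, either as `<meta name="robots" content="noindex">` in the HTML head or as an `X-Robots-Tag: noindex` response header. For the sitemap step, here is a small sketch that drops one entry from a static sitemap file; sitemap.xml and the URL below are hypothetical, and sites that generate sitemaps from a CMS or plugin would make the change there instead.

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # keep the default sitemap namespace on output

url_to_remove = "https://example.com/private/page"  # hypothetical blocked URL

tree = ET.parse("sitemap.xml")
root = tree.getroot()

# Remove every <url> entry whose <loc> matches the URL you no longer want submitted.
for url_el in list(root.findall(f"{{{NS}}}url")):
    loc = url_el.findtext(f"{{{NS}}}loc", default="").strip()
    if loc == url_to_remove:
        root.remove(url_el)

tree.write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```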
Validation
- Test robots.txt in GSC.
- URL Inspection: make sure Googlebot can fetch the page (a bulk check against the sitemap is sketched below).
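To catch every affected URL in one pass instead of inspecting them one by one, here is a sketch that checks each sitemap entry against the live robots.txt; it assumes a flat sitemap (not a sitemap index) and uses example.com as a placeholder domain.

```python
import urllib.request
import xml.etree.ElementTree as ET
from urllib import robotparser

SITE = "https://example.com"  # placeholder domain
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

# Load and parse the live robots.txt once.
rp = robotparser.RobotFileParser(f"{SITE}/robots.txt")
rp.read()

# Pull every <loc> out of the sitemap and flag the entries robots.txt blocks.
with urllib.request.urlopen(f"{SITE}/sitemap.xml") as resp:
    root = ET.fromstring(resp.read())

for loc in root.iter(f"{NS}loc"):
    url = (loc.text or "").strip()
    if url and not rp.can_fetch("Googlebot", url):
        print("Blocked by robots.txt:", url)
```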
Next in GSC statuses
Browse the cluster: GSC indexing statuses.
- GSC Indexing Statuses Explained: What They Mean and How to Fix Them (2026)
- Page with redirect (Google Search Console): What it means and how to fix it
- Redirect loop: How to find it and fix it (SEO + GSC)
- GSC redirect error: The fastest fix checklist (chains, loops, and canonical URLs)
- Submitted URL marked 'noindex': The fastest fix checklist (GSC)
- robots.txt unreachable: Why it happens and how to fix it
Next in SEO & Search
Up next:
robots.txt unreachable: Why it happens and how to fix it
A practical guide to "robots.txt unreachable": what Googlebot is seeing, common causes (timeouts, 403/5xx, WAF), and how to validate the fix in Search Console.