
Submitted URL blocked by robots.txt: What it means and what to do (GSC)

By Official

Key takeaways

  • A practical guide to "Submitted URL blocked by robots.txt": how to decide if the URL should be indexed, how to unblock safely, and how to avoid keeping bad URLs stuck in the index

What this status means

You submitted a URL (usually in a sitemap), but robots.txt blocks crawling.

Important nuance: robots.txt blocks crawling, not indexing. A blocked URL can still end up in the index (without its content) if other pages link to it, and blocking also prevents Google from seeing a noindex directive on the page.
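
A minimal sketch of the conflict, using a hypothetical /members/ section on example.com (both are placeholders for illustration):

    # robots.txt: the rule that triggers the status
    User-agent: *
    Disallow: /members/

    <!-- sitemap.xml: the same path is still being submitted -->
    <url>
      <loc>https://www.example.com/members/pricing</loc>
    </url>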

Pick a strategy

Strategy A: you want it indexed

  • remove the Disallow rule that matches the path (see the robots.txt sketch after this list)
  • ensure the page does not also carry a noindex meta tag or X-Robots-Tag header
  • request indexing only for core pages
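
One way the robots.txt change might look, keeping the hypothetical /members/ path from above; for Googlebot the longest matching rule wins, so a more specific Allow can carve a single page out of a broader Disallow:

    # Option 1: delete the rule entirely
    User-agent: *

    # Option 2: keep the block but carve out the submitted page
    User-agent: *
    Disallow: /members/
    Allow: /members/pricing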

Strategy B: you do NOT want it indexed

Best practice:

  • allow crawling (remove the Disallow rule for the path)
  • add noindex so Google can see the directive (see the sketch after this list)
  • remove the URL from the sitemap
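
A sketch of the noindex side, again with the hypothetical /members/ path. Either the meta tag or the response header is enough, and the Disallow rule for the path has to go so Googlebot can actually fetch the directive:

    <!-- In the page's <head> -->
    <meta name="robots" content="noindex">

    # Or sent by the server as an HTTP response header
    X-Robots-Tag: noindex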

Validation

  • Test the robots.txt rules in GSC.
  • URL Inspection: make sure Googlebot can fetch the page (a quick local pre-check is sketched below).
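
A quick local pre-check, sketched with Python's standard urllib.robotparser. The URL and the "Googlebot" user-agent string are placeholders, and the parser's matching is simpler than Googlebot's, so treat it as a smoke test before confirming in GSC:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the live robots.txt
    rp = RobotFileParser("https://www.example.com/robots.txt")
    rp.read()

    # True means the rules no longer block this user-agent from the URL
    print(rp.can_fetch("Googlebot", "https://www.example.com/members/pricing"))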

Next in SEO & Search

Up next:

robots.txt unreachable: Why it happens and how to fix it

A practical guide to "robots.txt unreachable": what Googlebot is seeing, common causes (timeouts, 403/5xx, WAF), and how to validate the fix in Search Console.