MediaWiki:Robots.txt

# robots.txt for http://www.wikipedia.org/ and friends
#
# Please note: There are a lot of pages on this site, and there are
# some misbehaved spiders out there that go _way_ too fast. If you're
# irresponsible, your access to the site may be blocked.

# advertising-related bots:
User-agent: Mediapartners-Google*
Disallow: /

# Wikipedia work bots:
User-agent: IsraBot
Disallow:

User-agent: Orthogaffe
Disallow:

# Crawlers that are kind enough to obey, but which we'd rather not have
# unless they're feeding search engines.
User-agent: UbiCrawler
Disallow: /

User-agent: DOC
Disallow: /

User-agent: Zao
Disallow: /

# Some bots are known to be trouble, particularly those designed to copy
# entire sites. Please obey robots.txt.
User-agent: sitecheck.internetseer.com
Disallow: /

User-agent: Zealbot
Disallow: /

User-agent: MSIECrawler
Disallow: /

User-agent: SiteSnagger
Disallow: /

User-agent: WebStripper
Disallow: /

User-agent: WebCopier
Disallow: /

User-agent: Fetch
Disallow: /

User-agent: Offline Explorer
Disallow: /

User-agent: Teleport
Disallow: /

User-agent: TeleportPro
Disallow: /

User-agent: WebZIP
Disallow: /

User-agent: linko
Disallow: /

User-agent: HTTrack
Disallow: /

User-agent: Microsoft.URL.Control
Disallow: /

User-agent: Xenu
Disallow: /

User-agent: larbin
Disallow: /

User-agent: libwww
Disallow: /

User-agent: ZyBORG
Disallow: /

User-agent: Download Ninja
Disallow: /

# Misbehaving: requests much too fast:
User-agent: fast
Disallow: /

# Sorry, wget in its recursive mode is a frequent problem.
# Please read the man page and use it properly; there is a
# --wait option you can use to set the delay between hits,
# for instance.
User-agent: wget
Disallow: /

# The 'grub' distributed client has been *very* poorly behaved.
User-agent: grub-client
Disallow: /

# Doesn't follow robots.txt anyway, but...
User-agent: k2spider
Disallow: /

# Hits many times per second, not acceptable
# http://www.nameprotect.com/botinfo.html
User-agent: NPBot
Disallow: /

# A capture bot, downloads gazillions of pages with no public benefit
# http://www.webreaper.net/
User-agent: WebReaper
Disallow: /

# Wayback Machine: defaults and whether to index user-pages
# FIXME: Complete the removal of this block, per T7582.
# User-agent: archive.org_bot
# Allow: /

# Friendly, low-speed bots are welcome viewing article pages, but not
# dynamically-generated pages please.
#
# Inktomi's "Slurp" can read a minimum delay between hits; if your
# bot supports such a thing using the 'Crawl-delay' or another
# instruction, please let us know.
#
# There is a special exception for API mobileview to allow dynamic
# mobile web & app views to load section content.
# These views aren't HTTP-cached but use parser cache aggressively
# and don't expose special: pages etc.
#
# Another exception is for REST API documentation, located at
# /api/rest_v1/?doc.
User-agent: *
Allow: /w/api.php?action=mobileview&
Allow: /w/load.php?
Allow: /api/rest_v1/?doc
Disallow: /w/
Disallow: /api/
Disallow: /trap/
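
The comment block above invites friendly, low-speed crawlers and mentions the non-standard Crawl-delay directive that some bots (such as Inktomi's Slurp) understand. As a rough sketch only, and not part of the file itself, a per-bot stanza using such a delay might look like the following; "ExampleBot" and the 5-second value are placeholders, and the path rules simply mirror the catch-all block above.

# Hypothetical stanza (not in the file above): ask one named crawler to
# wait 5 seconds between requests, with the same path rules as the
# catch-all "User-agent: *" block.
User-agent: ExampleBot
Crawl-delay: 5
Allow: /w/api.php?action=mobileview&
Allow: /w/load.php?
Allow: /api/rest_v1/?doc
Disallow: /w/
Disallow: /api/
Disallow: /trap/

Crawl-delay was never part of the original robots exclusion standard and several major crawlers ignore it, which is presumably why the comments ask bot operators to get in touch rather than relying on the directive alone.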