About Cmdsbot

Cmdsbot is the official crawler for Cmds Search. It discovers and indexes public webpages so they can appear in our search results. Our crawlers follow standard protocols (robots.txt and robots meta tags) so site owners control what does and doesn't appear in search results.

Our Crawlers

Each crawler identifies itself clearly in the User-Agent string and links back here.

Robots.txt

Cmdsbot obeys all robots.txt directives. You can control access and discovery with directives such as User-agent, Allow, Disallow, Sitemap, and the widely supported Crawl-delay extension.

Example:

User-agent: cmdsearchbot
Disallow: /secret/
Crawl-delay: 2
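Site owners can check how rules like the example above apply using Python's standard urllib.robotparser module. A minimal sketch, assuming the example directives and a hypothetical example.com host:

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt rules from above.
rules = """\
User-agent: cmdsearchbot
Disallow: /secret/
Crawl-delay: 2
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Paths outside /secret/ are allowed; anything under /secret/ is blocked.
print(parser.can_fetch("cmdsearchbot", "https://example.com/page"))      # True
print(parser.can_fetch("cmdsearchbot", "https://example.com/secret/x"))  # False
print(parser.crawl_delay("cmdsearchbot"))                                # 2
```

The same parser can be pointed at a live file with set_url() and read() instead of parse().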

Cmdsbot caches the robots.txt file for 7 days and automatically re-fetches it once the cache expires.

A robots.txt refresh can be triggered immediately via Search Console.

Page Meta

We honor page-level robots rules declared in the robots meta tag.

Example:

<meta name="robots" content="noindex">
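A crawler can detect this directive with Python's standard html.parser module. A minimal sketch (the RobotsMeta class name is illustrative, not part of any Cmds API):

```python
from html.parser import HTMLParser

class RobotsMeta(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "robots":
                content = attrs.get("content") or ""
                self.directives.update(
                    part.strip().lower() for part in content.split(",")
                )

page = '<html><head><meta name="robots" content="noindex"></head></html>'
meta = RobotsMeta()
meta.feed(page)
print("noindex" in meta.directives)  # True
```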

Crawl Behavior

Cmdsbot waits ~5 seconds between requests (unless manually triggered), avoids blocked paths, and does not bypass the listed rules.
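The pacing described above can be sketched as a loop that sleeps between requests. This is an illustration only: polite_fetch and its fetch callback are hypothetical names, and the 5-second default mirrors the figure stated here.

```python
import time

def polite_fetch(urls, fetch, delay=5.0):
    """Fetch each URL in turn, sleeping `delay` seconds between requests."""
    results = []
    for i, url in enumerate(urls):
        if i > 0:
            time.sleep(delay)  # wait between requests, not before the first
        results.append(fetch(url))
    return results
```

In practice the delay would be raised to any larger Crawl-delay value the site's robots.txt specifies.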

Identify Cmdsbot

Our crawlers use a User-Agent string that includes the token cmdsearchbot and a link back to this page, for example:

Mozilla/5.0 (...) Safari/537.36 (compatible; cmdsearchbot/1.0; +https://search.cmds.media/bot)
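In server logs or request handlers, the bot can be recognized by the cmdsearchbot token in the User-Agent header. A minimal check (note that User-Agent strings can be spoofed, so this identifies claimed, not verified, traffic):

```python
def is_cmdsearchbot(user_agent: str) -> bool:
    """Return True if the User-Agent claims to be cmdsearchbot."""
    return "cmdsearchbot/" in user_agent.lower()

ua = ("Mozilla/5.0 (...) Safari/537.36 "
      "(compatible; cmdsearchbot/1.0; +https://search.cmds.media/bot)")
print(is_cmdsearchbot(ua))             # True
print(is_cmdsearchbot("Mozilla/5.0"))  # False
```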

Contact

If you have any concerns about Cmdsbot, please reach out at support@cmds.media.