When You Don’t Want to Get Google’s Attention

It seems entirely counterintuitive. You spend a great deal of time and search engine optimization effort trying to get Google, Bing, Yahoo, and the like to find your website and rank it at the top of the search results page. So why on earth would you want them NOT to look at your site?

A common reason to block the search engine spiders is to avoid being dinged for duplicate content. You may have a mobile site that uses the same content as your desktop version, or you may have print-ready pages. If not handled properly, these would be counted as duplicate content.

You can stop web crawlers, or bots, from indexing certain pages of your website by using a robots.txt file. This file lets you talk to the bots and give them directions, following a standard called the Robots Exclusion Protocol. Before crawling your site, web crawlers look for a robots.txt file to find out where they are allowed to go.
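Here is a minimal sketch of what such a file might look like (the /print/ and /mobile/ paths are hypothetical placeholders for the duplicate-content pages mentioned above):

    User-agent: *
    Disallow: /print/
    Disallow: /mobile/

The asterisk on the User-agent line means the rules apply to all crawlers, and each Disallow line names a path the bots are asked to stay out of.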

Type “yourdomain.com/robots.txt” into your browser and you should see a list of directories on your website that you are asking the search engines to “skip” (or “disallow”). Look but don’t touch unless you are an experienced SEO. Making the wrong change could significantly hurt your business.
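If you prefer the command line, the same check can be done with a tool like curl (yourdomain.com is a placeholder; substitute your own domain):

    curl https://yourdomain.com/robots.txt

Either way, you are simply fetching a plain text file that lives at the root of your site.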

Syntax is critical. If Google requests your robots.txt file and cannot read it, it may stop crawling your site entirely, rendering it invisible in search results. Dealing with your website’s robots.txt file can be delicate. This is one for an SEO expert. If you don’t have one in house, it might be time to reach out for help.
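To see how unforgiving the syntax is, consider this sketch: a single stray character is the difference between blocking everything and blocking nothing.

    # Blocks ALL crawlers from the ENTIRE site
    User-agent: *
    Disallow: /

    # Blocks nothing; an empty Disallow means everything is allowed
    User-agent: *
    Disallow:

That lone slash is exactly the kind of detail an inexperienced editor can get wrong.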

Keep in mind that your robots.txt file is public, and anyone can see what you are asking the robots to ignore. So this is not the way to try to hide sensitive information.