Using a robots.txt File
04/25/99 by John Pollock


A robots.txt file is a useful way to keep search engines from indexing pages you do not want spidered. Why would you not want a page indexed by a search engine? Perhaps you want to display a page that shows an example of spamming the search engines. Such a page might include repeated keywords, hidden tags stuffed with keywords, and other tricks that could get a page, or an entire site, banned from a search engine.

An example of such a page is on this server: it is another one of the articles here, and it talks about search engine spammers. To look at the article, see The "Secrets" of Spamdexers.

The robots.txt file is a good way to prevent a page like that from getting indexed. However, not every site can use one. The only robots.txt file the spiders will read is the one in the top-level HTML directory of your server (the document root). This means you can only use it if you run your own domain. The spiders will look for the file at locations similar to these:

http://www.pageresource.com/robots.txt
http://www.javascriptcity.com/robots.txt
http://www.mysite.com/robots.txt

A robots.txt file in any other location will not be read by a search engine spider, so the file locations below will not work:

http://www.pageresource.com/html/robots.txt
http://members.someplace.com/you/robots.txt
http://someisp.net/~you/robots.txt

Now, if you have your own domain, you can see where to place the file. So let's take a look at exactly what needs to go into the robots.txt file to tell the spiders what you want done.
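If you want to see this rule in action, the sketch below (in Python, with a made-up page address as the example) shows how a spider decides where to request the file: it throws away the whole path and asks for /robots.txt at the server root, no matter how deep the page being crawled is.

```python
from urllib.parse import urlparse

def robots_url(page_url):
    # Spiders ignore a robots.txt anywhere but the server root:
    # whatever page they crawl, they request /robots.txt at the top.
    parts = urlparse(page_url)
    return f"{parts.scheme}://{parts.netloc}/robots.txt"

# A deeply nested page still maps to the root robots.txt:
robots_url("http://www.pageresource.com/html/deep/page.html")
# → "http://www.pageresource.com/robots.txt"
```

This is exactly why a file placed at /html/robots.txt or under a ~you directory never gets read.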

If you want to exclude all the search engine spiders from your entire domain, you would write just the following into the robots.txt file:

User-agent: *
Disallow: /
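You can check how an obedient spider reads these two lines using Python's standard urllib.robotparser module; the domain and bot name below are just placeholders for the sake of the example.

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Feed it the rules as if they had been fetched from the site root.
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# Every path on the site is now off limits to every spider:
rp.can_fetch("AnyBot", "http://www.mysite.com/")            # → False
rp.can_fetch("AnyBot", "http://www.mysite.com/index.html")  # → False
```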

If you want to exclude all the spiders from a certain directory within your site, you would write the following:

User-agent: *
Disallow: /aboutme/

If you want to do this for multiple directories, you add on more Disallow lines:

User-agent: *
Disallow: /aboutme/
Disallow: /stats/
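Checked the same way with urllib.robotparser (again with placeholder domain and bot names), the two Disallow lines close off those directories while leaving the rest of the site open:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /aboutme/",
    "Disallow: /stats/",
])

# Pages inside the listed directories are blocked...
rp.can_fetch("AnyBot", "http://www.mysite.com/aboutme/album.html")  # → False
rp.can_fetch("AnyBot", "http://www.mysite.com/stats/refer.htm")     # → False
# ...while everything else stays open to the spiders.
rp.can_fetch("AnyBot", "http://www.mysite.com/index.html")          # → True
```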

If you want to exclude certain files, then type in the rest of the path to the files you want to exclude:

User-agent: *
Disallow: /aboutme/album.html
Disallow: /stats/refer.htm
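One detail worth knowing: a Disallow line is a prefix match, not an exact filename match. The sketch below (placeholder domain again) shows that blocking a single file leaves its neighbors in the same directory open:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /aboutme/album.html",
    "Disallow: /stats/refer.htm",
])

# The named files are blocked...
rp.can_fetch("AnyBot", "http://www.mysite.com/aboutme/album.html")  # → False
# ...but other pages in the same directory are still allowed.
rp.can_fetch("AnyBot", "http://www.mysite.com/aboutme/index.html")  # → True
```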

If you are curious, here is what I used to keep the spamming article from getting indexed:

User-agent: *
Disallow: /zine/spam1.htm

If you want a rule to apply only to one particular search engine spider, replace the wildcard with that spider's name:

User-agent: Robot_Name
Disallow: /zine/spam1.htm

You'll need to know the name of the search engine's spider or robot, and place it where Robot_Name appears above. You can find these names on the web sites of the various search engines.
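Here is how a named rule plays out, again sketched with urllib.robotparser; "ExampleBot" and "OtherBot" are hypothetical spider names used purely for illustration:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: ExampleBot",   # hypothetical spider name
    "Disallow: /zine/spam1.htm",
])

# Only the named spider is turned away from that page...
rp.can_fetch("ExampleBot", "http://www.mysite.com/zine/spam1.htm")  # → False
# ...every other spider may still fetch it.
rp.can_fetch("OtherBot", "http://www.mysite.com/zine/spam1.htm")    # → True
```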

So, if you need to exclude something from search engine indexing, robots.txt is the most effective tool the search engines recognize. Use it to keep the spiders out of any part of your site you want them to avoid.


Article Copyright © 1999 by John Pollock






Copyright © 1997-2002 The Web Design Resource. All rights reserved.