Understanding robots.txt
The robots.txt file is a plain text file that tells search engines and other bots which parts of your site they may crawl and which they should avoid. By controlling what crawlers request, it also influences which pages end up indexed.
To block the WSEIL bot from crawling your entire domain (e.g., example.net.il), add the following to your robots.txt file:
User-agent: WSEIL
Disallow: /
If you want to block only a specific directory (e.g., /private), use this:
User-agent: WSEIL
Disallow: /private/
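You can test rules like the ones above before deploying them. Here is a minimal sketch using Python's standard urllib.robotparser module; the rules string and the example.net.il URLs simply mirror the directives shown in this article:

```python
from urllib.robotparser import RobotFileParser

# The same directory-blocking rule shown above.
rules = """\
User-agent: WSEIL
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The WSEIL bot is blocked from /private/ but may fetch everything else.
print(parser.can_fetch("WSEIL", "https://example.net.il/private/page.html"))  # False
print(parser.can_fetch("WSEIL", "https://example.net.il/index.html"))         # True
```

The same check works for the full-domain block: with Disallow: / in place, can_fetch would return False for every path on the site.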
To set a crawl delay (in seconds) for all bots, use the following. Note that Crawl-delay is a non-standard directive, and some crawlers (Google, for example) ignore it:
User-agent: *
Crawl-delay: 10
To set a specific crawl delay for the WSEIL bot:
User-agent: WSEIL
Crawl-delay: 5
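The two Crawl-delay rules above can also be checked with urllib.robotparser, which reads a per-agent delay and falls back to the wildcard group for other bots. A minimal sketch (the bot name "SomeBot" is just a hypothetical stand-in for any other crawler):

```python
from urllib.robotparser import RobotFileParser

# Both crawl-delay groups from the examples above.
rules = """\
User-agent: *
Crawl-delay: 10

User-agent: WSEIL
Crawl-delay: 5
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.crawl_delay("WSEIL"))    # 5  (specific group wins)
print(parser.crawl_delay("SomeBot"))  # 10 (falls back to the * group)
```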
We would be very happy if you allowed us to crawl and index your site, and we believe you will benefit from it as well.
The robots.txt file is placed in the root directory of a website and gives search engine bots guidelines on how to crawl the site's content. Learn more about robots.txt configuration on Wikipedia.