Richard S. wrote: FYI, you can see what a website is blocking bots from indexing by looking at its robots.txt file, for example:
One thing to be aware of: a site can serve a dynamic robots.txt file based on the user-agent string, IP address, etc., so what is served to you might not be the same as what is served to Google. That is useful if, for example, you wanted to serve one file to Googlebot and a different one to suspected scraper bots.
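A minimal sketch of that idea, using only the Python standard library (the specific rules and the simple `Googlebot` substring check are illustrative assumptions, not anything from this thread):

```python
# Serve a different robots.txt body depending on the requesting User-Agent.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical example policies: let Googlebot crawl everything,
# block everyone else.
GOOGLE_RULES = "User-agent: Googlebot\nDisallow:\n"
DEFAULT_RULES = "User-agent: *\nDisallow: /\n"

def robots_for(user_agent: str) -> str:
    """Pick which robots.txt body to serve for a given User-Agent string."""
    if "Googlebot" in user_agent:
        return GOOGLE_RULES
    return DEFAULT_RULES

class RobotsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt":
            body = robots_for(self.headers.get("User-Agent", "")).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("", 8000), RobotsHandler).serve_forever()
```

Note that User-Agent strings are trivially spoofed, so a real setup that wanted to single out Google would usually verify the crawler (e.g. by reverse-DNS lookup of the client IP) rather than trust the header alone.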
That is all very confusing to me. I'll need to hire someone to show me how to use it.