You can use the /robots.txt file to give instructions about your site to web robots; this convention is called the Robots Exclusion Protocol.
How it works: a robot wants to visit a web site URL, say https://www.m3server.com/features.html. Before it does so, it first checks for https://www.m3server.com/robots.txt, and finds:
User-agent: *
Disallow: /admin
Allow: /
‘User-agent: *’ means this section applies to all robots.
‘Disallow: /admin’ tells the robot that it should not visit any pages under the /admin directory, while ‘Allow: /’ permits everything else. (The stricter rule ‘Disallow: /’ would tell the robot not to visit any pages on the site at all.)
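The effect of these rules can be checked with Python's standard-library urllib.robotparser. This is a minimal sketch using the rules and URLs from the example above:

```python
from urllib.robotparser import RobotFileParser

# The rules from the example, fed to the parser as a list of lines.
rules = """User-agent: *
Disallow: /admin
Allow: /""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A normal page is allowed; anything under /admin is not.
print(rp.can_fetch("*", "https://www.m3server.com/features.html"))
print(rp.can_fetch("*", "https://www.m3server.com/admin/panel.html"))
```

A well-behaved crawler makes exactly this check before requesting each URL.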
As a web site owner you need to put the file in the right place for the resulting URL to work correctly: the top-level (root) directory of your web server, usually the same directory that holds your web site’s main “index.html.”
User-agent: *
Disallow: /Cached
Disallow: /hosting/hostingoptions
Disallow: /div
Allow: /
In this example, three directories are excluded and everything else remains crawlable.
Remember to use all lower case for the filename: ‘robots.txt’, not ‘Robots.TXT’.
See how to add a Crawl-delay directive to throttle bots.