How does Google crawl your site?
Today's lesson is about how Google discovers your site and the methods it uses.
We will look at some of the methods Google uses to index web pages on the World Wide Web.
1- How the crawler finds information:-
Google uses a crawler called "Googlebot". As its name suggests, its job is to crawl sites by moving between pages and following links. It takes a copy of each page and sends it to Google's index. Google gives priority to new sites in its indexing, re-crawls already-indexed pages whenever they are updated, and detects broken links, i.e. pages that no longer exist.
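The core idea of crawling described above, fetch a page, extract its links, and queue them for the next visit, can be sketched in a few lines of Python. This is only an illustration, not how Googlebot actually works; the HTML snippet and the URLs are made up for the example:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags, the way a crawler discovers new links."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL
                    self.links.append(urljoin(self.base_url, value))

# A hypothetical fetched page; a real crawler would download this over HTTP.
page = '<html><body><a href="/about">About</a> <a href="https://example.org/blog">Blog</a></body></html>'
parser = LinkExtractor("https://example.com/")
parser.feed(page)
print(parser.links)  # the URLs a crawler would queue for its next round
```

A real crawler would then fetch each discovered URL in turn, store a copy of the page for indexing, and repeat.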
Its work often begins with a sitemap, which the site manager submits to Google so that the site's pages get indexed. Googlebot crawls the links included in the map, sends them for indexing, and revisits them periodically to check whether new pages have been added.
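For illustration, a minimal sitemap is just an XML file listing the URLs you want crawled; the addresses and date below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2013-05-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/about</loc>
  </url>
</urlset>
```

The optional lastmod field hints to the crawler when a page last changed, which helps it decide what to re-crawl.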
2- The webmaster:-
Google gives the webmaster the option of using a "Robots.txt" file as an itinerary for its search spider, Googlebot.
Through directives in the robots file you can allow Googlebot to index specific pages, block it from crawling specific pages or folders, or block it from crawling the site entirely.
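As a sketch of what such directives look like (the paths here are invented placeholders), a robots.txt file placed at the root of the site might read:

```
# Rules for Google's crawler only
User-agent: Googlebot
Disallow: /private/
Allow: /private/public-page.html

# Rules for all other crawlers
User-agent: *
Disallow: /admin/
```

To block the whole site, you would use `Disallow: /` instead. Note that robots.txt only asks crawlers not to fetch pages; it is not an access-control mechanism.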
Finally, I leave you with a summary of what we have presented, in a video by Matt Cutts, head of Google's search quality webspam team.