Tag: "Crawling"
- For every page that cannot be crawled successfully, a network error code is stored; interpreting a page's META tags and robots.txt is only possible if there are no network errors during crawling (see the error-handling sketch after this list). Source vs. Target Network Error: a network error can occur on both ends of a link, the source…
- Recrawling of all the links, every time: you get fresh data that is 1-30 days old, not 5-year-old data. With LRT you can be 100% sure that you base your decisions on accurate link data, because we re-crawl all the links for you before you see them. That's why it…
- Depending on how META tags and robots.txt are interpreted for different types of links and redirects, LRT displays different status icons next to each link in your reports (see the link-status sketch after this list).
- To help you quickly detect whether certain URLs are blocked by robots.txt, we provide detailed robots.txt metrics for every source page (see the robots.txt sketch after this list):

  | Metric | Description |
  | --- | --- |
  | Robots.txt Bots | robots.txt allows general bots for the current URL |
  | Robots.txt Googlebot | robots.txt allows Googlebot for… |
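
The first excerpt describes storing a network error code for every page that fails to crawl. Below is a minimal Python sketch of that idea, assuming the `requests` library; the helper name, error labels, and URLs are illustrative stand-ins, not LRT's actual implementation.

```python
import requests

def fetch_with_error_code(url, timeout=10):
    """Fetch a URL; return (html, None) on success or (None, error_code) on failure."""
    try:
        resp = requests.get(url, timeout=timeout)
        if resp.status_code >= 400:
            return None, "HTTP %d" % resp.status_code  # e.g. HTTP 404, HTTP 503
        return resp.text, None
    except requests.exceptions.Timeout:
        return None, "TIMEOUT"
    except requests.exceptions.ConnectionError:
        return None, "CONNECTION_ERROR"  # DNS failure, connection refused, ...
    except requests.exceptions.RequestException as exc:
        return None, type(exc).__name__  # any other network-level failure

# Store an error code for every page that could not be crawled successfully.
crawl_errors = {}
for url in ["https://example.com/", "https://example.com/missing"]:
    html, error = fetch_with_error_code(url)
    if error:
        crawl_errors[url] = error  # pages with errors get no META/robots.txt analysis
```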
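
The status-icon excerpt turns on two signals per link: the page-wide `<meta name="robots">` directives and each link's `rel` attribute. This sketch collects both with Python's standard-library `html.parser`; the `LinkStatusParser` class and the `link_status` labels are hypothetical, and a real classifier would also cover redirects, `X-Robots-Tag` headers, and robots.txt blocks.

```python
from html.parser import HTMLParser

class LinkStatusParser(HTMLParser):
    """Collect the signals needed to classify each link on a page."""

    def __init__(self):
        super().__init__()
        self.meta_robots = ""  # content of <meta name="robots">, if any
        self.links = []        # (href, rel) pairs for every <a> tag

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.meta_robots = (attrs.get("content") or "").lower()
        elif tag == "a" and "href" in attrs:
            self.links.append((attrs["href"], (attrs.get("rel") or "").lower()))

def link_status(meta_robots, rel):
    """Map crawl signals to a text label (a stand-in for LRT's status icons)."""
    if "nofollow" in rel or "nofollow" in meta_robots:
        return "nofollow"
    return "follow"

parser = LinkStatusParser()
parser.feed('<meta name="robots" content="index,follow">'
            '<a href="/a" rel="nofollow">x</a><a href="/b">y</a>')
for href, rel in parser.links:
    print(href, link_status(parser.meta_robots, rel))  # /a nofollow, /b follow
```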
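
The two robots.txt metrics in the table above can be approximated with Python's standard-library `urllib.robotparser`, which downloads a site's robots.txt and answers per-user-agent allow/deny questions; the `robots_metrics` helper is an assumed name used for illustration.

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def robots_metrics(url):
    """Per-URL robots.txt metrics: is the URL allowed for general bots / Googlebot?"""
    origin = "{0.scheme}://{0.netloc}".format(urlparse(url))
    parser = RobotFileParser(origin + "/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt
    return {
        "Robots.txt Bots": parser.can_fetch("*", url),               # general bots
        "Robots.txt Googlebot": parser.can_fetch("Googlebot", url),  # Googlebot only
    }

print(robots_metrics("https://www.example.com/some/page"))
```

Here `can_fetch("*", url)` stands in for the "Robots.txt Bots" metric and `can_fetch("Googlebot", url)` for "Robots.txt Googlebot".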