Tag: "Crawling"

  • For every page that cannot be crawled successfully, a network error code is stored. Source vs. Target Network Error: a network error can occur on both ends of a link, the source… more »
  • For every page that can be crawled, an HTTP code is returned and stored. This is only possible if there are no network errors during crawling (see the first sketch after this list). LRT Smart: in LRT Smart, the column HTTP-Code is called "Source HTTP Code" and is enabled by default.… more »
  • Recrawling of all the links, every time. You get fresh data that is 1-30 days old, not 5-year-old data. With LRT you can be 100% sure that you base your decisions on accurate link data, because we re-crawl all the links for you before you see them. That's why it… more »
  • Depending on how META tags and robots.txt are interpreted for different types of links and redirects, LRT displays different status icons next to each link in your reports (see the META robots sketch after this list). more »
  • To help you quickly detect if certain URLs are blocked by robots.txt, we provide detailed robots.txt metrics for every source page (see the robots.txt sketch after this list). Metric / Description: Robots.txt Bots: robots.txt allows general bots for the current URL; Robots.txt Googlebot: robots.txt allows Googlebot for… more »
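The first two items describe how LRT stores either an HTTP code or a network error code for every page it tries to crawl. The following minimal Python sketch illustrates that distinction with a hypothetical `crawl()` helper built on the `requests` library; it is only an illustration of the idea, not LRT's actual crawler.

```python
import requests

def crawl(url, timeout=10):
    """Hypothetical helper: fetch a URL and record either its HTTP code
    (page could be crawled) or a network error code (page could not)."""
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=False)
        # The request completed, so an HTTP status code is available.
        return {"url": url, "http_code": response.status_code, "network_error": None}
    except requests.exceptions.RequestException as exc:
        # The request itself failed, so no HTTP code exists; store the
        # error class instead (e.g. ConnectionError, Timeout, SSLError).
        return {"url": url, "http_code": None, "network_error": type(exc).__name__}
```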
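For the status icons that depend on META tag interpretation, a simplified sketch of extracting robots directives from a page's `<meta name="robots">` tag might look as follows. The function name `meta_robots_directives` and the regex-based parsing are assumptions made here for illustration, not how LRT itself parses pages.

```python
import re

def meta_robots_directives(html):
    """Hypothetical sketch: return the set of robots directives
    (e.g. "noindex", "nofollow") found in a <meta name="robots"> tag."""
    match = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    )
    if not match:
        return set()
    return {directive.strip().lower() for directive in match.group(1).split(",")}

# A "nofollow" directive, for example, would change the status icon of outgoing links.
print(meta_robots_directives('<meta name="robots" content="index, nofollow">'))
# prints the directives found, e.g. {'index', 'nofollow'}
```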
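The robots.txt metrics in the last item boil down to asking whether a given user agent may fetch a URL. A minimal sketch using Python's standard urllib.robotparser, with a hypothetical `robots_metrics()` helper standing in for the "Robots.txt Bots" and "Robots.txt Googlebot" columns, could look like this:

```python
from urllib.parse import urlsplit, urlunsplit
from urllib.robotparser import RobotFileParser

def robots_metrics(url):
    """Hypothetical sketch of the "Robots.txt Bots" and "Robots.txt Googlebot"
    metrics: does robots.txt allow general bots / Googlebot for this URL?"""
    parts = urlsplit(url)
    robots_url = urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))
    parser = RobotFileParser(robots_url)
    parser.read()  # fetches and parses the site's robots.txt
    return {
        "Robots.txt Bots": parser.can_fetch("*", url),
        "Robots.txt Googlebot": parser.can_fetch("Googlebot", url),
    }

# Example usage: robots_metrics("https://example.com/some/page")
```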
