One of the most critical aspects of proactive SEO (search engine optimization) is monitoring your crawl budget. Monitoring your crawl budget means examining which types of bots crawl your website (Google, Bing, Yandex, Baidu…), how often they visit, average bytes received (how large your pages are), response time (how fast they load), and response codes (what's broken).
This is a canary in the coal mine, so to speak. If Google were a person, Google Bots (Bots = Robots) would be the eyes, ears, and hands of the search engine. Reading bot traffic can tell you where they've looked, what they've touched, and how they might have reacted. This can signal items that need fixing, like upgrading your server, compressing your images, and repairing broken links.
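Reading bot traffic usually starts with your server access logs. As a rough sketch (the sample log lines, the bot name list, and the "combined" log format fields are assumptions, not from this article), a short script can tally visits, bytes received, and response codes per bot:

```python
import re
from collections import Counter

# Hypothetical sample lines in the common Apache/Nginx "combined" log format.
LOG_LINES = [
    '66.249.66.1 - - [10/May/2019:13:55:36 +0000] "GET /blog/ HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '40.77.167.2 - - [10/May/2019:13:56:01 +0000] "GET /old-page HTTP/1.1" 404 320 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"',
    '66.249.66.1 - - [10/May/2019:13:57:12 +0000] "GET /products HTTP/1.1" 200 8192 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]

BOT_NAMES = ("Googlebot", "bingbot", "YandexBot", "Baiduspider")

# Fields: IP, ident, user, [timestamp], "method path proto", status, bytes, "referer", "user-agent"
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" (\d{3}) (\d+|-) "[^"]*" "([^"]*)"'
)

def summarize(lines):
    """Count hits, total bytes served, and response codes per known bot."""
    stats = {}
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        status, size, agent = m.group(4), m.group(5), m.group(6)
        bot = next((b for b in BOT_NAMES if b in agent), None)
        if bot is None:
            continue  # human traffic or an unknown crawler
        s = stats.setdefault(bot, {"hits": 0, "bytes": 0, "codes": Counter()})
        s["hits"] += 1
        s["bytes"] += 0 if size == "-" else int(size)
        s["codes"][status] += 1
    return stats

stats = summarize(LOG_LINES)
for bot, s in stats.items():
    print(bot, s["hits"], "hits,", s["bytes"], "bytes,", dict(s["codes"]))
```

A spike in 404s for one bot, or an unusually high bytes-per-hit average, is exactly the kind of signal worth investigating.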
Bots have a set attention span: they are busy drone workers that must finish their task before moving on to the next website. Because their time is valuable and the internet is effectively infinite, they only have so much time per site (this is your crawl budget).
Google doesn’t disclose what your crawl budget is, but it’s safe to say that if you aren’t Apple or Amazon, it is probably pretty small. That’s why it’s essential to optimize your website as much as you can. You want to give the bots a well-planned route, guiding them where to crawl, with the most relevant information first.
First Link Rule
When Google Bots encounter more than one link to the same URL on a page, they follow the first and ignore the second, so make your first on-page link count! (We can tell this by reading the log traffic.)
If “Home” is the first link in your navigation, you are missing out on a huge SEO opportunity. Rename “Home” to your top keyword (giving you a high-powered anchor), or remove that link altogether.
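To audit the first-link rule on your own pages, you can pull out the first anchor a bot would see. A minimal sketch with the standard library (the sample nav markup and URLs below are hypothetical):

```python
from html.parser import HTMLParser

class FirstLinkFinder(HTMLParser):
    """Record the href and anchor text of the first <a> tag on the page."""
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self._in_first_a = False
        self._done = False

    def handle_starttag(self, tag, attrs):
        if tag == "a" and not self._done and self.href is None:
            self.href = dict(attrs).get("href")
            self._in_first_a = True

    def handle_data(self, data):
        if self._in_first_a:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._in_first_a:
            self._in_first_a = False
            self._done = True

# Hypothetical nav where the first link uses a keyword anchor instead of "Home"
html = '<nav><a href="/seo-services/">SEO Services</a> <a href="/">Home</a></nav>'
finder = FirstLinkFinder()
finder.feed(html)
print(finder.href, finder.text)
```

If the first link this prints is your homepage with the anchor "Home", that is the opportunity the article describes.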
Author + Meta Tags
Author credentials and metadata (the posting date specifically) usually appear first in blog posts and some pages. Remembering the first-link rule, this sends your bot traffic to your index page, when it really needs to go to an internal, keyword-rich link.
Google loves author bios because they boost your E-A-T score (Expertise, Authoritativeness, Trustworthiness), so add yours to the bottom of the article. If you look at the page you are reading, the first link Google crawls is an internal keyword link, and my bio comes last.
Crawl vs Directory Tree
Crawling your site will give you valuable insights into how Google Bots crawl your website. From that, you can create crawl and directory trees.
A crawl tree is a representation of how a bot finds its way around your website. A directory tree is how you try to tell it where to go.
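A crawl tree can be sketched as a breadth-first walk over your internal links, attaching each page to the first page that linked to it, which mirrors how a bot discovers URLs. The link graph below is a made-up example, not taken from any real site:

```python
from collections import deque

# Hypothetical internal-link graph: page -> pages it links to
LINKS = {
    "/": ["/products", "/blog", "/about"],
    "/products": ["/products/widget", "/blog"],
    "/blog": ["/blog/post-1"],
    "/about": [],
    "/products/widget": [],
    "/blog/post-1": ["/products/widget"],
}

def crawl_tree(start):
    """Breadth-first walk, like a bot following links: each page is
    recorded under the first page that linked to it (its crawl parent)."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in LINKS.get(page, []):
            if link not in parent:   # first discovery wins
                parent[link] = page
                queue.append(link)
    return parent

tree = crawl_tree("/")
```

Comparing this discovered tree to your intended directory tree shows where bots are finding pages through a path you did not plan.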
It is always important to remember the basics, which we can quickly forget when creating and launching new websites under tight or demanding timelines. Remember to have your robots.txt properly set up, auto-generated sitemaps (XML + HTML) enabled, Google Analytics or Matomo tracking traffic, and Google Search Console + Bing Webmaster Tools tracking errors across all three of your website aliases: WWW, HTTP, HTTPS.
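For reference, a properly set-up robots.txt is only a few lines. This is a hypothetical example for a WordPress-style site (the domain, paths, and sitemap URL are placeholders, not the author's configuration):

```
# Hypothetical robots.txt: allow crawling, keep bots out of admin
# and on-site search pages, and point crawlers at the XML sitemap.
User-agent: *
Disallow: /wp-admin/
Disallow: /?s=
Allow: /wp-admin/admin-ajax.php

Sitemap: https://www.example.com/sitemap.xml
```

Blocking low-value pages like on-site search results keeps bots from spending your crawl budget on pages you never want ranked.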
Full Stack Developer, Digital Marketer, and InfoSec enthusiast. He received his Bachelor’s Degree from the University of Western Sydney and his Business Diploma from Georgian College before joining various marketing positions in search portals, e-commerce, higher education, and addiction recovery services.