
How Search Engines Work

12 min read · Last reviewed: March 2026

The full crawl-index-rank pipeline, explained simply, and why each step matters for SEO.

The Three Phases: Crawl, Index, Rank

Search engines operate in three distinct phases. Understanding this pipeline is critical because each phase has different constraints and timelines. Most SEO failures trace back to misunderstanding which phase is the bottleneck.

Why This Matters
Many sites fail because they optimise for ranking before ensuring they are being crawled and indexed. The order matters: crawl first, index second, rank third. Skip any phase and you have zero visibility.

Phase 1: Crawl

Googlebot is an automated spider that discovers and downloads web pages. It does not have access to your admin panel, database, or private files — it can only see what a user would see in a browser.

How Googlebot Discovers URLs

Googlebot finds pages through three mechanisms:

  • Links: Following hyperlinks from already-crawled pages. This is the primary discovery mechanism.
  • Sitemaps: XML sitemaps tell Google which pages you want indexed. They are a hint, not a guarantee.
  • Search Console submissions: You can request indexing in Google Search Console, and Google will usually prioritise crawling that URL.

Google does not crawl at infinite speed. Each site has a crawl budget — the number of pages Googlebot will crawl in a given timeframe. Established sites with strong authority get higher crawl budgets. New sites get lower budgets. Crawl budget is finite, so wasting it on duplicate pages, auto-generated content, or parameter variations is a real cost.
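To make crawl waste concrete, here is a minimal sketch (with a hypothetical example.com site and an illustrative list of tracking parameters) of how parameter variations multiply URLs, and how normalising them reveals how many fetches were spent on duplicates:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Tracking parameters that create duplicate URLs without changing content.
# Illustrative list -- extend it with your own site's parameters.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}

def canonicalize(url: str) -> str:
    """Strip tracking parameters and sort the rest, so parameter
    variations of the same page collapse to one canonical URL."""
    parts = urlparse(url)
    query = sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS
    )
    return urlunparse(parts._replace(query=urlencode(query)))

crawled = [
    "https://example.com/shoes?utm_source=newsletter",
    "https://example.com/shoes?gclid=abc123",
    "https://example.com/shoes",
    "https://example.com/shoes?size=9",
]
unique = {canonicalize(u) for u in crawled}
# Four crawled URLs, but only two distinct pages -- the other two
# fetches were wasted crawl budget.
print(len(crawled), "crawled,", len(unique), "unique")  # 4 crawled, 2 unique
```

The same idea drives canonical tags and parameter handling: the fewer duplicate URLs Googlebot has to fetch, the more budget is left for pages you actually want indexed.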

Rendering and JavaScript

Googlebot executes JavaScript. This was not always reliable — before Googlebot switched to an evergreen Chromium renderer in 2019, its JavaScript support lagged years behind modern browsers. Google now renders most pages, but rendering takes additional time and resources. If a page is resource-heavy or has JavaScript errors, indexing may be delayed or incomplete.

The key insight: server-side rendering (SSR) or static generation is faster for Google to process than client-side JavaScript rendering (CSR). If you use a JavaScript framework like React, consider pre-rendering critical pages or using dynamic rendering to reduce crawl friction.
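One way to see the SSR/CSR difference is to inspect the raw HTML the way a non-rendering fetch would: a client-side-rendered shell carries almost no indexable text until JavaScript runs. A minimal sketch using Python's standard html.parser and two hypothetical page snippets:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text a non-rendering fetch would see."""
    def __init__(self):
        super().__init__()
        self.text = []
        self._skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text.append(data.strip())

def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.text)

# A client-side-rendered shell: no content until app.js executes.
csr_shell = '<html><body><div id="root"></div><script src="app.js"></script></body></html>'
# The same page server-side rendered: content is in the initial HTML.
ssr_page = '<html><body><div id="root"><h1>Running Shoes</h1><p>Our full range.</p></div></body></html>'

print(len(visible_text(csr_shell)))  # 0 -- nothing to index before rendering
print(visible_text(ssr_page))        # content visible on the first fetch
```

Until the CSR page is rendered, Google has nothing but an empty div to work with; the SSR page is indexable from the first fetch.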

Phase 2: Index

Crawling does not guarantee indexation. Google crawls far more pages than it indexes. After crawling, Google processes the page, extracts text and links, and decides whether to add it to the index. The index is a database, not a live scan of the web — it contains a snapshot of pages Google has chosen to store.

What Prevents Indexation

Common reasons pages are crawled but not indexed:

  • Thin content: Pages with very little text may be filtered as low-quality or near-duplicate content.
  • Duplicate content: Google indexes the primary version and may not index duplicates. This is not a penalty, but duplicates do not rank.
  • Noindex tag: If you accidentally added a <meta name="robots" content="noindex"> tag or an X-Robots-Tag: noindex HTTP header, the page will not be indexed.
  • Robots.txt blocks: If you block a URL in robots.txt, Googlebot cannot crawl it and therefore cannot index it.
  • Login walls: Content behind login forms cannot be indexed because Googlebot does not have credentials.
  • Low quality: Google has quality thresholds. Pages that fail to meet them may be crawled but not indexed.
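Two of the causes above — the noindex meta tag and the X-Robots-Tag header — can be checked mechanically. A minimal sketch, assuming you already have the page HTML and response headers in hand:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            content = a.get("content") or ""
            self.directives += [d.strip().lower() for d in content.split(",")]

def is_noindexed(html: str, headers: dict) -> bool:
    """True if either the meta robots tag or the X-Robots-Tag
    response header forbids indexing."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    header = (headers.get("X-Robots-Tag") or "").lower()
    return "noindex" in finder.directives or "noindex" in header

page = '<html><head><meta name="robots" content="noindex, follow"></head><body>Hi</body></html>'
print(is_noindexed(page, {}))                                      # True: meta tag
print(is_noindexed("<html></html>", {"X-Robots-Tag": "noindex"}))  # True: header
print(is_noindexed("<html></html>", {}))                           # False
```

Running a check like this across your important URLs catches the "accidental noindex" class of failures before Google finds them.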

Indexation Lag

Do not expect instant indexation. New pages typically take 2-7 days to be indexed, though some take weeks. High-authority sites can be indexed faster. Refreshes to existing content may be indexed within hours if the page is frequently crawled.

Check indexation status in Google Search Console under the Page indexing report (formerly the Coverage report). This shows crawled but not indexed pages, errors, and excluded pages.

Phase 3: Rank

Once a page is indexed, it enters the ranking pool. When a user searches, Google retrieves indexed pages relevant to that query and ranks them using a combination of signals. These signals include content relevance, backlinks, page experience metrics, domain authority, and many others.

It Is Not a Formula

Google has confirmed over 200 ranking signals, but the exact algorithm is secret and constantly changing. Practitioners sometimes talk about "ranking factors" as if there is a checklist to follow; there is not. Ranking increasingly comes from machine learning systems trained on enormous volumes of search data, and you cannot reverse-engineer it.

What you can do: follow confirmed best practices (write relevant content, earn backlinks, improve page speed, ensure mobile-friendliness) and avoid known penalties (keyword stuffing, cloaking, buying links). Results will follow naturally.

Personalisation

Your ranking is not the same as your neighbour's ranking. Google personalises results based on:

  • Location: Search results vary by geography. A search for "pizza" shows different results in New York versus Los Angeles.
  • Search history: If you frequently visit tech sites, tech results may rank higher for ambiguous queries.
  • Device: Mobile and desktop results are slightly different.
  • User signals: Click history, dwell time, and pogo-sticking (clicking back to search results) may influence personalisation.

This means you cannot trust your own search results to evaluate ranking performance. Use Google Search Console, a rank-tracking tool, or similar software to see ranking data across different locations and devices.

The Time Gap Between Phases

Understand that crawl, index, and rank are not synchronous. A page can be crawled but not indexed for weeks. A page can be indexed but not ranking yet. There is no shortcut through these phases — they require time.

A common mistake is publishing a page and checking rankings the next day. Realistic timelines:

  • Crawl: 0-7 days (faster for established sites, slower for new sites).
  • Index: 2-14 days after crawl (sometimes longer).
  • Rank: Weeks to months after indexation, depending on competition and signals.

Pipeline Overview Table

Phase: Crawl
What happens: Googlebot discovers URLs through links, sitemaps, and Search Console submissions. It fetches the page content, executes JavaScript, and extracts signals.
What Google has confirmed: Crawl budget exists and varies by site authority. New pages are crawled, but crawl frequency depends on freshness needs and authority.

Phase: Index
What happens: Google processes crawled content and stores it in a database. Only indexed pages can rank. Google renders pages and analyses HTML, text, links, and structured data.
What Google has confirmed: Not every crawled page gets indexed. Index inclusion requires quality thresholds. Indexation can take days to weeks, sometimes longer, for new or refreshed content.

Phase: Rank
What happens: When a user searches, Google retrieves indexed pages matching the query and scores them against 200+ signals. Results are personalised by location, search history, and device.
What Google has confirmed: Relevance, links, and page experience matter. Core Web Vitals are a ranking factor. Rankings vary by personalisation and are not static — they shift frequently.

Common Misconceptions

Myth: Sitemaps Guarantee Indexation

False. Sitemaps are hints. Google may discover and crawl URLs in your sitemap sooner, but indexation depends on quality and uniqueness. A sitemap full of duplicate pages will not get those duplicates indexed.
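For reference, a sitemap is just a small XML file following the sitemaps.org protocol. A sketch of generating one with Python's standard library (the URLs and dates are hypothetical):

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap per the sitemaps.org protocol.
    Remember: this is a hint to crawlers, not an indexing guarantee."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    ("https://example.com/", "2026-03-01"),
    ("https://example.com/services", "2026-02-14"),
])
print(sitemap)
```

Listing a URL here tells Google it exists and when it last changed; whether it gets indexed still comes down to the quality thresholds described above.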

Myth: Faster Crawling = Faster Ranking

False. Crawl speed is only one variable. A slow site that is crawled monthly might rank better than a fast site crawled daily if the slow site has better content and more authority. Focus on content quality, not crawl speed alone.

Myth: Google Indexes Every Page

False. Google indexes only a fraction of the web. Massive sites with billions of pages will never have every page indexed. This is fine — you want your important pages indexed, not every variation and duplicate.

What You Control

You cannot control Google's crawl budget, indexation decision, or ranking algorithm. You can influence them by:

  • Creating crawlable, indexable pages (no JavaScript errors, no login walls, no noindex tags).
  • Building a logical site structure so Googlebot can find all pages efficiently.
  • Submitting sitemaps and new URLs via Search Console.
  • Creating high-quality, original content that serves user intent.
  • Earning backlinks from authoritative sources.
  • Improving page experience (Core Web Vitals, mobile-friendliness, HTTPS).
  • Reducing crawl waste (fixing redirects, eliminating duplicate content, removing low-value parameter pages).
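Crawlability is one of the few items above you can verify programmatically. A minimal sketch using Python's standard urllib.robotparser against an illustrative robots.txt (parsed from a string, so no network fetch is needed):

```python
from urllib.robotparser import RobotFileParser

# An illustrative robots.txt that blocks the admin area.
robots_txt = """\
User-agent: *
Disallow: /admin/
Allow: /
"""

rp = RobotFileParser()
# parse() accepts the file as a sequence of lines.
rp.parse(robots_txt.splitlines())

for url in ("https://example.com/services", "https://example.com/admin/login"):
    status = "crawlable" if rp.can_fetch("Googlebot", url) else "blocked"
    print(url, "->", status)
```

Running this against your own robots.txt for a sample of important URLs catches accidental blocks before they cost you weeks of lost crawling.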

Practical Next Step
Audit your site in Google Search Console. Check which pages are crawled versus indexed versus indexed but unranked. That gap is where your SEO work should focus first.

Why This Matters for SEO

Most technical SEO work is about optimising one or more of these three phases. Sitemap improvements target crawl. Fixing indexation issues targets index. On-page optimisation and link building target rank. Understanding the pipeline helps you diagnose where your site is weak and prioritise fixes accordingly.