

© 2026 Ooty. All rights reserved.

15 December 2025 · 10 min read

JavaScript SEO: How Google Renders JS and What Can Break

Googlebot queues JS pages for rendering, and the delay can last hours or days. What breaks during that gap, and how SSR, CSR, and SSG each handle it.

By Maya Torres

Google can render JavaScript. This has been true since 2019 when Googlebot upgraded to an evergreen Chromium-based renderer. But "can render" and "will always render correctly and promptly" are very different things. JavaScript-heavy sites still have more indexing problems than server-rendered sites, and the reasons are worth understanding if your site depends on JS for content delivery.

How Googlebot processes JavaScript pages

Googlebot does not process pages the way a browser does when you visit a site. It uses a multi-stage pipeline:

Stage 1: Crawl

Googlebot fetches the raw HTML from your server. At this point, it sees exactly what you would see if you right-clicked "View Source" in your browser. For a server-rendered page, this HTML already contains all the content. For a client-side rendered React SPA, this HTML is mostly an empty <div id="root"></div> and a bunch of <script> tags.

Google extracts all links from the raw HTML at this stage. Links that only exist after JavaScript execution are not discovered until after rendering.
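To make the gap concrete, here is a sketch of what "View Source" might show in each case. The markup, product names, and file paths are invented for illustration:

```html
<!-- Server-rendered page: the content is already in the raw HTML -->
<body>
  <h1>Trail Boots</h1>
  <p>Waterproof, fully seam-sealed, 380 g.</p>
  <a href="/products/trail-socks">Trail socks</a>
</body>

<!-- Client-side rendered SPA: the raw HTML is an empty shell -->
<body>
  <div id="root"></div>
  <script src="/assets/main.js"></script>
</body>
```

In the second case, the crawl stage discovers no content and no links; everything waits on rendering.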

Stage 2: Render queue

If the page requires JavaScript to display content, it gets placed in a rendering queue. This is where things get interesting. The rendering queue is a shared resource, and pages can sit in it for seconds, hours, or even days before they get rendered. Google has not published exact timelines, but the delay depends on the site's crawl priority, server resources available, and overall queue depth.

During this delay, the page is essentially invisible to Google. It has been crawled but not rendered, so any content that depends on JavaScript execution is unknown.

Stage 3: Render (Web Rendering Service)

Google's Web Rendering Service (WRS) loads the page in a headless Chromium instance, executes JavaScript, and captures the final DOM state. WRS runs an up-to-date version of Chromium, so modern JavaScript features, ES modules, and recent Web APIs all work.

However, WRS has limitations:

  • It does not click buttons or interact with the page
  • It does not scroll (content loaded only on scroll is invisible)
  • It does not log in or submit forms
  • It has a timeout (the exact duration is not public, but pages that take too long to render get abandoned)

  • It does not store state between pages (no cookies, no localStorage persistence)

Stage 4: Index

After rendering, Google extracts the content from the rendered DOM and indexes it. Links discovered during rendering are added to the crawl queue for future processing.

The key problem: stages 1 and 4 can be separated by a significant delay. For server-rendered content, crawl and index happen almost simultaneously. For JS-dependent content, there is a gap.

What breaks JavaScript SEO

Content behind JavaScript that WRS cannot execute

If your content requires user interaction to appear (clicking a tab, expanding an accordion, scrolling to trigger lazy loading), Googlebot will not see it. WRS captures the page as it appears on initial load, without any interaction.

This is a common problem with:

  • Tabbed content where only the first tab is rendered initially
  • "Load more" buttons that fetch additional content
  • Infinite scroll without proper pagination fallbacks
  • Accordion FAQs where answers are hidden by default

If content matters for SEO, it should be in the initial rendered DOM, not behind an interaction.

Lazy loading without proper implementation

The IntersectionObserver pattern works with Googlebot because WRS supports it and renders the page with a long viewport rather than scrolling. But if your lazy loading depends on scroll events, Googlebot will only see content in the initial viewport. Everything below the fold stays unloaded.

The fix: use native loading="lazy" on images (Googlebot handles this correctly), or give your IntersectionObserver a generous rootMargin so elements near the viewport boundary load even without scrolling. For critical content, do not lazy load it at all.
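A minimal sketch of a scroll-independent implementation. It assumes images marked with a data-src attribute; the names and the 800px margin are illustrative, not prescriptive:

```javascript
// A generous rootMargin starts loading elements well before they enter the
// viewport, so content is fetched even by a renderer that never scrolls.
const lazyOptions = { rootMargin: "800px 0px", threshold: 0 };

function setUpLazyLoading(doc) {
  const load = (el) => { el.src = el.dataset.src; };

  if (typeof IntersectionObserver === "undefined") {
    // No observer support: load everything immediately rather than never.
    doc.querySelectorAll("img[data-src]").forEach(load);
    return;
  }

  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        load(entry.target);
        observer.unobserve(entry.target);
      }
    }
  }, lazyOptions);

  doc.querySelectorAll("img[data-src]").forEach((el) => observer.observe(el));
}
```

The fallback branch matters: if the loading mechanism fails, content should appear eagerly, not disappear.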

Client-side-only meta tags and canonical tags

This is one of the more dangerous JavaScript SEO mistakes. If your <title>, meta description, or canonical tag is set by JavaScript after page load, Google processes them during the render stage, not the crawl stage. That means:

  • During the render queue delay, Google has no accurate title or description for your page
  • If rendering fails or times out, those tags are missing entirely
  • Google may use the raw HTML title (often a generic template title) instead of the JS-set one

Always include critical meta tags in the server-rendered HTML. This is non-negotiable for any page you want indexed correctly. You can check your meta tags and robots directives to verify they are present in the initial HTML.
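One way to sanity-check this is to run the raw HTML (not the rendered DOM) through a small script. This sketch uses rough regexes rather than a real HTML parser, so treat it as a smoke test only:

```javascript
// Do critical tags exist in the *raw* HTML, before any JavaScript runs?
// Feed this the response body from a plain HTTP fetch of your page.
function auditRawHtml(rawHtml) {
  return {
    hasTitle: /<title>[^<]+<\/title>/i.test(rawHtml),
    hasDescription: /<meta[^>]+name=["']description["']/i.test(rawHtml),
    hasCanonical: /<link[^>]+rel=["']canonical["']/i.test(rawHtml),
  };
}

// An empty SPA shell fails all three checks:
const shell =
  '<html><head><script src="/main.js"></script></head>' +
  '<body><div id="root"></div></body></html>';
console.log(auditRawHtml(shell)); // → { hasTitle: false, hasDescription: false, hasCanonical: false }
```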

Hash-based routing

URLs using hash fragments for routing (example.com/#/products or example.com/#!/about) are problematic because Google strips everything after the # in URLs. To Google, example.com/#/products and example.com/#/contact are both just example.com.

Use the HTML5 History API (pushState) for routing. Every framework supports this natively. If you are still on hash routing, migrating is one of the highest-impact JavaScript SEO fixes you can make.
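One migration wrinkle: browsers never send the fragment to the server, so redirects from old hash URLs have to happen in client-side JavaScript on the legacy app. A hypothetical helper for mapping old URLs to their clean equivalents might look like this:

```javascript
// Map an old hash-routed URL to its History API equivalent.
// Handles both "#/products" and "#!/about" fragment styles.
function hashToPath(url) {
  const [base, fragment] = url.split("#");
  if (!fragment) return url; // no fragment: nothing to migrate
  const path = fragment.replace(/^!?\/?/, "/");
  return base.replace(/\/$/, "") + path;
}

console.log(hashToPath("https://example.com/#/products")); // → https://example.com/products
console.log(hashToPath("https://example.com/#!/about"));   // → https://example.com/about
```

On the old app, you would compute the new URL and call window.location.replace with it, while the new app serves real paths.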

Infinite scroll without pagination

Infinite scroll is a UX pattern where content loads continuously as the user scrolls down. Googlebot does not scroll, so it only sees the first batch of content. If your product listing shows 20 items initially out of 2,000, Google only sees those 20.

The fix: implement paginated URLs alongside infinite scroll. Each page of results should have its own URL (/products?page=2, /products?page=3) that Googlebot can crawl. The infinite scroll experience works for users. The paginated URLs work for search engines. Both can coexist.
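A sketch of the server side of that dual setup, with invented names and paths. The same function can feed both the infinite-scroll endpoint and the crawlable paginated pages, each of which links to its neighbors so a crawler can walk the whole sequence:

```javascript
// Slice one dataset into crawlable pages with prev/next links.
function paginate(items, perPage, page) {
  const totalPages = Math.ceil(items.length / perPage);
  const start = (page - 1) * perPage;
  return {
    items: items.slice(start, start + perPage),
    url: `/products?page=${page}`,
    prev: page > 1 ? `/products?page=${page - 1}` : null,
    next: page < totalPages ? `/products?page=${page + 1}` : null,
  };
}

const all = Array.from({ length: 45 }, (_, i) => `item-${i + 1}`);
const page2 = paginate(all, 20, 2);
console.log(page2.items.length, page2.prev, page2.next); // → 20 /products?page=1 /products?page=3
```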

SSR vs CSR vs SSG: which rendering strategy to use

Server-Side Rendering (SSR)

The server generates complete HTML for each request. When Googlebot (or a user's browser) fetches the page, it gets fully rendered content. JavaScript then "hydrates" the page to make it interactive.

SEO impact: Excellent. Google sees all content immediately. No render queue delay. No dependency on WRS.

Best for: Dynamic content that changes frequently, personalized pages, e-commerce product pages with live pricing.

Client-Side Rendering (CSR)

The server sends a minimal HTML shell. JavaScript running in the browser fetches data and builds the page. This is the default behavior of a vanilla React, Vue, or Angular app.

SEO impact: Poor for content pages. Google must queue and render the page to see any content. If rendering fails, the page is essentially empty. Meta tags set by JavaScript may not be processed correctly.

Best for: Authenticated dashboards, internal tools, and applications where SEO does not matter. Not suitable for any page you want to rank in search.

Static Site Generation (SSG)

Pages are pre-rendered at build time into static HTML files. Every page is fully rendered before any user or bot visits it.

SEO impact: The best option for content that does not change between requests. Google gets complete HTML instantly, served from a CDN with minimal latency. Core Web Vitals scores are typically excellent because there is no server processing time.

Best for: Blog posts, documentation, marketing pages, landing pages. Any content that is the same for every visitor.
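Conceptually, SSG moves the same rendering step to build time. A sketch with invented names, and with the file-writing step replaced by an in-memory map of URL path to finished HTML:

```javascript
// "Build" the whole site once; a real build would write these to disk
// and a CDN would serve them unchanged to every visitor and crawler.
function buildSite(posts) {
  const pages = new Map();
  for (const post of posts) {
    pages.set(
      `/blog/${post.slug}/`,
      `<!doctype html>
<html><head><title>${post.title}</title></head>
<body><article><h1>${post.title}</h1>${post.body}</article></body></html>`
    );
  }
  return pages;
}

const pages = buildSite([
  { slug: "javascript-seo", title: "JavaScript SEO", body: "<p>Rendering pipeline…</p>" },
]);
console.log(pages.get("/blog/javascript-seo/").includes("<h1>JavaScript SEO</h1>")); // → true
```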

The clear recommendation

For any page that matters for SEO, use SSR or SSG. Client-side rendering is only acceptable for pages behind authentication or pages where search visibility is irrelevant.

Most modern frameworks make this straightforward.

Framework-specific guidance

Next.js

Next.js supports SSR, SSG, and CSR out of the box. In the App Router, components are Server Components by default and pages are statically rendered where possible, with dynamic data fetching happening server-side in async components. This is the best default for SEO: every page sends complete HTML to Googlebot without any special configuration.

Verdict: Strong SEO defaults. If you are building a new content site with React, Next.js is the obvious choice.

React (standalone SPA)

A plain create-react-app or Vite React project is client-side rendered by default. Google must render it via WRS to see any content. Meta tags, routing, and content are all JavaScript-dependent.

Verdict: Not suitable for SEO without adding SSR (via a framework like Next.js or Remix) or pre-rendering. If your React SPA needs to rank in search, migrate to a framework that supports server rendering.

Vue / Nuxt

Similar to the React/Next.js split. Standalone Vue is client-rendered. Nuxt provides SSR and SSG with good defaults, and Nuxt 3's useFetch handles data fetching server-side automatically.

Verdict: Use Nuxt for any Vue project that needs SEO. Standalone Vue SPAs have the same problems as standalone React SPAs.

Angular / Angular Universal

Angular is client-rendered by default. Angular Universal adds SSR support but requires more setup than Next.js or Nuxt. Angular 17+ improved this with the @angular/ssr package.

Verdict: Workable with Angular Universal, but it requires more configuration than competing frameworks.

How to test JavaScript rendering for SEO

Google Search Console URL Inspection

The most authoritative test. Enter a URL, and Google shows you:

  • Whether the page is indexed
  • The rendered HTML (what Google actually saw after JS execution)
  • Any resources that failed to load
  • Any JavaScript errors encountered during rendering

Compare the rendered HTML to what your page should contain. If content is missing, Google could not render it.

View Source vs rendered DOM

Open your page in Chrome. Right-click and select "View Page Source" to see the raw HTML your server sends. Then open DevTools (F12) and look at the Elements panel to see the fully rendered DOM.

If content appears in the Elements panel but not in View Source, that content depends on JavaScript. It will only be indexed after WRS renders it, with the associated delay and failure risk.
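You can automate the comparison with a few lines of JavaScript. This sketch takes the two HTML strings (raw source and rendered DOM) plus a list of phrases you expect Google to index, and reports which phrases exist only after JS runs:

```javascript
// Phrases present in the rendered DOM but absent from the raw HTML
// are exactly the content that depends on WRS rendering.
function jsOnlyPhrases(rawHtml, renderedHtml, phrases) {
  return phrases.filter((p) => renderedHtml.includes(p) && !rawHtml.includes(p));
}

const raw = '<div id="root"></div>';
const rendered = '<div id="root"><h1>Trail Boots</h1><p>In stock</p></div>';
console.log(jsOnlyPhrases(raw, rendered, ["Trail Boots", "In stock", "root"]));
// → [ 'Trail Boots', 'In stock' ]
```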

site: search operator

Search site:yourdomain.com in Google to confirm which pages are indexed. Then search for an exact phrase (in quotes) from a JavaScript-dependent section of a page. If the page is indexed but never matches those phrases, that content may not be rendering for Googlebot. (Google no longer exposes cached page copies, so phrase searches are the practical substitute.)

Fetch and render tools

The Ooty SEO Analyzer checks whether your pages deliver content via server-rendered HTML or depend on client-side JavaScript. It flags pages where critical content, meta tags, or canonical tags are only present in the rendered DOM, not the raw HTML source. This is one of the fastest ways to catch JavaScript rendering issues before they affect indexing.

Practical checklist

Here is a quick reference for JavaScript SEO:

  • Serve critical content in the initial HTML response, not behind JS execution
  • Use SSR or SSG for all pages that need to rank in search
  • Include <title>, meta description, and canonical tags in server-rendered HTML
  • Use History API routing, not hash-based routing
  • Implement pagination alongside infinite scroll
  • Do not hide important content behind tabs, accordions, or "load more" buttons
  • Set image dimensions to prevent layout shifts during hydration
  • Test with URL Inspection in Search Console to see what Google actually renders
  • Monitor crawl budget if your JS framework generates many URL variations
  • Verify your robots.txt is not accidentally blocking JS resources Google needs to render your pages
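As an illustration of that last point, a robots.txt like this hypothetical one would break rendering, because WRS could not fetch the script bundles the page needs:

```
# Bad: blocks the bundles Googlebot must fetch to render the page
User-agent: *
Disallow: /static/

# Safer: disallow only genuinely private paths, keep JS and CSS crawlable
User-agent: *
Disallow: /admin/
```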

JavaScript and SEO coexist well when you render content server-side and use JS for interactivity. The problems start when JavaScript becomes the only way to deliver content to the browser. Keep the rendering pipeline simple, and Google will index your pages without issues.