Issue: 3073

Google crawler parses empty DOM with SSR

[issue link]

The logic below works as follows:
If the user agent is a crawler/bot (e.g. Googlebot), then render the page with SSR; otherwise serve the SPA fallback.

  serverMiddleware: [{
    handler(req, res, next) {
      // crawlersRegex matches known crawler/bot user agents
      const isBot = crawlersRegex.test(req.headers['user-agent'] || '')
      // Disable the SPA fallback for bots so they receive full SSR output
      res.spa = !isBot
      next()
    }
  }]

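The user-agent check can be exercised in isolation. The regex below is a hypothetical stand-in for the project's `crawlersRegex`, which the issue does not show in full:

```javascript
// Hypothetical crawler regex; substitute the project's actual crawlersRegex
const crawlersRegex = /googlebot|bingbot|yandexbot|duckduckbot|slurp|facebookexternalhit|twitterbot/i

const googlebotUA = 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
const chromeUA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'

// A bot UA should disable the SPA fallback (res.spa = false); a browser UA should not
console.log(crawlersRegex.test(googlebotUA)) // true
console.log(crawlersRegex.test(chromeUA))    // false
```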
Using Fetch as Google:

I get this strange result:

Obviously the content is empty. That was with the loading indicator disabled.

If the loading indicator is enabled, then the loading indicator appears in the result:


  • Google uses the Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) user agent. If I fake it in my Chrome, I correctly get an SSR rendering.
  • curl returns the correct DOM structure if a bot user agent (like the one above) is used.
  • Google's result looks like it gets a CSR rendering.
  • Facebook, Twitter, etc. also parse the content correctly (e.g. from the Open Graph meta tags). This issue happens only with the Google crawler.
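The curl check above can be reproduced like this (the URL is a placeholder for the deployed site, and the regex is an assumed example, not the project's actual one):

```shell
# Googlebot's exact user agent, as quoted above
GOOGLEBOT_UA='Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'

# Reproduce the check against a deployment (placeholder URL):
#   curl -s -A "$GOOGLEBOT_UA" 'https://your-site.example/'
# With the middleware working, the response body should contain the fully
# rendered markup rather than an empty SPA shell.

# Offline sanity check: a typical crawler regex matches this user agent
echo "$GOOGLEBOT_UA" | grep -Eiq 'googlebot|bot|crawler|spider' && echo matched
```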


  • If no one else is affected by this but me, then the conditional rendering above might be the issue.
  • If Nuxt returns a basic initial HTML shell and the Google crawler stops parsing as soon as it sees some HTML (even before the server has finished rendering), that might also be the cause.


  • If I force-set res.spa = false for every request, Google renders the results correctly. Therefore something is wrong with the conditional rendering.
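The forced-SSR variant described in the bullet above is a one-line change to the same middleware (a debugging sketch, not a fix):

```javascript
// nuxt.config.js (sketch): force SSR for every request while debugging,
// bypassing the user-agent check entirely
export default {
  serverMiddleware: [{
    handler(req, res, next) {
      res.spa = false // always render with SSR
      next()
    }
  }]
}
```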
This question is available on Nuxt.js community (#c2660)