Google's New Changes to Noindex

Alas, poor noindex! I knew it, Horatio, a directive of infinite usefulness, of most excellent fancy. It hath borne us on its back a thousand times…

The robots.txt noindex directive, which we have known and loved for so long, will no longer be supported by Google starting September 1st, 2019.

Quick!  Everyone panic!

Relax.  Let’s take a closer look. We’ve always known that the robots.txt version of noindex was not an official part of the robots.txt standard protocol.  It was sort of like collecting cash when landing on Free Parking—not officially a “real” rule, but one that almost everyone seems to play by nonetheless.  
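If you've never seen it in the wild, the unofficial rule usually looked something like this (the path here is just a placeholder):

Noindex: /old-landing-page/

That's the sort of line Google's crawler will simply start ignoring.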

Google has decided (in their infinite wisdom) that they will no longer support unofficial rules like this in robots.txt.

According to Google: “In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019.”

What do we do now?

The solution is really quite easy:

Step 1: If you’d like to keep things clean, remove the noindex line(s) from your robots.txt file. This isn’t strictly necessary (Google will simply ignore them), but if you have hundreds of instances, removing them keeps the file lean and easy to maintain.
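For example, if your robots.txt currently looks something like this (the paths are just placeholders):

User-agent: *
Disallow: /admin/
Noindex: /old-landing-page/
Noindex: /thank-you/

you would simply delete the Noindex lines and leave the standard rules alone:

User-agent: *
Disallow: /admin/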

Step 2: Add a noindex meta tag to each page you don’t want indexed.  To do this, just place the line <meta name="robots" content="noindex"> between the <head> and </head> tags of the page.
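In context, a bare-bones head section might look something like this (the title and charset are just placeholders):

<head>
  <meta charset="utf-8">
  <title>A Page You Don't Want Indexed</title>
  <meta name="robots" content="noindex">
</head>

One caveat worth remembering: the page can’t also be blocked by a Disallow rule in robots.txt, or Googlebot will never crawl it and see the tag.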

Sure, it’s a bit more work (especially if you have hundreds or thousands of pages), but it’ll bring you into best practice compliance, which is what we should all be striving for anyway.

There. Wasn’t that easy?

Thou know'st 'tis common; all unsupported protocols must die, passing through nature to eternity. It was a directive, take it for all in all: I shall not look upon its like again.