Get ready to change the way you tell search engines to ignore certain pages on your website!
There are a number of alternatives available to web developers, but the simplest way to put it is this: if you want a page on your website not to be indexed by search engines, the way you make sure of that has changed. Once upon a time it was perfectly acceptable to put a “Noindex” line in your “robots.txt” file (the uninitiated can find an explanation of robots.txt here), but that is no longer the case. If we had to name the quickest alternative, we’d say the instruction now belongs in your page’s meta tags, but it’s not quite that simple.
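If you’d like to see what the meta-tag approach looks like in practice, here’s a rough sketch. We’ve used a tiny Flask app purely for illustration (the framework, route names and page content are our own assumptions, not anything Google prescribes); it also shows the equivalent X-Robots-Tag response header, which is handy for non-HTML files like PDFs.

```python
# Minimal sketch: telling search engines not to index a page without robots.txt.
# Assumes Flask is installed (pip install flask); routes and content are illustrative.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/private-offer")
def private_offer():
    # Option 1: a robots meta tag inside the page's <head>
    return """<!doctype html>
<html>
  <head>
    <meta name="robots" content="noindex">
    <title>Private offer</title>
  </head>
  <body>This page shouldn't appear in search results.</body>
</html>"""

@app.route("/report.pdf")
def report():
    # Option 2: an X-Robots-Tag header, useful for non-HTML resources
    response = make_response(b"%PDF-1.4 placeholder content")
    response.headers["Content-Type"] = "application/pdf"
    response.headers["X-Robots-Tag"] = "noindex"
    return response

if __name__ == "__main__":
    app.run(debug=True)
```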
There are other ways to keep pages that you don’t want indexed out of search results, too. You can return a 404 or 410 status for them, put them behind password protection (if a page can’t be pulled up without a password, it can’t be pulled up by search engines either), use Google Search Console to remove the URLs you don’t want indexed (as explained here), or simply use the “Disallow” rule in your robots.txt file instead of “Noindex”, which is probably the simplest option. Just bear in mind that “Disallow” blocks crawling rather than indexing as such, although pages that can’t be crawled tend to drop out of search results over time.
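And if you do go down the “Disallow” route, it’s worth sanity-checking your robots.txt rules before publishing them. Here’s a short sketch using Python’s built-in urllib.robotparser; the rules and URLs below are examples of our own, so swap in your own paths.

```python
# Sanity-check robots.txt "Disallow" rules with Python's standard library.
# The rules and URLs are illustrative; replace them with your own.
from urllib import robotparser

robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /checkout
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

for url in ("https://example.com/private/offer",
            "https://example.com/checkout",
            "https://example.com/blog/latest-post"):
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url} -> {'crawlable' if allowed else 'blocked by Disallow'}")
```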
Whatever option you choose, we’re pretty sure people aren’t going to miss the “Noindex” rule all that much, given there are a tonne of other options available. But if you’d like to see exactly what is happening, then click here to see more!