
Last year was full of change and uncertainty, but it also brought new consumer trends and a greater emphasis on digital and SEO than ever before. So what should you include in your 2021 SEO strategy?

Let’s take a look at 5 important areas that will help you be a cut above the rest.

1. Search Intent

Search intent in SEO continues to grow in importance. While it’s hardly a new concept, it’s a good idea to re-examine user behaviour every year to keep on top of changes, especially after a year like 2020, when online behaviours changed so rapidly.

Understanding the why behind a search query and matching it to one of the four types of search intent (informational, commercial, navigational and transactional) will help you write content that answers consumers’ questions.

There is a large section on the topic in the most recent edition of Google’s Quality Rater Guidelines.

Google is getting better at understanding how people search, so make sure you create content that reflects the different types of user intent.

2. Core Web Vitals

In May 2021 Core Web Vitals will become a Google ranking signal.

Core Web Vitals are designed to measure how users experience the loading speed, interactivity and visual stability of a page, through three metrics: Largest Contentful Paint (LCP), First Input Delay (FID) and Cumulative Layout Shift (CLS). They will be combined with Google’s existing page experience signals:

  • Mobile-friendliness
  • Safe-browsing
  • HTTPS-security
  • Intrusive interstitial guidelines

It will be critical for marketers to stay on top of this to remain competitive and to ensure their traffic and conversions are not affected. So start looking at it now, ahead of the update in May.

Over time, Core Web Vitals will change as user expectations of web pages change, so staying up to date and checking these elements regularly should be an important part of your SEO strategy.
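If you want a quick way to see how real users experience your pages, one option (alongside tools such as PageSpeed Insights and Search Console’s Core Web Vitals report) is Google’s open-source web-vitals JavaScript library. A minimal sketch, loading the library from a CDN and logging the three metrics to the browser console, might look like this:

  <script type="module">
    // Sketch: report the three Core Web Vitals for the current page view.
    import {getCLS, getFID, getLCP} from 'https://unpkg.com/web-vitals?module';

    getCLS(console.log);  // Cumulative Layout Shift
    getFID(console.log);  // First Input Delay
    getLCP(console.log);  // Largest Contentful Paint
  </script>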

3. Mobile First

By March 2021, Google will have switched all websites from desktop-first to mobile-first indexing. This means Google will predominantly use the mobile version of a website’s content for indexing and ranking, so it has never been more important for marketers to focus on a mobile-first strategy.

If you haven’t already, now is the time to check your website pages and make sure they are easy to navigate and that all images and content display well.

It’s OK to have different desktop and mobile website experiences, but considering Google will essentially ignore the desktop version, if you still have a separate mobile site, now might be the time to migrate to a mobile responsive site instead.
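If you’re unsure what “responsive” means in practice, the usual building blocks are a viewport meta tag plus CSS media queries, so that the same URL adapts to any screen size. A minimal sketch (the class name is just a placeholder):

  <meta name="viewport" content="width=device-width, initial-scale=1">

  <style>
    /* Example only: stack the sidebar below the main content on small screens */
    .sidebar { float: right; width: 30%; }
    @media (max-width: 600px) {
      .sidebar { float: none; width: 100%; }
    }
  </style>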

4. Structured data & SERPs

In 2021, Google is set to offer even more answers directly on search results pages without people having to visit a site. This means structured data, or schema mark-up, should be an important part of your SEO strategy in 2021.

Marketers should use structured data to help Google better understand who you are, what you offer and what audience you serve, increasing the chance of rich results from your website appearing on SERPs. This can have a remarkable impact on click-through rates and attention from users.

Using Google’s Structured Data Testing Tool, you can familiarise yourself with the concept and start applying structured data to your website.

Although it’s not always easy, winning FAQ or how-to rich results on SERPs can significantly increase the likelihood of people clicking on your result. You’ll want to make sure you’re creating content with the user in mind and answering common questions on your pages.
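As an illustration, FAQ mark-up is usually added as a JSON-LD block in the page’s HTML. The question and answer below are placeholders, but the structure is what Google looks for:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
      "@type": "Question",
      "name": "What is structured data?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Structured data is mark-up that helps search engines understand a page, making it eligible for rich results."
      }
    }]
  }
  </script>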

5. Long-Form Content & Topic clusters

While the word count of content is not a ranking factor, long-form content generally suggests more information, more expertise and more questions answered.

Marketers should research topic clusters around one central content theme. Cover all aspects of a topic in as much detail as possible, with strategic interlinking. This will send signals to Google that your site’s content has a high level of breadth and depth.

Break up your long-form content with lots of keyword-rich H2 and H3 tags.
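For example, a long guide might be organised with a heading hierarchy along these lines (the topic names are placeholders):

  <h1>The Complete Guide to [Topic]</h1>
  <h2>What is [Topic]?</h2>
  <h2>How does [Topic] work?</h2>
  <h3>Step 1: Getting started</h3>
  <h3>Step 2: Going further</h3>
  <h2>Common [Topic] questions</h2>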

You should also take a look at updating old content with relevant new information. Answer questions you haven't previously touched on, and provide extra breadth and depth.

Conclusion

Adapting to SEO trends and keeping up to date with Google’s criteria is fundamental to SEO success. 2021 should see marketers put consumers' interests first, with a focus on excellent user experience and in-depth, interesting written content.

For more information about SEO strategy, get in touch.

You can also follow us on Twitter and Facebook for the latest updates.

Stop Using Robots.txt Noindex By September

Let’s all get prepared ahead of time, ready for the 1st of September, when Google will officially stop supporting the noindex directive in robots.txt files.

Over the past 25 years, the unofficial standard of using robots.txt files to make crawling easier has been widely adopted across the internet. Despite never being officially introduced as a web standard, Googlebot tends to follow robots.txt to decide whether to crawl and index a site’s pages or images, whether to follow links, and whether or not to show cached versions.

It’s important to note that robots.txt files only act as a guide and don’t completely block spiders from following requests. However, Google has announced that it plans to completely stop supporting noindex in robots.txt files. So, it’s time to adopt a new way of instructing robots not to index pages you want kept out of search results.

Why is Google stopping support for noindex in robots.txt?

As previously mentioned, noindex in robots.txt was never an official directive. Despite being unofficially supported by Google for the past quarter of a century, it is often used incorrectly and has failed to work in 8% of cases. Google’s decision to standardise the protocol is another step towards further optimising the algorithm. The aim of this standardisation is to prepare for potential open source releases in the future, which won’t support unofficial rules such as noindex in robots.txt. Google has been advising for years that users should avoid relying on noindex in robots.txt, so this change, although a major one, doesn’t come as a big surprise to us.

What Other Ways Can I Control The Crawling Process?

In order to prepare for the day Googlebot stops following noindex instructions in robots.txt files, we must adopt different processes to control crawling as much as we possibly can. Google has provided a few alternative suggestions on their official blog. However, the two we recommend you use for noindexing are:
• Robots meta tags with ‘noindex’
• Disallow in robots.txt

Robots meta tags with ‘noindex’

The first option we’re going to explore is using noindex in robots meta tags. As a brief summary, a robots meta tag is a snippet of code placed in the header of a web page. This is the preferred option, as it carries similar value to robots.txt noindex, if not more, and is highly effective at stopping URLs from being indexed. Using noindex in robots meta tags will still allow Googlebot to crawl your site, but it will prevent URLs from being stored in Google’s index.
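In practice, the tag sits in the <head> of the page; for a straightforward noindex it looks like this:

  <meta name="robots" content="noindex">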

Disallow in robots.txt

The other method is to use disallow in robots.txt. This rule tells robots to avoid visiting and crawling the specified URLs, which in most cases means they won’t be indexed, although a disallowed URL that other sites link to can still appear in search results.
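As an example, a robots.txt file asking all crawlers to stay out of a hypothetical /private/ folder would look like this:

  User-agent: *
  Disallow: /private/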

Important things to bear in mind

There are some important things to keep in mind when using robots.txt to request that pages not be indexed:
• Robots can ignore your instructions in robots.txt. Malware robots, spammers and email address harvesters are more likely to ignore robots.txt, so think about what you’re requesting to be noindexed and whether it’s something that shouldn’t be viewed by all robots.
• Robots.txt files are not private, which means anyone can see which parts of your site you don’t want robots to crawl. So remember: you should NOT use disallow in robots.txt as a way to hide sensitive information.

And over to you

We’ve given you an overview of our two recommended alternative noindexing methods. It’s now up to you to implement one ahead of the 1st of September so that you’re prepared for Google to stop supporting noindex in robots.txt. If you have any questions, make sure to get in touch with us.

Sign up for our newsletter at the bottom of this page and follow us on Facebook and Twitter for the latest updates.

Meta robots tags are something that you’re almost inevitably going to come across if you work in SEO, but what are they, how do they work and how do they differ from the good old robots.txt? Let’s find out.

What Is A Meta Robots Tag?

A meta robots tag is a snippet of code that’s placed in the header of your web page that tells search engines and other crawlers what they should do with the page. Should they crawl it and add it to their index? Should they follow links on the page? Should they display your snippet in search results in a certain way? You can control all of these with meta robots tags and, while there may be a bit more development resource required in certain content management systems, they’re generally more effective than robots.txt in a lot of regards. I’ll talk more about that later.

Typically speaking, a robots tag would look like this in your HTML source.
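  <!-- the standard form: a "robots" meta tag placed in the page <head> -->
  <meta name="robots" content="noindex, follow">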

As you can see, it comprises two elements: the name of the meta tag (robots, in this case – meta tags have to declare their identity to work) and the directives invoked (the “content” – in this case, “noindex, follow”).

This is probably the most common meta robots tag that you’ll come across and use. The noindex directive tells search engines that, while they can crawl the page, they should not add it to their index. The other directive in the tag, “follow”, tells search engines that they should follow the links on the page. This is useful to know because even if the page isn’t in the search engine index, it won’t become a black hole for the flow of your site’s authority – any authority the page has to pass on, either to pages on your site or elsewhere, will still be passed thanks to the “follow” directive.

If you wanted to completely void that page and not have any links on there followed, the tag would look like one of the following:
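  <meta name="robots" content="noindex, nofollow">

  <!-- or, equivalently -->
  <meta name="robots" content="none">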

By adding the “nofollow” directive, you are telling search engines not to index the page and also not to follow any links on that page, internal or external. The “none” directive is effectively the same as combining noindex and nofollow, but it’s not as commonly used. In general, we recommend “noindex, follow” if you need to noindex a page.

What Other Meta Robots Tags Are There?

Now we’ve covered the anatomy of the most common meta robots tag, let’s take a look at some of the others:

  • noimageindex: Tells the visiting crawler not to index any of the images on the page. Handy if you’ve got some proprietary images that you don’t want people finding on Google. Bear in mind that if someone links to an image, it can still be indexed.
  • noarchive: This tag tells search engines not to show a cached version of the page.
  • nosnippet: I genuinely can’t think of a viable use case for this one, but it stops search engines showing a snippet in the search results and stops them caching the page. If you can think of a reason to use this, ping me on Twitter @ben_johnston80.
  • noodp: This tag was used to stop search engines using your DMOZ directory listing instead of your meta title/description in search results. However, since DMOZ shut down last year, this tag has been deprecated. You might still see it in the wild, and there are some SEO plugins out there that still incorporate it for some reason, but just know that since the death of DMOZ, this tag does nothing.
  • noydir: Another one that isn’t really of any use, but that you’re likely to see in the wild and some SEO plugins push through – the noydir tag tells search engines not to show the snippet from the Yahoo! Directory. No search engines other than Yahoo! use the Yahoo! Directory, and I’m not sure anyone has actually added their site to it since 2009, so it’s a genuinely useless tag.

When Should You Use Meta Robots Tags?

There are a number of reasons to use meta robots tags over robots.txt, but the main one is the ability to deploy them on a page-by-page basis and have them followed. They are typically more effective than robots.txt, which works best when it’s used on a folder-by-folder basis rather than a URL-by-URL basis.

Essentially, if you need to exclude a specific page from the index, but want the links on that page to still be followed, or you have some images that you don’t want indexed but you still want the page’s content indexed, this is when you would use a meta robots tag. It’s an excellent, dynamic way of managing your site’s indexation (and there are loads of other things that you can do with them, but that’s another post).

But here’s the challenge: it’s really easy to add another line to your robots.txt file, but with some content management systems, it’s not that easy to add a meta tag to a specific page. Don’t worry, Google Tag Manager has you covered.

Adding Meta Robots Tags Through Google Tag Manager

If you have Google Tag Manager installed on your site to handle your tracking (and, seriously, why wouldn’t you?), you can use it to inject your meta robots tags on a page by page basis, thus eliminating the development overhead. Here’s how.

Firstly, create a new Custom HTML tag, incorporating the following code:

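A simple version of that Custom HTML tag – a sketch using plain JavaScript to append a meta robots element to the page’s head – looks like this:

  <script>
    // Create a meta robots tag and add it to the <head> of the page.
    var metaRobots = document.createElement('meta');
    metaRobots.name = 'robots';
    metaRobots.content = 'YOURDIRECTIVE1, YOURDIRECTIVE2';
    document.getElementsByTagName('head')[0].appendChild(metaRobots);
  </script>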

Replace YOURDIRECTIVE1, YOURDIRECTIVE2 with what you want the tag to do (noindex, follow, for example), and if you only need one of the directives, that’s fine. The screenshot below shows how this looks.

[Screenshot: the meta robots Custom HTML tag in Google Tag Manager]

Now create a trigger and set it to only fire on the pages you want the meta robots tag to apply to, as seen below.

[Screenshot: the trigger configuration in Google Tag Manager]

And there you go, that’s how you can inject your meta robots tags through Google Tag Manager. Pretty handy, right?

And We’re Done

Hopefully today’s post has given you a better understanding of what meta robots tags are, what you’d use them for and how to use them. Any questions or comments, drop me a Tweet or send us a message through our contact form.

Sentiment analysis is one of the current hot topics in the data and analytics world, with more and more tools and algorithms out there attempting to understand the intent behind the written word rather than just the definition of the words. Platforms like IBM’s Watson (which I will always hold a grudge against for purchasing my beloved AlchemyAPI), MeaningCloud and a number of packages for programming languages like R and Python are becoming more and more popular as everyone races to offer the complete text analysis solution.

Personally, text classification and sentiment analysis have been a part of my approach to SEO since around 2012, but this was largely limited to keyword research and competitor analysis. With search engines incorporating more of this, I felt it was time to up my game, hence the sentiment analysis project I’ve recently begun on my personal site.

What Are The SEO Implications?

Bing is currently utilising this to improve the quality of their featured snippet results (called Multi-Perspective Answers), allowing them to give better-quality instant answers. Google’s version of this, both in terms of featured snippets and the answer box, is often criticised for providing answers that are biased by the way the question is asked, rather than taking the intent behind it and providing the most relevant answers.

It would be naive to think that Google hasn’t been working sentiment analysis into RankBrain, especially with their Natural Language API being out there in the wild, and they are likely waiting until RankBrain has gathered and analysed enough data to make sentiment analysis a core part of the algorithm. What are the implications of this? They could be fairly wide-ranging.

Online Branding Considerations

We know that Google likes websites and online businesses to be brands and, just as in the real world, online brands can attract positive and negative sentiment. I wouldn’t be at all surprised if backlinks surrounded by negative sentiment about a brand begin to pass less authority – possibly even being discounted altogether – encouraging brands to stay in their customers’ good graces.

It’s not dissimilar from the offline world, really, but if Google is to go in this direction, they will need a very robust sentiment analysis process and much better spam detection; otherwise it will not be difficult for unscrupulous competitors to exploit.

So Is It The Next Big Thing?

The key thing to understand about search engines – Google in particular – is that they’ve always wanted to understand content the way humans understand it so that they can provide the best results possible. Better results lead to more searches, which leads to more advertising revenue, after all.

Will a better understanding of textual topics and sentiment become an important part of SEO in the coming years? I’m absolutely certain that it will, but will it be the next big thing in search? I’m not so sure about that. I suspect it will be a gradual rollout, with a lot of testing and adaptation, and it will eventually find its way into the algorithm. As for the weighting it has in the algorithm? We’ll have to wait and see, but I wouldn’t be shocked if it was a fairly important factor.

Whether your brand’s sentiment becomes a significant ranking factor or not, I definitely think now is the time to ensure that your reputation is good with your customers and your peers. SEO aside, it’s just good business and, in the end, that’s what we’re all here for.