Google seem to have shot themselves in the foot a little with their link-based ranking algorithm.

In its day it was the best by such a huge margin that it shot Google to the top of the pile and made it firmly the world’s number one search engine.
In fact, I doubt that any other company in the world has such a big share of its particular market as Google does.

The simple premise was that other websites would “link” to any website they thought was good. Each link was a “vote” or a “like” for that website. The more votes or likes a site had, the more popular it was, and so the higher up the search results it should appear.
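To picture that premise, here is a tiny, purely illustrative sketch in Python that counts inbound links as votes and ranks sites by the tally. The link_graph data is invented for the example, and this is of course a massive simplification of anything Google actually runs.

    # Purely illustrative: count each inbound link as one "vote" for the
    # target site, then rank by vote count. The data below is made up.
    link_graph = {
        "site-a.com": ["site-b.com", "site-c.com"],  # site-a links out to b and c
        "site-b.com": ["site-c.com"],
        "site-d.com": ["site-b.com", "site-c.com"],
    }

    votes = {}
    for source, targets in link_graph.items():
        for target in targets:
            votes[target] = votes.get(target, 0) + 1

    # The more "votes" a site has, the higher it ranks.
    ranking = sorted(votes.items(), key=lambda item: item[1], reverse=True)
    print(ranking)  # [('site-c.com', 3), ('site-b.com', 2)]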

It had one major flaw though. The nearer the top of the search rankings a website gets, the more people will see it. The more people who see it, the more likely they are to link to it.

This means that established sites get an advantage over new sites, as new sites have no links and therefore will not rank highly in the search engines.

Now Google has done some work to help alleviate this, like giving new sites a temporary higher ranking for additional exposure. However, as webmasters became more savvy, it was obvious that links were the way to get top rankings. Modern SEO was born, and suddenly Google’s algorithm was becoming ineffective as tens of thousands of webmasters were able to rank any webpage for any keyword just by firing links at it. The natural selection process that Google envisaged was already drowning.

So move a few years down the line and Google are wishing they could rank websites by the quality and relevance of their content (back around in one big circle!).

So Panda was born and that brings us basically up to date.

Over the last 12 months we have seen sweeping changes in Google’s approach to ranking websites.
Panda has put the focus firmly on higher quality content.

Regular Panda updates are still ongoing and have caused a lot of sites to drop dramatically in the search rankings. Most of you will already have heard of Panda, but I will go into it a little deeper in a minute just to be sure you are clear on how it affects your website.

Panda may be “old news” but I need to talk about it to put the other changes into context.
For instance, did you know that there was a Panda update in March and that they will be ongoing for some time?

OK, so Google is sending a message – we want unique, quality, useful and relevant content on your websites.

Does that mean backlinking is dead?

Not by a long shot! Technology does not yet allow machines to judge the quality of written content well enough for it to work as the major factor in search rankings. In fact, try searching Google and you will be surprised at how little information there is on the subject. The information that does exist is mainly theoretical and a long way from practical application.
But never say never; it will come, it’s just a matter of time. For now, though, backlinks are still hugely important.

How do I know that?

Well, let’s look at Google’s recent attack on blog networks. A number of well-known blog networks have seen mass de-indexing and some have closed their doors.

The networks that have been hit are those that mass-distribute backlink-loaded articles to their blogs, so in practice they are huge networks of autoblogs with thousands of posts and tens of thousands of outgoing backlinks.

These networks were very popular simply because they worked. Why else would Google bother to put so much effort into killing them off if they were not effective enough to “manipulate” Google’s results?

So even Google (indirectly) is telling us that links are still important.

Now let’s look at Google’s most recent publicized changes from March, as published on Google’s own blog. You can see the full listing at

http://insidesearch.blogspot.ca/2012/04/search-quality-highlights-50-changes.html

There are 50 published changes, but I will focus only on the few that I think are important to this post.

It’s important to understand that Google do not tell us what their algorithm does, how it does it, or any detail on the effect of the changes, so any interpretations are my own.

Better indexing of profile pages. [launch codename “Prof-2”] This change improves the comprehensiveness of public profile pages in our index from more than two-hundred social sites.

This sounds like they are indexing more information from more public profile pages on social sites like Facebook and Twitter. Social standing, or visibility on social networks, is likely to be one of the big future measures for ranking. Clearly, in the same vein as “votes” by backlinking, the Diggs, Likes and Tweets etc. are also votes for the web page content. The profile of the source may not only supply a backlink but may also be judged (similar to PageRank) to decide the “authority” of those votes.

High-quality sites algorithm data update and freshness improvements. [launch codename “mm”, project codename “Panda”] Like many of the changes we make, aspects of our high-quality sites algorithm depend on processing that’s done offline and pushed on a periodic cycle. In the past month, we’ve pushed updated data for “Panda,” as we mentioned in a recent tweet. We’ve also made improvements to keep our database fresher overall.

So the old Panda again. More changes, of course, but it is worth understanding how it works. I like to think of Panda as more of a penalty than a measure, i.e. you don’t really get rewarded for good content but you do get penalized for bad content. Although all pages of a site are checked for “quality”, the whole site is penalized even if only one page falls foul of the Panda filters.

The penalty is then set against your site and is only recalculated every few months, so improving your site will not have an immediate positive effect. It won’t change until the next “Panda refresh”.
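Here is a rough Python sketch of how I picture that mechanism: a site-wide penalty multiplier triggered by any one low-quality page, recalculated only at each refresh. This is my own interpretation, not Google’s code, and the scores and threshold are numbers I have invented purely for illustration.

    # My own rough interpretation of the Panda mechanics, not Google's code.
    # The scores and the 0.5 threshold are invented purely for illustration.

    def quality_score(page_text):
        """Stand-in for whatever automated content-quality checks Google runs."""
        return 0.9 if len(page_text.split()) > 300 else 0.3  # crude placeholder

    def panda_refresh(site_pages):
        """Return a site-wide penalty multiplier, fixed until the next refresh."""
        if any(quality_score(page) < 0.5 for page in site_pages):
            return 0.6  # one page falling foul of the filter drags the whole site down
        return 1.0      # no penalty

    # The multiplier sticks to every page of the site until the next refresh,
    # which is why cleaning up your content has no immediate effect.
    site_penalty = panda_refresh(["thin spun page", "a much longer article " * 100])
    print(site_penalty)  # 0.6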

These checks are clearly “robotic”, so Google has started to be able to measure content quality by machine, and it will only get better at it.

Improvements to freshness. [launch codename “Abacus”, project codename “Freshness”] We launched an improvement to freshness late last year that was very helpful, but it cost significant machine resources. At the time we decided to roll out the change only for news-related traffic. This month we rolled it out for all queries.

It makes clear sense for news items to be fresh. Hey, who wants to hear old news? Now you need to be looking at your own content. It has long been accepted that adding regular additional content to your site is a good thing. It is now likely to be a bigger factor in your rankings.

We don’t know how Google measures and uses this. Maybe a new post to your blog will give all your pages a boost, or, more likely, a lack of fresh content will give you a site-wide penalty. The question is: should you update old posts?
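Purely as a thought experiment, a “freshness” factor could be something as simple as a decay on the time since a page was last updated. We genuinely don’t know what Google does here; the 90-day half-life below is a number I have invented for illustration.

    # Hypothetical freshness factor: halves every 90 days since the last update.
    # Invented for illustration only; we don't know what Google actually uses.
    def freshness_factor(days_since_update, half_life_days=90):
        return 0.5 ** (days_since_update / half_life_days)

    print(round(freshness_factor(0), 2))    # 1.0  - updated today
    print(round(freshness_factor(90), 2))   # 0.5  - three months old
    print(round(freshness_factor(365), 2))  # 0.06 - a year old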

Interestingly, blog and forum post date detection is one of the recent updates (below).

Improvements in date detection for blog/forum pages. [launch codename “fibyen”, project codename “Dates”] This change improves the algorithm that determines dates for blog and forum pages.

Tweaks to handling of anchor text. [launch codename “PC”] This month we turned off a classifier related to anchor text (the visible text appearing in links). Our experimental data suggested that other methods of anchor processing had greater success, so turning off this component made our scoring cleaner and more robust.

Better interpretation and use of anchor text. We’ve improved systems we use to interpret and use anchor text, and determine how relevant a given anchor might be for a given query and website.

There are two updates relating to anchor text. I find this interesting both as more evidence that backlinks are far from redundant, and in light of the recent discussions around an “over optimization” penalty (quote by Matt Cutts below).

There are obvious “over optimizations” like keyword stuffing and mass backlinking, but I see things like variation in anchor text and content relevance becoming more important.

For instance it would be “unnatural” to expect everyone to link to your site with your chosen keyword. It would be much more natural to link with a variety of anchor texts including apparently unrelated ones.

Content or contextual relevance may also be important. Again, not every single link needs to be relevant, as that would be unnatural, but the majority should be relevant to the content. This may be stepped up from “better to be diverse” to “penalties for not enough diversity”.
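To illustrate the diversity point, here is a small Python sketch that flags a backlink profile as “unnatural” when one exact-match anchor dominates it. The 40% threshold is an arbitrary figure of my own, not anything Google has published.

    # Flag an anchor-text profile as unnatural when one anchor dominates.
    # The 40% threshold is my own arbitrary choice, purely for illustration.
    from collections import Counter

    def looks_unnatural(anchor_texts, max_share=0.4):
        counts = Counter(text.lower() for text in anchor_texts)
        top_anchor, top_count = counts.most_common(1)[0]
        return top_count / len(anchor_texts) > max_share

    natural = ["click here", "best dog beds", "this review", "www.example.com", "read more"]
    spammy = ["best dog beds"] * 8 + ["click here", "example.com"]

    print(looks_unnatural(natural))  # False - varied anchors
    print(looks_unnatural(spammy))   # True  - one keyword dominates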

“And the idea is basically to try and level the playing ground a little bit. So all those people who have sort of been doing, for lack of a better word, “over optimization” or “overly” doing their SEO, compared to the people who are just making great content and trying to make a fantastic site, we want to sort of make that playing field a little bit more level.
And so that’s the sort of thing where we try to make the web site, uh Google Bot smarter, we try to make our relevance more adaptive so that people don’t do SEO, we handle that, and then we also start to look at the people who sort of abuse it, whether they throw too many keywords on the page, or whether they exchange way too many links, or whatever they are doing to sort of go beyond what a normal person would expect in a particular area. So that is something where we continue to pay attention and we continue to work on it, and it is an active area where we’ve got several engineers on my team working on that right now.” – Matt Cutts

Full details: http://searchengineland.com/too-much-seo-google%E2%80%99s-working-on-an-%E2%80%9Cover-optimization%E2%80%9D-penalty-for-that-115627

Remove deprecated signal from site relevance signals. [launch codename “Freedom”] We’ve removed a deprecated product-focused signal from a site-understanding algorithm.

This was an interesting one, and although not strictly in the same vein as the above, I thought I would highlight it anyway. I have no idea what this means exactly, but you should take note if you have e-commerce sites, Amazon sites or any sites that are “product” based (and that may even include mini-sites with the product name in the domain).

Check your rankings and income levels. Have they changed in the last few weeks? If they have and you can see no other reason, then this may have affected you.

My thoughts are that e-commerce sites, for instance, would need a different measure of SEO than an informational site, especially for content. I guess that Google uses signals to decide when a site is “product focused” and then applies a slightly different ranking algorithm.
For instance, duplicate content would normally be filtered from the search results, but for e-commerce sites duplicate content equates to competition, which equates to lower prices for customers. Not something Google would want to filter out.

So, in summary, what do you need to be doing?

To be fair to Google, it’s pretty much what they have been saying for a long time, but now Google are better at detecting and penalizing those that do not step up to the mark.

  • Create good quality, unique, useful and relevant content.
  • Add fresh content regularly, building authority and age.
  • Focus your site pages to one subject and make it clear to Google what your page is about.
  • Do not over-optimize, i.e. keyword stuff or mass link with poor-quality links.
  • Make your backlinks and anchors varied and “natural”. Include inner pages not just home pages.
  • Ensure social exposure across many networks – but naturally of course!
  • Don’t overload your site with ads, especially above the fold.

It’s all about quality and variety. Quality content, natural links and a great visitor experience.
