[005] Avoiding and Getting Out Of Penalties

Over the last couple of years many web sites have either been hit by Google penalties or never even got off the ground in the first place.

If your site never got anywhere, it can be difficult to know whether it simply was not a good enough site and had not earned enough Google love to rank highly against its competitors, or whether it was hit by penalties right out of the gate.

 

The Easy Times

A few years back the only penalties you could get were manual review penalties. That meant a real person looked at your site and decided whether it was falling foul of any of the current webmaster guidelines. Oh, and by the way, it never ceases to amaze me just how many people have not read and fully understood Google’s Webmaster Guidelines.

 

And Now!

Things are a bit different now, since the likes of Panda and Penguin, as the majority of penalties are now automatic. In other words, Google’s algorithm (the program that assesses web pages and decides where they should rank) can apply automatic penalties based on certain “signals” that it finds.

I don’t want to go too deeply into the technicalities of the automatic penalties, but it goes something like this:

 

Automatic Penalties

Google hires thousands of humans to look at thousands of websites and rate them based on a number of questions. A list of those questions was leaked a while back, but they are questions like:

Is this page informative?
Is it easy to navigate?
Would you buy from this web page?
If it were medical advice, would you trust the content?
Does it appear to be written by an expert in the field?
Etc.

Basically a whole bunch of questions designed to quantify the page quality.

Next they run a computer program to analyze the results, comparing the good pages with the bad pages. That program can then identify what the good pages have in common and what the bad pages have in common.

In a simplistic view, it now means that the Google algorithm can look for these “bad” signals in all web pages and, if it finds them, automatically penalize the page. Thus saving a lot of wages otherwise paid to humans.
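To make the idea concrete, here is a tiny Python sketch of the general approach. It is purely illustrative: the features, numbers and labels are all invented and have nothing to do with Google’s actual system, which remains secret.

```python
# Toy illustration: human raters label a few pages "good" or "bad",
# then a simple model learns which measurable signals separate them.
# All feature values and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is one human-rated page:
# [keyword_count, ad_blocks, word_count, outbound_links]
rated_pages = [
    [2, 1, 1200, 4],   # rated "good" by a reviewer
    [3, 2, 900, 6],    # good
    [9, 6, 300, 25],   # rated "bad"
    [12, 8, 150, 40],  # bad
]
labels = ["good", "good", "bad", "bad"]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(rated_pages, labels)

# A brand-new page can now be scored with no human involved.
new_page = [[7, 5, 280, 18]]
print(model.predict(new_page))  # most likely ['bad']
```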

Bad “signals” could be anything of course: link profiles, amount of content, amount of ads, on-page SEO factors, page load speeds, bounce rates and so on. They do not have to be directly related to the visual or actual experience of the visitor; they just have to be present. This is really statistical analysis that just says, for instance:

On pages marked as bad by the human reviewers, the target keyword generally occurs more than 5 times. On pages marked as good, the target keyword generally occurs fewer than 5 times.

So if you find a page where the main keyword occurs more than 5 times, it is statistically likely that the page gives a bad user experience. (Not necessarily because of the keyword count itself; it is simply an indicator.)

The more “bad signals” you find on a page, the more likely it is (statistically) to be a bad page. So at some point Google draws a line, and any page going over the line will be penalized.
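You can picture that “line” as nothing more than a count of how many bad-signal thresholds a page trips. A minimal sketch of that idea (the checks and the cutoff of 3 are numbers I have invented for illustration, not anything Google has published):

```python
# Count how many invented "bad signal" thresholds a page trips,
# then compare the count against an equally invented cutoff.
def bad_signal_count(page):
    checks = [
        page["keyword_count"] > 5,        # possible keyword stuffing
        page["word_count"] < 300,         # thin content
        page["ad_blocks"] > 4,            # ad-heavy layout
        page["load_time_seconds"] > 4.0,  # slow page
    ]
    return sum(checks)

page = {"keyword_count": 8, "word_count": 250, "ad_blocks": 6, "load_time_seconds": 2.1}
score = bad_signal_count(page)
print("over the line" if score >= 3 else "under the line", f"({score} bad signals)")
```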

 

So how do you avoid automated penalties?

Well, there are two approaches. The first is to find out the current thinking about SEO and how it applies to those bad signals, and then make sure they do not exist on, or are removed from, your site.

To help with that, Google themselves do, to some degree, tell you what they don’t want (in part in the Google Webmaster Guidelines).

There are also regular Q&A videos with Matt Cutts at Google. You can search YouTube for all of those, or if you have a specific question, try searching on that. However, some of the responses can be ambiguous (some people even say misleading), so you do need to be careful when taking on board information from Google sources that relates directly to their algorithm. After all, the algorithm is a trade secret and one they keep very close to their chest, so it is unreasonable to expect them to give you all the answers.

The second, and by far the best, way to discover what signals are likely to sink your site is to reverse engineer Google’s own results. This is the basic method used by all top SEO experts. In simple terms, whatever Google is ranking on its first page of results is the kind of page it wants to see there. Of course this is related to user intent, as I discussed in the previous [Tony’s Tips]. The top web pages returned for any particular search need to satisfy the user intent; if a page does not, then it really doesn’t matter what the content is, as it won’t rank anyway. However, for the purposes of this article, I am only looking at the web page’s content quality and not how well it satisfies user intent.

We are discussing penalties based on page signals here so user intent is irrelevant in this case.

 

Main Automated Penalties

Panda and Penguin (in their various guises) are the main instigators of automatic penalties. The good news is that there has been a lot said and written about both. That will give you a start to help identify what is causing the penalties.

Penguin is primarily about link spam and I will address that in tomorrow’s tip!

In general, Panda focuses on content quality, so it mainly looks at the site and web page content. Some obvious things to check are:

Duplicate content
Spun content
Poor grammar & spelling
Over optimization (keyword stuffing)
Thin content (especially thin pages funnelling visitors to affiliate offers)
Made For AdSense content (MFA)
Doorway/Gateway Pages
Hidden text etc.

However some of the signals that indicate a poor quality site or page may be external, like:

Links from websites that have been de-indexed or penalized by Google
Links to bad neighbourhoods (adult sites, spam sites or any other suspect sites)
Excessive reciprocal linking on your website (indicates collusion)
Excessive or poorly structured internal linking.

Now, some of these things are simply wrong and against the Google guidelines, e.g. doorway pages and hidden text. But the others are all a matter of degree, so they need to be understood better.
So how much is too much? What is the right amount of text on a page? What ratio of ads to content? How many reciprocal links can you have?
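Here is a rough little audit script for a couple of those “how much is too much” questions. The 300-word and 3% figures are just my own rules of thumb used for illustration, not limits Google has ever confirmed:

```python
import re

def audit(page_text, keyword):
    """Very rough on-page checks: word count and keyword density."""
    words = re.findall(r"[a-zA-Z']+", page_text.lower())
    word_count = len(words)
    keyword_hits = page_text.lower().count(keyword.lower())
    density = (keyword_hits / word_count * 100) if word_count else 0.0
    return {
        "word_count": word_count,
        "keyword_hits": keyword_hits,
        "keyword_density_pct": round(density, 2),
        "thin_content": word_count < 300,    # assumed rule of thumb
        "possible_stuffing": density > 3.0,  # assumed rule of thumb
    }

sample = "Dog collars for every dog. Our dog collars are strong, and our dog collars last."
print(audit(sample, "dog collars"))
```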

 

How Do I Know What’s Right?

Truth is, Google doesn’t tell you, but you will find plenty of people willing to give you their opinion. So it is imperative to only take advice from known authorities and people you trust on the subject.

Much of the SEO information shared in forums and the like is just wrong, simply because forums are where the posters got their own information in the first place: they learn the wrong things from other people and pass the same duff info on to others.

 

What The Experts Do

Some top SEO experts work to reverse engineer Google and figure out what these algorithm changes are, and what is good and what is bad. They do this by literally reverse engineering Google’s search results.

Many do it by tracking a lot of websites, seeing how those sites behave when a new change is made at Google, analyzing the site changes, applying different fixes and seeing which sites respond positively. All of this can give them a very good idea of what is and is not working in SEO.
One of the best authorities on SEO is Moz (moz.com). They publish a lot of information on SEO and are a port of call you should make. If you are really new to SEO, download their free Moz Starters Guide to SEO. It is probably one of the best starter guides I have seen and will set your foundations and basic understanding.

 

Google Algorithm Changes

Moz also have a full history of all the Google algorithm changes, and it is absolutely crucial when trying to identify what penalty may have hit your site. If you know you lost a significant amount of traffic on a certain date (and yes, you should be running some kind of historical analytics on your site) then you can check the date against the Moz algorithm history and identify exactly what penalty you were hit by.

http://moz.com/google-algorithm-change

Why do you need to know this? Because if you know what penalty you got hit by, then you are more likely to know what to do to get out of it. If you were hit by a Penguin update there is little point in improving your site content; that won’t get you out of the penalty, as it is more likely a link issue.
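If you want to automate that date check, something as simple as the sketch below will do. The two entries are only examples with approximate dates; fill in the list from the Moz history page above rather than trusting my memory:

```python
# Match the date your traffic dropped against known algorithm updates.
# The update list is illustrative and approximate; build it from the
# Moz algorithm change history rather than from this example.
from datetime import date

known_updates = [
    (date(2012, 4, 24), "Penguin 1.0"),  # approximate, verify on moz.com
    (date(2014, 5, 20), "Panda 4.0"),    # approximate, verify on moz.com
]

def likely_penalty(traffic_drop_date, window_days=7):
    """Return updates that landed within window_days of the traffic drop."""
    return [name for d, name in known_updates
            if abs((traffic_drop_date - d).days) <= window_days]

print(likely_penalty(date(2012, 4, 26)))  # -> ['Penguin 1.0']
```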

 

Comparing With Google Top Ten Results

As I have said previously, reverse engineering Google is the very best way to be sure of what you should be doing. For most people this really means looking at the Google top ten search results. Basically, you can be sure that the top ten results contain the types of pages Google wants on its first page. By comparing these sites you too can understand what is needed.

In practical terms you will want to compare your site with each of the sites on the first page of results for a specific keyword. So if you are targeting “dog collars”, type that into Google and see what sites come back. In simple terms you need to look at as many factors as possible (like the ones mentioned earlier) and see where your site is lacking (or, in many cases, has too much or too many). The idea is to give your site the same ranking signals that the top ten sites have, but of course with different (better) content.
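If you would rather not eyeball every competitor by hand, a small script can pull a basic signal such as word count from each page-one URL and from your own page. A rough sketch (the URLs are placeholders, and it assumes the requests and beautifulsoup4 packages are installed):

```python
# Compare a simple on-page signal (visible word count) between your page
# and the current page-one results. URLs below are placeholders.
import statistics
import requests
from bs4 import BeautifulSoup

def visible_word_count(url):
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    return len(text.split())

top_ten = [
    "https://example.com/competitor-1",
    "https://example.com/competitor-2",
    # ...the rest of the first-page results
]
my_page = "https://example.com/my-dog-collars-page"

competitor_counts = [visible_word_count(u) for u in top_ten]
print("page-one median word count:", statistics.median(competitor_counts))
print("my page word count:", visible_word_count(my_page))
```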
Below you can find an interactive table of all the “ranking signals” and how much they are likely to impact your web pages. Remember that too little or too much of anything is a bad thing, so the ranking factors are derived from, and therefore relative to, the existing top results. For example, if it says “the number of backlinks can improve your rankings”, that is based on the current results. Any site with excessive or poor-practice linking would not be seen in the top results, so an upper threshold could still be crossed, with negative results.

The table is interactive and you can select or deselect the various categories in the top box to focus on specific areas of interest.

You can see the full information on this table and its data at the Moz.com website.

In my next [Tony’s Tips] I will look more deeply into backlinks and anchor text and how they can affect penalties as well as rankings.
