Why Google Hates Duplicate Content and How It Treats Identical Articles

Want to know a pretty incredible secret? Most people who practice SEO don’t actually know how it works. Even gurus and experts can have trouble keeping up to date. This isn’t their fault, but rather a consequence of the fast-changing world of search engine optimization.

Google is a big fan of changing things up. They regularly update – and sometimes completely overhaul – their algorithm, indexing procedures and even their penalty policies. It is all part of an evolving search system that has to grow as the web does and find more efficient and effective ways of doing things.

The Panda Problem

Things really changed in 2011. Before then, certain SEO headaches were handled in a consistent, predictable way. A good example is duplicate content, which was treated as a page-level violation.

Whenever you had duplicate content on your website, intentional or not, it only affected that page. The crawler searching through your site would put a black mark on that content, and it would lose preference in search results. Sometimes it would be omitted entirely. But because penalties were applied on a page-by-page basis, there wasn’t much to worry about.

Now, things are different. Google released their Panda update, and one major factor had changed: duplicate content affected your entire website.

If a crawler came across something unoriginal on any URL, it reported it back to the search engine, which then affected the ranking of your entire site, not just the page the content was on.

Google has repeatedly released “updates” to Panda. These are monthly data refreshes that knock down the ranking of sites guilty of duplicate content, and that also catch sites hit by mistake so they can be returned to their former rank.

What Qualifies As Duplicate Content

Anything on your site that is identical or similar to what is on another site – easy enough to understand. First, there are identical duplicates, which match word for word and share the same images, formatting and other content.

Then there are near duplicates, which use most of the same content but differ slightly in images, formatting or small changes within a block of text.

Finally, we have cross-domain duplicates, where two or more websites share the same content, whether near-duplicate or identical. An example would be news sites that host the same Associated Press article, legally authorized for sharing.

Of course, you can imagine the problems this caused for the many ecommerce sites that had made the mistake of using manufacturer descriptions for their products, or that shared matching content with affiliates.

When the crawlers came, they saw nothing but the similarities. Bots don’t gather context in this kind of situation, so the content is treated under the same rules as anything else. The lesson for every shop on the web: write your own unique descriptions, even if it is more of a hassle.
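
Search engines don’t publish exactly how they detect duplicates, but the idea of a “near duplicate” is easy to demonstrate. The short Python sketch below compares a hypothetical product description against the manufacturer’s copy using Jaccard similarity over word shingles; the function names, sample text and 0.8 cut-off are all illustrative assumptions, not anything Google has confirmed.

    # A rough sketch of flagging near-duplicate text, assuming a simple
    # Jaccard-similarity check over word shingles. Google's real detection
    # pipeline is not public, and the 0.8 threshold is an arbitrary
    # illustration, not a figure Google publishes.

    def shingles(text, size=3):
        """Break text into overlapping word n-grams ("shingles")."""
        words = text.lower().split()
        return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

    def jaccard_similarity(text_a, text_b, size=3):
        """Fraction of shingles the two texts share (0.0 to 1.0)."""
        a, b = shingles(text_a, size), shingles(text_b, size)
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    # Hypothetical product description vs. the manufacturer's original copy.
    page_copy = "insulated stainless steel bottle keeps drinks cold for 24 hours and hot for 12"
    manufacturer_copy = "insulated stainless steel bottle keeps drinks cold for 24 hours and hot for twelve"

    score = jaccard_similarity(page_copy, manufacturer_copy)
    print(f"similarity: {score:.2f}")
    if score > 0.8:  # assumed cut-off for this illustration only
        print("looks like a near duplicate - rewrite it")

Shingling is only one crude way to measure overlap, but it shows why a copied manufacturer description scores almost identically to the original even when a word or two is changed.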

Conclusion

You have to watch your duplicate content, even when crossposting between sites you own. Anything that is identical or nearly identical is sure to put a red flag on your page. Unlike in the past, this will affect your overall search ranking – a serious penalty that can cost you a lot of traffic.

Always remember that original content is key to good SEO, and be careful of what you host.
