In high school, I knew a guy who made a habit of recycling his essays and papers. Let’s call him Murphy. Murphy would write a science essay about gravity, print it, then change the title and print it again as a math paper. Each time he got away with it, he grew bolder—he’d even submit essays multiple times to the same teacher. We were all amazed that he had beaten the system…until the day he got caught. The teachers looked back through his papers and discovered his ruse; needless to say, they weren’t happy. We all got a lecture, and Murphy spent many afternoons in detention, rewriting all the essays he had so cleverly dodged.
Murphy may have learned his lesson way back then, but I often come across websites and blogs that do the same kind of thing that got him into trouble: they stuff their sites with recycled or duplicate content. I wanted to understand why anyone thinks that’s a good idea, so I asked around a bit, and almost always got one of the following answers.
It’s no secret that Google and other search engines like fresh content. A new article is likely to rank higher than an older one on the same subject, and a blog that posts weekly will probably achieve a better ranking than one that last posted three years ago. That much is true, because search engines want to provide the most recent, relevant data to answer your queries. But some people take the wrong lesson from that, focusing on being “frequent” instead of “fresh.” Yes, you can frequently repost the same article so that it looks new, but that isn’t going to fool someone who has read it before, and the search engines will catch on. Google doesn’t help those who help themselves—Google helps those who help users.
The second answer rests on a misconception that sounds like one of those old Doublemint Gum commercials: double your keywords, double your rankings. Some websites and SEO companies get fixated on keywords and look for any opportunity to employ keyword-stuffing strategies. They reason that if you have the same content on multiple pages or posts, it’s going to boost your keyword saturation and improve your rankings for those keywords. But what really happens is that those pages end up competing against each other for rank…quite the opposite of what was intended. And if it’s evident that your duplicate content is meant to manipulate rankings, the search engines will penalize your website.
Duplicate content isn’t always intentional; in fact, there are several ways it can occur inadvertently. Category systems (URL parameters) are a common cause of duplicate content, as are printer-friendly versions of articles and session IDs. The good news is that search engines do not actively penalize these accidental duplications. As Google says in its Webmaster Tools, “Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results.” Still, it’s better not to take the chance. If you know that there is unintentional duplicate content on your website, there are steps you can take to prevent it from becoming an issue.
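To make the URL-parameter case concrete: session IDs, sort orders, and print flags can make one article answer to several distinct URLs, each of which a crawler sees as a separate page. Here is a minimal sketch of collapsing such variants to a single canonical form. The URLs and the list of parameters to strip are hypothetical, not something from any particular platform; a real site would tailor the list to its own URL scheme.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Query parameters that change the URL but not the content
# (a hypothetical list -- adjust for your own site).
IGNORED_PARAMS = {"sessionid", "sort", "ref", "print"}

def canonical_url(url):
    """Drop content-irrelevant query parameters and sort the rest,
    so duplicate URLs collapse to one canonical form."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
    return urlunparse(parts._replace(query=urlencode(sorted(kept))))

# Three URLs, one article -- all collapse to the same canonical URL:
urls = [
    "https://example.com/article?id=42&sessionid=abc123",
    "https://example.com/article?sort=newest&id=42",
    "https://example.com/article?id=42&print=1",
]
print({canonical_url(u) for u in urls})
# → {'https://example.com/article?id=42'}
```

In practice you would usually signal the same thing declaratively, with a `rel="canonical"` link element on the duplicate pages or a 301 redirect, so search engines know which version to index.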
If you’re looking for a good rule of thumb that applies to any duplicate content situation, just try asking yourself, “Will the people who use my website benefit from this?” That should be the true measure of what you’re doing. Google fights for the users, and you don’t want to be on the wrong side of that fight.