On a rather unremarkable Tuesday, beyond the heat, the folks at the WSJ tracked me down with an interesting story. 'Tis a tale of a very disgruntled CEO who had been mauled by a Panda and lived to tell the tale. Care to hear it? Sure. Why not?
It seems that the fine folks over at HubPages have been having a hard time due to the Google 'Panda' update from earlier this year. Not surprising, given they were #17 on the list of hardest-hit sites. And since that time CEO Paul Edmondson has been shouting it from the mountaintops, from TechCrunch to WebPro News, the Google boards and elsewhere.
There was even a thread on WMW about using this approach, citing the HubPages moves.
This was somewhat interesting because as far as we’ve heard there is no getting up from the Panda mauling. The only REAL fix was to do away with the crap content and build out a better offering. Or that’s been the story at least. It seems Hubpages is claiming to have a work-around.
Can you recover from Panda?
The story broke yesterday in the WSJ with "Site Claims to Loosen Google Death Grip."
Being somewhat close to this story I can say that it does seem that there is something going on. They took some of the authors from the site and put them on sub-domains. At which point, some articles that weren't ranking on the main domain started to rank again. Of course, the data I have seen from this is limited and I can't state definitively that this is actually the case (don't have much before/after data).
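For those curious about the mechanics, a move like this presumably boils down to relocating each author's articles to a per-author sub-domain and 301-redirecting the old URLs. Here's a minimal sketch of the URL mapping in Python; the example.com host, the /hub/ paths and the author name are hypothetical placeholders, not HubPages' actual structure:

```python
from urllib.parse import urlsplit, urlunsplit

def to_author_subdomain(url: str, author: str) -> str:
    """Map an article URL on the main domain to the author's sub-domain.

    e.g. https://example.com/hub/my-article -> https://janedoe.example.com/hub/my-article
    The old URL would then answer with a 301 pointing at the new location.
    """
    scheme, netloc, path, query, fragment = urlsplit(url)
    return urlunsplit((scheme, f"{author}.{netloc}", path, query, fragment))

# Build a redirect map for one (hypothetical) author's pages
pages = [
    "https://example.com/hub/best-hiking-trails",
    "https://example.com/hub/trail-mix-recipes",
]
redirects = {old: to_author_subdomain(old, "janedoe") for old in pages}
for old, new in redirects.items():
    print(f"301  {old}  ->  {new}")
```

Whether Google then treats those subs as genuinely separate entities is, of course, the whole question.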
And of course the obvious: now that they've run off to the WSJ, Google is sure to now know about it and its shelf life will be limited. In fact, in talking with a Googler yesterday it seems they were quite aware of the situation already.
So, I wouldn’t go running off just yet to try this on your own site (if it was hit by Panda).
Dear Mr. Edmondson
The part we (my cohorts and I at the SEO Training Dojo) found somewhat stunning is that HubPages would go running over to the WSJ to talk about it. Paul, brother, the first rule of Fight Club is that we don't talk about Fight Club. Sheesh. Your ongoing desire to poke the bear will ultimately cost you and your staff. Bad form, ol' chap. Next time get in touch with me first, ok?
And you have been vocal about how Panda has affected you. What about Tumblr? Squidoo? From what I know they weren't heavily affected, and these are comparable UGC sites. The real secret to getting past Panda is simply to work harder at getting the house in order.
Some other things to consider: Are SEOs manipulating the system? See here and here. Are there a lot of dupes from RSS? Have you considered changing the policies and judgments on when the nofollow is removed? And not to be cheeky, but the HubPages site even has a hub post about using sites like yours for building links (with an author score of 96).
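On the RSS-dupes point: a crude but useful first pass is to hash a normalized copy of each article body and flag collisions. A rough sketch, where the article list and its shape are hypothetical:

```python
import hashlib
import re
from collections import defaultdict

def fingerprint(text: str) -> str:
    """Lower-case and collapse whitespace so verbatim copies hash identically."""
    normalized = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

def find_duplicates(articles):
    """articles: iterable of (url, body_text) pairs -- hypothetical shape."""
    groups = defaultdict(list)
    for url, body in articles:
        groups[fingerprint(body)].append(url)
    return [urls for urls in groups.values() if len(urls) > 1]

# Two of these three sample 'articles' are verbatim copies
sample = [
    ("example.com/hub/a", "Ten tips for growing tomatoes at home."),
    ("example.com/hub/b", "Ten  tips for growing tomatoes at home."),
    ("example.com/hub/c", "A completely different article."),
]
print(find_duplicates(sample))  # [['example.com/hub/a', 'example.com/hub/b']]
```

This only catches verbatim copies; near-duplicates (scraped-and-tweaked RSS content) need something like shingling or MinHash, but it's a start.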
Let's look at this small snippet from your TechCrunch post:
“We are concerned that Google is (1)targeting platforms other than its own and stifling competition by reducing viable platform choices simply by (2)diminishing platforms’ ability to rank pages. Google is (3)not being transparent about their new standards, which prevents platforms like ours from (4)having access to a level playing field with Google’s own services. We want to comply with and exceed Google’s standards. Google has my contact information. Hope to hear from them soon. “
1. Sadly untrue, as others in his niche weren't affected to the same degree.
2. Again, not entirely accurate. Google has made massive and minute changes to the algos many times over the years. That's why we SEOs have a profession. We adapt and move on. They can as well (as noted with the recent sub-domain approach).
3. Yes, because the nefarious types that manipulate the engines would have a field day. Then he'd complain about the crap in the SERPs, right?
4. See #2.
Anyway… Look me up Paul, we’ll do lunch! Have your peeps contact mine. M’kay?
Moving Along
Anyway, it was an interesting experience and it does make me wonder whether our perceptions (as SEOs) of how Google treats sub-domains these days are accurate. In the past we were told subs have a tight relation to the core domain because of various manipulations over the years. Does this show that this may not be the case? Or is it just a Panda anomaly? It's hard to say.
Should you be looking at this as a Panda fix? I am not entirely sure that it’s going to be worth the investment of resources. Google is certainly aware of this situation and as such I’d likely advise just cleaning house instead.
I shall update this post if I hear anything more on the situation.
UPDATES:
Here is some of the test data Paul posted in this thread (on HPs):
Paul Edmondson (subdomain activated 06/23/11)
Week | Views | Change
2011-02-05 | 1,431 | +12.2%
2011-02-12 | 1,433 | +0.1%
2011-02-19 | 1,405 | -2%
2011-02-26 | 1,291 | -8.1%
2011-03-05 | 972 | -24.7%
2011-03-12 | 1,019 | +4.8%
2011-03-19 | 965 | -5.3%
2011-03-26 | 1,064 | +10.3%
2011-04-02 | 1,052 | -1.1%
2011-04-09 | 990 | -5.9%
2011-04-16 | 873 | -11.8%
2011-04-23 | 831 | -4.8%
2011-04-30 | 785 | -5.5%
2011-05-07 | 882 | +12.4%
2011-05-14 | 858 | -2.7%
2011-05-21 | 747 | -12.9%
2011-05-28 | 673 | -9.9%
2011-06-04 | 702 | +4.3%
2011-06-11 | 719 | +2.4%
2011-06-18 | 694 | -3.5%
2011-06-25 | 707 | +1.9%
2011-07-02 | 1,222 | +72.8%
2011-07-09 | 1,202 | -1.6%
2011-07-16 | 1,462 | +21.6%
Here's an analytics screenshot from that thread:
There are some ongoing discussions over on WMW as well, and a Matt Cutts post from '07 about SDs and directories.
And of course, Aaron grabbed his tin-foil and got into the fray as well.




Dave,
You nailed it. The running off to WSJ, and by proxy Google, means the sub-domain work-around is probably already dead and no one knows it yet.
In an interesting update, the single site crushed by Panda has made a comeback, and in only a little over a month. I can say definitively that I pulled it from the depths of hell. It did take a new platform, a new architecture, and all-new content, but it can be done.
It’s still the only advice I’d give anyone: fix your content, fix your architecture. In essence, present yourself as a different site.
My Two Cents,
Tony
I mentioned in the podcast… and still think it's true, that the sub-domain being treated as separate may only apply to Panda, i.e. a "feature" in M$ parlance, a bug in others'.
@Tony "It did take a new platform, a new architecture, and all-new content, but it can be done." i.e.: perhaps not even a Panda problem to begin with… and the changes fixed what was wrong. To say you got up, you'd have to take the exact same page, make the change, and see it rank again, as with HubPages, where it was the same page moved to a sub-domain.
There is no need for conspiracy theories where Panda is concerned. I think Google has done a pretty good job of explaining what Panda does (it separates the wheat from the chaff according to Google’s preferences).
Other people have tried the sub-domain trick before now and they reported only temporary improvement. If Hubpages doesn’t change the basic page layout their sub-domains may be downgraded by the next Panda release.
On the other hand, if it really is down to the “quality” of the content (unlikely) then only some of their sub-domains will work well after the next Panda release and then what happens to all the low-quality sub-domains?
@Terry,
You may be right that it was not a Panda problem (in fact I would agree with you there), but the data points to the implementation of Panda as the cause of the downfall and the obliteration of SERP positions and traffic for the site.
So, while the site had major flaws to begin with, they went unnoticed by Google until the Panda algo update. It took better content (more informative and benefit-driven) and a new thematic structure to begin its resurrection. 😉
Michael has pretty much nailed it… a new subdomain likely doesn’t *have* a Panda score yet.
Until it starts to pull in some traffic and that triggers Panda and all the other non-real-time spam filtering stuff to run, or gets caught up in another big update.
If the content is still crap, playing games to hide it is a temporary measure at best… and easy enough for Google to close this little loophole.
I am certainly of the mind-set that Michael has stated as well. From what I heard from Google, this may not be a fix and they essentially said, “Give it some time”.
This does imply to me what he's saying, that this may only be a temporary fix. Thus my advice remains the same: don't go running out to start changing things just yet.
I hadn’t thought of the possibility that the hit on HP wasn’t really a Panda issue. Possible, I suppose, but it doesn’t seem likely to me.
I’m sure that now that Paul has let the cat out of the bag, he’ll soon have more to bitch about, when Google slams the door on his fingers.
The story within the story, of course, is how anyone could be stupid enough to broadcast such a thing. I doubt that the possibility of someone using SDs to circumvent poor rankings has escaped Google’s attention. It was bound to get nullified eventually, without HP ever bringing it into the light of day.
But rubbing their nose in it like this, Paul… did that REALLY seem like a good idea to you?
I can't believe anyone in their right mind would spout off to the WSJ or TechCrunch. The subdomains will not isolate you from further versions and passes of Panda. If anything, it will be a temporary fix. Any good and real SEO knows temporary is not the route to take.
—-"And you have been vocal about how Panda has affected you. What about Tumblr? Squidoo? From what I know they weren't heavily affected, and these are comparable UGC sites. The real secret to getting past Panda is simply to work harder at getting the house in order."
You missed his point. Totally. Especially when it comes to Google’s own junk.
—–“It did take a new platform, a new architecture, and all-new content, but it can be done.”
Might as well get a new domain name too.
I think he’s telling Google to approve folders as a strategy.
I don’t think Hubpages needed to broadcast what they were doing to get noticed.
From Google’s perspective, “#17 most affected” probably translates to “17th best Panda success story.”
Someone at Google is watching this thing and doing quality control, and *any* of the top junk peddlers coming back in SERPs is going to get noticed no matter what they do.
I think this is not the whole picture. The issue relates to each subdomain being responsible for good content. If you have good content your subdomain will win out; if you don't, it won't. I think you missed that, author.
Here is the bottom line, author. I took some of my HubPages articles and posted them to a prestigious financial website where I contribute. Before Panda, Google did the moral and right thing and listed my ORIGINAL content higher. Panda ruined this ethical relationship with content producers. I was screwed, to be honest with you. Google is wrong, wrong, wrong.
Hopefully this subdomain correction will allow Google to do the right thing by my content. We shall see won’t we?
Gary has a point.
We’re an online publisher of original content and we’ve seen sites that republish our content rank higher. So how did this happen?
Firstly, we got crushed by Panda, even though we have unique, valuable content. The reason (I guess) is that our site contains a research report store that sells off-the-shelf research. The report descriptions are standard and appear on other vendors' and resellers' sites (so it could be viewed as duplicate content).
So Panda comes along, views duplicate content (products) and then hits the whole site. We've put noindex on the reports now (not the best business strategy perhaps) but still little recovery.
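For anyone wanting to do the same, the usual mechanism is a robots meta tag or an X-Robots-Tag response header on the report pages; a stripped-down sketch (Flask-style, and the /reports/ prefix is just a placeholder):

```python
from flask import Flask, request

app = Flask(__name__)

REPORT_PREFIX = "/reports/"  # hypothetical path for the report-store pages

@app.after_request
def noindex_reports(response):
    """Ask crawlers not to index report pages whose descriptions are
    duplicated across vendor and reseller sites."""
    if request.path.startswith(REPORT_PREFIX):
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```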
For me, ecommerce sites were the one area Panda didn't consider. It just looked at all web pages as content – and duplicate content is bad.
But not if you’re shopping around for a product and want the best price.
We continue to work on more unique and value-added content. Not for SEO purposes, but because it's what we do. Hopefully the Panda hit will wear off over time.
Ummm, there was a change just before Panda that is possibly the problem with aggregated and scraped sites beating the original… you can pretty well assume the "attribution algo update" was a disastrous failure at actually helping the true source. As big as the success of Panda has been, that change was a colossal flop.
"For me, ecommerce sites were the one area Panda didn't consider. It just looked at all web pages as content – and duplicate content is bad."
I was noticing the other day that there were few if any ecom sites in the Sistrix list. I also know a few vendors that take their descriptions from the manufacturer… no problems… if it were, many of the stores on the internet would be Pandaized.
It is my view that it was not just a flop, it was a form of stealing. Google stole my original content and replaced it with my secondary publishing of it. That is disgusting. Google is disgusting unless they fix this.
I went to ebooks since I couldn’t trust Google anymore. No one should trust Google anymore. They have proven themselves untrustworthy.
The biggest problem is that their extensive testing covered two weeks. So, Panda did its thing, HubPages made their changes, HubPages put loads of content on what were essentially brand new domains, hubbers noticed a rise in traffic because new domains were not suppressed by Panda, and HubPages decided to roll out subdomains based on this data. What happens when the next Panda update hits? Let's see!
As a hubber I am wondering why I should continue to use HubPages if they are essentially going to offer me a version of Blogger, except with a 40% take of my earnings.
I fault Google for not being true to their content rules. It will be interesting to see what happens when there are more changes, Oli.
Gary, I'm not sure Google or any search engine is capable of solving the problem you expect them to solve. That your "original copy" was ranked first previously is a matter of luck. If you and your publishing partners aren't using rel=author, how is Google to identify the "original"?
They aren’t stealing from you, they’re indexing (and ranking) a copy that YOU put out on the web.
Does Google suck at attribution? Sure. So do all the other search engines. If you know how to solve that problem, at scale (think a trillion documents), they probably have a job for you.
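To make the attribution point concrete: a syndicated copy can carry rel=author markup and a rel=canonical pointing back at the original. Here's a quick standard-library sketch that checks a page for those declarations (the markup and URLs are just placeholders):

```python
from html.parser import HTMLParser

class RelFinder(HTMLParser):
    """Collect rel=author and rel=canonical declarations from a page."""
    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "").lower()
        if tag in ("a", "link") and rel in ("author", "canonical"):
            self.found.append((tag, rel, attrs.get("href")))

# A syndicated copy pointing back at the (placeholder) original
html = ('<link rel="canonical" href="https://original-site.example/article">'
        '<a rel="author" href="https://original-site.example/about">By the author</a>')
finder = RelFinder()
finder.feed(html)
print(finder.found)
# [('link', 'canonical', 'https://original-site.example/article'),
#  ('a', 'author', 'https://original-site.example/about')]
```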
Has anybody noticed that Quantcast figures are showing no actual gains in HubPages traffic since the first rollout of subdomains? This was doomed to be 'no news' from the very beginning.
They seem to be ranking; I noticed more HubPages near the top, etc.