Today we have an interesting case of a major corporation/site that may or may not have made an inadvertent mistake. To me? It’s an odd move. It seems they’ve all but BLOCKED EVERYTHING (via robots.txt) in a downloads subdirectory of the site. This, of course, makes it more than a bit tricky to find support documents, at least via Google search. Some details:
Then some others got into the mix
Then even a few of my community members got into the act (in the chat room), including this classic from one nameless member:
“Oh, that’s my favorite. I did the same with robots.txt once. Then… surprisingly, the site was out of the index… and then I fixed it and managed to tell the owner that I did a good job fixing the ‘issue’.”
Oddly, though apparently blocked, we can still find it like so. And there are a bunch of results for that sub-domain, actually. Why is that? It’s because robots.txt blocks crawling, not indexing: there are links pointing to those pages and no other relevant results for that query, so Google shows the link, but no description.
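If you want to check this sort of thing yourself, here’s a minimal sketch using Python’s standard-library `urllib.robotparser`. The rules and the `example.com` URLs are hypothetical stand-ins (not Samsung’s actual file); the point is that a blanket `Disallow` on a directory stops crawling of everything under it, while URLs elsewhere on the site remain fetchable:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules, mimicking a blanket block on a downloads directory
rules = """User-agent: *
Disallow: /downloads/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Anything under /downloads/ is off-limits to compliant crawlers...
print(rp.can_fetch("*", "https://example.com/downloads/manual.pdf"))  # False

# ...while the rest of the site can still be crawled normally
print(rp.can_fetch("*", "https://example.com/support/"))  # True
```

Note this only tells you whether a crawler may *fetch* the page. Google can still index a blocked URL from external links, which is exactly why those description-less results show up.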
That’s another story tho…
What gives Samsung?
To me this is a curious move, as letting these pages be indexed would not only undoubtedly send more traffic but also create a better user experience. Are they worried about ‘no-click searching’? Seems unlikely. Concerns about duplicate content? I couldn’t see how while running around the site.
I was able to find it using the site’s own search. Does this mean it’s some type of ploy to get people using site search rather than finding things via Google? Also unlikely, and it makes little sense.
A mistake? Did the people in charge of the site simply forget they had blocked everything? Entirely possible and it’s something you might want to ensure is in every SEO site audit (and run audits quarterly). I know it’s in our audits.
So, barring us hearing from the Samsung peeps, what do you make of it?
UPDATE: Harith also noted the Times results, and you can see why from their robots.txt as well: http://www.thetimes.co.uk/robots.txt – not nearly as confusing, IMO; that one is on purpose.