Given the madness that has gripped the SEO world over the last few months, I felt it was time I started publishing some highlights and insights from the monthly Google blog posts on search quality updates. Up until now I have generally just posted them over on the SEO Training Dojo forums.
So let’s give it a go shall we?
What can we learn from the latest updates?
After the recent SQ update announcement, a lot of the chatter we’ve seen tries (as usual) to make the connection between major updates, such as the recent Penguin one, and these weather reports. That isn’t generally the case, though. For the most part there really isn’t anything one could relate beyond this one:
Keyword stuffing classifier improvement. [project codename "Spam"] We have classifiers designed to detect when a website is keyword stuffing. This change made the keyword stuffing classifier better.
We do know this was a target. Was it really part of Penguin, or just another secondary update? We’ll never know. I personally believe that all of these sit outside of the officially named Penguin algo updates.
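For a sense of what a crude version of such a classifier might look at, here’s a sketch. To be clear, this is purely illustrative; Google’s actual classifier is far more sophisticated and unknown to us, and the 15% threshold below is a made-up number of my own.

```python
from collections import Counter
import re

def keyword_density(text: str) -> dict:
    """Naive term-frequency ratios for a page's visible text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

def looks_stuffed(text: str, threshold: float = 0.15) -> bool:
    """Flag a page when any single substantive term dominates the copy.

    The 0.15 threshold is an illustrative assumption, not anything
    Google has published.
    """
    return any(
        ratio >= threshold
        for word, ratio in keyword_density(text).items()
        if len(word) > 3  # skip short stop-word-like terms
    )
```

Even this toy version shows why "improving the classifier" is an ongoing job: density alone flags legitimate pages and misses stuffing spread across synonyms, which is presumably the kind of thing these refinements chip away at.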
I am not going to get into all of the 52 stated changes; only the ones that seem highly relevant. As always, this is just a personal perspective; we can never truly know what ol’ Googly is up to. M’kay? Let’s roll!
Anchors bug fix. [launch codename "Organochloride", project codename "Anchors"] This change fixed a bug related to our handling of anchors.
I have seen a few folks intimate that this was Penguin related. I personally am leery of that stance and more inclined to believe that this is a low-end change.
Local is one of the more active areas over the last 6 months of search quality updates, and this month was no different. Here are the interesting ones:
Country identification for webpages. [launch codename "sudoku"] Location is an important signal we use to surface content more relevant to a particular country. For a while we’ve had systems designed to detect when a website, subdomain, or directory is relevant to a set of countries. This change extends the granularity of those systems to the page level for sites that host user generated content, meaning that some pages on a particular site can be considered relevant to France, while others might be considered relevant to Spain.
I felt this one was interesting because it addresses locality at the page level. The other telling aspect is that it seems to be related to UGC sites, social being the type that comes to mind most readily.
More local sites from organizations. [project codename "ImpOrgMap2"] This change makes it more likely you’ll find an organization website from your country (e.g. mexico.cnn.com for Mexico rather than cnn.com).
This one’s interesting, and we’ll have to see how it plays out in the wash. We see a lot of strange listings in localized SERPs that miss the mark. Ultimately though, I would suggest ensuring that you have a strong directory or sub-domain structure on your website, so that if they do ultimately get it right, you’re in a position to capitalize on it.
Improvements to local navigational searches. [launch codename "onebar-l"] For searches that include location terms, e.g. [dunston mint seattle] or [Vaso Azzurro Restaurant 94043], we are more likely to rank the local navigational homepages in the top position, even in cases where the navigational page does not mention the location.
Quite interesting given some of the other changes in local searches over the last few months. It speaks not only to query classification, but also to how the results come back. A site can be given a location categorization instead of just a page (i.e., ranking a home page over a contact page, for example).
More comprehensive predictions for local queries. [project codename "Autocomplete"] This change improves the comprehensiveness of autocomplete predictions by expanding coverage for long-tail U.S. local search queries such as addresses or small businesses.
Nothing massive here, but again it reinforces the ongoing attention to local that we’ve seen a lot of this year in the stated search quality changes. The other aspect I like is that it’s a bit of a crossover between the autocomplete system and query classification, which we’ve seen a fair bit of in this round of updates.
Smoother ranking changes for fresh results. [launch codename "sep", project codename "Freshness"] We want to help you find the freshest results, particularly for searches with important new web content, such as breaking news topics. We try to promote content that appears to be fresh. This change applies a more granular classifier, leading to more nuanced changes in ranking based on freshness.
This is an area we’ve seen a lot of over the various search quality updates this year. In fact, local and freshness are the predominant elements. Obviously this one speaks to more temporally sensitive query spaces such as news and the like.
Improvement in a freshness signal. [launch codename "citron", project codename "Freshness"] This change is a minor improvement to one of the freshness signals which helps to better identify fresh documents.
This falls under the ‘not enough information’ department, but it’s worth having here to establish the pattern of interest in fresh results (anyone remember ‘real time’ search?).
No freshness boost for low-quality content. [launch codename “NoRot”, project codename “Freshness”] We have modified a classifier we use to promote fresh content to exclude fresh content identified as particularly low-quality.
Freshness, often known in SEO circles as ‘query deserves freshness’ (QDF), is when a web page gets a boost because it’s new and potentially more relevant. In this case it seems to be a bit of a combination of the Panda-like updates (addressing low-quality pages) and QDF. Potentially, one could circumvent Panda (which runs on given time frames) by posting frequently. This change seems to try and address that issue.
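One toy way to picture the ‘NoRot’ idea: a freshness boost that decays with the age of a document, but is simply withheld when a quality score falls below some floor. All of the numbers here are my own illustrative assumptions, not anything Google has published.

```python
def freshness_boost(base_score: float, age_hours: float,
                    quality: float, min_quality: float = 0.4) -> float:
    """Apply a decaying freshness boost, skipping low-quality documents.

    Illustrative sketch only: the half-life of 24 hours, the 50% maximum
    boost, and the 0.4 quality floor are all made-up values.
    """
    if quality < min_quality:
        return base_score  # 'NoRot': no freshness boost for low quality
    boost = 1.0 + 0.5 * (2 ** (-age_hours / 24.0))  # halves every 24h
    return base_score * boost
```

The point of the sketch is the gate, not the curve: a page can no longer buy relevance purely by being new, which is exactly the posting-frequency loophole described above.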
More domain diversity. [launch codename "Horde", project codename "Domain Crowding"] Sometimes search returns too many results from the same domain. This change helps surface content from a more diverse set of domains.
This one addresses a long-standing issue. The term ‘domain crowding’ (also called ‘host crowding’) refers to the rule that “Google will show up to two results from each hostname/subdomain of a domain name”. Matt Cutts (of Google) had talked about this back in 2007 (there’s some coverage on SEL as well). It’s most commonly seen when various directories or sub-domains appear in the search results. Those benefiting from this will certainly want to be watching their query spaces.
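Mechanically, that ‘up to two per hostname’ rule is easy to picture. Here’s a minimal sketch of such a filter applied to a ranked list of URLs (purely illustrative; how Google actually implements this is unknown):

```python
from urllib.parse import urlparse

def limit_host_crowding(results: list[str], per_host: int = 2) -> list[str]:
    """Keep at most `per_host` results from each hostname,
    preserving the original ranking order."""
    seen: dict[str, int] = {}
    kept = []
    for url in results:
        host = urlparse(url).netloc
        seen[host] = seen.get(host, 0) + 1
        if seen[host] <= per_host:
            kept.append(url)
    return kept
```

Note that a subdomain counts as its own hostname here, which is precisely why sites with lots of subdomains could crowd a SERP, and why this change is worth watching.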
More efficient generation of alternative titles. [launch codename "HalfMarathon"] We use a variety of signals to generate titles in search results. This change makes the process more efficient, saving tremendous CPU resources without degrading quality.
This one is more about infrastructure, but worth noting. For a few years now Google has often created its own TITLE for a listing, instead of using the one in the code. This simply reinforces that, and it also echoes the 2011 ‘need for speed’ that saw Google seeking to streamline performance.
More concise and/or informative titles. [launch codename "kebmo"] We look at a number of factors when deciding what to show for the title of a search result. This change means you’ll find more informative titles and/or more concise titles with the same information.
Again with the TITLE elements. This one seems to be looking at possibly shortening longer ones? Changing them to be more concise? Hard to say, but certainly if your CTR is dropping because Google is changing yours, you might split-test a few more concise versions to get them to show yours.
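If you do want to experiment, a simple word-boundary trim is one way to generate more concise candidate titles to test. The 60-character budget below is a common SEO rule of thumb, not a number Google has confirmed:

```python
def concise_title(title: str, max_chars: int = 60) -> str:
    """Trim a title to a character budget at a word boundary.

    The 60-character budget is an assumed rule of thumb for
    illustration, not a confirmed Google limit.
    """
    if len(title) <= max_chars:
        return title
    cut = title[:max_chars].rsplit(" ", 1)[0]  # drop any partial last word
    return cut.rstrip(" -|,") + "…"
```

Testing variants like this against your current titles at least tells you whether conciseness, rather than some other factor, is what’s moving your CTR.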
Improvements to how search terms are scored in ranking. [launch codename "Bi02sw41"] One of the most fundamental signals used in search is whether and how your search terms appear on the pages you’re searching. This change improves the way those terms are scored.
I almost didn’t include this one, as it really doesn’t say anything. If I had to take a guess, it would be that the reliance on the usage (or over-use) of the searched terms is in play, and that terms related to the core concept become more valued. Again, hard to say. More on query classification here.
Better query interpretation. This launch helps us better interpret the likely intention of your search query as suggested by your last few searches.
Again, this is about query classification, and this time it’s looking at what is known as reformulation of a query, with an element of personalization of course. This one isn’t new, but it seems they’re likely using the query data more within a session.
Also not likely a big deal to many, but I am including it for historical record.
Better ranking of expanded sitelinks. [project codename "Megasitelinks"] This change improves the ranking of megasitelinks by providing a minimum score for the sitelink based on a score for the same URL used in general ranking.
Sitelinks data refresh. [launch codename "Saralee-76"] Sitelinks (the links that appear beneath some search results and link deeper into the site) are generated in part by an offline process that analyzes site structure and other data to determine the most relevant links to show users. We’ve recently updated the data through our offline process. These updates happen frequently (on the order of weeks).
I do like the bits about choosing sitelinks based more on the authority of the pages within a site. I also like the name ‘megasitelinks’, because I never know what to call them (as opposed to the smaller in-line sitelinks).
There were a whack of spelling-type changes, but those really aren’t a huge deal, so I left them out. This one though was fun because of the recent video of the meeting where they dealt with it:
More spell corrections for long queries. [launch codename "caterpillar_new", project codename "Spelling"] We rolled out a change making it more likely that your query will get a spell correction even if it’s longer than ten terms. You can watch uncut footage of when we decided to launch this from our past blog post.
There were also some interesting (to me at least) changes in infrastructure, such as index tiers and a 15% increase in index size, but most of you couldn’t care less. Be sure to read the entire post if your inner geek wants more.
Again, none of these on its own is big news. Resist the urge to treat them as such. What’s more important is to keep watching the weather reports and look at the updates in aggregate. That’s where the gold is. Stay abreast of the ongoing evolution. And of course, be sure to tune in each month, as we’ll be looking to add more context to these over time.