Recently we were working on a new site for a client in the industrial sector. Over the course of a few weeks we chipped away at the site as our launch date approached. During that process, a number of internal pages (top-level pages, sitemap pages, landing pages, and so on) were being built collaboratively by our designers, developers, and writers. These pages were added to the site but never fully published or linked anywhere. In all of our experience building sites, pages like these had no chance of showing up in Google. There was one wildcard we didn’t consider, though, that we can now use to our advantage: Google Chrome, which we now lovingly refer to as the Googlebot.
Eventually we got a call from our client, who explained that they had already received calls inquiring about the availability of a certain product. Our lead developer was as confused as they were.
“That’s impossible. Nobody knows the URL, every visit to the site redirects to a placeholder page, and the new pages aren’t linked anywhere. Our SEO is good, but not good enough to get a site ranked before it’s even launched.”
Then his eyes shot from screen to screen. Noticing that everyone was testing in Chrome, he landed on a compelling new theory.
“Chrome is Googlebot. It makes so much sense.”
As is the case with anything at Google, we can’t say anything for sure. However, by viewing the new pages directly in Chrome, it seems we were constantly feeding Google’s indexing systems data that had never existed on the site before. Because the pages were already optimized, a number of niche keywords were ranking at the top of search engine results almost overnight.
As we all know, Google’s entire empire has been built on top of Google Search. Search is the company’s bread and butter, and nearly every one of its products ties into search in one way or another. So why would Google bother entering the fray of the browser wars alongside giants like Internet Explorer, Firefox, and…well, that’s about it (sorry, Opera)?
Google engineered a fast, light, and reliable web browser from the ground up and made it freely available to the general public. But why? After our “accidental” experiment, we think we know: Chrome is a prettied-up version of Google’s search crawler, made ready for the masses to help feed Google all the new data it needs to organize the information on the internet (which is a pretty big place), unless users opt out.
Is a “Browser Spider” Feasible? Probably
Take a look at one of Google’s patents to see the motive behind the search giant’s big ideas.
If Google can develop a car that drives itself, then bundling “Googlebot” with Chrome is hardly a stretch, and without going into the technical details of extracting data from a website’s many ranking features, it would be hugely beneficial for Google to do so. I searched Google’s patents at the USPTO for the word “browser” and found some interesting details in a patent filed in 2010 for a “Full-Text and Image Database.”
Warning: Legalese Incoming!
This patent outlines “a system [that] generates a model based on feature data relating to different features of a link from a linking document to a linked document and user behavior data relating to navigational actions associated with the link. The system also assigns a rank to a document based on the model.” In non-legalese, the abstract describes a system that builds a model and a ranking by gathering data both from linked documents and from user behavior.
Here are a few snippets of what I found outlined in the patent:
“The user behavior data might be obtained from a web browser”
“For example, the web browser or browser assistant may record data concerning the documents accessed by the user and the links within the documents (if any) the user selected”
“Model applying unit 420 may store the document ranks so that when the documents are later identified as relevant to a search query by a search engine, such as search engine 225, the ranks of the documents may be quickly determined. Since links periodically appear and disappear and user behavior data is constantly changing, model applying unit 420 may periodically update the weights assigned to the links and, thus, the ranks of the documents.”
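To make the patent’s description concrete, here is a minimal toy sketch of that kind of model. This is purely illustrative and in no way Google’s actual implementation; the feature names (`anchor_relevance`, `click_rate`) and the 50/50 weighting are our own assumptions, standing in for the link features and user behavior data the patent describes.

```python
# Toy model: combine per-link feature data with user behavior data into a
# link weight, then rank each linked document by the sum of its inbound
# link weights. All names and weights here are illustrative assumptions.

def link_weight(features: dict, behavior: dict) -> float:
    """Blend link features with observed user behavior into one weight."""
    feature_score = 0.5 * features.get("anchor_relevance", 0.0)
    behavior_score = 0.5 * behavior.get("click_rate", 0.0)
    return feature_score + behavior_score

def rank_documents(links: list) -> dict:
    """Sum the weights of all inbound links for each linked document."""
    ranks = {}
    for link in links:
        w = link_weight(link["features"], link["behavior"])
        ranks[link["target"]] = ranks.get(link["target"], 0.0) + w
    return ranks

links = [
    {"target": "/product-a", "features": {"anchor_relevance": 0.9},
     "behavior": {"click_rate": 0.6}},
    {"target": "/product-a", "features": {"anchor_relevance": 0.4},
     "behavior": {"click_rate": 0.1}},
    {"target": "/about", "features": {"anchor_relevance": 0.2},
     "behavior": {"click_rate": 0.05}},
]
ranks = rank_documents(links)
```

Because user behavior data keeps changing, a system like this would periodically recompute the weights, and therefore the document ranks, just as the patent’s “model applying unit 420” is described as doing.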
The patent, filed in 2010, gives us a very clear insight into Google’s vision for Google Chrome. These principles appear to already be in practice, and we’ll update this post as we learn more.
What does it mean for SEO?
So far, what we consider an interesting (if invasive and potentially creepy) development with Chrome may have gotten our site-to-be crawled before we ever submitted it for indexing through webmaster tools or optimized its link structure. The rankings only improved from there, and we can say one thing for certain: when you want exposure for your site, Google products are your friend.