Last week the New York Times Freakonomics blog featured a Q&A between readers and Google’s chief economist Hal Varian. One of the questions was as follows:
“Q: How can we explain the fairly entrenched position of Google, even though the differences in search algorithms are now only recognizable at the margins? Is there some hidden network effect that makes it better for all of us to use the same search engine?”
Google’s dominance in a significant part of Europe is striking. In the Netherlands, Google’s share exceeds 95% (of searches, not of searchers, although I do not expect a big difference between the two). The question is therefore very relevant, and dominance in search has been a recurring issue in my research. Unfortunately, but understandably, Varian evades the question. The European Commission still needs to approve the Doubleclick merger, and the issue is altogether sensitive. Varian’s answer is as follows (he posted a lengthier version on the official Google blog):
“A: The traditional forces that support market entrenchment, such as network effects, scale economies, and switching costs, don’t really apply to Google. To explain Google’s success, you have to go back to a much older economics concept: learning by doing. Google has been doing Web search for nearly 10 years, so it’s not surprising that we do it better than our competitors. And we’re working very hard to keep it that way!”
I am sure he would not be sitting 12 feet away from Eric Schmidt if he did not have a better answer to this question. Eszter Hargittai was especially unsatisfied with the part about switching costs. She posted a lengthy response on Crooked Timber, discussing the possible lock-in of Google users on the basis of her extensive experimental academic research on internet use.
In the replies, Daniel Feygin notes that search volume reinforces search quality and calls this a network effect. I am not sure one should call this a network effect (I think not), but it is certainly true that major search engines can use past searches to increase search quality. One can look into the Web search privacy discussion to discover the value of search engine logs for search engine providers, and into the click fraud discussion to see the value of these data for protecting and securing the advertisement platforms.
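The feedback loop Feygin points to can be made concrete with a toy sketch: a ranker that blends a static relevance score with observed clicks, so that an engine with more query traffic learns faster. Everything here — the class name, the scoring formula, the numbers — is an illustrative assumption, not a description of Google’s actual ranking.

```python
# Toy sketch of the volume-reinforces-quality loop: past click logs
# feed back into ranking. All names and weights are hypothetical.
from collections import defaultdict

class ClickFeedbackRanker:
    def __init__(self, base_scores):
        # query -> {doc: static relevance score}
        self.base_scores = base_scores
        # query -> doc -> observed click count
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, query, doc):
        self.clicks[query][doc] += 1

    def rank(self, query):
        # Blend the static score with the share of clicks the doc received.
        docs = self.base_scores[query]
        total = sum(self.clicks[query].values()) or 1
        return sorted(docs,
                      key=lambda d: docs[d] + self.clicks[query][d] / total,
                      reverse=True)

ranker = ClickFeedbackRanker({"jaguar": {"animal": 0.50, "car": 0.45}})
print(ranker.rank("jaguar"))   # static scores alone favour "animal"

for _ in range(5):             # users keep clicking the "car" result
    ranker.record_click("jaguar", "car")
print(ranker.rank("jaguar"))   # click volume now favours "car"
```

An engine with ten times the traffic fills in these click counts ten times faster, which is why one might hesitate to call it a network effect: the advantage accrues through the provider’s data, not directly between users.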
To me it seems that Varian also implicitly refers to search logs and other data sources, and to Google’s ability to learn from them, when he speaks about Google’s secret sauce:
“[...]we have better recipes. And we are continuously improving those recipes precisely because we know the competition is only a click away.”
And later on in his answers he recommends that a young person asking for career advice “take lots of courses about how to manipulate and analyze data: databases, machine learning, econometrics, statistics, visualization, and so on.” Google has lots and lots and lots of data: 10 years of increasing dominance in search without any significant deletion of user data, multiple copies of the Web, the best index of the Web in the world, the broadest set of online advertisers and their preferences. Their secret sauce is to use these data in every relevant and possible way and, of course, to innovate frantically and secretly; in Varian’s words, to learn from it.
A data source that might not be discussed in the context of the reasons for Google’s dominance is input on spam and on illegal or harmful references in the index. I would suggest that, because of its popularity, Google is the first search engine to be addressed in cases of spam and illegal or harmful content. Google has shown that it has sufficient staff and resources to deal with these notices and possible court cases, and consequently shows a relatively nice and clean index in response to user queries. When the spam or the illegal or harmful material is not removed from the Web itself, search engines that did not receive a notice, because they are harmlessly unpopular, still contain the references. Newcomers have to start from scratch. I would not be surprised if Google, and maybe other major search engines, thus profit from obligations and requests to remove unlawful references.
I found one other comment, by Kamal Jain, interesting. (He seems to be working for Microsoft.) He states that because of the absence of price competition, where prices would normalize the differences in perceived qualities, a small perceived difference in the quality of search engines can be magnified. Paradoxically, that seems to suggest that the lack of switching costs works in favour of Google. I suggest the following solution: Microsoft pays its users to get people to use its (perceived) inferior Web search product. This does not have to be a joke. The reward can be relative to the price users pay in privacy/control over their user data.
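Jain’s magnification argument, and the payment idea, can be illustrated with a standard multinomial logit choice model: when both products are free, a small quality edge translates into a lopsided market share, and a subsidy acts like a negative price that offsets the gap. The quality numbers and the noise parameter below are purely illustrative assumptions.

```python
# Toy logit model of Jain's point: with no prices to normalize quality
# differences, a small perceived edge yields a lopsided share.
# All utilities and the noise level are hypothetical.
import math

def shares(utilities, noise=0.02):
    # Multinomial logit choice: share_i is proportional to exp(u_i / noise).
    weights = [math.exp(u / noise) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

q_google, q_rival = 1.00, 0.95      # a 5% perceived quality edge

print(shares([q_google, q_rival]))  # the small edge dominates the market

# A payment to users (in money, or in privacy/control they keep)
# enters utility like a negative price and can offset the gap:
subsidy = 0.05
print(shares([q_google, q_rival + subsidy]))  # shares split evenly
```

With low choice noise the 5% quality edge already captures well over 90% of the market, which is the magnification Jain describes; the subsidy restores a 50/50 split without any change in perceived search quality.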