The BBC reports that Google has banned advertisements for essay-writing services in its advertising policy.
Archive for May, 2007
It seems Google wants to seriously move into the business of shaping the future of its users. Google CEO Eric Schmidt said this week in the UK, at a conference organized by Google: “The goal is to enable Google users to be able to ask the question such as ‘What shall I do tomorrow?’ and ‘What job shall I take?’” [..] “We are very early in the total information we have within Google. The algorithms will get better and we will get better at personalisation,” as reported by The Independent.
The question for Google is how to get people to ask such questions of a computer, and how to make money from users asking them. A question I would really like them to discuss publicly is whether a world in which everyone asks a computer what to do tomorrow is such a nice place to be. I tend to raise my eyebrows when people ask ME what to do tomorrow, because I think they should decide for themselves, or be bored until they understand they should think about it themselves.
Apart from that, there is a huge privacy issue when lots of people ask such questions of the same entity. Google makes clear its data collection is not big enough for its appetite: “We cannot even answer the most basic questions because we don’t know enough about you. That is the most important aspect of Google’s expansion,” according to the FT reporting on the same conference.
Last Friday I attended the Open Net Initiative conference on Internet filtering at Oxford University. The conference came at the moment ONI launched the results of an extensive survey of Internet filtering, with detailed country reports on filtering in 40 countries worldwide. See also EDRI-gram number 5.10.
I had the pleasure of meeting some very interesting people, from a range of different backgrounds. I especially liked talking with Nart Villeneuve, Director of Technical Research of the Citizen Lab, University of Toronto, who is working on some interesting new search engine research tools, and explained ONI’s research methods and results in the opening session.
I agree with John Palfrey that probably the most important question for future debate is that of a ‘best practice’ for Internet filtering. As I see it, that question leaves open the possibility that there should be no filtering by Internet service providers or other mediating parties involved in transporting content over the Internet. In fact this question is the same as the normative question about Internet filtering. The OpenNet Initiative does not fully address this normative question yet. The underlying normative position of ONI (at this moment) seems to be that Internet filtering as it is happening now is a bad thing, because it is not in line with freedom of expression and the free flow of information.
I think the ONI results are of such great importance because they open the floor to a thorough normative debate on Internet filtering. ONI members seemed to disagree in that debate. As Urs Gasser proposes, one of the crucial questions to ask when addressing Internet content regulation is ‘which parties should be committed or obliged to filter certain content’. There is a range of options for parties to be involved: the publisher, the hosting provider, all transport intermediaries, the Internet access provider, software & hardware on the computer accessing the Web (such as the browser and network hardware) and finally the user. The search engine and others that facilitate access to content (such as anyone who links to other material) form another category that could play a role in content filtering.
In my research I will have to address the proper role of the search engine in state-organized Internet filtering. My initial position would be that intermediaries such as ISPs should not filter content, but act as common carriers. Search engines are harder to tackle. In some ways search engines are common carriers: one would hope that they apply their algorithms equally to all the indexed web content (which is not all web content!). But these algorithms are made to discriminate between content. Search engines are content selection tools and as such not neutral to the content they process. That makes it a different story, because it follows that they are already a (soft) content-filtering institution. Luckily I have some more time to think about it. The ONI reports will keep me busy for a while as well.
Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, CV-03-09386-PA (9th Cir. May 15, 2007)
UPDATE: EFF has filed an amicus curiae brief about the ruling of the 9th Circuit and its possible implications for search engine freedom. “If service providers have to worry about potentially crushing liability, it will strongly discourage the development of new tools for online users. In fact, many of the tools we use already would be impacted by this ruling, potentially crippling innovations in search and customization.”
Google has won an interesting but awkward case in the Netherlands, concerning its obligation to remove search results from its index. The plaintiff, a Dutch TV hostess, complained to Google about search results linking to porn sites in response to her name as a search term. She claimed these results were unlawful because they suggested that she appeared naked on these sites, or acted as a Playboy model or sex-cam girl. Google responded to her complaint by apologising for these “unexpected results”, which were later removed from its index.
The plaintiff consequently went to court for a preliminary ruling and demanded that Google be ordered to keep the results unavailable until a judgement on the merits of the case had been issued.
In the Netherlands there are no special rules on the liability of search engines for search results and the information on the websites these point to. The question of such liability of search engines has to be answered by applying the general law of torts. The court states that the duty of care regarding information Google is processing is such that it has to remove results when the information is evidently unlawful. The court concludes this is not the case here.
When determining the responsibility of Google as regards the content of the search results and the content they link to, the court first deals with the question whether Google is obliged to prevent unlawful information from entering its search results. The court concludes that Google has no such responsibility. Its argument is that Google has shown it does not take such preventive action. I am puzzled by this reasoning. How can the fact that you don’t act in a certain way be conclusive for not being obliged to act in that way? Of course there are very good reasons to rule out preventive duties of care for search engines concerning their search results. First of all, they are a special type of intermediary that needs legal space to operate. More fundamentally, it is freedom of expression and information that demands ruling out preventive duties of care.
The court then proceeds by assessing whether Google was obliged to remove the results after being notified of the stated unlawfulness of the results and the information they linked to. As said, the court should consider whether the information was evidently unlawful. The court states that “it was reasonably impossible for Google to come to the conclusion that the information was incorrect. Therefore the information cannot be considered as evidently unlawful”.
That concludes the case, I would say, but the court continues. It concludes that the plaintiff should turn to the owners of the websites that used her name. (The fact that the damage of search engine spam is intricately related to the search engine service goes unnoticed. The spamming sites would return to oblivion if they were removed from the index, and the corresponding damage to the plaintiff would be non-existent.) Last but not least, the court adds that Google removed the results not because of the stated unlawfulness of the results, and not because it is legally responsible for doing so, but because these sites were manipulating the search engine.
The Guardian reports on a patent filing by Google that shows how the company could start to use analysis of gaming behaviour to draw up a database of psychological profiles. Google has commented that the patent is one of a number of recent patent filings and that the company has no plans to actually roll out this technology.
The patent says: “[T]he system may collect information about a user’s game-play behavior. Examples of information that could be useful, particularly in massive multiplayer online RPGs, may be the specific dialogue entered by the users while chatting or interacting with other players/characters within the game. For example, the dialogue could indicate that the player is aggressive, profane, polite, literate, illiterate, influenced by current culture or subculture, etc. Also decisions made by the players may provide more information such as whether the player is a risk taker, risk averse, aggressive, passive, intelligent, follower, leader, etc.”
These characterisations can then be used to serve advertisements more “relevant to the user”.
I went looking for the patent, but haven’t found it yet. I did run into a patent application from another company for the ‘invention’ of using personality types in search engines: “What is therefore needed are methods and apparatus that enable internet searches to be performed with consideration of the personality type and/or qualities of the user who is performing the search when ordering and presenting search results for that user. Unfortunately automated search engines such as Google and Yahoo of the prior art do not currently account for the personality of the searcher.” US patent application 20070106663
Today I found the patent (it was e-mailed to me by Theo Röhle). It describes how advertisements would be targeted to certain ‘types’ of users: “As yet another example, in some systems, an advertiser might specify that its ad is to be served only to a certain type of user, or a user having certain attributes.”
“The game information may be different for different users. Consider, for example, a virtual racing video game used by three (3) users – A, B, and C. Suppose that user A selects an outdoor, dirt, 4×4 course, selects a yellow H2 Hummer, selects a male driver, and drives aggressively during the race. Suppose that user B selects a city race, selects a tuned Toyota Supra in multi-color with a pink base, selects a female driver, and drives in a neutral manner during the race. Finally, suppose that user C selects a World Cup Race track in Madrid, Spain, selects an Audi R8R in multi-color, selects a male driver, and drives in a strategic manner during the race. Given the assumptions in the foregoing example, suppose that Dodge wants to place an advertisement. It may have various alternative ads with different serving constraints or targeting criteria. Suppose further that it has a variable color, with a default value. Thus, the system may show a “Dodge RAM-Tough Truck” ad creative with a yellow truck to user A, a “Dodge Neon Sport” ad creative with a pink car to user B, and a “Dodge Viper” ad creative with a Dodge Viper in a default color to user C. Suppose that a ticket broker wants to advertise tickets for various events. Three ads for three events, each having different serving constraints or targeting criteria, may be tickets for an NFL football game, tickets for a Gwen Steffani concert, and tickets for the US Open Golf Tournament. Thus, the system may show the ad creative for the NFL football game tickets to user A, the ad creative for the Gwen Steffani concert tickets to user B, and the ad creative for tickets for the US Open Golf Tournament to user C.”
It sounds too profitable not to happen.
Yesterday, the 10th of May, Google shareholders rejected a proposal asking Google to change its policies in view of human rights. The proposal came from New York City pension funds; the Google board had advised against it. I don’t think anyone expected the proposal to pass. The board itself holds too many shares for that as well. There has been quite some public support for it, though, for instance from Amnesty International. When such support reaches lots of users, it does force Google to respond. CEO Schmidt did so with the official company lines on why it chose to go into China.
I find this kind of shareholder activism interesting, but I think success through this course of action will be rare: shareholders pushing for immaterial values such as freedom of expression and privacy. We got a little more of that lately in the rejection of the Rupert Murdoch bid for Dow Jones. The rejection was not based on shareholder-value concerns, but on Murdoch’s reputation regarding journalistic freedom and integrity in his media conglomerate. It was the Bancroft family that could make such a decision, because it has control over Dow Jones. I highly doubt that such concerns weigh heavily in many shareholder meetings. Most are in it for the money on principle, aren’t they?