Archive for the 'governance' Category

The French Parliament has passed "three strikes" provisions providing for the termination of file sharers' Internet access.
News.com reports that Wales is giving up on the Wikia search engine project, which was supposed to build a search engine based on wiki principles but never really managed to implement those principles for an environment as dynamic as web search. Luckily, there are a number of open search projects that are alive and experimenting with ways to provide search in a more open and transparent way. See, for instance, YaCy.
Last week, Google announced it will start to offer what it calls interest-based advertising through its network of AdSense partners and on YouTube. With the move, Google taps further into its unequaled database of end-users' Web behavioral data, aiming to increase the economic value of the advertising space for its AdSense partners and to monetize traffic on YouTube. The use of the database for YouTube is perhaps the least remarkable part, considering Google's difficulties making money on the leading global video platform. Some of the features of the program are remarkable and positive from the end-user's perspective, but it is important to acknowledge their limitations.
Relation with acquisition of DoubleClick
The move is partly a result of Google's acquisition of DoubleClick, one of the biggest players in online advertising, which has used behavioral targeting for many years. The new service seems to use some of DoubleClick's technology, including the cookie that is used to track end-user behavior. Google has been less clear about the data collection architecture. Does the use of one cookie for tracking imply that the underlying databases of click-streams on the Google AdSense network and on DoubleClick customers' sites have been integrated, or are ready to be integrated?
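To make that integration question concrete, here is a toy sketch, entirely my own and not Google's or DoubleClick's actual architecture, of why a single shared tracking cookie makes joining click-streams from different ad networks trivial:

```python
# Illustrative sketch only: a toy model of how one tracking cookie could
# join click-streams from different ad networks into a single profile.
# All names (profiles, log_request) are hypothetical.
from collections import defaultdict

# One profile per tracking-cookie ID, regardless of which network saw the hit.
profiles = defaultdict(list)

def log_request(network: str, referer: str, cookie_id: str) -> None:
    """Record a page view attributed to a cookie ID.

    If AdSense and DoubleClick requests carry the same cookie, their
    click-streams land in the same profile; "integration" is then just
    a matter of keying both logs on that ID.
    """
    profiles[cookie_id].append((network, referer))

log_request("adsense", "http://news.example.com/article", "abc123")
log_request("doubleclick", "http://shop.example.com/cameras", "abc123")

print(profiles["abc123"])
# [('adsense', 'http://news.example.com/article'),
#  ('doubleclick', 'http://shop.example.com/cameras')]
```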
Users in control
Google's interest-based advertising service has been praised because it offers end-users access to and control over their profiles, and offers an opt-out. True, this is a remarkable move, as no competitor in behavioral targeting was doing this yet. Most competitors do not place as much emphasis on their relationship with end-users as Google does. By putting users in control, Google strikes a new balance between the interests of advertisers and content producers on the one hand, and end-users on the other. It will be interesting to see whether DoubleClick will make a similar move towards end-users.
Still, I am skeptical about how substantial these controls really are. First, end-users only get access to the tip of the iceberg of the technological and behavioral data-processing architecture. Consider this quote from Search Engine Land about a Q&A with Google:
[C]an an advertiser pass along a specific ad to a specific user? For example, can I show an ad for the Sony HDR-XR200V if this user added the Sony HDR-XR200V to their shopping cart on my site but did not check out? Bender said yes, but ultimately it is up to the advertiser how specific they want to get with those ads.
That means that advertisers have more control over targeting than end-users do. I would be able to access and control my interest categories, such as the category "Video Players & Recorders". Advertisers and e-commerce sites that use the program can reach me through much more granular controls facilitated by Google. To some extent, the control and transparency are merely a façade, behind which a sophisticated data-processing architecture, opaque to the end-user, is doing the real work.
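A hypothetical sketch of that asymmetry; the category and audience names below are invented for illustration, not Google's actual data structures:

```python
# What the end-user sees and can edit: a handful of coarse interest categories.
user_visible_profile = {"interests": ["Video Players & Recorders"]}

# What the advertiser can work with: fine-grained remarketing audiences,
# down to a single abandoned shopping-cart item.
advertiser_audiences = {
    "cart_abandoners_sony_hdr_xr200v": ["cookie:abc123"],
}

# Removing the user-visible category does not touch the remarketing list:
# the granular advertiser-side targeting lives in a different layer.
user_visible_profile["interests"].remove("Video Players & Recorders")
assert "cookie:abc123" in advertiser_audiences["cart_abandoners_sony_hdr_xr200v"]
```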
Opting out – of what?
Of course, there is the option of opting out through a special cookie, and Google has designed (with the help of the EFF) a browser plug-in to ensure that opt-outs are persistent for end-users who regularly delete their cookies. An opt-in model is not considered economically feasible. I would not be surprised if research showed that opt-out rates end up at roughly the same low level that opt-in rates would. The large majority of end-users will simply not notice the targeting based on their browsing at all. You can make as many explanatory videos as you want; there is a limit to the number of people you can reach if you do not force them to listen before subjecting them to a certain treatment.
Apart from the many shades of gray between an opt-in and an opt-out, we should ask ourselves what the offered opt-out really means. Does it mean that Google stops targeting ads based on a profile of end-users' interests derived from their navigational history? Yes, it does. Does it mean that Google will stop collecting those same click streams? No, I do not think so. These click streams will still end up in Google's database (albeit without a unique cookie ID). Google will still show ads, and it will still need logs for its AdSense accounting, click fraud prevention, service management and research. In addition, it is hard to imagine opting out of Google's immense network of services in a way that does not allow these logs to be correlated with individual end-users. In other words, the opt-out only touches the tip of the iceberg of the data processing that is taking place.
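Here is a minimal sketch, under my own assumptions about the architecture, of why an opt-out cookie can stop targeting without stopping collection; all function and field names are hypothetical:

```python
import datetime

request_log = []  # stands in for the ad network's server logs

def serve_ad(url: str, cookies: dict) -> str:
    opted_out = cookies.get("id") == "OPT_OUT"  # the special opt-out value

    # Logging happens either way: accounting, click-fraud prevention and
    # research still need these records. Only the pseudonymous profile key
    # is dropped for opted-out users.
    request_log.append({
        "time": datetime.datetime.utcnow().isoformat(),
        "url": url,
        "cookie_id": None if opted_out else cookies.get("id"),
    })

    if opted_out:
        return contextual_ad(url)             # ad based on the page alone
    return interest_based_ad(cookies["id"])   # ad based on the profile

def contextual_ad(url: str) -> str:
    return f"ad matching the content of {url}"

def interest_based_ad(cookie_id: str) -> str:
    return f"ad matching profile {cookie_id}"

print(serve_ad("http://news.example.com", {"id": "OPT_OUT"}))
print(len(request_log))  # 1 -- the request was still recorded
```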
The International Centre for the Study of Radicalisation and Political Violence (ICSR) released an interesting policy report, 'Countering Online Radicalisation'. The report critically examines negative measures such as filtering, hiding and removal of material, addresses freedom of expression concerns, and proposes a number of new positive measures to make the Internet less attractive as a platform for extremism and radicalisation.
Interestingly, the section in the report on negative measures contains a subsection on the strategy of hiding content on the Internet through the removal of material from search engines and the deployment of SEO strategies:
In general, the various tools that have been deployed by governments in recent years can be grouped into three categories: removing content from the web; restricting users’ access and controlling the exchange of information (filtering); and manipulating search engine results, so that undesirable content becomes more difficult to find (hiding).
The report fails to mention the highly relevant co-regulatory frameworks in Germany and France that do precisely that. It does refer to China's targeting of search engines and notes that:
Though technically feasible, it is highly unlikely that Western governments would consider pursuing this course of action.
As governments and third parties, also in the Western world, increasingly use the strategy of hiding content by targeting search engines, it is unfortunate that the researchers did not develop their concerns in more detail. The deployment of SEO by governments to reduce the prominence of online extremist material is, in my opinion, both problematic and rather hypothetical.
The report is less than enthusiastic about the use of any of these strategies, noting that removal of content amounts to fighting the symptoms rather than the cause, and pointing to the negative externalities of such measures, the technical imperfection of filtering, freedom of expression concerns, and political controversy within certain communities. It does recommend that law enforcement strategically target illegal material for removal, while focusing on the perpetrators and not the material.
Above all, the report proposes a number of interesting positive measures that could help make the Internet less attractive to extremists, namely empowering online communities, reducing the appeal of extremist material by strengthening media literacy, and promoting positive messages. These proposals are sympathetic, but I feel ambivalent about the proposal to strengthen the role of end-users in regulating content. On the one hand, user empowerment is what the Internet and many successful online services are about. On the other hand, community empowerment might lead to the over-empowerment of ultra-sensitive users who are not part of the community but are merely active in restricting others in their online communications. Most user-driven sites are far from homogeneous, and that is a good thing. Promoting user empowerment should go hand in hand with promoting tolerance.
The UK Government has finished its consultation on P2P file sharing and appropriate responses. I was struck by a quote from Audible Magic, a controversial P2P monitoring and filtering technology company. About the accuracy of its tracking technology, it states:
Audible Magic’s technology has a 99% positive identification rate and zero false positives. This means that better than 99 % of actual copyrighted content is identified correctly as copyrighted content. With zero false positives, unknown content is not identified as copyrighted content when it is not.
By "actual copyrighted content", Audible Magic means the set of content registered by its clients. Its definition of a false positive is content that is not part of that set but is identified as such. In other words, it is the rights holder industry, not copyright owners as such, that is protected in this model. (I make music myself, which is copyright-protected and shared freely on the Web.) In Audible Magic's world, copyright-protected works are works that come from a certain source. (This excludes me and many others.) No limitations, exceptions or fair use apply. In fact, that does make it a lot easier to come up with a technological solution for drawing the line between legitimate and illegitimate file sharing.
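A toy model makes the definitional point clear. The fingerprints and catalog below are invented, and Audible Magic's real system uses acoustic fingerprinting rather than exact matching; this is only a sketch of the logic:

```python
# "Actual copyrighted content" = whatever the vendor's clients registered.
client_catalog = {"fingerprint_of_major_label_track"}

def classify(fingerprint: str) -> str:
    # A "false positive" in the vendor's sense is flagging a fingerprint
    # that is NOT in the catalog. Copyright status outside the catalog is
    # never tested at all.
    return "copyrighted" if fingerprint in client_catalog else "unknown"

# My own freely shared music is copyright-protected too, but since it was
# never registered by a client, the system simply calls it "unknown".
print(classify("fingerprint_of_my_freely_shared_song"))  # unknown
print(classify("fingerprint_of_major_label_track"))      # copyrighted
```

Relative to that catalog, "zero false positives" is almost true by construction; it says nothing about how the technology treats the rest of the copyrighted world.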
James Love of Knowledge Ecology International just posted some details on the secret, ongoing ACTA negotiations. It looks like ACTA is trying to be the forum for many things that failed at the national and European level due to a lack of democratic support. James Love concludes:
These are only a few elements of the negotiation, and the outline suggests a much larger agreement. These proposals are formally available to cleared corporate lobbyists and informally distributed to corporate lawyers and lobbyists in Europe, Japan and the U.S. They are inexcusably secret from the U.S. [and world] public.
In half an hour, the Berkman Center is hosting a talk on Child Safety Online. There is a live webcast, and the video will be made available on their website. The Berkman Center recently finished a big study on online child safety (it participated in the Internet Safety Technical Task Force). The speakers include John Palfrey, danah boyd and Dena T. Sacco.
Earlier today, the European Parliament adopted a recommendation to the Council (here is the report) in the context of child abuse, exploitation and pornography. The recommendation calls for new measures that would affect criminal liability on the Internet and the liability of online services. It asks for the:
criminalisation of providers of paedophile chat rooms or Internet paedophile fora
measures to ensure that the Member States, in the context of a comprehensive strategy of international diplomatic, administrative and law enforcement cooperation, take appropriate steps to have illegal child abuse materials taken offline at source, thereby giving victims maximum protection, and work with Internet providers to disable websites which are used to commit, or to advertise the possibility of committing, offences established in accordance with the Framework Decision;
allowing the national enforcement agencies to require Internet providers to block access to websites which are used to commit, or to advertise the possibility of committing, offences established in accordance with the Framework Decision and, if they fail to do so, to require the deletion of the registered domain names which are used for those purposes;
And finally, IAPP reports on the real-life implications of (alleged) criminal liability of online service providers. Peter Fleischer, Google's global privacy counsel, will appear in an Italian court this week on criminal charges of defamation and failure to exercise control over personal data.
The U.S. Supreme Court has denied certiorari in the famous COPA case. This settles the case after years of ping-ponging between different courts. Will it settle the attempts of governments to pass laws or facilitate policies that are overly suppressive? Will it stop governments from asking ISPs to voluntarily filter their networks? I am afraid not. It's like the Total Information Awareness program: the program was formally abolished after public outcry, but its activities stayed.
Yahoo has announced that, after reviewing its retention policies for user data, it has decided to start anonymizing data after 90 days. The last block of the IP address and the cookie ID will be erased.
It is important to note that even after this measure the data will remain very sensitive, and the process can probably be reversed because of the richness of the data that remains; but it is a big step for the industry.
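Yahoo has not published its exact method, but erasing the "last block" of an IPv4 address presumably looks something like this sketch, which also shows why the measure is weak on its own:

```python
def anonymize_ip(ip: str) -> str:
    """Zero out the last octet: only 256 candidate hosts remain."""
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

print(anonymize_ip("203.0.113.42"))  # 203.0.113.0

# The remaining 24 bits still narrow a user down to a small network, and
# the retained query and click data are often rich enough to re-identify
# a person outright (cf. the AOL search-log release).
```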
The New York Times reports:
Privacy advocates said that the new policy was a step in the right direction and credited the change to pressure from European regulators.
"As much as the U.S. search firms talk about how they are improving their practices, I think they are really afraid that the Europeans are going to bring an enforcement action under European privacy laws," said Marc Rotenberg, executive director of the Electronic Privacy Information Center. "That's where the push is really coming from."
The European Court of Human Rights has issued its judgment in the case K.U. v. Finland. The Court concludes that Article 8 of the Convention puts member states under a positive obligation to protect people against grave interferences with their private life by others on the Internet. This obligation includes that the member state has to criminalize grave interferences with the right to private life and provide for a legal framework that allows for the identification and effective prosecution of offenders.
The Court mentions that this framework has to respect internet users' rights to freedom of expression and private life. The Court does not say that the member state has to make sure that data to identify individuals are available. In fact, it says that the right to private life and freedom of expression of internet users can be interfered with legitimately only on occasion. The Court makes very clear that if identifying data of an alleged offender (the offense being a grave interference with the right to private life) are available, the law must provide for access to those data to allow effective prosecution. Here are a few of the key conclusions and considerations:
The Court concludes that grave interferences with the right to private life must be criminalized:
While the choice of the means to secure compliance with Article 8 in the sphere of protection against acts of individuals is, in principle, within the State’s margin of appreciation, effective deterrence against grave acts, where fundamental values and essential aspects of private life are at stake, requires efficient criminal-law provisions
The State's positive obligation under Article 8 ECHR to prosecute grave interferences with private life may extend to questions of criminal procedural law:
the State’s positive obligations under Article 8 to safeguard the individual’s physical or moral integrity may extend to questions relating to the effectiveness of a criminal investigation even where the criminal liability of agents of the State is not at issue.
The Court concludes that Article 8 implies that there needs to be a way to identify offenders and bring them to justice:
It is plain that both the public interest and the protection of the interests of victims of crimes committed against their physical or psychological well-being require the availability of a remedy enabling the actual offender to be identified and brought to justice, in the instant case the person who placed the advertisement in the applicant’s name, and the victim to obtain financial reparation from him.
Obviously, this need runs into other fundamental rights of internet users. In the following excerpt, the Court notes that offenders, too (I would say alleged offenders), can rely on the guarantees of the Convention, in particular the right to respect for private life and the right to freedom of expression:
Another relevant consideration is the need to ensure that powers to control, prevent and investigate crime are exercised in a manner which fully respects the due process and other guarantees which legitimately place restraints on crime investigation and bringing offenders to justice, including the guarantees contained in Articles 8 and 10 of the Convention, guarantees which offenders themselves can rely on.
The Court makes clear that the prevention of crime and disorder and the protection of the rights and freedoms of others make this consideration relative:
Although freedom of expression and confidentiality of communications are primary considerations and users of telecommunications and Internet services must have a guarantee that their own privacy and freedom of expression will be respected, such guarantee cannot be absolute and must yield on occasion to other legitimate imperatives, such as the prevention of disorder or crime or the protection of the rights and freedoms of others.
From the perspective of data retention, the words to note here are “on occasion”. That could reasonably be interpreted as standing in the way of blanket data retention of Internet traffic and location data.
As TJ McIntyre concludes, the judgment raises a lot of very difficult questions. The Court concludes it is primarily up to the member states to resolve them:
Without prejudice to the question whether the conduct of the person who placed the offending advertisement on the Internet can attract the protection of Articles 8 and 10, having regard to its reprehensible nature, it is nonetheless the task of the legislator to provide the framework for reconciling the various claims which compete for protection in this context.