That which is not explicitly permitted is forbidden. That which is not explicitly forbidden is permitted. These are two competing models of thinking about policy and law, and I find them worth bearing in mind when considering the current policy debate about privacy online, which I rejoined on June 18 at the American Constitution Society’s 2011 Convention (they have video of the panel posted here).
The privacy debate has now been going on at full throttle for over a decade. In 1998 I wrote a fun paper on privacy, Privacy As Censorship: A Skeptical View of Proposals to Regulate Privacy in the Private Sector. My post-ideological views of regulation today are somewhat different. Today, perhaps both the private sector and consumers would benefit somewhat from more clarity as to their rights and duties in this space. However, I agree with my past self in thinking that there is considerable tension between innovation and free speech on the one hand, and privacy on the other.
The current flap about Facebook’s rollout of technology that helps users tag their photos by recognizing faces is a case in point. Facial recognition technology seems kind of scary, as seen on TV. But much of this reaction is irrational. Every human being is in significant part a facial recognition device—as social animals, we depend on doing this sort of thing, and our own brains are designed to be quite good at it. Some details on the way it works in humans are covered at LiveScience and Science Daily.
Now, the rollout of the technological equivalent on a large scale has a different potential impact than facial recognition by biological humans, including a potential for mischief—but also potential to do good. Alas, the current coverage of the issue is such that speculation about potential harms is rampant—and the potential benefits almost entirely neglected. Alarmism sells better than good news. But this makes a poor basis for policy that affects innovation.
If one is bound to speculate about alternative futures, commitment to fairness and balance means devoting at least as much energy to possible positive outcomes (ranging from the trivial, such as time saved in tagging photos, to the nontrivial, such as use in identifying bad guys or abducted kids) as possible negative ones. Or hold off on speculation until real harms have been identified—along with the actual perpetrators. (And note that some of the pressure to push for more restrictive privacy rules for online services stems from the fact that enforcement against actual perpetrators of things like online fraud or stalking tends to need improvement).
Those who seem about to let a speculative parade of horribles drive innovation policy also rarely do so consistently. One panelist argued that Facebook should be held morally responsible for downstream abuses of their technology—even potential abuses. But if we are painting with the broad brush of moral culpability, well, what about others who work to advance facial recognition technology? There are all sorts of researchers who could be pilloried, unless one insists, a la Plato, that researchers are “philosopher-kings” who should be exempt from the rules that govern the rest of us. One researcher mentioned by the panel ran an experiment showing that facial identification technology could be used to match photographs to names and then to social security numbers (an example of similar research is here—not the same study discussed by the panel). But if someone might use Facebook’s technology to make mischief, well, someone might use such an academic paper as a nice blueprint for how to conduct identity theft. 1) If one is concerned with moral responsibility (rather than legal liability narrowly conceived), it makes little sense that a social networking service would be blamed for downstream abusers but a researcher would not. Facebook’s rollout is commercial, certainly—but professors do not work for free, and many end up with lucrative consulting contracts related to their research. 2) Some might note that Facebook goes a step beyond the researchers in actually supplying the technology, often a useful distinction in considering legal liability. But in the vast array of other contexts in which online services do nothing more than supply platforms or tools capable of mischief—such as illegally distributing copyrighted content, for example—very few think that online services automatically should be liable for that, and none would impose liability when no actual harms have been reported.
If one is concerned with innovation policy, it makes little sense to support the development of new ideas only so long as they are not commercially developed. If one’s goal is gaining knowledge of the larger consequences of facial recognition technology, well, as much or more is likely to be learned from Facebook’s rollout than from academic research.
One might also think about the controversy as an illustration of the conflict between free speech and broad privacy rules. An advertiser-supported magazine article detailing exactly how to write facial recognition software or use it on Facebook images would be protected by the First Amendment; several courts have ruled that software itself is a form of free speech. Facebook has in essence provided users with an editing and publishing tool, another angle on the free speech question. How exactly a court would view all this (commercial speech doctrine and all) is well beyond the scope of this blog—but that there is a fundamental conflict seems undeniable.
One set of questions legitimately raised by the rollout of new services by online sites involves contract law. How should we view changes to a site’s practices and policies that are introduced after one has signed up? Is it enough that one may cancel one’s account and that the EULA reserves the site’s right to make changes? Even if not, certainly in the online environment consumers are unlikely to benefit from a rule that a site’s first offerings are set in stone.
Next week, on April 26th, IPI is scheduled to release my paper entitled, "Traditional Knowledge in the Industrialized World," at their conference on "Intellectual Property: Protecting the Spark of Innovation." The paper is a series of case studies of products derived from North American traditional knowledge, as well as analysis and observations based on those case studies. It is intended to help shed light on some of the controversies that have arisen in the context of the adoption of intellectual property rules and the use of traditional knowledge in the developing world. I will also be joining a morning panel on "Traditional Knowledge: Sharing and Commercializing the Benefits of Innovation" along with Dr. Mary Palmer, MD—Chief Executive Officer, ToxEM.
If you are interested in attending, you will find RSVP information on IPI's site (link above).
Traditional Knowledge: Sharing and Commercializing the Benefits of Innovation—Dr. Mary Palmer, MD, Chief Executive Officer, ToxEM; Solveig Singleton, IPI Adjunct Fellow
Now the Fun Begins. AEI Enterprise Blog, Nov. 2, 2010 (Blog). Assuming the polls are anywhere near right, November 3 marks the opening of the real wars, such as the one between the Tea Party and the Republican establishment.
China & Intellectual Property, Digital Society, 22 Nov. 2010 (Blog). Notice of a session on China and IP with Fuli Chen, Intellectual Property Rights Attache for the Chinese Embassy to the United States.
EU & Net Neutrality, Digital Society, 13 Nov. 2010 (Blog). Neelie Kroes says: “We should allow network operators and services and content providers to explore innovative business models.”
In the Mail: “The Shift”, Digital Society, 18 Aug. 2010 (Blog). From the authors: “[W]e outline how additional value is inserted into the ecosystem when service providers expose intelligent network capabilities.”
Technology & Academia, Digital Society, 4 May 2010 (Blog). Link to the Technology|Academics|Policy (TAP) website.
Making The Digital Society Work, Digital Society, 7 April 2010 (Blog). My inaugural post for Digital Society: “My particular focus is on the legal and regulatory structures and rules that are necessary to make this digital society work, with emphasis on intellectual property, markets and their importance in facilitating innovation, creativity and (to lift a word from the academic left) generativity.”
The Non-Uses of History, Digital Society, 27 Dec. 2010 (Blog). Net Neutrality – comments on the FCC’s lack of interest in its experience with the Open Video Systems concept or in other lessons from regulatory history.
Breaking the Piñata, Digital Society, 22 Dec. 2010 (Blog). Net Neutrality as a gift to the DC communications bar.
A Market Working! What Will They Think of Next!, Digital Society, 20 Dec. 2010 (Blog). Net Neutrality – anyone who creates demanded content can force the ISPs to pay them rather than the reverse. So why would a major ISP tick off its customers by cutting them off from desired content?
Danger! Kibitzers at Work, Digital Society, 14 Dec. 2010 (Blog). Net Neutrality – none of the 82 organizations that endorsed more Internet openness actually do anything to make the Internet work or to deliver content.
Network Nation, Digital Society, 13 Dec. 2010 (Blog). Links to Network Nation: Inventing American Telecommunications, and to a presentation by the author.
Net Neutrality & the Fifth Amendment, Digital Society, 6 Dec. 2010 (Blog). Links to Virtual Takings: The Coming Fifth Amendment Challenge to Net Neutrality Regulation, by Daniel Lyons of BC Law School.
Papering over the Problem, Digital Society, 14 July 2010 (Blog). Ye good olde days of journalism were pretty bad, and the last thing we need is government subsidies to bring them back.
Creative Content Needs Functioning Markets, Digital Society, 13 July 2010 (Blog). Never before have we had such total technological disruption of functioning markets, with a corresponding disruption of legal rules that were designed to fit the old technologies.
Catching Flies, Digital Society, 7 July 2010 (Blog). Litigation over the hot news doctrine.
Adding Value on the Internet, Digital Society, 8 June 2010 (Blog). Economist/columnist/blogger/ex-Treasury guy Bruce Bartlett has started Bartlett’s Notations.
Intellectual Property: More on S.3804, Digital Society, 10 Dec. 2010 (Blog). S.3804 - The Leahy Amendment and the effort to combat content piracy and counterfeit goods via payments systems and Internet infrastructure.
Intellectual Property & the Leahy Bill, Digital Society, 9 Dec. 2010 (Blog). S.3804 - The Leahy Amendment and the effort to combat content piracy and counterfeit goods via payments systems and Internet infrastructure.
Protecting Property on the Internet, J. V. DeLong, The American, Dec. 8, 2010 (Article). Free speech does not include the right to shout, “Fake goods here!” in a crowded digital marketplace.