That which is not explicitly permitted is forbidden. That which is not explicitly forbidden is permitted. Two competing models of thinking about policy and law. I find these worth bearing in mind when considering the current policy debate about privacy online, which I rejoined on June 18 at the American Constitution Society’s 2011 Convention (they have video of the panel posted here).
The privacy debate has now been going on at full throttle for over a decade. In 1998 I wrote a fun paper on privacy, Privacy As Censorship: A Skeptical View of Proposals to Regulate Privacy in the Private Sector. My post-ideological views of regulation today are somewhat different. Today, both the private sector and consumers would perhaps benefit from more clarity as to their rights and duties in this space. However, I agree with my past self in thinking that there is considerable tension between innovation and free speech on the one hand and privacy on the other.
The current flap about Facebook’s rollout of technology that helps users tag their photos by recognizing faces is a case in point. Facial recognition technology seems kind of scary, as seen on TV. But much of this reaction is irrational. Every human being is in significant part a facial recognition device; as social animals, we depend on recognizing faces, and our brains are designed to be quite good at it. Some details on the way it works in humans are covered at LiveScience and Science Daily.
Now, the large-scale rollout of the technological equivalent has a different potential impact than facial recognition by biological humans, including a potential for mischief, but also a potential to do good. Alas, the current coverage of the issue is such that speculation about potential harms is rampant while the potential benefits go almost entirely neglected. Alarmism sells better than good news. But this makes a poor basis for policy that affects innovation.
If one is bound to speculate about alternative futures, a commitment to fairness and balance means devoting at least as much energy to possible positive outcomes (ranging from the trivial, such as time saved in tagging photos, to the nontrivial, such as use in identifying bad guys or abducted kids) as to possible negative ones. Or hold off on speculation until real harms have been identified, along with the actual perpetrators. (And note that some of the pressure for more restrictive privacy rules for online services stems from the fact that enforcement against actual perpetrators of things like online fraud or stalking tends to be inadequate.)
Those who seem prepared to let a speculative parade of horribles drive innovation policy also rarely do so consistently. One panelist argued that Facebook should be held morally responsible for downstream abuses of its technology, even potential abuses. But if we are painting with the broad brush of moral culpability, what about others who work to advance facial recognition technology? There are all sorts of researchers who could be pilloried, unless one is bent on insisting, a la Plato, that researchers are "philosopher-kings" and should be exempt from the rules that govern the rest of us. One researcher mentioned by the panel ran an experiment showing that facial identification technology could be used to link photographs to names and then to social security numbers (an example of similar research is here; it is not the same study discussed by the panel). But if someone might use Facebook’s technology to make mischief, someone might just as well use such an academic paper as a blueprint for how to conduct identity theft. 1) If one is concerned with moral responsibility (rather than legal liability narrowly conceived), it makes little sense to blame a social networking service for downstream abuses but not a researcher. Facebook’s rollout is commercial, certainly, but professors do not work for free, and many end up with lucrative consulting contracts related to their research. 2) Some might note that Facebook goes a step beyond the researchers in actually supplying the technology, a distinction often useful in considering legal liability. But in the vast array of other contexts in which online services do nothing more than supply platforms or tools capable of mischief (illegally distributing copyrighted content, for example), very few think that online services should automatically be liable, and none would impose liability when no actual harms have been reported. If one is concerned with innovation policy, it makes little sense to support the development of new ideas only so long as they are not commercially developed. And if one’s goal is gaining knowledge of the larger consequences of facial recognition technology, as much or more is likely to be learned from Facebook’s rollout than from academic research.
One might also think of the controversy as an illustration of the conflict between free speech and broad privacy rules. An advertiser-supported magazine article detailing exactly how to write facial recognition software or how to use it on Facebook images would be protected by the First Amendment; several courts have ruled that software itself is a form of free speech. Facebook has in essence provided users with an editing and publishing tool, another angle on the free speech question. How exactly a court would view all this (commercial speech doctrine and all) is well beyond the scope of this blog, but that there is a fundamental conflict seems undeniable.
One set of questions legitimately raised by the rollout of new services by online sites involves contract law. How should we view changes to a site’s practices and policies that are introduced after one has signed up? Is it enough that one may cancel one’s account and that the EULA reserves the site’s right to make changes? Even if not, consumers in the online environment are unlikely to benefit from a rule that a site’s first offerings are set in stone.
All for now.
-SS