That which is not explicitly permitted is forbidden. That which is not explicitly forbidden is permitted. These are two competing models for thinking about policy and law. I find them worth bearing in mind when considering the current policy debate about privacy online, which I rejoined on June 18 at the American Constitution Society's 2011 Convention (they have video of the panel posted here).
The privacy debate has now been going at full throttle for over a decade. In 1998 I wrote a fun paper on privacy, Privacy As Censorship: A Skeptical View of Proposals to Regulate Privacy in the Private Sector. My post-ideological views of regulation today are somewhat different. Today, perhaps both the private sector and consumers would benefit from more clarity as to their rights and duties in this space. However, I agree with my past self in thinking that there is considerable tension between innovation and free speech on the one hand, and privacy on the other.
The current flap about Facebook's rollout of technology that helps users tag their photos by recognizing faces is a case in point. Facial recognition technology seems kind of scary, as seen on TV. But much of this reaction is irrational. Every human being is in significant part a facial recognition device; as social animals, we rely on recognizing faces, and our brains are designed to be quite good at it. Some details on the way facial recognition works in humans are covered at LiveScience and Science Daily.
Now, the rollout of the technological equivalent on a large scale has a different potential impact than facial recognition by biological humans, including a potential for mischief—but also potential to do good. Alas, the current coverage of the issue is such that speculation about potential harms is rampant—and the potential benefits almost entirely neglected. Alarmism sells better than good news. But this makes a poor basis for policy that affects innovation.
If one is bound to speculate about alternative futures, commitment to fairness and balance means devoting at least as much energy to possible positive outcomes (ranging from the trivial, such as time saved in tagging photos, to the nontrivial, such as use in identifying bad guys or abducted kids) as possible negative ones. Or hold off on speculation until real harms have been identified—along with the actual perpetrators. (And note that some of the pressure to push for more restrictive privacy rules for online services stems from the fact that enforcement against actual perpetrators of things like online fraud or stalking tends to need improvement).
Those who seem ready to let a speculative parade of horribles drive innovation policy also rarely do so consistently. One panelist argued that Facebook should be held morally responsible for downstream abuses of its technology—even potential abuses. But if we are painting with the broad brush of moral culpability, what about others who work to advance facial recognition technology? All sorts of researchers could be pilloried, unless one is bent on insisting, a la Plato, that researchers are "philosopher-kings" who should be exempt from the rules that govern the rest of us. One researcher mentioned by the panel ran an experiment showing that facial identification technology could be used to link photographs to names and then to social security numbers (an example of similar research is here—not the same study discussed by the panel). But if someone might use Facebook's technology to make mischief, well, someone might use such an academic paper as a nice blueprint for how to conduct identity theft. 1) If one is concerned with moral responsibility (rather than legal liability narrowly conceived), it makes little sense to blame a social networking service for downstream abusers but not a researcher. Facebook's rollout is commercial, certainly—but professors do not work for free, and many end up with lucrative consulting contracts related to their research. 2) Some might note that Facebook goes a step beyond the researchers in actually supplying the technology, a distinction often useful in considering legal liability. But in the vast array of other contexts in which online services do nothing more than supply platforms or tools capable of mischief—illegally distributing copyrighted content, for example—very few think that online services should automatically be liable for that, and none would impose liability when no actual harms have been reported.
If one is concerned with innovation policy, it makes little sense to support the development of new ideas only so long as they are not commercially developed. If one’s goal is gaining knowledge of the larger consequences of facial recognition technology, well, as much or more is likely to be learned from Facebook’s rollout than from academic research.
One might also think about the controversy as an illustration of the conflict between free speech and broad privacy rules. An advertiser-supported magazine article detailing exactly how to write facial recognition software or use it on Facebook images would be protected by the First Amendment; several courts have ruled that software itself is a form of free speech. Facebook has in essence provided users with an editing and publishing tool, another angle on the free speech question. How exactly a court would view all this (commercial speech doctrine and all) is well beyond the scope of this blog—but that there is a fundamental conflict seems undeniable.
One set of questions legitimately raised by the rollout of new services by online sites involves contract law. How should we view changes to a site's practices and policies that are introduced after one has signed up? Is it enough that one may cancel one's account and that the EULA reserves the site's right to make changes? Even if not, certainly in the online environment consumers are unlikely to benefit from a rule that a site's first offerings are set in stone.
The deadline for filing amicus briefs in support of the Federal Circuit's attempt to trim back business method patents in Bilski passed on October 2. Many briefs have been filed, and much fuss has been made in the tech community, for business method patents are linked to the problem of software patents. Many software patents, such as Amazon's 1-click order patent, are for business methods.
If the courts ultimately trim back business method patents, will this take some of the software-patent-related pressure off both tech and the patent system? Not as much as many in the tech community or the patent community would hope, for reasons I examine below. Patent reform is now being driven by business constituencies, and these constituencies are not at all good at working on big-picture institutional problems. There is, in short, a not-seeing-the-forest-for-the-trees problem.
A number of posts here have emphasized the importance of policies that promote (or at least avoid deterring) health care innovation.
CLI's interests extend far beyond health care -- to areas of broadband and telecom, intellectual property, software, innovation (collaborative and proprietary). And one of our fundamental points is that all of these interests are connected.
Today, broadband is foundational for driving innovation and productivity across all economic sectors, including energy, education, healthcare, and e-government.
For example, at Microsoft we envision a connected health ecosystem that enables predictive, preventive, and personalized care. Telehealth technologies can be used to remotely monitor patients, facilitate collaboration between medical professionals, exchange medical data and images, and instantaneously provide efficient emergency service to remote areas. We see medical research increasingly benefiting from the HUGE amounts of patient and genomics data for drug discovery and
He links to a letter recording a tele-meeting with FCC Chairman Genachowski and CEOs John Chambers (Cisco); Steve Ballmer (Microsoft); Jeffrey Immelt (GE); and Steve Hemsley (UnitedHealth) at which the sectoral breadth of the broadband plan was emphasized.
To talk of "broadband policy" is misleading. You can't think effectively about broadband policy as a pure abstraction. You also need to think about health care policy, education, and energy.
It works in reverse, too, in that thinking about these substantive areas requires consideration of the capabilities of broadband.
The FCC, in developing a broadband policy, is also developing health care policy, energy policy, and so on. Example: net neutrality, in its strong form of uniform treatment of all bits, whether related to the latest P2P piracy or on-line surgery, would do a lot to prevent innovation in medical services.
But they don't answer the question of how Chrome makes sense as a business proposition for Google. Taking on responsibility for an operating system is a heavy-duty obligation. Contrary to myth, Linux is not maintained by a bunch of hobbyists; it is a high-class, thoroughly professional operation supported by major providers of hardware and services that benefit from having a cooperative effort on an operating system and that make money by selling the complements of hardware and services.
The benefits to Google of maintaining Chrome are more indirect -- the idea is that, somehow, the OS will translate into more eyeballs for Google's search business. But no necessary relationship exists between the value of those eyeballs to advertisers and Google's costs of maintaining Chrome. So unless Google intends to go into a new business of selling apps that ride on Chrome, the move seems high risk. As eWeek notes, failure would be damaging to Google's reputation, not least because it would irritate geeks who spend time on Chrome.
Last week, Google announced its new netbook Web-focused operating system -- Chrome OS, to match its browser. The OS will be open source, a variation on Linux, so Google will be able to tap into the strength of that community from the get-go, and of course free ride on the work supported by IBM, HP, and the other tech companies who fund Linux.
There is a lot of cross-talk in the press and web about how this is or is not a deadly or minor threat to Microsoft's core Windows business, done with or without deliberate malice by Google, and how it is a disruptive or minor innovation that can be extended up the value chain (unless it is not), and how Microsoft must be very worried or perhaps highly amused.
The day before, I was at Google's DC office to hear Chris Anderson talk about his new book Free: The Future of a Radical Price. One point he made is that there is a big psychological distance between "free" and even a trivial cost, and that the business models of the future must cater to this. This does indeed seem to be a preoccupation of the tech world, which thus assumes that Google's free OS should sweep the board, except of course for the power of those Microsoft people who seem to cheat by charging for their products.
Forget altruism. Misanthropy and egotism are the fuel of online social production. That's the conclusion suggested by a new study of the character traits of the contributors to Wikipedia. A team of Israeli research psychologists gave personality tests to 69 Wikipedians and 70 non-Wikipedians. They discovered that, as New Scientist puts it, Wikipedians are generally "grumpy," "disagreeable," and "closed to new ideas."
In their report on the results of the study, the scholars paint a picture of Wikipedians as social maladapts who "feel more comfortable expressing themselves on the net than they do off-line" and who score poorly on measures of "agreeableness and openness." Noting that the findings seem in conflict with public perceptions, the researchers suggest that "the prosocial behavior apparent in Wikipedia is primarily connected to egocentric motives ... which are not associated with high levels of