Author: James Grimmelmann


The Evolving Law of Fair Use

The Google Books case would make a terrible movie. It started promisingly enough, with a cold open on Google’s ambitious book scanning project, and quickly established its central conflict, between Google and irate copyright holders. Then the plot took a breathtaking twist when it was revealed that Google and the authors and publishers had secretly been negotiating a settlement. But after the tense second act culminated in the brilliantly argued fairness hearing, the plot just tailed off. Sure, Judge Chin rejected the settlement, the Authors Guild brought Google’s library partners into the litigation, the publishers settled, and two judges found that the book scanning was fair use—but all of it had the repetitive feeling of a Bruckheimer-buster. Haven’t I seen these explosions somewhere else?

Times have changed, and technology has changed, and copyright law has changed, as well. Judge Chin’s decision bringing to a close (for now) the main Google Books litigation didn’t so much make new law as draw an emphatic underline under existing doctrines. Copyright law has taken account of digitization and search technology. They are accepted parts of the legal landscape now, and future decisions are not likely to call their legality into serious question.

Take the holding that search engines make a transformative use of the content they index. The Kelly and Field cases said the same, but it was still possible to maintain that they were something other than settled law: that's just a Ninth Circuit doctrine, that's a Web-only doctrine, that's implied license in disguise, and so on. No more. Search engine fair use is broadly established, to the point that even judges who deny its application on the facts before them accept the broader principle.

Or take the holding that there is no discernible market for licensing the short snippets displayed in search results. Judge Chin, like Judge Baer before him in the HathiTrust case against Google's library partners, viewed Google Book Search as a pure pie-higher-er. The gains from making books searchable do not come out of the pockets of authors. Indeed, by making it easier for readers to find books, book search makes it easier for authors to find readers. Judges have come to accept the idea that these long-tailed digital geese lay golden eggs only so long as they live; cut them up in search of licensing fees and there is nothing inside.

Judge Chin and Judge Baer are also attuned to the public benefits of technological tools; their opinions have no difficulty explaining how digitization promotes the fundamental copyright goals of access, learning, and future creativity. This functionalist approach to fair use is part of the same trend that has brought us Cariou v. Prince and A.V. v. iParadigms. The Authors Guild's entire litigation position has been built around three "C"s: that creators should have control over copies. But this formalism led it down the disastrous path of focusing heavily on the number of copies of each book made by Google and retained by the libraries in their data centers, and of pressing hard the obvious loser of an argument that libraries' section 108 rights are an absolute limit on fair use. Such arguments are simply not persuasive to judges who are sensitive to the purposes of copyright and of fair use.

This is not the end of the Google Books movie. Both cases are still on appeal. And certainly the trend in current copyright cases is not wholly in the direction of technologists’ and readers’ rights. But this part of the copyright landscape — mass digitization and fair use — seems to have shifted fairly definitively. Not all at once in an earthquake, but slowly, almost geologically on the scale of Internet time. Now that’s very gradual change we can believe in.


Hacker Legal Education

In my Jotwell review of Coding Freedom, I commented that “Coleman’s portrait of how hackers become full-fledged members of Debian is eerily like legal education.”

[T]he hackers who are trained in it go through a prescribed course of study in legal texts, practice applying legal rules to new facts, learn about legal drafting, interpretation, and compliance, and cultivate an ethical and public-spirited professional identity. There is even a written examination at the end.

This is legal learning without law school. Coleman's hackers are domain-specific experts in the body of law that bears on their work. It should be a warning sign that a group of smart and motivated lay professionals took a hard look at the law, realized that it mattered intensely to them, and responded not by consulting lawyers or going to law school but by building their own parallel legal education system. That choice is an indictment of the services lawyers provide and of the relevance of the learning law schools offer. A group of amateurs teaching each other did what we weren't doing.

Their success is an opportunity as well as a challenge. The inner sanctums of the law, it turns out, are more accessible to the laity than sometimes assumed. One response to the legal services crisis would be to give more people the legal knowledge and tools to solve some of their own legal problems. The client who can't afford a lawyer's services can still usually afford her own. More legal training for non-lawyers might or might not make a dent in law schools' budget gaps. But it is almost certainly the right thing to do, even if it reduces the demand for lawyers' services among the public. There is no good reason why law schools can impart legal knowledge only by way of lawyers and not directly.

Hacker education, however, also shows why lawyers and the traditional missions of law schools are not going away. Law is a blend of logic and argument, a baseball game that depends on persuading the umpire to change the rules mid-pitch. Hacker legal education, with its roots in programming, is strong on formal precision and textual exegesis. But it is notably light on legal realism: coping with the open texture of the law and sorting persuasive from ineffective arguments. The legal system is not a supercomputer that can be caught in a paradox. The professional formation of lawyers is absent in hacker education, because theirs is a different profession.

Legal academics also play a striking role in hacker legal education. Richard Stallman was of course the driving personality behind free software. But Columbia's Eben Moglen had an absolutely crucial role in crafting and amending the closest thing the free software movement has to a constitution: the GNU GPL. And Coleman documents the role that Larry Lessig's consciousness-raising activism played in politicizing hackers about copyright policy. They, and other professors who have helped the free software community engage with the law, like Pamela Samuelson, in turn drew heavily on the legal scholarly tradition even as they translated it into more practical terms. The freedom to focus on self-chosen projects of long-term importance to society is a right and responsibility of the legal academic. Even if not all of us have used it as effectively as these three, it remains our job to try.


Computer Crime Law Goes to the Casino

Wired’s Kevin Poulsen has a great story whose title tells it all: Use a Software Bug to Win Video Poker? That’s a Federal Hacking Case. Two alleged video-poker cheats, John Kane and Andre Nestor, are being prosecuted under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. Theirs is a hard case, and it is hard in a way that illustrates why all CFAA cases are hard.



LTAAA Symposium: How Law Responds to Complex Systems

In my first post on A Legal Theory for Autonomous Artificial Agents, I discussed some of the different kinds of complex systems law deals with. I’d like to continue by considering some of the different ways law deals with them.

Chopra and White focus on personhood: treating the entity as a single coherent "thing." The success of this approach depends not just on the entity's being amenable to reason, reward, and punishment, but also on its actually cohering as an entity. Officers' control over corporations is directed to producing just such a coherence, which is a good reason that personhood seems to fit. But other complex systems aren't so amenable to being treated as a single entity. You can't punish the market as a whole; if a mob is a person, it's not one you can reason with. In college, I made this mistake myself in a term project: we tried to "reward" programs that shared resources nicely with each other by giving them more time to execute. Of course, the programs were blithely ignorant of how we were trying to motivate them; there was no feedback loop we could latch on to.
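The missing feedback loop can be made concrete with a toy sketch (my own illustration, not a reconstruction of the actual term project): a hypothetical scheduler grants extra time slices to programs that share, but because the programs never consult their allotment, the "reward" changes nothing about how they behave.

```python
import random


class Program:
    """A simulated program with a fixed propensity to share. It never
    inspects how many time slices it has been granted, so it cannot
    respond to the scheduler's incentive."""

    def __init__(self, name, share_prob):
        self.name = name
        self.share_prob = share_prob  # fixed disposition, never updated
        self.time_slices = 0

    def acts_nicely(self, rng):
        # Behavior depends only on the fixed disposition, not on rewards.
        return rng.random() < self.share_prob


def run_scheduler(programs, rounds, seed=0):
    """Grant 2 time slices per round to a program that shares, 1 otherwise."""
    rng = random.Random(seed)
    for _ in range(rounds):
        for p in programs:
            p.time_slices += 2 if p.acts_nicely(rng) else 1
    return programs


programs = run_scheduler([Program("greedy", 0.1), Program("nice", 0.9)],
                         rounds=100)
# The incentive moved resources around, but no behavior changed: each
# program's disposition to share is exactly what it started as.
assert [p.share_prob for p in programs] == [0.1, 0.9]
```

The "nice" program ends up with more time slices, but since neither program ever reads its own balance, the reward is motivationally inert, which is the point of the anecdote.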

Another related strategy is to find the man behind the curtain. Even if we’re not willing to treat the entity itself as an artificial person, perhaps there’s a real person pulling the levers somewhere. Sometimes it’s plausible, as in the Sarbanes-Oxley requirement that CEOs certify corporate financial statements. Sometimes it’s wishful thinking, as in the belief that Baron Rothschild and the Bavarian Illuminati must be secretly controlling the market. This strategy only works to the extent that someone is or could be in charge: one of the things that often seems to baffle politicians about the Internet is that there isn’t anyone with power over the whole thing.

A subtle variation on the above is to take hostages. Even if the actual leader is impossible to find or control, just grab someone the entity appears to care about and threaten them unless the entity does what you want. This used to be a major technique of international relations: it was much easier to get your hands on a few French nobles and use them as leverage than to tell France or its king directly what to do. The advantage of this one is that it can work even when the entity isn’t under anyone’s control at all: as long as its constituent parts share the motivation of not letting the hostage come to harm, they may well end up acting coherently.

When that doesn't work, law starts turning to strategies that fight the hypothetical. Disaggregation treats the entity as though it doesn't exist — i.e., has no collective properties. Instead, it identifies individual members and deals with their actions in isolation. This approach sounds myopic, but it's frequently required by a legal system committed to something like methodological individualism. Rather than dealing with the mob as a whole, the police can simply arrest any person they see breaking a window. Rather than figuring out what Wikipedia is or how it works, copyright owners can simply sue anyone who uploads infringing material. Sometimes disaggregation even works.

Even more aggressively, law can try destroying the entity itself. Disperse the mob, cancel a company’s charter, or conquer a nation and dissolve its government while absorbing its people. These moves have in common their attempt to stamp out the complex dynamics that give rise to emergent behavior: smithereens can, after all, be much easier to deal with. Julian Assange’s political theory actually operates along these lines: by making it harder for them to communicate in private, he hopes to keep governmental conspiracies from developing entity-level capabilities. For computers, there’s a particularly easy entity-destroying step: the off switch. Destruction is recommended only for bathwater that does not contain babies.

When law is feeling especially ambitious, it sometimes tries dictating the internal rules that govern the entity’s behavior. Central planning is an attempt to take control of the capriciousness of the market by rewiring its feedback loops. (On this theme, I can’t recommend Spufford’s quasi-novel Red Plenty highly enough.) Behavior-modifying drugs take the complex system that is an individual and try to change how it works. Less directly, elections and constitutions try to give nations healthy internal mechanisms.

And finally, sometimes law simply gives up in despair. Consider the market, a system whose vindictive and self-destructive whims law frequently regards with a kind of miserable futility. Or consider the arguments sometimes made about search engine algorithms — that their emergent complexity passeth all understanding. Sometimes these claims are used to argue that government shouldn’t regulate them, and sometimes to argue that even Google’s employees themselves don’t fully understand why the algorithm ranks certain sites the way it does.

My point in all of this is that personhood is hardly inevitable as an analytical or regulatory response to complex systems, even when they appear to function as coherent entities. For some purposes, it probably is worth thinking of a fire as a crafty malevolent person; for others, trying to dictate its internals by altering the supply of flammables in its path makes more sense. (Trying to take hostages to sway a fire is not, however, a particularly wise response.) Picking the most appropriate legal strategy for a complex system will depend on situational, context-specific factors — and upon understanding clearly the nature of the beast.


LTAAA Symposium: Complex Systems and Law

The basic question LTAAA asks—how law should deal with artificially intelligent computer systems (for different values of “intelligent”)—can be understood as an instance of a more general question—how law should deal with complex systems? Software is complex and hard to get right, often behaves in surprising ways, and is frequently valuable because of those surprises. It displays, in other words, emergent complexity. That suggests looking for analogies to other systems that also display emergent complexity, and Chopra and White unpack the parallel to corporate personhood at length.

One reason that this approach is especially fruitful, I think, is that an important first wave of cases about computer software involved its internal use by corporations. So, for example, there's Pompeii Estates v. Consolidated Edison, which I use in my casebook for its invocation of a kind of "the computer did it" defense. Con Ed lost: it's not a good argument that the negligent decision to turn off the plaintiff's power came from a computer, any more than "Bob the lineman cut off your power, not Con Ed" would be. Asking why and when law will hold Con Ed as a whole liable requires a discussion about attributing particular qualities to it — philosophically, that discussion is a great bridge to asking when law will attribute the same qualities to Con Ed's computer system.

But corporations are hardly the only kind of complex system law must grapple with. Another interesting analogy is nations. In one sense, they’re just collections of people whose exact composition changes over time. Like corporations, they have governance mechanisms that are supposed to determine who speaks for them and how, but those mechanisms are subject to a lot more play and ambiguity. “Not in our name” is a compelling slogan because it captures this sense that the entity can be said to do things that aren’t done by its members and to believe things that they don’t.

Mobs display a similar kind of emergent purpose through even less explicit and well-understood coordination mechanisms. They’re concentrated in time and space, but it’s hard to pin down any other constitutive relations. Those tipping points, when a mob decides to turn violent, or to turn tail, or to take some other seemingly coordinated action, need not emerge from any deliberative or authoritative process that can easily be identified.

In like fashion, Wikipedia is an immensely complicated scrum. Its relatively simple software combines with a baroque social complexity to produce a curious beast: slow and lumbering and oafish in some respects, but remarkably agile and intelligent in others. And while "the market" may be a social abstraction, it certainly does things. A few years ago, it decided, fairly quickly, that it didn't like residential mortgages all that much — an awful lot of people were affected by that decision. The "invisible hand" metaphor personifies it, as does a lot of econ-speak: these are attempts to turn this complex system into a tractable entity that can be reasoned about, and reasoned with.

As a final example of complex systems that law chooses to reify, consider people. What is consciousness? No one knows, and it seems unlikely that anyone can know. Our thoughts, plans, and actions emerge from a complex neurological soup, and we interact with groups in complex social ways (see above). And yet law retains a near-absolute commitment to holding people accountable, rather than amygdalas. Chopra and White, by taking an intentional stance towards agents, recognize that law sweeps all of these issues under the carpet, and they ask when it becomes plausible to sweep those issues under the carpet for artificial agents, as well.


The Master Switch Symposium: Information Ideology and Corporate Culture

I particularly liked two things about The Master Switch. The first is that Wu’s history of information networks in the 20th century (though sadly not before) has a meaningful theory of corporate ideology, and uses it effectively. The book opens with a 1916 banquet in Washington, D.C. honoring Theodore Vail and the Bell system. The highlight of the evening was a mildly absurd demo: a phone call to General Pershing in El Paso:

“Hello, General Pershing!”
“Hello, Mr. Carty.”
“How’s everything on the border?”
“All’s quiet on the border.”
“Did you realize you were talking with eight hundred people?”
“No, I did not,” answered General Pershing. “If I had known it, I might have thought of something worthwhile to say.”

It’s a great scene, and it captures the spirit of a particular company and a moment in history. The Bell system was as Establishment as you can get; the event was shot through with patriotic symbolism. The tech demos were gifts from a benevolent, stabilizing, centralizing AT&T to the American people, with Vail both basking in accomplishment and promising the future.

Wu’s point, here as throughout the book, is that you can’t understand AT&T, or its economic and social impact, or the way it shaped and struggled with the legal system, without appreciating the way it saw itself and the world. Plenty of writers have described the endless back-and-forth between the forces of openness and the forces of closure. Wu’s history shows, repeatedly, how the different companies taking part in the struggle justified themselves — and how those essentially ideological justifications in turn frequently drove key corporate decisions.

The Master Switch doesn’t assert, as too many people who should know better do, that corporations simply act in the interests of their shareholders. Nor is this a work of hagiography or demonization; one does not walk away with the impression that Theodore Vail built the Bell system with his bare hands. Instead, it gives examples of companies so in thrall to a vision of their inevitable triumph or their social role that they dove headlong off a marketplace or regulatory cliff — and also examples of executives who won their companies’, their industries’, and their regulators’ support only through the subtle arts of persuasion.

Wu’s discussion of the Hush-a-Phone brings out the way in which AT&T’s “One System, One Policy, Universal Service” philosophy drove it into a legal fight it would have been better off ignoring. And who helped Hush-a-Phone poke the first, critical hole in AT&T’s policy against foreign attachments? Leo Beranek and J.C.R. Licklider, major figures in the development of the Internet. In another example, after successfully shaking off Edison’s control of film patents, the Independent movie companies fractured. Some of them were thrilled to entrench themselves as a new cartel controlling distribution; others much less so. Wu’s portraits of monopolists, insurgents, and particularly of insurgents-turned-monopolists illustrate the power of a compelling vision of how information can or should be distributed to shape, and sometimes to warp, the design of information empires.

The other thing I especially enjoyed? Wu cites both science historian Lawrence Lessing and Internet law scholar Lawrence Lessig.


Future of the Internet Symposium: The Right Theory

When The Future of the Internet was published, I knew immediately it was a big deal. Paul Ohm had very much the same thought. And so we got together, called ourselves an institute, and jointly wrote a book review, which we titled “Dr. Generative Or: How I Learned to Stop Worrying and Love the iPhone.” I wish I could link to it, but it’s not quite out yet–it went to the Maryland Law Review’s publishers about a month ago, and isn’t back yet. In its place, though, I thought I’d run down the main points Paul and I make in our review.

The book’s great contribution, the reason it will stay on shelves as long as we Internet academics still believe in printed books, can be boiled down to one word: “generativity.” In the Lessig/Reidenberg/Kapor tradition of thinking about computer code as a kind of regulation, one of the central questions has always been which features of the Internet’s architecture make it THE INTERNET, and thus worth caring about. People have proposed a lot of different virtues. “Openness,” as Adam discusses below, is a disconcertingly capacious and imprecise term. But most of the more concrete alternatives–“end-to-end”-ianness, “neutrality,” “layering,” “standardization,” “decentralization,” “tinkerability,” “free-as-in-freedom” software, and the “commons”–turn out to be near misses. They focus too narrowly on one part of a much bigger puzzle. For example, as Laura’s work demonstrates, even though standardization makes the Internet possible, it can also be a tool of political control and repression.

In contrast, Paul and I call generativity “the right theory.” The Internet’s capacity to support large and unanticipated creativity and innovation on a wide variety of levels is remarkable. Focusing on generativity allows us to sum up, in one simple concept, what makes the Internet distinctive, and distinctively valuable. That alone is a serious achievement. One can dispute–as this symposium is already showing–perhaps everything else in the book. But there really is no arguing with the theory of generativity itself.

That said, however, Paul and I express somewhat more skepticism about some of Zittrain’s applications of generativity. Our problem with the book–or, really, our reason to look forward to the sequel–is that only in a few places does the carefully worked out theory really make contact with his practical recommendations. The final third of the book consists of some very clever case studies and proposals, but there’s something of a missing link: the proposals don’t always clearly follow from the theory of generativity.

Our central example, and the backbone of our review, is Zittrain’s discussion of the iPhone. It and other “tethered appliances” feature what he calls “contingent generativity”: they can be programmed and extended for now, but Apple can always pull the plug on anything it doesn’t like. He’s afraid of that future–but the reasons he gives to worry about it aren’t really concerns about generativity as such. They implicate other values, like free speech and individual autonomy, and one must do more work than Zittrain has to link these values up with generativity. Indeed, it’s easy to make arguments that the iPhone and iPad have been massive improvements for generativity; recall Apple’s ad campaign that other phones have “the kinda sorta looks like the Internet” but the iPhone has “the Internet” itself.

Whether this and similar compromises–such as Google’s ability to turn off its cloud, or Wikipedia’s ability to revert your edits and ban your IP block–are worthwhile restrictions or not has to come from a richer, multivalued theory. That is, we think Zittrain has really and truly pinned down the fundamental architectural virtue of the Internet, but only just started on the long road of harnessing that theory to give advice for practical policy problems. In The Fourth Quadrant, Zittrain has started in on that important work–and we hope it’s a down payment on that sequel.


CCR Symposium: The Civil Rights Agenda

For me, the most important part of Danielle Citron’s paper is right there in the title: the way she frames online harassment specifically as a civil rights problem. It’s one of those moves that’s so seemingly simple that the reader may be tempted to say yeah, yeah, so what? But then Citron shows what, directly and carefully. Online harassment isn’t just about individual bullies and victims–though it’s about that, too. It’s also about pervasive patterns of abuse, directed at vulnerable groups, that effectively deprive them of the ability to participate in important social institutions.

Another commentator at this symposium, Ann Bartow, has argued that some legal scholarship has “too much doctrine, and not enough dead bodies.” Cyber Civil Rights has plenty of dead bodies, especially the virtual effigies of women targeted by anonymous individuals–or worse, anonymous mobs–for online abuse. The paper opens with the story of Kathy Sierra, threatened with rape and strangulation, including the delightful comment, “The only thing Kathy has to offer me is that noose in her neck size.” The footnotes of the first part of Cyber Civil Rights give a grim tour through some of online harassment’s greatest and most appalling hits.

Then–and this is the point of Bartow’s argument that scholars need to be willing to point out where the bodies are buried–Citron uses these unsettling stories to make a familiar doctrinal story strange. In the Internet law world, we’re accustomed to talking about harassment as an issue that combines two of our favorite Internet hobbyhorses: anonymity and Section 230’s immunity for intermediaries. The result is that many serious, important debates about responses to harassment have run into the well-worn ruts of very old arguments (on Internet time, that is) about the legal standard for unmasking anonymous individuals online and about how much to make intermediaries liable for harmful content.

Shifting from there to a civil rights frame, however, allows Citron to point out important but often-ignored features of harassment online, ones that suggest different doctrinal moves. Civil rights discourse helps us see the victims of harassment as members of a consistently subordinated group, rather than as just unlucky individuals. It helps us see the mob dynamics at work in these simulacra of lynchings, rather than thinking about each insult in isolation. It reminds us that there’s a long tradition of using law creatively to prevent personal bias from becoming societal discrimination.

Indeed, when you go back to the online harassment cases after reading Cyber Civil Rights, it’s striking how many of them are really civil rights cases. True, few of them buy into that frame, and few have provided much redress for victims, but they’re directly engaged with classic civil rights issues. Take Noah v. AOL, a 2003 case dismissing on section 230 grounds a lawsuit against AOL for doing nothing about anti-Muslim comments in its chat rooms like “well allah can suck my dick you peice of ass” and “SMELLY TOWEL HEADS,” or, more recently, the Craigslist and Roommates.com cases about discriminatory online housing ads. The law in these cases is all about the ins and outs of interpreting section 230, but the facts are all about religious intolerance and racial segregation. Cyber Civil Rights suggests that when we think about cross-cutting issues in Internet law–such as anonymity or intermediary liability–we might do well to pause before diving into the technical specifics of the communications at stake and instead ask, “Why do we want to know?”


The Rise of the Conservative Non-Trademark Use

Steven Teles’s The Rise of the Conservative Legal Movement features a clever cover design. It shows a white man in a suit, wearing a maroon Federalist Society tie, and holding a book instantly recognizable as a legal text, whose title is Teles’s subtitle: “The Battle for Control of the Law.”

So here’s the lazy Saturday question. The book is instantly recognizable as a legal text because it uses the instantly recognizable trade dress of the Aspen series of casebooks. It has the same red cover, the same pair of black boxes, the same golden stripes (one above the boxes, five between, and four below), and the same golden lettering. The typeface and layout of the text are admittedly different: Teles’s book uses a sans-serif face, which any self-respecting conservative would disdain as a modernist liberal fad. Also, the upper box, where the authors’ names go on an Aspen casebook, is empty. The Aspen/Wolters Kluwer names and logos don’t appear in the image. Does or should Aspen have any right to object to the use of its trade dress in this manner?