Category: Technology

From Territorial to Functional Governance

Susan Crawford is one of the leading global thinkers on digital infrastructure. Her brilliant book Captive Audience spearheaded a national debate on net neutrality. She helped convince the Federal Communications Commission to better regulate big internet service providers. And her latest intervention–on Uber–is a must-read. Crawford worries that Uber will rapidly monopolize urban ride services. It’s repeatedly tried to avoid regulation and taxes. And while it may offer a good deal to drivers and riders now, there is no guarantee it will in the future.

A noted critic of the sharing economy, Tom Slee, has backed up Crawford’s concerns from an international perspective. “For a smallish city in Canada, what happens to accountability when faced with a massive American company with little interest in Canadian employment law or Canadian traditions?” Slee asks, raising a deep point about the nature of governance. What happens to a city when its government’s responsibilities are slowly disaggregated, functionally? Some citizens may welcome the effective governance of paid rides via Uber, of spare rooms via Airbnb, and so on. A full privatization of city governance awaits, from water to sidewalks.

If you’re concerned about that, you may find my recent piece on the sharing economy of interest. We’ll also be discussing this and similar issues at Northeastern’s conference “Tackling the Urban Core Puzzle.” Transitions from territorial to functional governance will be critical topics of legal scholarship in the coming decade.

Law’s Nostradamus

The ABA Journal “Legal Rebels” page has promoted Richard Susskind’s work (predicting the future automation of much of what lawyers do) as “required reading.” It is a disruptive take on the legal profession. But disruption has been having a tough time as a theory lately. So I was unsurprised to find this review, by a former General Counsel of DuPont Canada Inc., of Susskind’s The End of Lawyers?:

Susskind perceives a lot of routine in the practice of law . . . which he predicts will gradually become the domain of non-professional or quasi-professional workers. In this respect his prediction is about two or three decades too late. No substantial law firm, full service or boutique, can survive without a staff of skilled paralegal specialists and the trend in this direction has been ongoing since IT was little more than a typewriter and a Gestetner duplicating machine. . . .

Law is not practiced in a vacuum. It is not merely a profession devoted to preparing standard forms or completing blanks in precedents. And though he pays lip service to the phenomenon, there is little appreciation of the huge volume of indecipherable legislation and regulation that is promulgated every day of every week of the year. His proposal to deal with this through regular PDA alerts is absurd. . . . In light of this, if anything in Susskind’s thesis can be given short shrift it is his prognostication that demand for “bespoke” or customized services will be in secular decline. Given modern trends in legislative and regulatory drafting, in particular the use of “creative ambiguity” as it’s been called, demand for custom services will only increase.

Nevertheless, I predict Susskind’s work on The Future of the Professions will get a similarly warm reception from “Legal Rebels.” The narrative of lawyers’ obsolescence is just too tempting for those who want to pay attorneys less, reduce their professional independence from the demands of capital, or simply replace legal regulation of certain activities with automated controls.

However, even quite futuristic academics are not on board with the Susskindite singularitarianism of robo-lawyering via software Solons. The more interesting conversations about automation and the professions will focus on bringing accountability to oft-opaque algorithmic processes. Let’s hope that the professions can maintain some autonomy from capital to continue those conversations–rather than guaranteeing their obsolescence as ever more obeisant cogs in profit-maximizing machines.

 


How CalECPA Improves on its Federal Namesake

Last week, Governor Brown signed the landmark California Electronic Communications Privacy Act[1] (CalECPA) into law, updating California privacy law for modern communications. Compared to ECPA, CalECPA requires warrants for more kinds of investigations, restricts those warrants more tightly, provides more notice to targets, and furnishes as remedies both court-ordered data deletion and statutory suppression. Moreover, CalECPA’s approach is comprehensive and uniform, eschewing the often irrational distinctions that have made ECPA one of the most confusing and under-protective privacy statutes in the Internet era.

Extended Scope, Enhanced Protections, and Simplified Provisions

CalECPA regulates investigative methods that ECPA did not anticipate. Under CalECPA, government entities in California must obtain a warrant based on probable cause before they may access electronic communications contents and metadata from service providers or from devices. ECPA makes no mention of device-stored data, even though law enforcement agents increasingly use StingRays to obtain information directly from cell phones. CalECPA subjects such techniques to its warrant requirement. While the Supreme Court’s recent decision in Riley v. California required that agents either obtain a warrant or rely on an exception to the warrant requirement to search a cell phone incident to arrest, CalECPA requires a warrant for physical access to any device that “stores, generates, or transmits electronic information in electronic form,” not just a cell phone. CalECPA also clearly defines the exceptions to its warrant requirement, specifying what counts as an emergency, who can give consent to the search of a device, and how related questions are resolved.

ECPA’s text, drafted in 1986, only arguably covers the compelled disclosure of location data stored by a service provider, and does not clearly require a warrant for such investigations. CalECPA explicitly includes location data in the “electronic communication information” that is subject to the warrant requirement when a government entity accesses it from either a device or a service provider (broadly defined). ECPA makes no mention of location data gathered in real time or prospectively, but CalECPA requires a warrant both for those investigations and for stored-data investigations. Whenever a government entity compels “the production of or access to” location information, including GPS data, from a service provider or from a device, CalECPA requires a warrant.

Read More

Air Traffic Control for Drones

Recently a man was arrested and jailed for a night after shooting a drone that hovered over his property. The man felt he was entitled (perhaps under peeping tom statutes?) to privacy from the (presumably camera-equipped) drone. Froomkin & Colangelo have outlined a more expansive theory of self-help:

[I]t is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great – or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights, and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.

On the other hand, the Fortune article covering the arrest reports:

In the view of drone lawyer Brendan Schulman and robotics law professor, Ryan Calo, home owners can’t just start shooting when they see a drone over their house. The reason is because the law frowns on self-help when a person can just call the police instead. This means that Meredith may not have been defending his house, but instead engaging in criminal acts and property damage for which he could have to pay.

I am wondering how we might develop a regulatory infrastructure to make either the self-help or police-help responses more tractable. Present resources seem inadequate. I don’t think the police would take me seriously if I reported a drone buzzing my windows in Baltimore—they have bigger problems to deal with. If I were to shoot it, it might fall on someone walking on the sidewalk below. And it appears deeply unwise to try to grab it to inspect its serial number.

Following on proposals for license plates for drones, I think we need to create a monitoring infrastructure to promote efficient and strict enforcement of the law here. Bloomberg reports that “At least 14 companies, including Google, Amazon, Verizon and Harris, have signed agreements with NASA to help devise the first air-traffic system to coordinate small, low-altitude drones, which the agency calls the Unmanned Aerial System Traffic Management.” I hope all drones are part of such a system, that they must be identifiable as to owner, and that they can be diverted into custody by responsible authorities once a credible report of lawbreaking has occurred. A minimal sketch of that idea appears below.
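
To make the proposal concrete, here is a minimal sketch, in Python, of the owner-identification layer such a traffic-management system might include: a registry keyed to each drone’s broadcast serial number, plus a way to log a credible report and flag the drone for diversion by a responsible authority. Everything here (the names, the data, the registry design itself) is a hypothetical illustration of the idea, not a description of NASA’s actual system.

# Hypothetical sketch of an owner-identification registry for drones.
# All names and data are illustrative, not any agency's real design.
from dataclasses import dataclass, field

@dataclass
class DroneRecord:
    serial_number: str                  # identifier the drone broadcasts
    owner: str                          # registered owner of record
    reports: list = field(default_factory=list)
    flagged_for_diversion: bool = False

class DroneRegistry:
    def __init__(self):
        self._records = {}              # serial number -> DroneRecord

    def register(self, serial_number: str, owner: str) -> None:
        self._records[serial_number] = DroneRecord(serial_number, owner)

    def identify_owner(self, serial_number: str) -> str:
        # The basic accountability question: whose drone is this?
        record = self._records.get(serial_number)
        return record.owner if record else "unregistered"

    def report_violation(self, serial_number: str, complaint: str) -> bool:
        # Log a credible report and flag the drone so that a responsible
        # authority, rather than a shotgun, handles the diversion.
        record = self._records.get(serial_number)
        if record is None:
            return False                # unregistered drones are the enforcement gap
        record.reports.append(complaint)
        record.flagged_for_diversion = True
        return True

registry = DroneRegistry()
registry.register("N-12345", "Acme Aerial Imaging LLC")
print(registry.identify_owner("N-12345"))
registry.report_violation("N-12345", "camera drone hovering over a private yard")

The design choice the sketch embodies is the point: identification and diversion run through a registry and a responsible authority, not through self-help.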

I know that this sort of regulatory vision is subject to capture. There is already misuse of state-level drone regulation to curtail investigative reporting on abusive agricultural practices. But in a “free-for-all” environment, the most powerful entities may capture drones with technology even more effectively than they capture legislators with lobbyists. I know that is a judgment call, and others will differ. I also have some hope that courts will strike down laws against using drones to report on matters of public interest, on First Amendment/free expression grounds.

The larger point is: we may well be at the cusp of a “this changes everything” moment with drones. Illah Reza Nourbakhsh’s book Robot Futures imagines the baleful consequences of modern cities saturated with butterfly-like drones, carrying either ads or products. Grégoire Chamayou’s A Theory of the Drone presents a darker vision, of omniveillance (and, eventually, forms of omnipotence, at least with respect to less technologically advanced persons) enabled by such machines. The present regulatory agenda needs to become more ambitious, since “black boxed” drone ownership and control creates a genuine Ring of Gyges problem.

Image Credit: Outtacontext.

Corporate Experimentation

Those interested in the Facebook emotional manipulation study should take a look at Michelle N. Meyer’s op-ed (with Christopher Chabris) today:

We aren’t saying that every innovation requires A/B testing. Nor are we advocating nonconsensual experiments involving significant risk. But as long as we permit those in power to make unilateral choices that affect us, we shouldn’t thwart low-risk efforts, like those of Facebook and OkCupid, to rigorously determine the effects of those choices. Instead, we should…applaud them.

Meyer offers more perspectives on the issue in her interview with Nicolas Terry and me on The Week in Health Law podcast.

For an alternative view, check out my take on “Facebook’s Model Users:”

[T]he corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility. That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as Google’s flu trends failures). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.
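
For readers who have never looked under the hood of A/B testing, here is a minimal, purely illustrative sketch in Python of how deterministic experiment assignment typically works; the experiment names are hypothetical and this is not Facebook’s actual code. It also hints at why uncoordinated, simultaneous experiments can muddy one another’s results.

# Illustrative sketch of deterministic A/B test assignment.
# Hypothetical experiment names; not any platform's real infrastructure.
import hashlib

def assign_bucket(user_id: str, experiment: str, variants: list) -> str:
    # Hashing (experiment, user_id) means a user always sees the same
    # variant of a given experiment, and each experiment shuffles users
    # independently of every other experiment.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

user = "user-42"
print(assign_bucket(user, "feed_emotion_study", ["control", "fewer_negative_posts"]))
print(assign_bucket(user, "ranking_tweak_7", ["control", "treatment"]))

# Independent hashing keeps the assignments statistically independent, but
# if two concurrent experiments alter the same outcome (say, the emotional
# tone of the feed), each one's measurements are contaminated by the other
# unless someone coordinates or reviews them. That is the gap the
# researcher quoted above describes.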

I just hope that, as A/B testing becomes more ubiquitous, we remain well aware of the power imbalances it both reflects and reinforces. Given the already well-documented resistance to an “experiment” on Montana politics, the power of big data firms to manipulate even the very political order that ostensibly regulates them may well be on the horizon.

Worker Replaceability: A Question of Values

One reason I decided to write on law practice technology was a general unease about the shape of debates over automation. Technologists and journalists tend to look at jobs from the outside, presume that they are routine, and predict they’ll be further routinized by machines. But some reality checks are important here.

As David Rotman observes, “there is not much evidence on how even today’s automation is affecting employment.” Many economists believe that technology will create more jobs than it destroys. MIT’s David Autor, writing for the Federal Reserve Bank of Kansas City’s economic policy symposium on “Reevaluating Labor Market Dynamics,” states that “journalists and expert commentators overstate the extent of machine substitution for human labor and ignore the strong complementarities”—in other words, the ways that automation can increase, rather than decrease, the value of human labor. Consider, for instance, voice recognition software: it may put transcriptionists out of work, but it increases the value of the labor of someone who can now have dictation transcribed 24 hours a day, rather than only when a transcriptionist is available. The selfie-stick may have a similar effect on cameramen and journalists. Legal tech may put some lawyers out of a job, while creating jobs for others.

It’s also easy to overestimate the scope of automation. Autor gives a sobering example of windshield repair:

Most automated systems lack flexibility—they are brittle. Modern automobile plants, for example, employ industrial robots to install windshields on new vehicles as they move through the assembly line. But aftermarket windshield replacement companies employ technicians, not robots, to install replacement windshields. Why not robots? Because removing a broken windshield, preparing the windshield frame to accept a replacement, and fitting a replacement into that frame demand far more real-time adaptability than any contemporary robot can approach.

The distinction between assembly line production and the in-situ repair highlights the role of environmental control in enabling automation. While machines cannot generally operate autonomously in unpredictable environments, engineers can in some cases radically simplify the environment in which machines work to enable autonomous operation.

Admittedly, the “society of control” scenario discussed here, or even milder versions of the “smart city,” may lead to far more controllable environments. But they also raise critical questions about privacy, fair data practices, and liberty.

There are also conflicts over values at stake in worker replacement. Frey & Osborne’s study The Future of Employment: How Susceptible Are Jobs to Computerisation? tries to rank 702 occupations by how likely they are to be automated. They characterize recreational therapists as least automatable, and title examiners and searchers as the second most automatable. But many video games offer forms of therapy, and therapeutic jobs (like masseur) and even higher-touch jobs could, in principle, be computerized. Furthermore, at least in the United States in the wake of MERS, there has been a loss of “confidence in real property recording systems.” Title insurance may hinge on legal questions that are still up in the air in certain states. Yes, further automation and recognition of things like MERS might “cut the Gordian knot,” but that solution would also inevitably trench on other values of legal regularity and due process.

In summary: automation anxieties could be as overblown now as they were in the 1960s. And the automation of each occupation, and of tasks within occupations, will inevitably create conflicts over values and social priorities. Far from being a purely technical question, robotization always implicates values. The future of automation is ours to master. Respecting workers, rather than assuming their replaceability, would be a great start.

Four Futures of Legal Automation

There are many gloom-and-doom narratives about the legal profession. One of the most persistent is “automation apocalypse.” In this scenario, computers will study past filings, determine what patterns of words work best, and then—poof!—software will eat the lawyer’s world.

Conditioned to be preoccupied by worst-case scenarios, many attorneys have panicked about robo-practitioners on the horizon. Meanwhile, experts differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%; others claim legal practice is fundamentally routinizable. I’ve recently co-authored an essay that helps explain why such radical uncertainty prevails.

While futurists affect the certainties of physicists, visions of society always reflect contestable political aspirations. Those predicting doom for future lawyers usually harbor ideological commitments that are not that friendly to lawyers of the present. Displacing the threat to lawyers onto machines (rather than, say, the decisionmakers who can give machines’ doings the legal effect of what was once done by qualified persons) is a way of not merely rationalizing, but also speeding up, the hoped-for demise of an adversary. Just as the debate over killer robots can draw attention away from the persons who design and deploy them, so too can the current controversy over robo-lawyering distract from the more important political and social trends that make automated dispute resolution so tempting to managers and bureaucrats.

It is easy to justify a decline in attorneys’ income or status by saying that software could easily do their work. It’s harder to explain why the many non-automatable aspects of current legal practice should be eliminated or uncompensated. That’s one reason why stale buzzwords like “disruption” crowd out serious reflection on the drivers of automation. A venture capitalist pushing robotic caregivers doesn’t want to kill investors’ buzz by reflecting on the economic forces promoting algorithmic selfhood. Similarly, #legaltech gurus know that a humane vision of legal automation, premised on software that increases quality and opportunities for professional judgment, isn’t an easy sell to investors keen on speed, scale, and speculation. Better instead to present lawyers as glorified elevator operators, replaceable with a sufficiently sophisticated user interface.

Our essay does not predict lawyers’ rise or fall. That may disappoint some readers. But our main point is to make the public conversation about the future of law a more open and honest one. Technology has shaped, and will continue to influence, legal practice. Yet its effect can be checked or channeled by law itself. Since different types of legal work are more or less susceptible to automation, and society can be more or less regulatory, we explore four potential future climates for the development of legal automation. We call them, in shorthand, Vestigial Legal Profession, Society of Control, Status Quo, and Second Great Compression. An abstract appears below.

Read More


Structuring US Law

In 2013, the U.S. House Office of the Law Revision Counsel released the Titles of the U.S. Code as “structured data” in XML. Previously, the law had been available only as ordinary text. Structuring the law as data allows for interesting visualizations of, and interactions with, the law that were not previously feasible, such as the following:

[Visualization: Force Directed Explorer app for the U.S. Code]

[Visualization: US Code Explorer app]

This post will discuss what it means for US law to be structured as data and why this has enabled increased analysis and visualization of the law. (You can read more about the visualizations above here and here.)

Structuring U.S. Law

The U.S. Code, the primary codification of federal statutory law, has always had an implicit structure. Now, however, it also has an explicit, machine-readable structure.
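
To give a sense of what an explicit, machine-readable structure makes possible, here is a short Python sketch that walks a Title’s XML and counts the sections in each chapter, the kind of traversal underlying the explorers pictured above. The element names (chapter, section, heading) follow the general shape of the House’s USLM schema, but treat the specific tag names and the file name as assumptions to be checked against the published schema and downloads.

# Sketch: traversing one Title of the U.S. Code released as XML.
# Tag names (chapter, section, heading) and the file name are assumptions
# based on the USLM schema; verify against the files from uscode.house.gov.
import xml.etree.ElementTree as ET
from collections import Counter

def sections_per_chapter(path: str) -> Counter:
    root = ET.parse(path).getroot()
    counts = Counter()
    # "{*}" matches any XML namespace (Python 3.8+), so the USLM
    # namespace URI need not be hard-coded here.
    for chapter in root.findall(".//{*}chapter"):
        heading = chapter.find("{*}heading")
        label = heading.text.strip() if heading is not None and heading.text else "(unnamed chapter)"
        counts[label] = len(chapter.findall(".//{*}section"))
    return counts

for chapter, n in sections_per_chapter("usc17.xml").most_common():
    print(f"{chapter}: {n} sections")

When the Code was available only as ordinary text, answering even this simple structural question meant scraping headings with regular expressions; with structured data, the hierarchy is simply there to be traversed, which is what makes visualizations like those above feasible.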

Read More

Europe Steps Up to the Challenge of Digital Competition Law

Two years ago U.S. authorities abandoned a critical case in digital antitrust. The EC now appears ready to fill the void:

The European Commission is said to be planning to charge Google with using its dominant position in online search to favor the company’s own services over others, in what would be one of the biggest antitrust cases here since regulators went after Microsoft. . . . If Europe is successful in making its case, the American tech giant could face a huge fine and be forced to alter its business practices to give smaller competitors like Yelp greater prominence in its search queries.

I applaud this move. As I’ve argued in The Black Box Society, antitrust law flirts with irrelevance if it fails to grapple with the dominance of massive digital firms. Europe has no legal or moral obligation to allow global multinationals to control critical information sources. Someone needs to be able to “look under the hood” and understand what is going on when competitors of Google’s many acquired firms plunge in general Google search results.

Google argues that its vast database of information and queries reveals user intentions and thus makes its search services demonstrably better than those of its rivals. But in doing so, it neutralizes the magic charm it has used for years to fend off regulators. “Competition is one click away,” chant the Silicon Valley antitrust lawyers when someone calls out a behemoth firm for unfair or misleading business practices. It’s not so. Alternatives are demonstrably worse, and likely to remain so as long as the dominant firms’ self-reinforcing data advantage grows. If EU authorities address that dynamic, they’ll be doing the entire world a service.

PS: For those interested in further reading about competition online:
Read More

Meet the New Boss…

One of the most persistent self-images of the Silicon Valley internet giants is that of liberators, emancipators, “disintermediators” who would finally free the creative class from the grip of oligopolistic music labels and duopolistic cable moguls. I chart the rise and fall of that narrative’s plausibility in Chapter 3 of my book. Cory Doctorow strikes another blow at it today:

[T]he competition for Youtube has all but vanished, meaning that they are now essential to any indie artist’s promotion strategy. And now that Youtube doesn’t have to compete with other services for access to artists’ materials, they have stopped offering attractive terms to indies — instead, they’ve become an arm of the big labels, who get to dictate the terms on which their indie competitors will have to do business.

Ah, but don’t worry: antitrust experts assure us that competition is just around the corner, any day now. Some nimble entrepreneur in a garage surely has the one to three million servers Google now deploys, can miraculously access Google’s accumulated data on organizing videos, and is just about to persuade all the current uploaders and viewers to switch. The folklore of digital capitalism is a dreamy affair.