

MLAT – Not a Muscle Group, Nonetheless Potentially Powerful

MLAT. I encountered this somewhat obscure thing (the Mutual Legal Assistance Treaty) when I was in practice and needed to serve someone in Europe. I recall that the process was cumbersome, and that I was glad we rarely seemed to need it (in fact, we used it only that once). Today, however, as my colleagues Peter Swire and Justin Hemmings argue in their paper, Stakeholders in Reform of the Global System for Mutual Legal Assistance, the MLAT process is quite important.

In simplest terms, if a criminal investigation in, say, France needs an email stored in the U.S., the French authorities ask their U.S. counterparts for aid. If the U.S. agency that processes the request agrees there is a legal basis for it, that agency and other groups seek a court order. If the order is granted, it is presented to the company. Once records are obtained, there is further review to ensure “compliance with U.S. law.” Only then do the records go to France. As Swire and Hemmings note, the process averages ten months. For a civil case that is long; for a criminal case it is unworkable. And as the authors put it, “the once-unusual need for an MLAT request becomes routine for records that are stored in the cloud and are encrypted in transit.”

Believe it or not, this issue touches on major Internet governance questions. The slowness of the process, and the new needs it fails to meet, are fueling calls to have the ITU govern the Internet and access-to-evidence issues (a model, according to the paper, favored by Russia and others). Simpler but important ideas, such as increased calls for data localization, also flow from the difficulties the paper identifies. As the paper details, the players (non-U.S. governments, the U.S. government, tech companies, and civil society groups) each have their own goals and perspectives on the issue.

So for those interested in Internet governance, privacy, law enforcement, and multi-stakeholder processes, the MLAT process, and this paper on it, offer a great high-level view of the many factors at play, both in this specific topic and in larger, related ones.


Not Found, Forbidden, or Censored? New Error Code 451 May Help Figure It Out

When UK ISPs blocked access to the Pirate Bay following a court order, the standard 403 “Forbidden” error code appeared, but a new standard will let users know that a site is inaccessible for legal reasons. According to the Verge, Tim Bray proposed the idea more than three years ago. The number may ring a bell: it is a nod to Bradbury’s Fahrenheit 451. There are some “process bits” to go through before full approval, but developers can start implementing the code now. As the Verge explains, the code is voluntary. Nonetheless:

If implemented widely, Bray’s new code should help prevent the confusion around blocked sites, but it’s only optional and requires web developers to adopt it. “It is imaginable that certain legal authorities may wish to avoid transparency, and not only forbid access to certain resources, but also disclosure that the restriction exists,” explains Bray.

It might be interesting to track how often the code is used and the reactions to it.

Here is the text of how the code is supposed to work:

This status code indicates that the server is denying access to the
resource as a consequence of a legal demand.

The server in question might not be an origin server. This type of
legal demand typically most directly affects the operations of ISPs
and search engines.

Responses using this status code SHOULD include an explanation, in
the response body, of the details of the legal demand: the party
making it, the applicable legislation or regulation, and what classes
of person and resource it applies to. For example:

HTTP/1.1 451 Unavailable For Legal Reasons
Link: <https://spqr.example.org/legislatione>; rel="blocked-by"
Content-Type: text/html

<html>
 <head><title>Unavailable For Legal Reasons</title></head>
 <body>
  <h1>Unavailable For Legal Reasons</h1>
  <p>This request may not be serviced in the Roman Province
  of Judea due to the Lex Julia Majestatis, which disallows
  access to resources hosted on servers deemed to be
  operated by the People's Front of Judea.</p>
 </body>
</html>
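To make the mechanics concrete, here is a minimal sketch, using only Python’s standard library, of a server that answers requests for a blocked path with a 451 response and a “blocked-by” Link header, as the specification recommends. The blocked path, legal-demand URL, and explanatory text below are invented placeholders, not anything drawn from the spec.

from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_PATHS = {"/forbidden-fruit"}  # hypothetical legally blocked resources

class LegalBlockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in BLOCKED_PATHS:
            # Per the spec, the body SHOULD explain the legal demand.
            body = (b"<html><body><h1>Unavailable For Legal Reasons</h1>"
                    b"<p>Blocked pursuant to a (hypothetical) court order.</p>"
                    b"</body></html>")
            self.send_response(451, "Unavailable For Legal Reasons")
            # rel="blocked-by" identifies the party implementing the block.
            self.send_header("Link",
                             '<https://example.org/legal-demand>; rel="blocked-by"')
        else:
            body = b"<html><body><p>Hello.</p></body></html>"
            self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), LegalBlockHandler).serve_forever()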


Complicating the Narrative of Legal Automation

Richard Susskind has been predicting “the end of lawyers” for years, and he has doubled down in a recent book coauthored with his son (The Future of the Professions). That book is so sweeping in its claims (that all professions are on a path to near-complete automation) that it should actually come as a bit of a relief for lawyers. If everyone is doomed to redundancy, law can’t be a particularly bad career choice. To paraphrase Monty Python: nobody expects the singularity.

On the other hand, experts on the professions are offering some cautions about the Susskinds’ approach. Howard Gardner led off an excellent issue of Daedalus on the professions about ten years ago. He offers this verdict on the Susskinds’ perfunctory response to objections to their position:

In a section of their book called “Objections,” they list the principal reasons why others might take issue with their analyses, predictions, and celebratory mood. This list of counter-arguments to their critique includes the trustworthiness of professionals; the moral limits of unregulated markets; the value of craft; the importance of empathy and personal interactions; and the pleasure and pride derived from carrying out what they term ‘good work.’ With respect to each objection, the Susskinds give a crisp response.

I was disappointed with this list of objections, each followed by refutation. For example, countering the claim that one needs extensive training to become an expert, the Susskinds call for the reinstatement of apprentices, who can learn ‘on the job.’ But from multiple studies in cognitive science, we know that it takes approximately a decade to become an expert in any domain—and presumably that decade includes plenty of field expertise. Apprentices cannot magically replace well-trained experts. In another section, countering the claim that we need to work with human beings whom we can trust, they cite the example of the teaching done online via Khan Academy. But Khan Academy is the brainchild of a very gifted educator who in fact has earned the trust of many students and indeed of many teachers; it remains to be seen whether online learning à la Khan suffices to help individuals—either professionals or their clients—make ‘complex technical and ethical decisions under conditions of uncertainty.’ The Susskinds recognize that the makers and purveyors of apps may have selfish or even illegal goals in mind. But as they state, “We recognize that there are many online resources that promote and enable a wide range of offenses. We do not underestimate their impact of threat, but they stand beyond the reach of this book” (p. 233).

Whether or not one goes along with specific objections and refutations, another feature of the Susskinds’ presentation should give one pause. The future that they limn seems almost entirely an exercise in rational deduction and accordingly devoid of historical and cultural considerations.

Experts with a bit more historical perspective differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%. Even the highly cited study by Carl Frey and Michael Osborne (The Future of Employment: How Susceptible Are Jobs to Computerisation?) placed attorneys in the “low risk” category when it comes to replacement by software and robots. They suggest paralegals are in much more danger.

But empirical research by economist James Bessen has complicated even that assumption: “Since the late 1990s, electronic document discovery software for legal proceedings has grown into a billion dollar business doing work done by paralegals, but the number of paralegals has grown robustly.” Like MIT’s David Autor, Bessen calls automation a job creator, not a job destroyer. “The idea that automation kills jobs isn’t true historically,” Steve Lohr reports, and it is still dubious. The real question is whether we reinforce policies designed to promote software and robotization that complement current workers’ skills, or slip into a regime of deskilling and substitution.
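Bessen’s point is easy to miss, so a toy calculation may help: if software makes each document far cheaper to review, and cheaper review leads firms to review many more documents, total human hours can rise even as hours per document collapse. The numbers below are invented for illustration; they are not Bessen’s data.

# Invented numbers, for illustration only (not Bessen's data).
hours_per_doc_before = 0.50      # manual review
hours_per_doc_after = 0.05       # with e-discovery software (10x faster)

docs_reviewed_before = 10_000    # review was costly, so little was reviewed
docs_reviewed_after = 150_000    # cheaper review expands demand 15x

print(hours_per_doc_before * docs_reviewed_before)   # 5000.0 paralegal hours
print(hours_per_doc_after * docs_reviewed_after)     # 7500.0 paralegal hours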

A Review of The Black Box Society

I just learned of this very insightful and generous review of my book, by Raizel Liebler:

The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press 2015) is an important book, not only for those interested in privacy and data, but also anyone with larger concerns about the growing tension between transparency and trade secrets, and the deceptiveness of pulling information from the ostensibly objective “Big Data.” . . .

One of the most important aspects of The Black Box Society builds on the work of Siva Vaidhyanathan and others to write about how relying on the algorithms of search impacts people’s lives. Through our inability to see how Google, Facebook, Twitter, and other companies display information, it can seem as if these displays are in some way “objective.” But they are not. Between various stories about blocking pictures of breastfeeding moms, blocking links to competing sites, obscuring sources, and not creating tools to prevent harassment, companies are making choices. As Pasquale puts it: “at what point does a platform have to start taking responsibility for what its algorithms do, and how their results are used? These new technologies affect not only how we are understood, but also how we understand. Shouldn’t we know when they’re working for us, against us, or for unseen interests with undisclosed motives?”

I was honored to be mentioned on the TLF blog (a highly recommended venue!). Here’s a list of some other reviews in English (I have yet to compile the ones in other languages, but I was very happy to see the French edition get some attention earlier this fall). And here’s an interesting take on one of those oft-black-boxed systems: Google Maps.

Highly Recommended: Chamayou’s A Theory of the Drone

Earlier this year, I read a compelling analysis of drone warfare, Grégoire Chamayou’s A Theory of the Drone. It is an unusual and challenging book, of interest to policymakers and philosophers, engineers and attorneys alike. As I begin a review of it:

At what point do would-be reformers of the law and ethics of war slide into complicity with a morally untenable status quo? When is the moralization of force a prelude for the ration­alization of slaughter? Grégoire Chamayou’s penetrating recent book, A Theory of the Drone, raises these uncomfortable questions for lawyers and engineers both inside and out of the academy. Chamayou, a French philosopher, dissects legal academics’ arguments for targeted killing by unmanned vehicles. He also criticizes university research programs purporting to engineer ethics for the autonomous weapons systems they view as the inevitable future of war. Writing from a tradition of critical theory largely alien to both engineering and law, he raises concerns that each discipline should address before it continues to develop procedures for the automation of war.

As with the automation of law enforcement, advocacy, and finance, the automation of war has many unintended consequences. Chamayou helps us discern its proper limits.

Image Credit: 1924 idea for police automaton.

From Territorial to Functional Governance

Susan Crawford is one of the leading global thinkers on digital infrastructure. Her brilliant book Captive Audience spearheaded a national debate on net neutrality. She helped convince the Federal Communications Commission to better regulate big internet service providers. And her latest intervention, on Uber, is a must-read. Crawford worries that Uber will rapidly monopolize urban ride services. It has repeatedly tried to avoid regulation and taxes. And while it may offer a good deal to drivers and riders now, there is no guarantee it will in the future.

A noted critic of the sharing economy, Tom Slee, has backed up Crawford’s concerns, from an international perspective. “For a smallish city in Canada, what happens to accountability when faced with a massive American company with little interest in Canadian employment law or Canadian traditions?”, Slee asks, raising a very deep point about the nature of governance. What happens to a city when its government’s responsibilities are slowly disaggregated, functionally? Some citizens may want to see the effective governance of paid rides via Uber, of spare rooms via AirBnB, and so on. A full privatization of city governance awaits, from water to sidewalks.

If you’re concerned about that, you may find my recent piece on the sharing economy of interest. We’ll also be discussing this and similar issues at Northeastern’s conference “Tackling the Urban Core Puzzle.” Transitions from territorial to functional governance will be critical topics of legal scholarship in the coming decade.

Law’s Nostradamus

The ABA Journal “Legal Rebels” page has promoted Richard Susskind’s work (predicting the future automation of much of what lawyers do) as “required reading.” It is a disruptive take on the legal profession. But disruption has been having a tough time as a theory lately. So I was unsurprised to find this review, by a former General Counsel of DuPont Canada Inc., of Susskind’s The End of Lawyers?:

Susskind perceives a lot of routine in the practice of law . . . which he predicts will gradually become the domain of non-professional or quasi-professional workers. In this respect his prediction is about two or three decades too late. No substantial law firm, full service or boutique, can survive without a staff of skilled paralegal specialists and the trend in this direction has been ongoing since IT was little more than a typewriter and a Gestetner duplicating machine. . . .

Law is not practiced in a vacuum. It is not merely a profession devoted to preparing standard forms or completing blanks in precedents. And though he pays lip service to the phenomenon, there is little appreciation of the huge volume of indecipherable legislation and regulation that is promulgated every day of every week of the year. His proposal to deal with this through regular PDA alerts is absurd. . . . In light of this, if anything in Susskind’s thesis can be given short shrift it is his prognostication that demand for “bespoke” or customized services will be in secular decline. Given modern trends in legislative and regulatory drafting, in particular the use of “creative ambiguity” as it’s been called, demand for custom services will only increase.

Nevertheless, I predict Susskind’s work on The Future of the Professions will get a similarly warm reception from “Legal Rebels.” The narrative of lawyers’ obsolescence is just too tempting for those who want to pay attorneys less, reduce their professional independence from the demands of capital, or simply replace legal regulation of certain activities with automated controls.

However, even quite futuristic academics are not on board with the Susskindite singularitarianism of robo-lawyering via software Solons. The more interesting conversations about automation and the professions will focus on bringing accountability to oft-opaque algorithmic processes. Let’s hope that the professions can maintain some autonomy from capital to continue those conversations, rather than guaranteeing their obsolescence as ever more obeisant cogs in profit-maximizing machines.

 


How CalECPA Improves on its Federal Namesake

Last week, Governor Brown signed the landmark California Electronic Communications Privacy Act (CalECPA) into law, updating California privacy law for modern communications. Compared to ECPA, CalECPA requires warrants, and more tightly restricted ones, for more investigations; provides more notice to targets; and furnishes as remedies both court-ordered data deletion and statutory suppression. Moreover, CalECPA’s approach is comprehensive and uniform, eschewing the often irrational distinctions that have made ECPA one of the most confusing and under-protective privacy statutes of the Internet era.

Extended Scope, Enhanced Protections, and Simplified Provisions

CalECPA regulates investigative methods that ECPA did not anticipate. Under CalECPA, government entities in California must obtain a warrant based on probable cause before they may access electronic communications contents and metadata from service providers or from devices. ECPA makes no mention of device-stored data, even though law enforcement agents increasingly use StingRays to obtain information directly from cell phones. CalECPA subjects such techniques to its warrant requirement. While the Supreme Court’s recent decision in Riley v. California required that agents either obtain a warrant or rely on an exception to the warrant requirement to search a cell phone incident to arrest, CalECPA requires a warrant for physical access to any device that “stores, generates, or transmits electronic information in electronic form,” not just a cell phone. CalECPA clearly defines the exceptions to the warrant requirement by specifying what counts as an emergency, who can consent to the search of a device, and related questions.

ECPA’s 1986-drafted text only arguably covers the compelled disclosure of location data stored by a service provider, and it does not clearly require a warrant for such investigations. CalECPA explicitly includes location data in the “electronic communication information” subject to the warrant requirement when a government entity accesses it from either a device or a service provider (broadly defined). ECPA makes no mention of location data gathered in real time or prospectively, but CalECPA requires a warrant both for those investigations and for stored-data investigations. Whenever a government entity compels “the production of or access to” location information, including GPS data, from a service provider or from a device, CalECPA requires a warrant.


Air Traffic Control for Drones

Recently a man was arrested and jailed for a night after shooting a drone that hovered over his property. The man felt he was entitled (perhaps under peeping tom statutes?) to privacy from the (presumably camera-equipped) drone. Froomkin & Colangelo have outlined a more expansive theory of self-help:

[I]t is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great – or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights, and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.

On the other hand, the Fortune article reports:

In the view of drone lawyer Brendan Schulman and robotics law professor Ryan Calo, homeowners can’t just start shooting when they see a drone over their house. The reason is that the law frowns on self-help when a person can just call the police instead. This means that Meredith may not have been defending his house, but instead engaging in criminal acts and property damage for which he could have to pay.

I am wondering how we might develop a regulatory infrastructure to make either the self-help or police-help responses more tractable. Present resources seem inadequate. I don’t think the police would take me seriously if I reported a drone buzzing my windows in Baltimore—they have bigger problems to deal with. If I were to shoot it, it might fall on someone walking on the sidewalk below. And it appears deeply unwise to try to grab it to inspect its serial number.

Following on work on license plates for drones, I think that we need to create a monitoring infrastructure to promote efficient and strict enforcement of law here. Bloomberg reports that “At least 14 companies, including Google, Amazon, Verizon and Harris, have signed agreements with NASA to help devise the first air-traffic system to coordinate small, low-altitude drones, which the agency calls the Unmanned Aerial System Traffic Management.” I hope all drones are part of such a system, that they must be identifiable as to owner, and that they can be diverted into custody by responsible authorities once a credible report of lawbreaking has occurred.
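As a very rough sketch of what owner identification plus “credible report” handling could look like in software, consider the toy registry below. The serial numbers, owner names, and three-complaint threshold are all hypothetical, and a real system would need authentication, due process, and far more.

from collections import defaultdict

# Hypothetical registry: broadcast serial number -> registered owner.
REGISTRY = {"DRN-0001": "Acme Delivery LLC"}

# Log of complaints filed against each serial number.
complaints = defaultdict(list)

def report_drone(serial: str, description: str) -> str:
    """Record a complaint and decide what enforcement step follows."""
    owner = REGISTRY.get(serial)
    if owner is None:
        return f"{serial}: unregistered drone; refer to enforcement."
    complaints[serial].append(description)
    if len(complaints[serial]) >= 3:  # invented 'credible report' threshold
        return f"{serial}: divert into custody (owner: {owner})."
    return f"{serial}: complaint logged against {owner}."

print(report_drone("DRN-0001", "hovering at a bedroom window"))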

I know that this sort of regulatory vision is subject to capture. There is already misuse of state-level drone regulation to curtail investigative reporting on abusive agricultural practices. But in a “free-for-all” environment, the most powerful entities may more effectively create technology to capture drones than they deploy lobbyists to capture legislators. I know that is a judgment call, and others will differ. I also have some hope that courts will strike down laws against using drones for reporting of matters of public interest, on First Amendment/free expression grounds.

The larger point is: we may well be at the cusp of a “this changes everything” moment with drones. Illah Reza Nourbakhsh’s book Robot Futures imagines the baleful consequences of modern cities saturated with butterfly-like drones, carrying either ads or products. Grégoire Chamayou’s A Theory of the Drone presents a darker vision, of omniveillance (and, eventually, forms of omnipotence, at least with respect to less technologically advanced persons) enabled by such machines. The present regulatory agenda needs to become more ambitious, since “black boxed” drone ownership and control creates a genuine Ring of Gyges problem.

Image Credit: Outtacontext.

Corporate Experimentation

Those interested in the Facebook emotional manipulation study should take a look at Michelle N. Meyer’s op-ed (with Christopher Chabris) today:

We aren’t saying that every innovation requires A/B testing. Nor are we advocating nonconsensual experiments involving significant risk. But as long as we permit those in power to make unilateral choices that affect us, we shouldn’t thwart low-risk efforts, like those of Facebook and OkCupid, to rigorously determine the effects of those choices. Instead, we should…applaud them.

Meyer offers more perspectives on the issue in her interview with Nicolas Terry and me on The Week in Health Law podcast.

For an alternative view, check out my take on “Facebook’s Model Users:”

[T]he corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility. That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as Google’s flu trends failures). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.
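To see why researchers worried about overlapping experiments, consider the standard trick of hashing a user ID into an experiment arm; the experiment names below are invented, not Facebook’s. When two experiments bucket users independently, roughly a quarter of users land in both treatment groups at once, confounding each study’s measurements. That is exactly the kind of interference a standardized review process could catch.

import hashlib

def bucket(user_id: str, experiment: str, arms: int = 2) -> int:
    """Deterministically assign a user to one arm of an experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % arms

# Two experiments assigning independently: a given user can be
# "treated" by both at once, muddying each experiment's results.
user = "user-12345"
print(bucket(user, "feed-ranking-tweak"))    # arm in experiment A
print(bucket(user, "emotion-wording-test"))  # arm in experiment B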

I just hope that, as A/B testing becomes more ubiquitous, we remain well aware of the power imbalances it both reflects and reinforces. Given the already well-documented resistance to an “experiment” on Montana politics, it’s clear that big data firms’ power to manipulate even the very political order that ostensibly regulates them may well be on the horizon.