Category: Cyberlaw


FAN (First Amendment News, Special Series #3) Newseum Institute Program on Apple-FBI Encryption Controversy Scheduled for June 15th


“The government [recently] dropped a bid to force Apple to bypass a convicted Brooklyn drug dealer’s pass code so it could read data on his phone.” — Government Technology, April 27, 2016

Headline: “Department of Justice drops Apple case after FBI cracks iPhone,” San Francisco Chronicle, March 28, 2016

The Newseum Institute has just announced its June 15th event on the Apple-FBI encryption controversy. Details of the upcoming event are set out below:

Date:  June 15th, 2016

Time: 3:00 p.m.

Location: Newseum: 555 Pennsylvania Ave NW, Washington, DC 20001

Register here (free but limited seating):

http://www.newseum.org/events-programs/rsvp1/

The event will be webcast live on the Newseum Institute’s site


“PEAR” v. THE UNITED STATES

The issues involved in the Apple cell phone controversy will be argued in front of a mock U.S. Supreme Court held at the Newseum as “Pear v. the United States.”

Experts in First Amendment law, cyber security, civil liberties and national security issues will make up the eight-member High Court, and legal teams will represent “Pear” and the government. The oral argument, supported by written briefs, will focus on those issues likely to reach the actual high court, from the power of the government to “compel speech” to the privacy expectations of millions of mobile phone users.

The Justices hearing the case at the Newseum:

  • As Chief Justice: Floyd Abrams, renowned First Amendment lawyer, author, and Visiting Lecturer at Yale Law School.
  • Harvey Rishikof, most recently dean of faculty at the National War College at the National Defense University and chair of the American Bar Association Standing Committee on Law and National Security
  • Nadine Strossen, former president of the American Civil Liberties Union and the John Marshall Harlan II Professor of Law at New York Law School
  • Linda Greenhouse, the Knight Distinguished Journalist in Residence and Joseph Goldstein Lecturer in Law at Yale Law School; long-time U.S. Supreme Court correspondent for The New York Times
  • Lee Levine, renowned media lawyer; adjunct Professor of Law at the Georgetown University Law Center
  • Stewart Baker, national security law and policy expert and former Assistant Secretary for Policy at the U.S. Department of Homeland Security
  • Stephen Vladeck, Professor of Law at American University Washington College of Law; nationally recognized expert on the role of the federal courts in the war on terrorism
  • The Hon. Robert S. Lasnik, senior judge of the U.S. District Court for the Western District of Washington

Lawyers arguing the case:

  • For Pear: Robert Corn-Revere has extensive experience in First Amendment law and communications, media and information technology law.
    • Co-counsel is Nan Mooney, writer and former law clerk to Chief Judge James Baker of the U.S. Court of Appeals for the Armed Forces.
  • For the U.S. government: Joseph DeMarco, who served from 1997 to 2007 as an Assistant United States Attorney for the Southern District of New York, specializes in issues involving information privacy and security, theft of intellectual property, computer intrusions, on-line fraud and the lawful use of new technology.
    • Co-counsel is Jeffrey Barnum, a lawyer and legal scholar specializing in criminal law and First Amendment law who argued United States v. Alaa Mohammad Ali before the U.S. Court of Appeals for the Armed Forces while in law school.

Each side will have 25 minutes to argue its position before the Court and an additional five minutes for follow-up comments. Following the session, there will be an opportunity for audience members to ask questions of the lawyers and court members.

The program is organized on behalf of the Newseum Institute by the University of Washington Law School’s Harold S. Shefelman Scholar Ronald Collins and by Nan Mooney.


FAN (First Amendment News, Special Series #2) FBI to Continue Working with Hackers to Fight Terrorism . . . & Crime?


The F.B.I. defended its hiring of a third party to break into an iPhone used by a gunman in last year’s San Bernardino, Calif., mass shooting, telling some skeptical lawmakers on Tuesday that it needed to join with partners in the rarefied world of for-profit hackers as technology companies increasingly resist their demands for consumer information. — New York Times, April 19, 2016

__________________

This is the second FAN installment concerning the ongoing controversy over national security and cell-phone privacy. As with the first installment, the legal focus here is on First Amendment issues. It is against that backdrop that the Newseum Institute in Washington, D.C. will host a public event on June 15, 2016.

I am pleased to be working with Gene Policinski (the chief operating officer of the Newseum Institute) and Nan Mooney (a D.C. lawyer and former law clerk to Chief Judge James Baker of the U.S. Court of Appeals for the Armed Forces) in organizing the event.

Information concerning that upcoming event is set out below, but first a few news items.

Recent News Items

“FBI Director James Comey said the U.S. paid more than he will make in salary over the rest of his term to secure a hacking tool to break into a mobile phone used by a dead terrorist in the San Bernardino . . . . The law enforcement agency paid ‘more than I will make in the remainder of this job, which is 7 years and 4 months,’ Comey said . . . at the Aspen Security Forum in London. . . . Comey’s pay this year is $185,100, according to federal salary tables, indicating the tool cost the agency more than $1.3 million. FBI directors are appointed to 10-year terms.”

“[Ms. Amy Hess, the Federal Bureau of Investigation’s executive assistant director for science and technology,] did not answer directly when asked about whether there were ethical issues in using third-party hackers but said the bureau needed to review its operation ‘to make sure that we identify the risks and benefits.’ The F.B.I. has been unwilling to say whom it paid to demonstrate a way around the iPhone’s internal defenses, or how much, and it has not shown Apple the technique.”

“Bruce Sewell, Apple’s general counsel, told a House commerce oversight subcommittee that the company already works with law enforcement regularly and would help develop the FBI’s capability to decrypt technology itself, but won’t open ‘back doors’ to its iPhones due to the security risk that would pose to all users. . . . What the FBI wants, Hess said, is ‘that when we present an order, signed by an independent federal judge, that (tech companies) comply with that order and provide us with the information in readable form.’ How they do that is up to them, she said.”

“The leaders of the Senate Intelligence Committee have introduced a bill that would mandate those receiving a court order in an encryption case to provide “intelligible information or data” or the “technical means to get it” — in other words, a key to unlock secured data. “I call it a ‘follow the rule of law bill,’ because that’s what it does: It says nobody’s exempt from a court order issued by a judge on the bench,” said Committee Chairman Richard Burr, a North Carolina Republican. The top Democrat on the committee, California’s Dianne Feinstein, is a co-sponsor.”

Senate Bill Introduced

Here are a few excerpts from the proposed Senate Bill:

(1) GENERAL. Notwithstanding any other provision of law and except as provided in paragraph (2), a covered entity that receives a court order from a government for information or data shall —

(A) provide such information or data to such government in an intelligible format; or

(B) provide such technical assistance as is necessary to obtain such information or data in an intelligible format or to achieve the purpose of the court order.

(2) SCOPE OF REQUIREMENT. A covered entity that receives a court order referred to in paragraph (1)(A) shall be responsible only for providing data in an intelligible format if such data has been made unintelligible by a feature, product, or service owned, controlled, created, or provided, by the covered entity or by a third party on behalf of the covered entity.

(3) COMPENSATION FOR TECHNICAL ASSISTANCE. . . .

(b) DESIGN LIMITATIONS. Nothing in this Act shall be construed to authorize any government officer to require or prohibit any specific design or operating system to be adopted by any covered entity.

(4) DEFINITIONS . . . .

Non-Terrorist Crimes & Demands for Cell-Phone Access

Upcoming: Newseum Institute Moot Court Event


FAN (First Amendment News, Special Series) Newseum Institute to Host Event on Cell Phone Privacy vs National Security Controversy


Starting today and continuing through mid-June, I will post a special series of occasional blogs related to the Apple iPhone national security controversy and the ongoing debate surrounding it, even after the FBI gained access to the phone used by the terrorist gunman in the December shooting in San Bernardino, California.


This special series is done in conjunction with the Newseum Institute and a major program the Institute will host on June 15, 2016 in Washington, D.C.

I am pleased to be working with Gene Policinski (the chief operating officer of the Newseum Institute) and Nan Mooney (a D.C. lawyer and former law clerk to Chief Judge James Baker of the U.S. Court of Appeals for the Armed Forces) in organizing the event.

The June 15th event will be a moot court with seven Supreme Court Justices and two counsel for each side. The focus will be on the First Amendment issues raised in the case. (See below re links to the relevant legal documents).

→ Save the Date: Wednesday, June 15, 2016 @ 2:00 p.m., Newseum, Washington, D.C. (more info forthcoming).

The Apple-FBI clash was the first significant skirmish — and probably not much more than that — of the Digital Age conflicts we’re going to see in this century around First Amendment freedoms, privacy, data aggregation and use, and even the extent of religious liberty. As much as the eventual outcome, we need to get the tone right, from the start — freedom over simple fear. –Gene Policinski

Newseum Institute Moot Court Event

It remains a priority for the government to ensure that law enforcement can obtain crucial digital information to protect national security and public safety, either with cooperation from relevant parties, or through the court system when cooperation fails. –Melanie Newman (spokeswoman for Justice Department, 3-28-16)

As of this date, the following people have kindly agreed to participate as Justices for a seven-member Court:

The following two lawyers have kindly agreed to serve as the counsel (2 of 4) who will argue the matter:

→ Two additional Counsel to be selected.  

Nan Mooney and I will say more about both the controversy and the upcoming event in the weeks ahead in a series of special editions of FAN. Meanwhile, below is some relevant information, which will be updated regularly.

Apple vs FBI Director James Comey

President Obama’s Statement

Congressional Hearing

Documents


Last Court Hearing: 22 March 2016, before Judge Sheri Pym

Podcast

Video

News Stories & Op-Eds


  1. Pierre Thomas & Mike Levine, “How the FBI Cracked the iPhone Encryption and Averted a Legal Showdown With Apple,” ABC News, March 29, 2016
  2. Bruce Schneier, “Your iPhone just got less secure. Blame the FBI,” Washington Post, March 29, 2016
  3. Katie Benner & Eric Lichtblau, “U.S. Says It Has Unlocked Phone Without Help From Apple,” New York Times, March 28, 2016
  4. John Markoff, Katie Benner & Brian Chen, “Apple Encryption Engineers, if Ordered to Unlock iPhone, Might Resist,” New York Times, March 17, 2016
  5. Jesse Jackson, “Apple Is on the Side of Civil Rights,” Time, March 17, 2016
  6. Katie Benner & Eric Lichtblau, “Apple and Justice Dept. Trade Barbs in iPhone Privacy Case,” New York Times, March 15, 2016
  7. Kim Zetter, “Apple and Justice Dept. Trade Barbs in iPhone Privacy Case,” Wired, March 15, 2016
  8. Alina Selyukh, “Apple On FBI iPhone Request: ‘The Founders Would Be Appalled,‘” NPR, March 15, 2016
  9. Howard Mintz, “Apple takes last shot at FBI’s case in iPhone battle,” San Jose Mercury News, March 15, 2016
  10. Russell Brandom & Colin Lecher, “Apple says the Justice Department is using the law as an ‘all-powerful magic wand‘,” The Verge, March 15, 2016
  11. Adam Segal & Alex Grigsby, “3 ways to break the Apple-FBI encryption deadlock,” Washington Post, March 14, 2016
  12. Seung Lee, “Former White House Official Says NSA Could Have Cracked Apple-FBI iPhone Already,” Newsweek, March 14, 2016
  13. Tim Bajarin, “The FBI’s Fight With Apple Could Backfire,” PC, March 14, 2016
  14. Alina Selyukh, “U.S. Attorneys Respond To Apple In Court, Call Privacy Concerns ‘A Diversion’,” NPR, March 10, 2016
  15. Dan Levine, “San Bernardino victims to oppose Apple on iPhone encryption,” Reuters, Feb. 22, 2016
  16. “Apple, The FBI And iPhone Encryption: A Look At What’s At Stake,” NPR, Feb. 17, 2016

The Fragility of Desire

In his excellent new book Exposed, Harcourt offers a seductive analysis of the role of desire in what he calls the “expository society” of the digital age. We are not characters in Orwell’s 1984, or prisoners of Bentham’s Panopticon, but rather enthusiastic participants in a “mirrored glass pavilion” that is addictive and mesmerizing. Harcourt offers a detailed picture of this pavilion and also shows us the seamy side of our addiction to it. Recovery from this addiction, he argues, requires acts of disobedience, but therein lies the great dilemma and paradox of our age: revolution requires desire, not duty, and our desires are what have ensnared us.

I think this is both a welcome contribution and a misleading diagnosis.

There have been many critiques of consent-based privacy regimes as enabling, rather than protecting, privacy. The underlying tenor of many of these critiques is that consent fails as a regulatory tool because it is too difficult to make consent truly informed. Harcourt’s emphasis on desire shows why there is a deeper problem than this: our participation in the platforms that surveil us is rooted in something deeper than misinformed choice. And this makes the “what to do?” question all the more difficult to answer. Even those of us who see a stronger role for law than Harcourt outlines in this book (I agree with Ann Bartow’s comments on this) should pause here. Canada, for example, has strong private sector data protection laws with oversight from excellent provincial and federal privacy commissioners. And yet these laws are heavily consent-based. Such laws are able to shift practices to a stronger emphasis on things like opt-in consent, but Harcourt leaves us with a disquieting sense that this might just be an example of a Pyrrhic victory, legitimizing surveillance through our attempts to regulate it because we still have not grappled with the more basic problem of the seduction of the mirrored glass pavilion.

The problem with Harcourt’s position is that, in exposing this aspect of the digital age in order to complicate our standard surveillance tropes, he risks ignoring other sources of complexity that are also important for both diagnosing the problem and outlining a path forward.

Desire is not always the reason that people participate in new technologies. As Ann Bartow and Olivier Sylvain point out, people do not always have a choice about their participation in the technologies that track us. The digital age is not an amusement park we can choose to go to or to boycott, but deeply integrated into our daily practices and needs, including the ways in which we work, bank, and access government services.

But even when we do actively choose to use these tools, it is not clear that desire captures the why of all such choices. If we willingly enter Harcourt’s mirrored glass pavilion, it is sometimes because of some of its very useful properties — the space- and time-bending nature of information technology. For example, Google Calendar is incredibly convenient because multiple people can access shared calendars from multiple devices in multiple locations at different times, making the coordination of calendars easy. This is not digital lust, but digital convenience.

These space- and time-bending properties of information technology are important for understanding the contours of the public/private nexus of surveillance that so characterizes our age. Harcourt does an excellent job at pointing out some of the salient features of this nexus, describing a “tentacular oligarchy” where private and public institutions are bound together in state-like “knots of power,” with individuals passing back and forth between these institutions. But what is strange in Harcourt’s account is that this tentacular oligarchy still appears to be bounded by the political borders of the US. It is within those borders that the state and the private sector have collapsed together.

What this account misses is the fact that information technology has helped to unleash a global private sector that is not bounded by state borders. In this emerging global private sector large multinational corporations often operate as “metanationals” or stateless entities. The commercial logic of information is that it should cross political borders with ease and be stored wherever it makes the most economic sense.

Consider some of the rhetoric surrounding the e-commerce chapter of the recent TPP agreement. The Office of the US Trade Representative indicates that one of its objectives is to keep the Internet “free and open,” which it has pursued through rules that favour cross-border data flows and prevent data localization. It is easy to see how this idea of “free” might be confused with political freedom, for an activist in an oppressive regime is better able to exercise freedom of speech when that speech can cross political borders or when the details of their communications can be stored in a location beyond the reach of their state. A similar rationale has been offered by some in the current Apple encryption debate — encryption protects American business people communicating within China, and we can see why that is important.

But this idea of freedom is the freedom of a participant in a global private sector with weak state control; freedom from the state control of oppressive regimes also involves freedom from the state protection of democratic regimes.

If metanationals pursue a state-free agenda, the state pursues an agenda of rights-protectionism. By rights protectionism I mean the claim that states do, and should, protect the constitutional rights of their own citizens and residents but not others. Consider, for example, a Canadian citizen who resides in Canada and uses US cloud computing. That person could be communicating entirely with other Canadians in other Canadian cities and yet have all of their data stored in the US-based cloud. If the US authorities wanted access to that data, the US constitution would not apply to regulate that access in a rights-protecting manner because the Canadian is a non-US person.

Many see this result as flowing from the logic of the Verdugo-Urquidez case. Yet that case concerned a search that occurred in a foreign territory (Mexico), rather than within the US, where the law of that territory continued to apply. The Canadian constitution does not apply to acts of officials within the US. The data at issue falls into a constitutional black hole where no constitution applies (and maybe even an international human rights black hole, according to some US interpretations of extraterritorial obligations). States can then collect information within this black hole free of the usual liberal-democratic constraints and share it with other allies, a situation Snowden documented within the EU and likened to a “European bazaar” of surveillance.

Rights protectionism is not rights protection when information freely crosses political boundaries and state power piggybacks on top of this crossing and exploits it.

This is not a tentacular oligarchy operating within the boundaries of one state, but a series of global alliances – between allied states and between states and metanationals who exert state-like power – exploiting the weaknesses of state-bound law.

We are not in this situation simply because of a penchant for selfies. But to understand the full picture we do need to look beyond “ourselves” and get the global picture in view. We need to understand the ways in which our legal models fail to address these new realities and even help to mask and legitimize the problems of the digital age through tools and rhetoric that are no longer suitable.

Lisa Austin is an Associate Professor at the University of Toronto Faculty of Law.


The 5 Things Every Privacy Lawyer Needs to Know about the FTC: An Interview with Chris Hoofnagle

The Federal Trade Commission (FTC) has become the leading federal agency regulating privacy and data security. The scope of its power is vast – it covers the majority of commercial activity – and it has been bringing enforcement actions in these areas for decades. An FTC civil investigative demand (CID) will send shivers down the spine of even the largest of companies, and the FTC typically requires a 20-year period of assessments to settle a case.

To many, the FTC remains opaque and somewhat enigmatic. The reason, ironically, might not be because there is too little information about the FTC but because there is so much. The FTC has been around for 100 years!

In a landmark new book, Professor Chris Hoofnagle of Berkeley Law School synthesizes an enormous volume of information about the FTC and sheds tremendous light on the FTC’s privacy activities. His book is called Federal Trade Commission Privacy Law and Policy (Cambridge University Press, Feb. 2016).

This is a book that all privacy and cybersecurity lawyers should have on their shelves. It is the most comprehensive scholarly discussion of the FTC’s activities in these areas, and it also delves deep into the FTC’s history and activities in other areas to provide much-needed context for understanding how the agency functions and reasons in privacy and security cases.


Is Eviction-as-a-Service the Hottest New #LegalTech Trend?

Some legal technology startups are struggling nowadays, as venture capitalists pull back from a saturated market. The complexity of the regulatory landscape is hard to capture in a Silicon Valley slide deck. Still, there is hope for legal tech’s “idealists.” A growing firm may bring eviction technology to struggling neighborhoods around the country:

Click Notices . . . integrates its product with property management software, letting landlords set rules for when to begin evictions. For instance, a landlord could decide to file against every tenant that owes $25 or more on the 10th of the month. Once the process starts, the Click Notices software, which charges landlords flat fees depending on local court costs, sends employees or subcontractors to represent the landlord in court (attorneys aren’t compulsory in many eviction cases).
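The rule described in that passage is, at bottom, a simple threshold check run against a rent ledger on a given day of the month. Here is a minimal, hypothetical Python sketch of that kind of logic; the field names, dollar threshold, and filing day are illustrative assumptions, not details of the actual Click Notices product:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Tenant:
        unit: str
        balance_owed: float  # unpaid rent, in dollars

    # Hypothetical landlord-configured rule mirroring the example in the quote:
    # file against every tenant owing $25 or more, starting on the 10th of the month.
    MIN_BALANCE = 25.00
    FILING_DAY = 10

    def tenants_to_file(tenants, today):
        """Return the tenants who meet the filing rule on the given date."""
        if today.day < FILING_DAY:
            return []
        return [t for t in tenants if t.balance_owed >= MIN_BALANCE]

    roster = [Tenant("Unit 1A", 336.00), Tenant("Unit 2B", 10.00)]
    print([t.unit for t in tenants_to_file(roster, date(2016, 5, 10))])  # ['Unit 1A']

The point is not the code itself but how thin the layer is between a landlord’s settings and an automatic court filing.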

I can think of few better examples of Richard Susskind’s vision for the future of law. As one Baltimore tenant observes, the automation of legal proceedings can lead to near-insurmountable advantages for landlords:

[Click Notices helped a firm that] tried to evict Dinickyo Brown over $336 in unpaid rent. Brown, who pays $650 a month for a two-bedroom apartment in Northeast Baltimore, fought back, arguing the charges arose after she complained of mold. The landlord dropped the case, only to file a fresh eviction action—this time for $290. “They drag you back and forth to rent court, and even if you win, it goes onto your record,” says Brown, who explains that mold triggers her epilepsy. “If you try to rent other properties or buy a home, they look at your records and say: You’ve been to rent court.”

And here’s what’s truly exciting for #legaltech innovators: the digital reputation economy can interact synergistically with the new eviction-as-a-service approach. Tenant blacklists can ensure that merely trying to fight an eviction leads to devastating consequences in the future. Imagine the investment returns for a firm that owned both the leading eviction-as-a-service platform in a city and the leading tenant blacklist. Capture about 20 of the US’s top MSAs, and we may well be talking unicorn territory.

As we learned during the housing crisis, the best place to implement legal process outsourcing is against people who have a really hard time fighting back. That may trouble old-school lawyers who worry about ever-faster legal processes generating errors, deprivations of due process, or worse. But the legal tech community tends to think about these matters in financialized terms, not fusty old concepts like social justice or autonomy. I sense they will celebrate eviction-as-a-service as one more extension of technologized ordering of human affairs into a profession whose “conservatism” they assume to be self-indicting.

Still, even for them, caution is in order. Brett Scott’s skepticism about fintech comes to mind:

[I]f you ever watch people around automated self-service systems, they often adopt a stance of submissive rule-abiding. The system might appear to be ‘helpful’, and yet it clearly only allows behaviour that agrees to its own terms. If you fail to interact exactly correctly, you will not make it through the digital gatekeeper, which – unlike the human gatekeeper – has no ability or desire to empathise or make a plan. It just says ‘ERROR’. . . . This is the world of algorithmic regulation, the subtle unaccountable violence of systems that feel no solidarity with the people who have to use it, the foundation for the perfect scaled bureaucracy.

John Danaher has even warned of the possible rise of “algocracy.” And Judy Wajcman argues that “Futuristic visions based on how technology can speed up the world tend to be inherently conservative.” As new legal technology threatens to further entrench power imbalances between creditors and debtors, landlords and tenants, the types of feudalism Bruce Schneier sees in the security landscape threaten to overtake far more than the digital world.

(And one final note. Perhaps even old-school lawyers can join Paul Gowder’s praise for a “parking ticket fighting” app, as a way of democratizing advocacy. It reminds me a bit of TurboTax, which democratized access to tax preparation. But we should also be very aware of exactly how TurboTax used its profits when proposals to truly simplify the experience of tax prep emerged.)

Hat Tip: To Sarah T. Roberts, for alerting me to the eviction story.

The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely envision at the time the rise of cost-benefit analysis, and comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this Spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial. 


Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide. 

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools are logical extensions of extant technologies of ranking, sorting, and evaluating, and they raise fundamental challenges to the rule of law:

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithm developed by university researchers uses a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context.

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . .What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection. 

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 


We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)


Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.


Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions


Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not only “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.


For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 


Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 


And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Ann Franks) on systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.


Exploration and Exploitation – Ideas from Business and Computer Science

One of the key reasons I joined GA Tech and the Scheller College of Business is that I tend to draw on the technology and business literatures, and GA Tech is a great place for both. My current paper, Exploration and Exploitation: An Essay on (Machine) Learning, Algorithms, and Information Provision, draws on both of these literatures. A key work on the idea of exploration versus exploitation in the business literature is James G. March, Exploration and Exploitation in Organizational Learning, 2 ORG. SCI. 71 (1991), which as far as I can tell has not been picked up in the legal literature. A good follow-up to that paper is Anil K. Gupta, Ken Smith, and Christina Shalley, The Interplay Between Exploration and Exploitation, 49 ACAD. MGMT. J. 693 (2006). I had come upon the issue as a computer science question when working on a draft of my paper Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding. That paper was part of my thoughts on artificial intelligence, algorithms, and the law. In the end, the material did not fit there, but it fits the new work. And as I have started to connect with folks in the machine learning group at GA Tech, I have been able to press on how this idea comes up in technology and computer science. The paper has benefitted from feedback from Danielle Citron, James Grimmelmann, and Peter Swire. I also offer many thanks to the Loyola University Chicago Law Journal. The paper started as a short piece (I think I wanted to stay at about five to eight thousand words), but as it evolved, the editors were most gracious in letting me use an asynchronous editing process to reach the final word count of roughly 18,000 words.

I think the work speaks to general issues of information provision and also applies to current issues regarding the way news and online competition work. As one specific matter, I take on the idea of serendipity, which I think “is a seductive, overstated idea. Serendipity works because of relevancy.” I offer the idea of salient serendipity to clarify what type of serendipity matters. The abstract is below.

Abstract:
Legal and regulatory understandings of information provision miss the importance of the exploration-exploitation dynamic. This Essay argues that this is a mistake and seeks to bring that perspective to the debate about information provision and competition. A general, ongoing problem for an individual or an organization is whether to stay with a familiar solution to a problem or try new options that may yield better results. Work in organizational learning describes this problem as the exploration-exploitation dilemma. Understanding and addressing that dilemma has become a key part of machine learning, an algorithmic approach to computation, as it is applied to information provision. In simplest terms, even if one achieves success with one path, failure to try new options means one will be stuck in a local equilibrium while others find paths that yield better results and displace one’s original success. This dynamic indicates that an information provider has to provide new options and information to users, because a provider must learn and adapt to users’ changing interests in both the type of information they desire and how they wish to interact with information.

Put differently, persistent concerns about the way in which news reaches users (the so-called “filter bubble” concern) and the way in which online shopping information is found (a competition concern) can be understood as market failures regarding information provision. The desire seems to be to ensure that new information reaches people, because that increases the potential for new ideas, new choices, and new action. Although these desired outcomes are good, current criticisms and related potential solutions misunderstand the nature of information users and especially information provision, and miss an important point. Both information users and providers sort and filter as a way to enable better learning, and learning is an ongoing process that requires continual changes to succeed. From an exploration-exploitation perspective, a user or an incumbent may remain isolated or offer the same information provision, but neither will learn. In that case, whatever short-term success either enjoys is likely to face leapfrogging by those who experiment through exploration and exploitation.
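For readers who want to see the dilemma in miniature, here is a short, self-contained Python sketch of an epsilon-greedy multi-armed bandit, a standard textbook way that machine-learning systems balance exploration and exploitation. The option names and payoff rates are hypothetical illustrations, not anything drawn from the paper itself.

    import random

    def epsilon_greedy(payoff_rates, n_rounds=1000, epsilon=0.1, seed=0):
        """Explore with probability epsilon; otherwise exploit the option
        with the best observed average reward."""
        rng = random.Random(seed)
        options = list(payoff_rates)
        totals = {o: 0.0 for o in options}  # cumulative reward per option
        counts = {o: 0 for o in options}    # number of times each option was tried

        for _ in range(n_rounds):
            untried = [o for o in options if counts[o] == 0]
            if untried:
                choice = rng.choice(untried)      # try every option at least once
            elif rng.random() < epsilon:
                choice = rng.choice(options)      # explore: sample a random option
            else:
                choice = max(options, key=lambda o: totals[o] / counts[o])  # exploit
            reward = 1.0 if rng.random() < payoff_rates[choice] else 0.0
            totals[choice] += reward
            counts[choice] += 1
        return counts

    # A provider that only exploits the "familiar" option would never learn
    # that "novel_a" pays off more often.
    print(epsilon_greedy({"familiar": 0.5, "novel_a": 0.6, "novel_b": 0.3}))

Sticking only with the familiar option locks the learner into a local equilibrium, which is exactly the risk the abstract describes for an information provider that stops offering new options.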


MLAT – Not a Muscle Group, Nonetheless Potentially Powerful

MLAT. I encountered this somewhat obscure thing (Mutual Legal Assistance Treaty) when I was in practice and needed to serve someone in Europe. I recall that it was a cumbersome process, and I remember thinking I was happy we did not seem to have to use it often (in fact, only that one time). Today, however, as my colleagues Peter Swire and Justin Hemmings argue in their paper, Stakeholders in Reform of the Global System for Mutual Legal Assistance, the MLAT process is quite important.

In simplest terms, if a criminal investigation in, say, France needs an email that is stored in the U.S.A., the French authorities ask their U.S. counterparts for aid. If the U.S. agency that processes the request agrees there is a legal basis for it, it and other groups seek a court order. If that is granted, the order is presented to the company. Once records are obtained, there is further review to ensure “compliance with U.S. law.” Then the records go to France. As Swire and Hemmings note, the process averages 10 months. For a civil case that is long; for criminal cases it is not workable. And as the authors put it, “the once-unusual need for an MLAT request becomes routine for records that are stored in the cloud and are encrypted in transit.”

Believe it or not, this touches on major Internet governance issues. The slowness and the new needs are fueling calls for having the ITU govern the Internet and access-to-evidence issues (a model, according to the paper, favored by Russia and others). Simpler but important ideas, such as increased calls for data localization, also flow from the difficulties the paper identifies. As the paper details, the players–non-U.S. governments, the U.S. government, tech companies, and civil society groups–each have their own goals and perspectives on the issue.

So for those interested in Internet governance, privacy, law enforcement, and multi-stakeholder processes, the MLAT process and this paper on it offer a great high-level view of the many factors at play, both for a specific topic and for larger, related ones.