Category: Technology


More Science Cheer – Microscopes for Everyone!

The New Yorker has a nice piece about Manu Prakash and his work on the Foldscope, a portable, paper-based microscope that costs about one dollar. As the author pointed out, the whole thing can be put into “a nine-by-twelve-inch envelope.” Here are the details:

The paper is printed with botanical illustrations and perforated with several shapes, which can be punched out and, with a series of origami-style folds, woven together into a single unit. The end result is about the size of a bookmark. The lens—a speck of plastic, situated in the center—provides a hundred and forty times magnification. The kit includes a second lens, of higher magnification, and a set of stick-on magnets, which can be used to attach the Foldscope to a smartphone, allowing for easy recording of a sample with the phone’s camera. I put my kit together in fifteen minutes, and when I popped the lens into place it was with the satisfaction of spreading the wings of a paper crane.

The Foldscope performs most of the functions of a high-school lab microscope, but its parts cost less than a dollar.

So what? So Prakash and his colleagues are trying to deploy the device around the world to expand how people gather and share data about the world. Folks use the device and can also go to “Foldscope Explore, a Web site where recipients of the kits can share photos, videos, and commentary. A plant pathologist in Rwanda uses the Foldscope to study fungi afflicting banana crops. Maasai children in Tanzania examine bovine dung for parasites. An entomologist in the Peruvian Amazon has happened upon an unidentified species of mite. One man catalogues pollen; another tracks his dog’s menstrual cycle.”

These seemingly far-ranging interests thus connect to what Brett Frischmann, Mike Madison, and Kathy Strandburg have been studying: a knowledge commons. Just within Prakash’s interest in “biomimicry—understanding how and why certain organisms work so well, and using that knowledge to build new tools,” the project increases the ability to know about “Plants, insects, tiny bugs under the sink, bacteria,” that do amazing things. New species can be identified, and so the project creates thousands of eyes not only for Prakash’s work but for others in the field.

As I read the article and the details of low-cost tech being used around the world for a variety of problems that locals identified, I thought of the way FabLabs and the work of Neil Gershenfeld have approached and supported the maker movement. And as I went on, I found out that Prakash did his work with Gershenfeld’s Center for Bits and Atoms at MIT. Can you say school of thought?

Prakash’s group is looking for ways to aid in early detection of disease and water contamination using low-cost technology. At the same time, the world may be re-experiencing the wonder of the first tools that pushed our ability to understand the world. As the article described, Prakash and Jim Cybulski (then Prakash’s student, now chief collaborator on the project) were in Nigeria studying malaria. They met with young students, caught a mosquito “that was feeding on one of the children and mounted it on a paper slide, which they inserted into the Foldscope.” The student looked at the slide and

“For the first time, he realized this was his blood, and this little proboscis is how it feeds on his blood,” Prakash said. “To make that connection—that literally this is where disease passes on, with this blood, his blood—was an absolutely astounding moment.” The exercise had its intended effect. The boy said, “I really should sleep under a bed net.”

Scale-and-change-the-world technology can be small, simple, and accessible. Folks who press the practical and tee up the skills and tools to learn and dream of bigger things are part of an ongoing season of giving that I dig. Happy holidays to all.


Authentic Brands

What is authentic? The question seems to pop up in many areas. If a company or corporation claims authenticity, I am sure several folks I know would have a reflexive reaction that such a claim is absurd. Nonetheless, the Economist notes that “Authenticity” is being peddled as a cure for drooping brands. One part of the article notes that despite the ongoing difficulties in valuing brands, “when brands are sold as part of corporate takeovers, what price do investors put on them? They found that these prices, as a percentage of deals’ total value, have dropped since 2003. So, at least for those firms being taken over, the strength of their brands is becoming a smaller share of their overall worth.” That is interesting insofar as it suggests that 1) brand value (and goodwill in that sense) can be measured and 2) that it has gone down.

What is driving the change? A key thing I have tried to show is that information and search costs are not as high as they used to be, and that change brings into question many aspects of trademark law and policy. The Economist seems to agree and puts it this way:

It is not hard to see why the old marketing magic is fading, in an age in which people can instantly learn truths (and indeed untruths) about the things they are contemplating buying. Online reviews and friends’ comments on social media help consumers see a product’s underlying merits and demerits, not the image that its makers are trying to build around it. The ease of accessing information makes consumers more likely to abandon their habitual brands because they have heard about something new, or learned that retailers’ own-label products are much the same, except cheaper. Depending on your perspective, people are either increasingly fickle or ever more impermeable to marketing bullshit. For brands that lack any truly distinguishing features, that is bad news.

Better information and new sources of it change the legal and brand landscape. Plus an old problem–trying to sell essentially the same goods–has returned. As Spencer Waller and I noted, “From the end of the nineteenth century to the middle of the twentieth century to today, companies have had to find ways to compete over selling essentially the same goods and manage excess production capacity.” So it is not surprising that the sectors most hit by the change The Economist discusses are consumer goods and imported goods that no longer differ from other, lower-cost options of the same or nearly the same quality.

So can a corporation be authentic? If a corporation is slinging its authenticity with Keebler Elves and Santa Claus in Coke Red, that is a harder sell. Those plays claim authenticity based on cultural history, and maybe that is a done deal in that sense (as Spencer and I discussed, the history of firms using events and education to build a sense of community and identity is old). But insofar as craft brewing, locally made goods, and customized offerings claim authenticity, those may fit the claim, so long as the claim is that the item is not from a firm of a certain size or a firm somehow to be distrusted because of size, for Scalia was correct in Citizens United that firms of many sizes can be corporations. Assuming small and personal is a sort of authenticity, where I am not sure The Economist is correct is in its example of Apple. The newspaper offers

for those firms that get the product right and have a genuine story to tell, the rewards can still be huge. The textbook example of this is Apple, whose devices’ superior design and ease of use make it a powerful brand in a commoditised market. Last year it had only 6% of the revenues in the personal-computer market, but 28% of the profits. That’s real authenticity.

If getting the product “right” is the key, then the competition is about old-school “my goods and services are better quality than yours.” If the story is also key, then we have to start asking whether Apple’s claims are accurate or myth-making “bullshit,” as the Economist might say. I like Apple products, as they fit my needs. I buy them despite the over-claimed, we-are-all-tech-saviors rubbish they sling. It is authentic as long as authentic here means 100% Silicon Valley hubris. So pure it …


MLAT – Not a Muscle Group, Nonetheless Potentially Powerful

MLAT. I encountered this somewhat obscure thing (Mutual Legal Assistance Treaty) when I was in practice and needed to serve someone in Europe. I recall that it was a cumbersome process and thinking that I was happy we did not seem to have to use it often (in fact, only that one time). Today, however, as my colleagues Peter Swire and Justin Hemmings argue in their paper, Stakeholders in Reform of the Global System for Mutual Legal Assistance, the MLAT process is quite important.

In simplest terms, if a criminal investigation in, say, France needs an email and it is stored in the U.S.A., the French authorities ask the U.S. ones for aid. If the U.S. agency that processes the request agrees there is a legal basis for it, that agency and other groups seek a court order. If that is granted, the order is presented to the company. Once records are obtained, there is further review to ensure “compliance with U.S. law.” Then the records go to France. As Swire and Hemmings note, the process averages 10 months. For a civil case that is long, but for criminal cases it is not workable. And as the authors put it, “the once-unusual need for an MLAT request becomes routine for records that are stored in the cloud and are encrypted in transit.”

Believe it or not, this issue touches on major Internet governance issues. The slowness and the new needs are fueling calls for having the ITU govern the Internet and access to evidence issues (a model according to the paper favored by Russia and others). Simpler but important ideas such as increased calls for data localization also flow from the difficulties the paper identifies. As the paper details, the players–non-U.S. governments, the U.S. government, tech companies, and civil society groups–each have goals and perspectives on the issue.

So for those interested in Internet governance, privacy, law enforcement, and multi-stakeholder processes, the MLAT process and this paper on it offer a great high-level view of the many factors at play, both for a specific topic and for larger, related issues.


Not Found, Forbidden, or Censored? New Error Code 451 May Help Figure It Out

When UK sites blocked access to the Pirate Bay following a court order, the standard 403 error code for “Forbidden” appeared, but a new standard will let users know that a site is not accessible for legal reasons. According to the Verge, Tim Bray proposed the idea more than three years ago. The number may ring a bell. It is a nod to Bradbury’s Fahrenheit 451. There are some “process bits” to go through before full approval, but developers can start to implement it now. As the Verge explains, the code is voluntary. Nonetheless

If implemented widely, Bray’s new code should help prevent the confusion around blocked sites, but it’s only optional and requires web developers to adopt it. “It is imaginable that certain legal authorities may wish to avoid transparency, and not only forbid access to certain resources, but also disclosure that the restriction exists,” explains Bray.

It might be interesting to track how often the code is used and the reactions to it.
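As a rough illustration of how such tracking might be done (a sketch under assumptions: the function names and any hosts are hypothetical, and the Link header shape follows Bray’s proposal, not any particular deployment), a client could request a page and, on a 451 response, pull the blocking authority out of the Link header:

```python
# Sketch of a 451 probe. parse_blocked_by and probe are hypothetical
# helper names; the Link header format follows the proposed standard.
from http.client import HTTPConnection

def parse_blocked_by(link_values):
    """Return the blocked-by target from a list of Link header values, or None."""
    for value in link_values:
        if 'rel="blocked-by"' in value:
            # Header form: <https://authority.example/order>; rel="blocked-by"
            return value.split(";")[0].strip().strip("<>")
    return None

def probe(host, path="/"):
    """GET a path over plain HTTP; report (status, blocking_authority_or_None)."""
    conn = HTTPConnection(host, timeout=10)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        links = resp.headers.get_all("Link") or []
        return resp.status, (parse_blocked_by(links) if resp.status == 451 else None)
    finally:
        conn.close()
```

Run against a list of sites, a script like this could log how many blocks announce themselves as legal rather than hiding behind a generic 403.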

Here is the text of how the code is supposed to work:

This status code indicates that the server is denying access to the
resource as a consequence of a legal demand.

The server in question might not be an origin server. This type of
legal demand typically most directly affects the operations of ISPs
and search engines.

Responses using this status code SHOULD include an explanation, in
the response body, of the details of the legal demand: the party
making it, the applicable legislation or regulation, and what classes
of person and resource it applies to. For example:

HTTP/1.1 451 Unavailable For Legal Reasons
Link: <https://spqr.example.org/legislatione>; rel="blocked-by"
Content-Type: text/html

<html>
<head><title>Unavailable For Legal Reasons</title></head>
<body>
<h1>Unavailable For Legal Reasons</h1>
<p>This request may not be serviced in the Roman Province
of Judea due to the Lex Julia Majestatis, which disallows
access to resources hosted on servers deemed to be
operated by the People’s Front of Judea.</p>
</body>
</html>
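For the curious, serving a response like that takes only a few lines. Here is a minimal sketch using Python’s standard library (the blocked path and the blocking-authority URL are placeholders I made up for illustration, not anything from the proposal):

```python
# Minimal sketch of a server answering 451 as the proposed standard
# describes. BLOCKED_PATHS and the Link URL are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_PATHS = {"/censored-resource"}  # resources under a legal demand

class LegalBlockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in BLOCKED_PATHS:
            body = (b"<html><body><h1>Unavailable For Legal Reasons</h1>"
                    b"<p>Blocked by a legal demand.</p></body></html>")
            self.send_response(451, "Unavailable For Legal Reasons")
            # The proposal suggests identifying the blocking party via Link.
            self.send_header("Link",
                             '<https://authority.example/order>; rel="blocked-by"')
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

Calling HTTPServer(("localhost", 8451), LegalBlockHandler).serve_forever() would start it; a GET for /censored-resource then comes back as 451 with the rel="blocked-by" link, while any other path is served normally.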

Complicating the Narrative of Legal Automation

Richard Susskind has been predicting “the end of lawyers” for years, and has doubled down in a recent book coauthored with his son (The Future of the Professions). That book is so sweeping in its claims—that all professions are on a path to near-complete automation—that it should actually come as a bit of a relief for lawyers. If everyone’s doomed to redundancy, law can’t be a particularly bad career choice. To paraphrase Monty Python: nobody expects the singularity.

On the other hand, experts on the professions are offering some cautions about the Susskinds’ approach. Howard Gardner led off an excellent issue of Daedalus on the professions about ten years ago. He offers this verdict on the Susskinds’ perfunctory response to objections to their position:

In a section of their book called “Objections,” they list the principal reasons why others might take issue with their analyses, predictions, and celebratory mood. This list of counter-arguments to their critique includes the trustworthiness of professionals; the moral limits of unregulated markets; the value of craft; the importance of empathy and personal interactions; and the pleasure and pride derived from carrying out what they term ‘good work.’ With respect to each objection, the Susskinds give a crisp response.

I was disappointed with this list of objections, each followed by refutation. For example, countering the claim that one needs extensive training to become an expert, the Susskinds call for the reinstatement of apprentices, who can learn ‘on the job.’ But from multiple studies in cognitive science, we know that it takes approximately a decade to become an expert in any domain—and presumably that decade includes plenty of field expertise. Apprentices cannot magically replace well-trained experts. In another section, countering the claim that we need to work with human beings whom we can trust, they cite the example of the teaching done online via Khan Academy. But Khan Academy is the brainchild of a very gifted educator who in fact has earned the trust of many students and indeed of many teachers; it remains to be seen whether online learning à la Khan suffices to help individuals—either professionals or their clients—make ‘complex technical and ethical decisions under conditions of uncertainty.’ The Susskinds recognize that the makers and purveyors of apps may have selfish or even illegal goals in mind. But as they state, “We recognize that there are many online resources that promote and enable a wide range of offenses. We do not underestimate their impact of threat, but they stand beyond the reach of this book” (p. 233).

Whether or not one goes along with specific objections and refutations, another feature of the Susskinds’ presentation should give one pause. The future that they limn seems almost entirely an exercise in rational deduction and accordingly devoid of historical and cultural considerations.

Experts with a bit more historical perspective differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%. Even the highly cited study by Carl Frey and Michael Osborne (The Future of Employment: How Susceptible Are Jobs to Computerisation?) placed attorneys in the “low risk” category when it comes to replacement by software and robots. They suggest paralegals are in much more danger.

But empirical research by economist James Bessen has complicated even that assumption: “Since the late 1990s, electronic document discovery software for legal proceedings has grown into a billion dollar business doing work done by paralegals, but the number of paralegals has grown robustly.” Like MIT’s David Autor, Bessen calls automation a job creator, not a job destroyer. “The idea that automation kills jobs isn’t true historically,” Steve Lohr reports, and it is still dubious. The real question is whether we reinforce policies designed to promote software and robotization that complement current workers’ skills, or slip into a regime of deskilling and substitution.

A Review of The Black Box Society

I just learned of this very insightful and generous review of my book, by Raizel Liebler:

The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press 2015) is an important book, not only for those interested in privacy and data, but also anyone with larger concerns about the growing tension between transparency and trade secrets, and the deceptiveness of pulling information from the ostensibly objective “Big Data.” . . .

One of the most important aspects of The Black Box Society builds on the work of Siva Vaidhyanathan and others to write about how relying on the algorithms of search impacts people’s lives. Through our inability to see how Google, Facebook, Twitter, and other companies display information, it makes it seem like these displays are in some way “objective.” But they are not. Between various stories about blocking pictures of breastfeeding moms, blocking links to competing sites, obscuring sources, and not creating tools to prevent harassment, companies are making choices. As Pasquale puts it: “at what point does a platform have to start taking responsibility for what its algorithms do, and how their results are used? These new technologies affect not only how we are understood, but also how we understand. Shouldn’t we know when they’re working for us, against us, or for unseen interests with undisclosed motives?”

I was honored to be mentioned on the TLF blog–a highly recommended venue! Here’s a list of some other reviews in English (I have yet to compile the ones in other languages, but was very happy to see the French edition get some attention earlier this Fall). And here’s an interesting take on one of those oft-black-boxed systems: Google Maps.

Highly Recommended: Chamayou’s A Theory of the Drone

Earlier this year, I read a compelling analysis of drone warfare, Grégoire Chamayou’s A Theory of the Drone. It is an unusual and challenging book, of interest to policymakers and philosophers, engineers and attorneys alike. As I begin my review of it:

At what point do would-be reformers of the law and ethics of war slide into complicity with a morally untenable status quo? When is the moralization of force a prelude for the ration­alization of slaughter? Grégoire Chamayou’s penetrating recent book, A Theory of the Drone, raises these uncomfortable questions for lawyers and engineers both inside and out of the academy. Chamayou, a French philosopher, dissects legal academics’ arguments for targeted killing by unmanned vehicles. He also criticizes university research programs purporting to engineer ethics for the autonomous weapons systems they view as the inevitable future of war. Writing from a tradition of critical theory largely alien to both engineering and law, he raises concerns that each discipline should address before it continues to develop procedures for the automation of war.

As with the automation of law enforcement, advocacy, and finance, the automation of war has many unintended consequences. Chamayou helps us discern its proper limits.

Image Credit: 1924 idea for police automaton.

From Territorial to Functional Governance

Susan Crawford is one of the leading global thinkers on digital infrastructure. Her brilliant book Captive Audience spearheaded a national debate on net neutrality. She helped convince the Federal Communications Commission to better regulate big internet service providers. And her latest intervention–on Uber–is a must-read. Crawford worries that Uber will rapidly monopolize urban ride services. It’s repeatedly tried to avoid regulation and taxes. And while it may offer a good deal to drivers and riders now, there is no guarantee it will in the future.

A noted critic of the sharing economy, Tom Slee, has backed up Crawford’s concerns, from an international perspective. “For a smallish city in Canada, what happens to accountability when faced with a massive American company with little interest in Canadian employment law or Canadian traditions?”, Slee asks, raising a very deep point about the nature of governance. What happens to a city when its government’s responsibilities are slowly disaggregated, functionally? Some citizens may want to see the effective governance of paid rides via Uber, of spare rooms via AirBnB, and so on. A full privatization of city governance awaits, from water to sidewalks.

If you’re concerned about that, you may find my recent piece on the sharing economy of interest. We’ll also be discussing this and similar issues at Northeastern’s conference “Tackling the Urban Core Puzzle.” Transitions from territorial to functional governance will be critical topics of legal scholarship in the coming decade.

Law’s Nostradamus

The ABA Journal “Legal Rebels” page has promoted Richard Susskind’s work (predicting the future automation of much of what lawyers do) as “required reading.” It is a disruptive take on the legal profession. But disruption has been having a tough time as a theory lately. So I was unsurprised to find this review, by a former General Counsel of DuPont Canada Inc., of Susskind’s The End of Lawyers?:

Susskind perceives a lot of routine in the practice of law . . . which he predicts will gradually become the domain of non-professional or quasi-professional workers. In this respect his prediction is about two or three decades too late. No substantial law firm, full service or boutique, can survive without a staff of skilled paralegal specialists and the trend in this direction has been ongoing since IT was little more than a typewriter and a Gestetner duplicating machine. . . .

Law is not practiced in a vacuum. It is not merely a profession devoted to preparing standard forms or completing blanks in precedents. And though he pays lip service to the phenomenon, there is little appreciation of the huge volume of indecipherable legislation and regulation that is promulgated every day of every week of the year. His proposal to deal with this through regular PDA alerts is absurd. . . . In light of this, if anything in Susskind’s thesis can be given short shrift it is his prognostication that demand for “bespoke” or customized services will be in secular decline. Given modern trends in legislative and regulatory drafting, in particular the use of “creative ambiguity” as it’s been called, demand for custom services will only increase.

Nevertheless, I predict Susskind’s work on The Future of the Professions will get a similarly warm reception from “Legal Rebels.” The narrative of lawyers’ obsolescence is just too tempting for those who want to pay attorneys less, reduce their professional independence from the demands of capital, or simply replace legal regulation of certain activities with automated controls.

However, even quite futuristic academics are not on board with the Susskindite singularitarianism of robo-lawyering via software Solons. The more interesting conversations about automation and the professions will focus on bringing accountability to oft-opaque algorithmic processes. Let’s hope that the professions can maintain some autonomy from capital to continue those conversations–rather than guaranteeing their obsolescence as ever more obeisant cogs in profit-maximizing machines.



How CalECPA Improves on its Federal Namesake

Last week, Governor Brown signed the landmark California Electronic Communications Privacy Act (CalECPA) into law and updated California privacy law for modern communications. Compared to ECPA, CalECPA requires warrants, which are more restricted, for more investigations; provides more notice to targets; and furnishes as remedies both court-ordered data deletion and statutory suppression. Moreover, CalECPA’s approach is comprehensive and uniform, eschewing the often irrational distinctions that have made ECPA one of the most confusing and under-protective privacy statutes in the Internet era.

Extended Scope, Enhanced Protections, and Simplified Provisions

CalECPA regulates investigative methods that ECPA did not anticipate. Under CalECPA, government entities in California must obtain a warrant based on probable cause before they may access electronic communications contents and metadata from service providers or from devices. ECPA makes no mention of device-stored data, even though law enforcement agents increasingly use StingRays to obtain information directly from cell phones. CalECPA subjects such techniques to its warrant requirement. While the Supreme Court’s recent decision in Riley v. California required that agents either obtain a warrant or rely on an exception to the warrant requirement to search a cell phone incident to arrest, CalECPA requires a warrant for physical access to any device, not just a cell phone, that “stores, generates, or transmits electronic information in electronic form.” CalECPA clearly defines the exceptions to the warrant requirement by specifying what counts as an emergency, who can give consent to the search of a device, and related questions.

ECPA’s 1986-drafted text only arguably covers the compelled disclosure of location data stored by a service provider, and does not clearly require a warrant for such investigations. CalECPA explicitly includes location data in the “electronic communication information” that is subject to the warrant requirement when a government entity accesses it from either a device or a service provider (broadly defined). ECPA makes no mention of location data gathered in real time or prospectively, but CalECPA requires a warrant both for those investigations and for stored-data investigations. Whenever a government entity compels “the production of or access to” location information, including GPS data, from a service provider or from a device, CalECPA requires a warrant.

Read More