Category: Cyberlaw


Predictive Policing and Technological Due Process

Police departments have been increasingly crunching data to identify criminal hot spots and to allocate policing resources to address them. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals. Might governmental systems someday assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” unworthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems make decisions about our property (perhaps licenses denied due to a poor risk score), liberty (watch-list designations leading to liberty intrusions), and life (who knows with drones in the picture), due process concerns would be implicated.

What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that fact alone amount to reasonable suspicion to stop that person? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer’s prediction about the location may inform an officer’s thinking. An officer might credit the computer’s prediction and view everyone in a particular zone in a different way. Concerns about automation bias are real. Humans defer to systems: surely a computer’s judgment is more trustworthy, given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with their biases, and the systems may be filled with incomplete and erroneous information. Given the reality of automation bias, police departments would be wise to train officers about it, an intervention that has proven effective in other contexts. In the longer term, making pre-commitments to such training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of a stop and frisk would of course be addressed at the retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.

H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.


The Problems and Promise with Terms of Use as the Chaperone of the Social Web

The New Republic recently published a piece by Jeffrey Rosen titled “The Delete Squad: Google, Twitter, Facebook, and the New Global Battle Over the Future of Free Speech.” In it, Rosen provides an interesting account of how the content policies of many major websites were developed and how influential those policies are for online expression. The New York Times has a related article about the mounting pressure on Facebook to delete offensive material.

Both articles raise important questions about the proper role of massive information intermediaries with respect to content deletion, but they also hint at a related problem: Facebook and other large websites often have vague restrictions on user behavior in their terms of use that are so expansive as to cover most aspects of interaction on the social web. In essence, these agreements allow intermediaries to serve as a chaperone on the field trip that is our electronically-mediated social experience.



Probabilistic Crime Solving

In our Big Data age, policing may shift its focus away from catching criminals to stopping crime from happening. That might sound like Hollywood “Minority Report” fantasy, but not to researchers hoping to leverage data to identify future crime areas. Consider as an illustration a research project sponsored by the Rutgers Center on Public Security. According to Government Technology, Rutgers professors have obtained a two-year, $500,000 grant to conduct “risk terrain modeling” research in U.S. cities. Working with police forces in Arlington, Texas; Chicago; Colorado Springs, Colorado; Glendale, Arizona; Kansas City, Missouri; and Newark, New Jersey, the team will analyze an area’s history of crime along with data on “local behavioral and physical characteristics” to identify the locations with the greatest crime risk. As Professor Joel Caplan explains, data analysis “paints a picture of those underlying features of the environment that are attractive for certain types of illegal behavior, and in doing so, we’re able to assign probabilities of crime occurring.” Criminals tend to shift their activity to different locations to evade detection. The hope is to detect the criminals’ next move before they get there. Mapping techniques will systematize what is now just a matter of instinct or guesswork, the researchers explain.
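To make the risk terrain modeling idea concrete, here is a minimal, purely illustrative sketch of how environmental layers might be combined into per-block crime probabilities. The risk factors, weights, and logistic link below are invented for illustration only; they are assumptions, not the Rutgers team’s actual model or data.

```python
# Illustrative sketch of grid-based risk terrain modeling (RTM).
# NOT the Rutgers team's model: the factor layers, weights, and
# logistic link are hypothetical, chosen only to show how
# environmental layers might be combined into a per-cell probability.

import math

# Hypothetical risk-factor layers for a 3x3 grid of city blocks.
# Binary layers mark whether a feature is present in that cell.
bars_nearby      = [[1, 0, 0],
                    [1, 1, 0],
                    [0, 0, 0]]
vacant_lots      = [[0, 1, 0],
                    [1, 0, 0],
                    [0, 0, 1]]
prior_burglaries = [[2, 0, 0],   # counts of past incidents per cell
                    [3, 1, 0],
                    [0, 0, 1]]

# Hypothetical weights standing in for coefficients a real model
# would estimate from historical crime data.
WEIGHTS = {"bars": 0.8, "vacant": 0.5, "history": 0.4}
INTERCEPT = -2.0

def cell_probability(i, j):
    """Combine the layers for one cell and map the score to (0, 1)."""
    score = (INTERCEPT
             + WEIGHTS["bars"] * bars_nearby[i][j]
             + WEIGHTS["vacant"] * vacant_lots[i][j]
             + WEIGHTS["history"] * prior_burglaries[i][j])
    return 1.0 / (1.0 + math.exp(-score))  # logistic link

if __name__ == "__main__":
    for i in range(3):
        print(["%.2f" % cell_probability(i, j) for j in range(3)])
```

Even this toy version makes plain how much turns on which layers are fed in and how they are weighted, which is precisely the data-provenance worry raised in the next paragraph.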

Will reactive policing give way to predictive policing? Will police departments someday station officers outside probabilistic targets to prevent criminals from ever acting on criminal designs? The data inputs and algorithms are crucial to the success of any Big Data endeavor. Before diving headlong, we ought to ask about the provenance of the “local behavioral and physical characteristics” data. Will researchers be given access to live feeds from CCTV cameras and data broker dossiers? Will they be mining public and private sector databases along the lines of fusion centers? Because these projects involve state actors who are bound neither by the federal Privacy Act of 1974 nor by federal restrictions on the collection of personal data, do state privacy laws limit the sorts of data that can be collected, analyzed, and shared? Does the Fourth Amendment have a role in such predictive policing? Is this project just the beginning of a system in which citizens receive criminal risk scores? The time is certainly ripe to talk more seriously about “technological due process” and the “right to quantitative privacy” for the surveillance age.


Employers and Schools that Demand Account Passwords and the Future of Cloud Privacy

In 2012, the media erupted with news about employers demanding that employees provide their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.

I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.

But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.

The Widespread Hunger for Access

Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
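The USA Today description suggests software that does little more than keyword flagging. Below is a deliberately simplified sketch of that kind of alert filter; the watch list, sample post, and alert hook are hypothetical stand-ins, not any vendor’s actual product.

```python
# Deliberately simplified sketch of the kind of keyword-flagging monitor
# described in the USA Today article. The watch list, post text, and
# alert mechanism are hypothetical stand-ins, not a real vendor's code.

FLAGGED_TERMS = {"party", "fight", "cheat"}  # hypothetical watch list

def flagged_words(post_text):
    """Return the watch-list words that appear in a post, if any."""
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    return words & FLAGGED_TERMS

def check_post(athlete, post_text, send_alert):
    """Notify the coach (via the supplied send_alert callable) on a match."""
    hits = flagged_words(post_text)
    if hits:
        send_alert(f"{athlete} posted a flagged term: {', '.join(sorted(hits))}")

if __name__ == "__main__":
    check_post("jdoe", "Huge party after the game!", print)  # prints an alert
```

Even in this toy form, the over- and under-inclusiveness of a word list is obvious: “party” flags a birthday invitation as readily as anything a coach would genuinely care about.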

Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses the case of a parent’s outrage over school officials demanding access to a 13-year-old girl’s Facebook account. According to the mother, “The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there’s these anonymous people reading through this information.”

In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”



Tumblr, Porn, and Internet Intermediaries

In the hubbub surrounding this week’s acquisition of the blogging platform Tumblr by born-again internet hub Yahoo!, I thought one of the most interesting observations concerned the regulation of pornography. It led, by a winding path, to a topic near and dear to the Concurring Opinions gang: Section 230 of the Communications Decency Act, which generally immunizes online intermediaries from liability for user-generated content. (Just a few examples of many ConOp discussions of Section 230: this old post by Dan Solove and a January 2013 series of posts by Danielle Citron on Section 230 and revenge porn here, here, and here.)

Apparently Tumblr has a very large amount of NSFW material compared to other sites with user-generated content. By one estimate, over 11% of the site’s 200,000 most popular blogs are “adult.” By my math that’s well over 20,000 of the site’s power users.

Predictably, much of the ensuing discussion focused on the implications of all that smut for business and branding. But Peter Kafka explains on All Things D that the structure of Tumblr prevents advertisements for family-friendly brands from showing up next to pornographic content. His reassuring tone almost lets you hear the “whew” from Yahoo! investors (as if harm to brands were the only relevant consideration about porn — which, for many tech journalists and entrepreneurs, it is).

There is another potential porn problem besides bad PR, and it is a legal one. Lux Alptraum, writing in Fast Company, addressed it. (The author is, according to her bio, “a writer, sex educator, and CEO of Fleshbot, the web’s foremost blog about sexuality and adult entertainment.”) She somewhat conflates two different issues — understandably, since they are related — but that’s part of what I think is interesting. A lot of that user-posted porn violates copyright law, or regulations meant to protect minors from exploitation, or both. To what extent might Tumblr be on the hook for those violations?



UCLA Law Review Vol. 60, Discourse


Reflections on Sexual Liberty and Equality: “Through Seneca Falls and Selma and Stonewall,” by Nan D. Hunter (p. 172)
Framing (In)Equality for Same-Sex Couples, by Douglas NeJaime (p. 184)
The Uncertain Relationship Between Open Data and Accountability: A Response to Yu and Robinson’s The New Ambiguity of “Open Government,” by Tiago Peixoto (p. 200)
Self-Congratulation and Scholarship, by Paul Campos (p. 214)

Computer Crime Law Goes to the Casino

Wired’s Kevin Poulsen has a great story whose title tells it all: “Use a Software Bug to Win Video Poker? That’s a Federal Hacking Case.” Two alleged video-poker cheats, John Kane and Andre Nestor, are being prosecuted under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. Theirs is a hard case, and it is hard in a way that illustrates why all CFAA cases are hard.



Are in-person academic communities luxury goods?

Since I began posting as a guest on Concurring Opinions at the beginning of March, “MOOCs” – massive open online courses – have been a repeated topic. The blog search engine reports that the term did not appear on the blog until 25 Feb 2013; in the six weeks since, MOOCs have been a topic here, here, here, here, here, here, and, in Deven Desai’s interesting post two days ago, here. Deven says, and I agree, that the aggregation of students together inside an immersive academic learning community is a real good, and one that cannot be duplicated by a set of MOOCs. But the question MOOCs make pressing is how to value that good, once it can be unbundled from training in the classroom. Nannerl Keohane, in a recent review in Perspectives on Politics (11:1, March 2013, p. 318), says that “online education … is the easiest and cheapest way to learn a variety of subjects, especially useful ones,” and describes it as the contemporary analogue of “mutual-aid societies and lyceums.” This seems apt.

University insiders like to say that academic community, even unbundled from instruction, is indispensable and should be subsidized by both state and university. I suspect that the marketplace will put a much lower value on it. State legislators, ever strapped for cash, will likely do so as well. There will still be a market for 24/7, bricks-and-mortar academic communities; but the online availability of downmarket, imperfect, but genuine partial substitutes will mark such communities more clearly as luxury goods. Once such luxuries are no longer inexorably bundled with direct instruction, the argument that they still deserve state or even philanthropic subsidy is not, it seems to me, a slam-dunk.

Deven posted that the key question is how to “leverage MOOCs and other technology to improve the way education is delivered while not offering only the virtual world,” but also to extend social context to those not in the luxury-goods market. Another way of phrasing that question is to ask whether there is a mid-market good, somewhere between the aggregation of naked MOOCs and the bricks-and-mortar private college, that could command interest in the marketplace and justify third-party subsidies. What features of the “code” of online courses – the way that they are presented, taught, bundled together, and converted into credentials – might be adjusted to create a closer approximation of an immersive community, without sacrificing the advantages virtual teaching offers in terms of access over distance, asynchronicity, economies of scale, and cost?


The school of the future: request for input

This post is a nerd crowdsourcing request. As a guest blogger I don’t know my audience as well as I might, but I am heartened by the presence of “science fiction” among the options my hosts give me for categorizing my posts; and my teenager assures me that “nerd” is a compliment.

As several of my earlier posts suggest, I am interested in the impact of virtual technology upon K-12 schooling; and one thing I have been doing in my spare time is looking at literary accounts, highbrow and low, of what schooling in the future might look like. A colleague gave me Ernest Cline’s recent Ready Player One, which imagines school in a fully virtualized world that looks a lot like the school I went to, complete with hallways, bullies, and truant teachers – but the software allows the students to mute their fellows and censors student obscenity before it reaches the teachers’ interfaces. Another colleague reminded me of Asimov’s 1951 story “The Fun They Had,” where the teacher is mechanical but the students still wiggly and apathetic. On the back of a public swapshelf, I found Julian May’s 1987 Galactic Milieu series, which imagines brilliant children, all alone on faraway planets, logging on with single-minded seriousness to do their schoolwork all by their lonesomes. And my daughter gave me Orson Scott Card’s famous Ender’s Game, where the bullying is more educative than the mathematics, and scripted by the adults much more carefully.

That seems like an extensive list but really it’s not, and I was never a serious sci-fi person. If anyone is willing to post in the comments any striking literary accounts of schooling in the future, I’d be grateful.


The child, not the school

The Indiana voucher program I posted about earlier, significant on its own, also partakes of a trend. The New York Times gets it:

A growing number of lawmakers across the country are taking steps to redefine public education, shifting the debate from the classroom to the pocketbook. Instead of simply financing a traditional system of neighborhood schools, legislators and some governors are headed toward funneling public money directly to families, who would be free to choose the kind of schooling they believe is best for their children, be it public, charter, private, religious, online or at home.

In particular, the Times is right that what is sought here is redefinition. Once, states established and supported institutions – public schools – that parents could take or leave, so long as they educated their children somehow. The new paradigm instead has states provide a quantum of funding earmarked for each child, which parents can deploy at any educational institution of their choosing. The fact that the aid attaches to the child and follows her to her family’s chosen school is much more important than the various labels ascribed to the funding and/or the institutional provider – public, private, charter, voucher.

As people learn to function within, and get used to, this new paradigm, they will stop thinking of educational politics as the way to create good public schools, and start thinking of it in terms of how big the aid pie is and how it gets divided up. Whether a school is public or private, online or bricks-and-mortar, religious or not – these stop being political questions and start being questions that markets will resolve through supply and demand.