
Introductions and the Sociology of Privacy

It is always a pleasure to join the Concurring Opinions community, one that I find supportive and tough, insightful and witty. I hope to contribute to ongoing discussions, raise a few eyebrows and bring some new perspective to issues of great concern to us all. Thanks to the incomparable Danielle Citron and the Con-Op community of leaders for having me on this month, and thank you in advance to all the readers for indulging my interest in sociology and privacy.

That is what I’d like to write about this month. My research is on the law and sociology of privacy and the Internet, but I am particularly concerned with the injustices and inequalities that arise in unregulated digital spaces. That concern animated my previous work on bullying and cyberharassment of LGBT youth. This month, I would like to speak more broadly about how sociologists (I am completing my Ph.D. in sociology at Columbia University) talk about privacy and, by the end of the month, persuasively argue that we — lawyers, legal scholars, sociologists, psychologists, economists, philosophers and other social scientists and theorists — are, for the most part, thinking about privacy too narrowly, too one-dimensionally, too pre-Internet to adequately protect private interests, whatever they may be. But before I get there, let me start small.

Many of us are familiar with the work of legal and economic privacy scholars, from Dan Solove to Alessandro Acquisti, from Jeffrey Rosen to Larry Lessig and Julie Cohen. All are incredibly smart and insightful academics who have taught me much. But many are less familiar with sociologists like Robert J. Maxwell (not to be confused with the Robert Maxwell who produced “Lassie”), whose work I would like to discuss briefly. I argue that Maxwell’s work reflects a narrow conception of privacy that is all too common among sociologists: that privacy is, at best, about mere separation from others and, at worst, about the space for deviance.

Maxwell wanted to know about the presence of premarital sex in preindustrial societies. So, using an established data set including all sorts of details about these societies, Maxwell decided to look at the connection, if any, between sexual norms and, of all things, the permeability of wall construction materials. The codings for whether sex was allowed ranged from “premarital relations not allowed and not sanctioned unless pregnancy results” to “insistence on virginity; premarital sex relations prohibited, strongly sanctioned in fact rare.” Wall material codings ranged from the relatively impermeable “stone,” “stucco,” “concrete” and “fired brick” to “nonwalls” (literally, no walls, or temporary screens). He was working off the glass houses hypothesis — people who live in glass houses will not throw stones. Therefore, he thought that the more permeable the wall, the less rigid the antisex norms.

He was right.

He found an inverse relationship between the permeability of the materials used in wall construction and the rigidity of the norms regulating premarital sex for women.
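Maxwell’s study relied on ordinal codings drawn from cross-cultural files; as a rough illustration of the kind of association he was testing — and not his actual data or statistical procedure — a rank correlation on made-up codings might look like this:

```python
# Illustrative sketch only: hypothetical ordinal codings, not Maxwell's data.
# Higher "permeability" means more see-through construction (stone = 0 ... no walls = 4).
# Higher "norm_rigidity" means stricter premarital sex norms for women.
from scipy.stats import spearmanr

permeability  = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
norm_rigidity = [4, 3, 4, 3, 2, 3, 1, 2, 0, 1]

rho, p_value = spearmanr(permeability, norm_rigidity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A negative rho is consistent with the reported inverse relationship:
# the more permeable the walls, the less rigid the antisex norms.
```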

The data provide a simple, though imperfect, proxy for talking about privacy in a discrete social unit. Walls are barriers to knowledge about what’s going on behind them (though not impenetrable ones; see Kyllo v. United States, 533 U.S. 27 (2001) (heat sensors used to pierce the wall of a home)). Strong anti-premarital sex norms existed in communities that could afford to have them, i.e., communities whose impermeable walls created hiding spaces. In communities without walls or hiding places, members were more likely to have sex out in the open or, at least, within view of others. Those communities could not afford, or were not able, to maintain strict antisex norms.

This tells us two things about how sociologists study privacy.

First, sociologists tend to think about the private as separate from the public and indulge in an oft-used spatial analogy. In fact, they’re not alone. Much of the social science literature uses the rhetoric of spaces, territories, walls, and other indicators of literal separation to support theoretical arguments. For example, Joseph Rykwert, an historian of the ancient world, argued that there was a direct correspondence between ancient conceptions of privacy and the women’s rooms in the home, on the one hand, and public behavior and the men’s rooms, on the other. The distinction in the home was literal. In his work on secret societies, Georg Simmel not only argued that “detachment” and “exclusion” were necessary for the success of a secret organization, but analogized the role of the secret to a wall of separation: “Their secret encircles them like a boundary, beyond which there is nothing.” Erving Goffman, a preeminent sociologist whose work almost every undergraduate reads in a Sociology 101 course, built his entire microsociological theory of how people behave in public around a theatrical conceit that distinguished between the “front stage,” where the action happened, and the “back stage,” where the actors could kick back. And so, when Maxwell wanted to study sexual intimacy in preindustrial societies, he chose to study wall construction, material permeability, and hidden spaces to determine whether there was a relationship between intimacy norms in the greater society and private behavior.

But conceiving of privacy as sequestration or as a hidden space has its limits. Neither Goffman nor Simmel ever really meant their analogies to be put into practice. Both wrote at length about how privacy could exist in public, in crowded rooms, and when surrounded by many other people. And yet privacy-as-sequestration in a space permeates the law of privacy, from the continued sanctity of the home to old cases like Olmstead v. United States, 277 U.S. 438 (1928), that hinged privacy invasions on an actual, physical trespass. Some sociologists appear to be guilty of the same lack of imagination that Justice Brandeis called out in his Olmstead dissent: “The protection guaranteed by the amendments is much broader in scope. The makers of our Constitution undertook to secure conditions favorable to the pursuit of happiness. They recognized the significance of man’s spiritual nature, of his feelings and of his intellect. They knew that only a part of the pain, pleasure and satisfactions of life are to be found in material things. They sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations. They conferred, as against the government, the right to be let alone – the most comprehensive of rights and the right most valued by civilized men.”

The second thing this approach to the study of privacy tells us about sociologists and privacy is that they, and many other scholars, burden privacy with a moral dimension. They associate privacy and private places with deviance. This is where I will pick up in my next post.


Brian Tamanaha’s Straw Men (Part 1): Why we used SIPP data from 1996 to 2011

(Reposted from Brian Leiter’s Law School Reports)

 

BT Claim:  We could have used more historical data without introducing continuity and other methodological problems

BT quote:  “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”

Response:  Using more historical data from SIPP would likely have introduced continuity and other methodological problems

SIPP does indeed go back farther than 1996.  We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day.  SIPP was substantially redesigned in 1996 to increase sample size and improve data quality.  Combining different versions of SIPP could have introduced methodological problems.  That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.

Had we used earlier data, it would have been difficult to know to what extent changes to our earnings premium estimates were caused by changes in the real world, and to what extent they were artifacts of changes to the SIPP methodology.

Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data.  All else being equal, a larger sample size and more years of data are preferable.  However, data quality issues suggest focusing on more recent data.

If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data.  We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology.  Such adjustments would inevitably have been controversial.

Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data, or have the potential to change our estimates by nearly as much, as Professor Tamanaha believes.  There are also gaps in SIPP data from the 1980s because of insufficient funding.

These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.

Changes to the new 1996 version of SIPP include:

Roughly doubling the sample size

This improves the precision of estimates and shrinks standard errors

Lengthening the panels from 3 years to 4 years

This reduces the severity of the regression to the median problem

Introducing computer-assisted interviewing to improve data collection and reduce errors or the need to impute for missing data

Introducing oversampling of low-income neighborhoods

This mitigates the response bias issues we previously discussed, which are most likely to affect the bottom of the distribution

New income topcoding procedures were instituted with the 1996 Panel

This will affect both means and various points in the distribution

Topcoding is done on a monthly or quarterly basis, and can therefore undercount end-of-year bonuses, even for those who are not extremely high income year-round

Most government surveys topcode income data—that is, there is a maximum income that they will report.  This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.

Because law graduates tend to have higher incomes than those holding only bachelor’s degrees, topcoding introduces downward bias into earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.
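A minimal numerical sketch, using hypothetical figures rather than the authors’ data or SIPP’s actual (and more elaborate) topcoding rules, shows why capping high incomes shrinks a measured premium:

```python
# Hypothetical incomes; real SIPP topcoding is monthly/quarterly and more complex.
law_grads = [60_000, 90_000, 150_000, 250_000, 400_000]   # higher-earning group
ba_only   = [40_000, 55_000, 70_000, 90_000, 120_000]     # lower-earning group

TOPCODE = 150_000  # illustrative cap

def mean(xs):
    return sum(xs) / len(xs)

def topcoded(xs, cap=TOPCODE):
    return [min(x, cap) for x in xs]

true_premium     = mean(law_grads) - mean(ba_only)
measured_premium = mean(topcoded(law_grads)) - mean(topcoded(ba_only))

print(f"premium without topcoding: {true_premium:,.0f}")
print(f"premium with topcoding:    {measured_premium:,.0f}")
# The cap bites mostly in the higher-earning group, so the measured
# premium is smaller than the true premium: a downward bias.
```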

Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.

These are only a subset of the problems extending the SIPP data back past 1996 would have introduced.  For us, the costs of backfilling data appear to outweigh the benefits.  If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.

Privacy & Information Monopolies

First Monday recently published an issue on social media monopolies. These lines from the introduction by Korinna Patelis and Pavlos Hatzopolous are particularly provocative:

A large part of existing critical thinking on social media has been obsessed with the concept of privacy. . . . Reading through a number of volumes and texts dedicated to the problematic of privacy in social networking one gets the feeling that if the so called “privacy issues” were resolved social media would be radically democratized. Instead of adopting a static view of the concept . . . of “privacy”, critical thinking needs to investigate how the private/public dichotomy is potentially reconfigured in social media networking, and [the] new forms of collectivity that can emerge . . . .

I can even see a way in which privacy rights do not merely displace, but actively work against, egalitarian objectives. Stipulate a population with Group A, which is relatively prosperous and has the time and money to hire agents to use notice-and-consent privacy provisions to its advantage (i.e., figuring out exactly how to disclose information to put its members in the best light possible). Meanwhile, most of Group B is too busy working several jobs to use contracts, law, or agents to its advantage in that way. We should not be surprised if Group A leverages its mastery of privacy law to enhance its position relative to Group B.

Better regulation would restrict use of data, rather than “empower” users (with vastly different levels of power) to restrict collection of data. As data scientist Cathy O’Neil observes:

“The Creditor Was Always Right”

What would a world of totally privatized justice look like? To take a more specific case: imagine a Reputation Society where intermediaries, unbound by legal restrictions, could sort people as wheat or chaff, creditworthy or deadbeat, reliable or lazy.

We’re well on our way to that laissez-faire nirvana for America’s credit bureaus. While they seem to be bound by FCRA and a slew of regulations, enforcement is so wan that they essentially pick and choose the bits of law they want to follow, and what they’d like to ignore. That, at least, is the inescapable conclusion of a brief but devastating portrait of the bureaus on 60 Minutes. Horror stories abound regarding the bureaus, but reporter Steve Kroft finds their deeper causes by documenting an abandonment of basic principles of due process:


Gamification – Kevin Werbach and Dan Hunter’s new book

Gamification? Is that a word? Why yes it is, and Kevin Werbach and Dan Hunter want to tell us what it means. Better yet, they want to tell us how it works in their new book For the Win: How Game Thinking Can Revolutionize Your Business (Wharton Press). The authors get into many issues, starting with a refreshing admission that the term is clunky but nonetheless captures a simple, powerful idea: one can use game concepts in non-game contexts and achieve results that might otherwise be missed. As they are careful to point out, this is not game theory. This is using insights from games, yes, video games and the like, to structure how we interact with a problem or goal. I have questions about how well the approach will work and about potential downsides (I am, after all, a law professor). Yet the authors explore cases where the idea has worked, and they address concerns about where the approach can fail. I must admit I have read only an excerpt so far, but it sets out the project well while acknowledging the objections that popped to mind. In short, I want to read the rest. Luckily, both the Wharton edition linked above and, if you prefer, the Amazon Kindle edition are quite reasonably priced (Amazon is less expensive).

If you wonder about games, play games, and maybe have wondered what is with all the badging, point accumulation, and leaderboard stuff at work (which I did while I was at Google), this book looks to be a must-read. And if you have not encountered these changes, I think you will. So reading the book may put you ahead of the group in understanding what management or companies are doing to you. The book also sets out cases and how the process works, so it may give you ideas about how to use games to help your endeavor and impress your manager. For the law folks out there, I think this area raises questions about behavioral economics and organizations that lie ahead. In short, the authors have a tight, clear book that captures the essence of a movement. That alone merits a hearty well done.


How would we know if and why the “law” is “overly complicated and outrageously expensive”?

I agree with some of what’s said in this new essay about credentialing and the educational system. It’s worth reading.  But the author makes a claim about “law” which I don’t quite accept:

“Today, we take it for granted that practicing medicine or law requires years of costly credentialing in unrelated fields. In the law, the impact of all this “training” is clear: it supports a legal system that is overly complicated and outrageously expensive, both for high-flying corporate clients who routinely overpay and for small-time criminal defendants who, in the overwhelming majority of cases, can’t afford to secure representation at all (and must surrender their fate to local prosecutors, who often send them to prison). But just as a million-dollar medical training isn’t necessary to perform an abortion, routine legal matters could easily, and cheaply, be handled by noninitiates.”

There is one statement here that is undeniably true: many people who would like to access legal services cannot afford to do so. But the rest is not fully thought out.

Literally any vaguely competent human can draft a will. The relevant question is: what percentage of “routine” wills turn out to be complex down the line, such that lay drafting that doesn’t anticipate problems creates a joojooflop and expensive heartache?  Does anyone actually know the answer to this question? I don’t. And given that I don’t have a sense of the relevant baseline risks, I would vastly prefer to have a will drafted by a competent T&E attorney than to draft it myself; and I’d prefer to draft it myself than to take it from a form book or a “noninitiate.” That doesn’t make me a credentialist snob: that makes me risk averse.  Indeed, it should be obvious that merely because many people can’t afford wills drafted by lawyers doesn’t mean that experienced nonlawyer will drafting is just as good as legally trained drafting. (It might or might not be – the question is susceptible to empirical investigation.)



Are We Really Growing “More Divided” By Party Over Time?

Over at the Cultural Cognition Blog, I’ve written a bit about some new evidence on partisan division.  The headline news is that partisanship is a better predictor of cultural division than it used to be.  But as I read the data, the undernews is that we’re actually no more divided than we used to be on common ideological and cultural measures.  Given all that’s happened in the last quarter-century – including media differentiation, the digital revolution and the 24-hour news cycle, more bowling alone, sprawl – isn’t that kind of a huge deal? The fact that partisan self-identification is a better predictor of cultural views than it used to be simply means that the parties are cohering better.  That might be bad for the functioning of our particular form of representative government, but it doesn’t mean that we’re drifting apart as a country.


Personhood to artificial agents: Some ramifications

Thank you, Samir Chopra and Lawrence White, for writing this extremely thought-provoking book! Like Sonia Katyal, I too am particularly fascinated by the last chapter – personhood for artificial agents. The authors have done a wonderful job of explaining the legal constructs that have defined, and continue to define, the notion of according legal personality to artificial agents.

The authors argue that “dependent” legal personality, which has already been accorded in some cases to entities such as corporations, temples, and ships, could easily be extended to cover artificial agents. The argument for according “independent” legal personality to artificial agents, on the other hand, is much more tenuous. Many legal arguments and theories stand as strong impediments to according such status. The authors categorize these impediments as competencies (being sui juris, having a sensitivity to legal obligations, susceptibility to punishment, capability for contract formation, and property ownership and economic capacity) and philosophical objections (artificial agents do not possess free will, autonomy, or a moral sense, and do not have clearly defined identities), and then argue how each might be overcome legally.

Notwithstanding their conclusion that the courts may be unable or unwilling to take more than a piecemeal approach to extending constitutional protections to artificial agents, it seems clear to me that the accordance of legal personality – both dependent and, to a lesser extent, independent – is not too far in the future. In fact, the aftermath of Gillick v West Norfolk and Wisbech Area Health Authority has shown that various courts have gradually come to accept that dependent minors “gradually develop their mental faculties,” and thus can be entitled to make certain “decisions in the medical sphere.”

We can extend this argument to artificial agents, which are no longer just programmed expert systems but have gradually evolved into self-correcting, learning, and reasoning systems, much like children and some animals. We already know that even small children exhibit these capacities. So do chimpanzees and other primates. Stephen Wise has argued that some animals meet the “legal personhood” criteria and should therefore be accorded rights and protections. The Nonhuman Rights Project founded by Wise is actively fighting for legal rights for non-human species. As these legal moves evolve and shape common law, the question arises as to when (not if) artificial agents will develop notions of “self,” “morals,” and “fairness,” and on that basis be accorded legal personhood.

And when that situation arrives, what are the ramifications that we should further consider? I believe the three main “rights” that would have to be considered are: Reproduction, Representation, and Termination. We already know that artificial agents (and Artificial Life) can replicate themselves and “teach” the newly created agents. Self-perpetuation can also be considered a form of representation. We also know that under certain well-defined conditions, these entities can self-destruct or cease to operate. But will these aspects gain the status of rights accorded to artificial agents?

These questions lead me to the issues which I personally find fascinating: end-of-life decisions extended to artificial agents. For instance, what would be the role of aging agents of inferior capabilities that nevertheless exist in a vast global network?  What about malevolent agents? When, for instance, would it be appropriate to terminate an artificial agent?  What would be the laws that would handle situations like this, and how would such laws be framed? While these questions seem far-fetched, we are already at a point where numerous viruses and “bots” pervade the global information networks, learn, perpetuate, “reason,” make decisions, and continue to extend their lives and their capacity to affect our existence as we know it. So who would be the final arbiter of end-of-life decisions in such cases? In fact, once artificial agents evolve and gain personhood rights, would it not be conceivable that we would have non-human judges in the courts?

Are these scenarios too far away for us to worry about, or close enough? I wonder…

-Ramesh Subramanian


Did Rahm Learn Anything From Cass?

This week Governor Pat Quinn of Illinois signed legislation that will allow the City of Chicago to put speed cameras in the one-eighth mile buffer zones around schools and parks.   As the Chicago Tribune has reported, the City has more than 600 public schools and only slightly fewer parks, so this legislation gives Chicago the authority to cover roughly half of its territory with speed cameras.  The City says it will concentrate on the approximately 80 areas where the need for speed enforcement is particularly acute.

Although Quinn signed the legislation, the cameras are the handiwork of Mayor Rahm Emanuel.   The Mayor says he developed the plan after school officials and the police expressed concerns about public safety.  Emanuel’s critics—and he has a lot of them—paint the legislation as being more about revenue generation than public safety.   Drivers who go more than 5 miles per hour over the speed limit will be fined $50, and drivers who go more than 11 miles per hour over the limit will be fined $100.  The Mayor has said repeatedly that he doesn’t care if the cameras generate any revenue; the legislation is all about keeping kids safe.

Let’s take the Mayor at his word and assume that his only goal is to make Chicago safer.  What would traffic engineers and behavioral economists advise?  They would tell him to install dynamic speed displays, which announce the posted speed limit and display in large digital numbers the speed of each driver going past.   One of the first experiments with these displays took place in school zones in suburban Los Angeles in 2003.  Drivers slowed down by an average of 14 percent and in some zones the average speed dropped below the limit.   The use of dynamic speed displays has since become commonplace and research has consistently shown that they cause drivers to slow down by about 10 percent for several miles.

These displays upend the usual approach to traffic enforcement because there is no penalty for displaying a speed that is higher than the posted limit.   Instead, the display works by creating a feedback loop: (1) sensors instantly capture and relay information about the driver’s speed; (2) the large public display of numbers carries real punch because few people want to be perceived as reckless or careless; and (3) the driver has immediate opportunity to slow down by simply easing up on the gas.   This feedback loop is so effective that traffic safety experts have concluded it does a better job of changing driving habits than techniques that depend on police issuing tickets.  (You can read about dynamic speed displays and feedback loops more generally here.)

Chicago’s speed cameras will be accompanied by highly visible signage, so time will tell whether the combination of signage and speed cameras makes drivers slow down in the short term and changes their driving habits in the long term.   If I were advising a mayor whose priority was public safety, however, I’d recommend the use of dynamic speed displays that provide effective feedback to drivers in the moments before they enter a school zone, and not cameras whose feedback comes in the mail several days after the driver has already sped by a school.

Gamifying Control of the Scored Self

Social sorting is big business. Bosses and bankers crave “predictive analytics”: ways of deciding who will be the best worker, borrower, or customer. Our economy is less likely to reward someone who “builds a better mousetrap” than it is to fund a startup that will identify those most likely to buy a mousetrap. The critical resource here is data, the fossil fuel of the digital economy. Privacy advocates are digital environmentalists, worried that rapid exploitation of data either violates moral principles or sets in motion destructive processes we only vaguely understand now.*

Start-up fever fuels these concerns as new services debut and others grow in importance. For example, a leader at Lenddo, “the first credit scoring service that uses your online social network to assess credit,” has called for “thousands of engineers [to work] to assess creditworthiness.” We all know how well the “quants” have run Wall Street—but maybe this time will be different. His company aims to mine data derived from digital monitoring of relationships. ITWorld headlined the development: “How Facebook Can Hurt Your Credit Rating”–“It’s time to ditch those deadbeat friends.” It also brought up the disturbing prospect of redlined portions of the “social graph.”

There’s a lot of value in such “news you can use” reporting. However, I think it misses some problematic aspects of a pervasively evaluated and scored digital world. Big data’s fans will always counter that, for every person hurt by surveillance, there’s someone else who is helped by it. Let’s leave aside, for the moment, whether the game of reputation-building is truly zero-sum, and the far more important question of whether these judgments are fair. The data-meisters’ analytics deserve scrutiny on other grounds.