Category: Philosophy of Social Science

Rethinking the Political Economy of Automation

The White House recently released two important reports on the future of artificial intelligence. The “robot question” is as urgent today as it was in the 1960s. Back then, worry focused on the automation of manufacturing jobs. Now, the computerization of services is top of mind.

At present, economists and engineers dominate public debate on the “rise of the robots.” The question of whether any given job should be done by a robot is modeled as a relatively simple cost-benefit analysis. If the robot can perform a task more cheaply than a worker, substitute it in. This microeconomic approach to filling jobs dovetails with a technocratic, macroeconomic goal of maximizing some blend of GDP and productivity.
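The substitution calculus described above can be sketched in a few lines. This is a toy illustration only; the task names and cost figures are hypothetical, chosen simply to make the microeconomic logic explicit:

```python
# Toy sketch of the microeconomic substitution rule described above:
# automate any task a machine can perform more cheaply than a worker.
# All task names and cost figures are hypothetical.

def substitution_decision(tasks):
    """Given {task: (worker_cost, robot_cost)}, return the rule's verdict per task."""
    return {name: "automate" if robot_cost < worker_cost else "keep worker"
            for name, (worker_cost, robot_cost) in tasks.items()}

tasks = {
    "document review": (30.0, 5.0),    # hourly cost: worker vs. software
    "client counseling": (80.0, 95.0),
}

print(substitution_decision(tasks))
# {'document review': 'automate', 'client counseling': 'keep worker'}
```

Of course, as the rest of this post argues, real labor markets are nowhere near this simple; the sketch only lays bare the decision rule that the technocratic framing takes for granted.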

In the short run, these goals appear almost indisputable–the dictates of market reason. In the long run, they presage a jobs crisis. As Michael Dorf recently observed, even though “[i]t is possible that new technologies will create all sorts of new jobs that we have not imagined yet,” it is hard to imagine new mass opportunities for employment. So long as a job can be sufficiently decomposed, any task within it seems (to the ambitious engineer) automatable, and (to the efficiency-maximizing economist) ripe for transferring to software and machines. The professions may require a holistic perspective, but other work seems doomed to fragmentation and mechanization.

Dorf is, nevertheless, relatively confident about future economic prospects:

Standard analyses…assume that in the absence of either socialism or massive philanthropy from future tech multi-billionaires, our existing capitalist system will lead to a society in which the benefits of automation are distributed very unevenly. . . . That’s unlikely. Think about Henry Ford’s insight that if he paid his workers a decent wage, he would have not only satisfied workers but customers to buy his cars. If the benefits of technology are beyond the means of the vast majority of ordinary people, that severely limits the ability of capitalists and super-skilled knowledge workers to profit from the mass manufacture of the robotic gizmos. . . . Enlightened capitalists would understand that they need customers and that, with automation severely limiting the number of jobs available, customers can only be ensured through generous government-provided payments to individuals and families.

I hope he is right. But I want to explore some countervailing trends that militate against wider distribution of the gains from automation.

Complicating the Narrative of Legal Automation

Richard Susskind has been predicting “the end of lawyers” for years, and has doubled down in a recent book coauthored with his son (The Future of the Professions). That book is so sweeping in its claims—that all professions are on a path to near-complete automation—that it should actually come as a bit of a relief for lawyers. If everyone’s doomed to redundancy, law can’t be a particularly bad career choice. To paraphrase Monty Python: nobody expects the singularity.

On the other hand, experts on the professions are offering some cautions about the Susskinds’ approach. Howard Gardner led off an excellent issue of Daedalus on the professions about ten years ago. He offers this verdict on the Susskinds’ perfunctory response to objections to their position:

In a section of their book called “Objections,” they list the principal reasons why others might take issue with their analyses, predictions, and celebratory mood. This list of counter-arguments to their critique includes the trustworthiness of professionals; the moral limits of unregulated markets; the value of craft; the importance of empathy and personal interactions; and the pleasure and pride derived from carrying out what they term ‘good work.’ With respect to each objection, the Susskinds give a crisp response.

I was disappointed with this list of objections, each followed by refutation. For example, countering the claim that one needs extensive training to become an expert, the Susskinds call for the reinstatement of apprentices, who can learn ‘on the job.’ But from multiple studies in cognitive science, we know that it takes approximately a decade to become an expert in any domain—and presumably that decade includes plenty of field expertise. Apprentices cannot magically replace well-trained experts. In another section, countering the claim that we need to work with human beings whom we can trust, they cite the example of the teaching done online via Khan Academy. But Khan Academy is the brainchild of a very gifted educator who in fact has earned the trust of many students and indeed of many teachers; it remains to be seen whether online learning à la Khan suffices to help individuals—either professionals or their clients—make ‘complex technical and ethical decisions under conditions of uncertainty.’ The Susskinds recognize that the makers and purveyors of apps may have selfish or even illegal goals in mind. But as they state, “We recognize that there are many online resources that promote and enable a wide range of offenses. We do not underestimate their impact or threat, but they stand beyond the reach of this book” (p. 233).

Whether or not one goes along with specific objections and refutations, another feature of the Susskinds’ presentation should give one pause. The future that they limn seems almost entirely an exercise in rational deduction and accordingly devoid of historical and cultural considerations.

Experts with a bit more historical perspective differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%. Even the highly cited study by Carl Frey and Michael Osborne (The Future of Employment: How Susceptible Are Jobs to Computerisation?) placed attorneys in the “low risk” category when it comes to replacement by software and robots. They suggest paralegals are in much more danger.

But empirical research by economist James Bessen has complicated even that assumption: “Since the late 1990s, electronic document discovery software for legal proceedings has grown into a billion dollar business doing work done by paralegals, but the number of paralegals has grown robustly.” Like MIT’s David Autor, Bessen calls automation a job creator, not a job destroyer. “The idea that automation kills jobs isn’t true historically,” Steve Lohr reports, and it remains dubious today. The real question is whether we reinforce policies designed to promote software and robotization that complement current workers’ skills, or slip into a regime of deskilling and substitution.

Is the Happiness Industry Creating Algorithmic Selves?

In a recent podcast called “Thinking Allowed,” host Laurie Taylor covered two fascinating books: The Wellness Syndrome and The Happiness Industry. One author discussed a hedge fund that’s now managing what it calls “biorisk” by correlating traders’ eating, drinking, and sleeping habits with their earnings for the firm. Will Davies, author of The Happiness Industry, discussed less intrusive, but more pervasive, efforts to assure that workers are fitter, happier, and therefore more productive. As he argues in the book,

[M]ood-tracking technologies, sentiment analysis algorithms and stress-busting meditation techniques are put to work in the service of certain political and economic interests. They are not simply gifted to us for our own Aristotelian flourishing. Positive psychology, which repeats the mantra that happiness is a personal ‘choice’, is as a result largely unable to provide the exit from consumerism and egocentricity that its gurus sense many people are seeking.

But this is only one element in the critique to be developed here. One of the ways in which happiness science operates ideologically is to present itself as radically new, ushering in a fresh start, through which the pains, politics and contradictions of the past can be overcome. In the early twenty-first century, the vehicle for this promise is the brain. ‘In the past, we had no clue about what made people happy – but now we know’, is how the offer is made. A hard science of subjective affect is available to us, which we would be crazy not to put to work via management, medicine, self-help, marketing and behaviour change policies.

The happiness industry thrives in a culture premised on an algorithmic model of the self. People (or “econs”) are seen as a bundle of inputs (data collection), algorithmic processes (data analysis), and outputs (data use). Since the demands of affect can only be extirpated in robots, the challenge for the happiness industry is to optimize some quantum of satisfaction for its human subjects, compatible with their maximum productivity. Objectively, the algorithmic self is no more (nor less) than the goods and services it uses and creates; subjectively, it strives to convert inputs of resources into outputs of joy, contentment–name your positive affect. As a “human resource,” it is simply raw material to be deployed to its most profitable use.

Audit culture, quantification (e.g., the quantified self), commensuration, and cost-benefit analysis all reflect and reinforce algorithmic selfhood. Both the Templeton Foundation and the Social Brain Centre in Britain are developing some intriguingly countercultural alternatives to big data-driven behaviorism. As he highlights the need for such alternatives, Davies deserves great credit for exposing the political economy behind corporate appropriations of positive psychology.

Four Futures of Legal Automation

There are many gloom-and-doom narratives about the legal profession. One of the most persistent is “automation apocalypse.” In this scenario, computers will study past filings, determine what patterns of words work best, and then—poof!—software will eat the lawyer’s world.

Conditioned to be preoccupied by worst-case scenarios, many attorneys have panicked about robo-practitioners on the horizon. Meanwhile, experts differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%; others claim legal practice is fundamentally routinizable. I’ve recently co-authored an essay that helps explain why such radical uncertainty prevails.

While futurists affect the certainties of physicists, visions of society always reflect contestable political aspirations. Those predicting doom for future lawyers usually harbor ideological commitments that are not that friendly to lawyers of the present. Displacing the threat to lawyers onto machines (rather than, say, the decisionmakers who can give machines’ doings the legal effect of what was once done by qualified persons) is a way of not merely rationalizing, but also speeding up, the hoped-for demise of an adversary. Just as the debate over killer robots can draw attention away from the persons who design and deploy them, so too can current controversy over robo-lawyering distract from the more important political and social trends that make automated dispute resolution so tempting to managers and bureaucrats.

It is easy to justify a decline in attorneys’ income or status by saying that software could easily do their work. It’s harder to explain why the many non-automatable aspects of current legal practice should be eliminated or uncompensated. That’s one reason why stale buzzwords like “disruption” crowd out serious reflection on the drivers of automation. A venture capitalist pushing robotic caregivers doesn’t want to kill investors’ buzz by reflecting on the economic forces promoting algorithmic selfhood. Similarly, #legaltech gurus know that a humane vision of legal automation, premised on software that increases quality and opportunities for professional judgment, isn’t an easy sell to investors keen on speed, scale, and speculation. Better instead to present lawyers as glorified elevator operators, replaceable with a sufficiently sophisticated user interface.

Our essay does not predict lawyers’ rise or fall. That may disappoint some readers. But our main point is to make the public conversation about the future of law a more open and honest one. Technology has shaped, and will continue to influence, legal practice. Yet its effect can be checked or channeled by law itself. Since different types of legal work are more or less susceptible to automation, and society can be more or less regulatory, we explore four potential future climates for the development of legal automation. We call them, in shorthand, Vestigial Legal Profession, Society of Control, Status Quo, and Second Great Compression. An abstract appears below.


Methodological Pluralism in Legal Scholarship

The place of the social sciences in law is constantly contested. Should more legal scholars retreat to pure doctrinalism, as Judge Harry Edwards suggests? Or is there a place for more engagement with other parts of the university? As we consider these questions, we might do well to take a bit more of a longue durée perspective–helpfully provided by David Bosworth in a recent essay in Raritan:

No society in history has more emphasized the social atom than ours. Yet the very authority we have invested in individualism is now being called into question by both the inner logic of our daily practices and by the recent findings of our social sciences. . . .

Such findings challenge the very core of our political economy’s self-conception. What, after all, do “self-reliance” and “enlightened self-interest” really mean if we are constantly being influenced on a subliminal level by the behavior of those around us? Can private property rights continue to seem right when an ecologically minded, post-modern science keeps discovering new ways in which our private acts transgress our deeded boundaries to harm or help our neighbors? Can our allegiance to the modern notions of ownership, authorship, and originality continue to make sense in an economy whose dominant technologies expose and enhance the collaborative nature of human creativity? And in an era of both idealized and vulgarized “transparency,” can privacy—the social buffer that cultivates whatever potential for a robust individualism we may actually possess—retain anything more than a nostalgic value?

These are provocative questions, and I don’t agree with all their implications. But I am very happy to be part of an institution capable of exploring them with the help of computer scientists, philosophers, physicians, social scientists, and humanists.

I suppose Judge Edwards would find it one more symptom of the decadence of the legal academy that I’ll be discussing my book this term at both the Institute for Advanced Studies of Culture at UVA and at MAGIC at the Rochester Institute of Technology. But when I think about who might be qualified to help lawyers bridge the gap between policy and engineering in the technology-intensive fields I work in, few might be better than the experts at MAGIC. The fellows and faculty at IASC have done fascinating work on markets and culture–work that would, ideally, inform a “law & economics” committed to methodological pluralism.

The Black Box Society: Interviews

My book, The Black Box Society, is finally out! In addition to the interview Lawrence Joseph conducted in the fall, I’ve been fortunate to complete some radio and magazine interviews on the book. They include:

New Books in Law

Stanford Center for Internet & Society: Hearsay Culture

Canadian Broadcasting Corporation: The Spark

Texas Public Radio: The Source

WNYC: Brian Lehrer Show

Fleishman-Hillard’s True

I hope to be back to posting soon, on some of the constitutional and politico-economic themes in the book.

Legal Scholarship & the University

Just a quick note to make explicit something implicit in my last post: I not only agree with Dave Hoffman’s point about the enduring value of many modes of law teaching, but also think that we could do with a lot less defensiveness about the value of legal scholarship. It is not only the case that legal theories “have fundamentally changed our thinking about the law,” as Robin West and Danielle Citron argue. There are areas of social science presently adrift either because they have not adequately incorporated key legal insights, or because attorneys and legal scholars have failed to fully engage with key controversies and ideas. And there are fields–like political economy and finance theory–now being revitalized thanks to the efforts of legal academics. Legal scholarship exists not only to help the bench and bar, but to enrich the social sciences and humanities generally.

From Piketty to Law and Political Economy

Thomas Piketty’s Capital in the 21st Century continues to spur debate among economists. It has many lessons for attorneys, as well. But does law have something to offer in return? I make that case in my review of Capital, focusing on Piketty’s call for a renewal of the social science of political economy. My review underscores the complexity of the relationship between law and social science. Legal academics import ideas from other fields, but also return the favor by informing those fields. Ideally, the process is dialectic, with lawyers and social scientists in dialogue.

I saw that process firsthand in May, at the conference Critiquing Cost-Benefit Analysis of Financial Regulation. We at the Association of Professors of Political Economy and the Law (APPEAL) are planning further events and projects to continue that dialogue.

I also saw a renewed synergy between law and social sciences at the Rethinking Economics conference last month. Economists inquired about bankruptcy law to better understand the roots of the financial crisis, and identified the limits that pension law places on certain types of investment strategies.

Some of the organizers of the conference recently took the argument in a new direction, focusing on the interaction between Modern Monetary Theory (MMT) and campaign finance reform. “Leveling up” modes of campaign finance reform have often stalled because taxpayers balk at funding political campaigns. Given that private campaign funders’ return on investment has been estimated at 22,000%, that seems an unwise concession to crony capitalism. So how do we get movement on the issue?

Interview on The Black Box Society

Balkinization just published an interview on my forthcoming book, The Black Box Society. Law profs may be interested in our dialogue on methodology—particularly, what the unique role of the legal scholar is in the midst of increasing academic specialization. I’ve tried to surface several strands of inspiration for the book.

Social Science in an Era of Corporate Big Data

In my last post, I explored the characteristics of Facebook’s model (i.e., exemplary) users. Today, I want to discuss the model users in the company–i.e., the data scientists who try to build stylized versions of reality (models) based on certain data points and theories. The Facebook emotion experiment is part of a much larger reshaping of social science. To what extent will academics study data-driven firms like Facebook, and to what extent will they try to join forces with those firms’ researchers to study others?

Present incentives are clear: collaborate with (rather than develop a critical theory of) big data firms.  As Zeynep Tufekci puts it, “the most valuable datasets have become corporate and proprietary [and] top journals love publishing from them.”  “Big data” has an aura of scientific validity simply because of the velocity, volume, and variety of the phenomena it encompasses. Psychologists certainly must have learned *something* from looking at over 600,000 accounts’ activity, right?

The problem, though, is that the corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility.* That has already led to some embarrassments in the crossover from corporate to academic modeling (such as the failures of Google Flu Trends). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.

Grant Getters and Committee Men

Why are journals so interested in this form of research? Why are academics jumping on board? Fortunately, social science has matured to the point that we now have a robust, insightful literature about the nature of social science itself. I know, this probably sounds awfully meta–exactly the type of navel-gazing Senator Coburn would excommunicate from the church of science. But it actually provides a much-needed historical perspective on how power and money shape knowledge. Consider, for instance, the opening of Joel Isaac’s article Tangled Loops, on Cold War social science:

During the first two decades of the Cold War, a new kind of academic figure became prominent in American public life: the credentialed social scientist or expert in the sciences of administration who was also, to use the parlance of the time, a “man of affairs.” Some were academic high-fliers conscripted into government roles in which their intellectual and organizational talents could be exploited. McGeorge Bundy, Walt Rostow, and Robert McNamara are the archetypes of such persons. An overlapping group of scholars became policymakers and political advisers on issues ranging from social welfare provision to nation-building in emerging postcolonial states.

Postwar leaders of the social and administrative sciences such as Talcott Parsons and Herbert Simon were skilled scientific brokers of just this sort: good “committee men,” grant-getters, proponents of interdisciplinary inquiry, and institution-builders. This hard-nosed, suit-wearing, business-like persona was connected to new, technologically refined forms of social science. . . . Antediluvian “social science” was eschewed in favour of mathematical, behavioural, and systems-based approaches to “human relations” such as operations research, behavioral science, game theory, systems theory, and cognitive science.

One of Isaac’s major contributions in that piece is to interpret the social science coming out of the academy (and entities like RAND) as a cultural practice: “Insofar as theories involve certain forms of practice, they are caught up in worldly, quotidian matters: performances, comportments, training regimes, and so on.” Government leveraged funding to mobilize research to specific ends. To maintain university patronage systems and research centers, leaders had to be on good terms with the grantors. The common goal of strengthening the US economy (and defeating the communist threat) cemented an ideological alliance.

Government still exerts influence in American social and behavioral sciences. But private industry controls critical data sets for the most glamorous, data-driven research. In the Cold War era, “grant getting” may have been the key to economic security, and to securing one’s voice in the university. Today, “exit” options are more important than voice, and what better place to exit to than an internet platform? Thus academic/corporate “flexians” shuttle between the two worlds. Their research cannot be too venal, lest the academy disdain it. But neither can it indulge in, say, critical theory (what would nonprofit social networks look like?), just as Cold War social scientists were ill-advised to, say, develop Myrdal’s or Leontief’s theories. There was a lot more money available for the Friedmanite direction economics would, eventually, take.

Intensifying academic precarity also makes the blandishments of corporate data science an “offer one can’t refuse.” Tenured jobs are growing scarcer. As MOOCmongers aspire to deskill and commoditize the academy, industry’s benefits and flexibility grow ever more alluring. Academic IRBs can impose a heavy bureaucratic burden; the corporate world is far more flexible. (Consider all the defenses of the Facebook experiment offered last week, which emphasized how little review corporate research has to go through: satisfy the boss, and you’re basically done, no matter how troubling your aims or methods may be in a purely academic context.)

Creating Kinds

So why does all this matter, other than to the quantitatively gifted individuals at the cutting edge of data science? It matters because, in Isaac’s words:

Theories and classifications in the human sciences do not “discover” an independently existing reality; they help, in part, to create it. Much of this comes down to the publicity of knowledge. Insofar as scientific descriptions of people are made available to the public, they may “change how we can think of ourselves, [and] change our sense of self-worth, even how we remember our own past.”

It is very hard to develop categories and kinds for internet firms, because they are so secretive about most of their operations. (And make no mistake about the current PR kerfuffle for Facebook: it will lead the company to become ever more secretive about its data science, just as Target started camouflaging its pregnancy-related ads and not talking to reporters after people appeared creeped out by the uncanny accuracy of its natal predictions.) But the data collection of the firms is creating whole new kinds of people—for marketers, for the NSA, and for anyone with the money or connections to access the information.

More likely than not, encoded in Facebook’s database is some new, milder DSM, with categories like the slightly stingy (who need to be induced to buy more); the profligate, who need frugality prompts; the creepy, who need to be hidden in newsfeeds lest they bum out the cool. Our new “Science Mart” creates these new human kinds, but also alters them, as “new sorting and theorizing induces changes in self-conception and in behavior of the people classified.” Perhaps in the future, upon being classified as “slightly depressed” by Facebook, users will see more happy posts. Perhaps the hypomanic will be brought down a bit. Or, perhaps if their state is better for business, it will be cultivated and promoted.

You may think that last possibility unfair, or a mischaracterization of the power of Facebook. But shouldn’t children have been excluded from its emotion experiment? Shouldn’t those whom it suspects may be clinically depressed? Shouldn’t some independent reviewer have asked about those possibilities? Journalists try to reassure us that Facebook is better now than it was 2 years ago. But the power imbalances in social science remain as funding cuts threaten researchers’ autonomy. Until research in general is properly valued, we can expect more psychologists, anthropologists, and data scientists to attune themselves to corporate research agendas, rather than questioning why data about users is so much more available than data about company practices.

Image Note: I’ve inserted a picture of Isaac’s book, which I highly recommend to readers interested in the history of social science.

*I suggested this was a problem in 2010.