Category: Political Economy

Rethinking the Political Economy of Automation

The White House recently released two important reports on the future of artificial intelligence. The “robot question” is as urgent today as it was in the 1960s. Back then, worry focused on the automation of manufacturing jobs. Now, the computerization of services is top of mind.

At present, economists and engineers dominate public debate on the “rise of the robots.” The question of whether any given job should be done by a robot is modeled as a relatively simple cost-benefit analysis. If the robot can perform a task more cheaply than a worker, substitute it in. This microeconomic approach to filling jobs dovetails with a technocratic, macroeconomic goal of maximizing some blend of GDP and productivity.
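That substitution logic is easy to caricature in code. The sketch below is a toy model of my own, not drawn from any economist's actual work; the task names and dollar figures are invented for illustration:

```python
# Toy model of the automation cost-benefit rule described above:
# substitute the robot whenever it performs the task more cheaply.
# Task names and dollar figures are invented for illustration.

def should_automate(robot_cost: float, worker_cost: float) -> bool:
    """The microeconomic rule: automate iff the robot is cheaper per task."""
    return robot_cost < worker_cost

# (robot cost, worker cost) per task, in dollars
tasks = {
    "sorting warehouse parcels": (0.05, 0.40),
    "drafting a routine contract": (1.50, 30.00),
    "counseling an anxious client": (50.00, 45.00),
}

for task, (robot, worker) in tasks.items():
    verdict = "automate" if should_automate(robot, worker) else "keep the human"
    print(f"{task}: {verdict}")
```

Everything the rest of this post questions (due process, distribution, the value of the work itself) is invisible to a rule this thin, which is precisely the worry.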

In the short run, these goals appear almost indisputable–the dictates of market reason. In the long run, they presage a jobs crisis. As Michael Dorf recently observed, even though “[i]t is possible that new technologies will create all sorts of new jobs that we have not imagined yet,” it is hard to imagine new mass opportunities for employment. So long as a job can be sufficiently decomposed, any task within it seems (to the ambitious engineer) automatable, and (to the efficiency-maximizing economist) ripe for transferring to software and machines. The professions may require a holistic perspective, but other work seems doomed to fragmentation and mechanization.

Dorf is, nevertheless, relatively confident about future economic prospects:

Standard analyses…assume that in the absence of either socialism or massive philanthropy from future tech multi-billionaires, our existing capitalist system will lead to a society in which the benefits of automation are distributed very unevenly. . . . That’s unlikely. Think about Henry Ford’s insight that if he paid his workers a decent wage, he would have not only satisfied workers but customers to buy his cars. If the benefits of technology are beyond the means of the vast majority of ordinary people, that severely limits the ability of capitalists and super-skilled knowledge workers to profit from the mass manufacture of the robotic gizmos. . . . Enlightened capitalists would understand that they need customers and that, with automation severely limiting the number of jobs available, customers can only be ensured through generous government-provided payments to individuals and families.

I hope he is right. But I want to explore some countervailing trends that militate against wider distribution of the gains from automation.

Is Eviction-as-a-Service the Hottest New #LegalTech Trend?

Some legal technology startups are struggling nowadays, as venture capitalists pull back from a saturated market. The complexity of the regulatory landscape is hard to capture in a Silicon Valley slide deck. Still, there is hope for legal tech’s “idealists.” A growing firm may bring eviction technology to struggling neighborhoods around the country:

Click Notices . . . integrates its product with property management software, letting landlords set rules for when to begin evictions. For instance, a landlord could decide to file against every tenant that owes $25 or more on the 10th of the month. Once the process starts, the Click Notices software, which charges landlords flat fees depending on local court costs, sends employees or subcontractors to represent the landlord in court (attorneys aren’t compulsory in many eviction cases).
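The filing rule quoted above is bare threshold logic. Here is a minimal sketch of what such a rule amounts to; the tenant names and figures are invented, and this is in no way Click Notices' actual code:

```python
from datetime import date

# Hypothetical version of the landlord rule described above: file against
# every tenant owing $25 or more on the 10th of the month.
FILING_DAY = 10
BALANCE_THRESHOLD = 25.00

def tenants_to_file(ledger: dict[str, float], today: date) -> list[str]:
    """Return the tenants who trip the automated eviction-filing rule."""
    if today.day != FILING_DAY:
        return []
    return [name for name, owed in ledger.items() if owed >= BALANCE_THRESHOLD]

ledger = {"Tenant A": 336.00, "Tenant B": 12.50, "Tenant C": 25.00}
print(tenants_to_file(ledger, date(2016, 3, 10)))  # ['Tenant A', 'Tenant C']
```

Note what the rule cannot see: whether the arrears stem from a mold complaint, a billing dispute, or a clerical error. The trigger is purely mechanical.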

I can think of few better examples of Richard Susskind’s vision for the future of law. As one Baltimore tenant observes, the automation of legal proceedings can lead to near-insurmountable advantages for landlords:

[Click Notices helped a firm that] tried to evict Dinickyo Brown over $336 in unpaid rent. Brown, who pays $650 a month for a two-bedroom apartment in Northeast Baltimore, fought back, arguing the charges arose after she complained of mold. The landlord dropped the case, only to file a fresh eviction action—this time for $290. “They drag you back and forth to rent court, and even if you win, it goes onto your record,” says Brown, who explains that mold triggers her epilepsy. “If you try to rent other properties or buy a home, they look at your records and say: You’ve been to rent court.”

And here’s what’s truly exciting for #legaltech innovators: the digital reputation economy can interact synergistically with the new eviction-as-a-service approach. Tenant blacklists can ensure that merely trying to fight an eviction leads to devastating consequences down the road. Imagine the investment returns for a firm that owned both the leading eviction-as-a-service platform in a city and the leading tenant blacklist. Capture about 20 of the US’s top MSAs, and we may well be talking unicorn territory.

As we learned during the housing crisis, the best place to implement legal process outsourcing is against people who have a really hard time fighting back. That may trouble old-school lawyers who worry about ever-faster legal processes generating errors, deprivations of due process, or worse. But the legal tech community tends to think about these matters in financialized terms, not fusty old concepts like social justice or autonomy. I sense they will celebrate eviction-as-a-service as one more extension of technologized ordering of human affairs into a profession whose “conservatism” they assume to be self-indicting.

Still, even for them, caution is in order. Brett Scott’s skepticism about fintech comes to mind:

[I]f you ever watch people around automated self-service systems, they often adopt a stance of submissive rule-abiding. The system might appear to be ‘helpful’, and yet it clearly only allows behaviour that agrees to its own terms. If you fail to interact exactly correctly, you will not make it through the digital gatekeeper, which – unlike the human gatekeeper – has no ability or desire to empathise or make a plan. It just says ‘ERROR’. . . . This is the world of algorithmic regulation, the subtle unaccountable violence of systems that feel no solidarity with the people who have to use it, the foundation for the perfect scaled bureaucracy.

John Danaher has even warned of the possible rise of “algocracy.” And Judy Wajcman argues that “Futuristic visions based on how technology can speed up the world tend to be inherently conservative.” As new legal technology threatens to further entrench power imbalances between creditors and debtors, landlords and tenants, the types of feudalism Bruce Schneier sees in the security landscape threaten to overtake far more than the digital world.

(And one final note. Perhaps even old-school lawyers can join Paul Gowder’s praise for a “parking ticket fighting” app, as a way of democratizing advocacy. It reminds me a bit of TurboTax, which democratized access to tax preparation. But we should also be very aware of exactly how TurboTax used its profits when proposals to truly simplify the experience of tax prep emerged.)

Hat Tip: To Sarah T. Roberts, for alerting me to the eviction story.

The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely have envisioned the rise of cost-benefit analysis, and the comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial. 

Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide. 

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools both extend extant technologies of ranking, sorting, and evaluating, and raise fundamental challenges to the rule of law:

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithms developed by university researchers uses a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context. 

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . .What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection. 

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 

We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)

Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.

Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions

Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not only “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.

For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 

Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 

And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Ann Franks) on systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.


A Little History That May Help Understand Current Politics

The current politics around the race to be the Republican candidate for President, ISIS, online speech, campus speech, technology, labor, and more have struck me as angrier and a bit more irrational than I am used to, so an old essay, The Paranoid Style in American Politics by Richard Hofstadter, caught my eye. I offer it as a quick historical perspective on some of our current issues and approaches to them. Hofstadter writes quite well, so the essay is also an example of good style. But he shows that the “paranoid style,” as he calls it, arises across the range of political views and has done so for some time. Here is his opening:

American politics has often been an arena for angry minds. In recent years we have seen angry minds at work mainly among extreme right-wingers, who have now demonstrated in the Goldwater movement how much political leverage can be got out of the animosities and passions of a small minority. But behind this I believe there is a style of mind that is far from new and that is not necessarily right-wing. I call it the paranoid style simply because no other word adequately evokes the sense of heated exaggeration, suspiciousness, and conspiratorial fantasy that I have in mind. In using the expression “paranoid style” I am not speaking in a clinical sense, but borrowing a clinical term for other purposes. I have neither the competence nor the desire to classify any figures of the past or present as certifiable lunatics. In fact, the idea of the paranoid style as a force in politics would have little contemporary relevance or historical value if it were applied only to men with profoundly disturbed minds. It is the use of paranoid modes of expression by more or less normal people that makes the phenomenon significant. (emphasis added)

That he calls out that the style can show up in any party, and is not about being crazy, is excellent. He goes on to admit that the term is “pejorative,” because he wants to ensure we know that although it “has a greater affinity for bad causes than good. [] nothing really prevents a sound program or demand from being advocated in the paranoid style. Style has more to do with the way in which ideas are believed than with the truth or falsity of their content.” Wow. He knows someone may ask, well, what about true or false? He sweeps that issue aside so that he can get to his point: “I am interested here in getting at our political psychology through our political rhetoric. The paranoid style is an old and recurrent phenomenon in our public life which has been frequently linked with movements of suspicious discontent.”

In two paragraphs, Hofstadter explains the idea, the scope, and why one should read more. Damn fine work. Plus he goes on to show how McCarthyism, early populism, fears of Masons and Illuminati (yes, Illuminati), and fear of Jesuits fit his idea. To be clear, Hofstadter thinks that something different–including the feeling of “dispossession,” as Daniel Bell put it–explains what happened with the right in the 1950s. And he offers that mass media allows for greater, easier demonization. Nonetheless, I think that his summation fits a range of views today:

Norman Cohn believed he found a persistent psychic complex that corresponds broadly with what I have been considering—a style made up of certain preoccupations and fantasies: “the megalomaniac view of oneself as the Elect, wholly good, abominably persecuted, yet assured of ultimate triumph; the attribution of gigantic and demonic powers to the adversary; the refusal to accept the ineluctable limitations and imperfections of human existence, such as transience, dissention, conflict, fallibility whether intellectual or moral; the obsession with inerrable prophecies . . . systematized misinterpretations, always gross and often grotesque.”

As Hofstadter put it, this view allowed him to “conjecture” that “a mentality disposed to see the world in this way may be a persistent psychic phenomenon, more or less constantly affecting a modest minority of the population.”

The real punch came as he connected the modest minority to more. He said, “But certain religious traditions, certain social structures and national inheritances, certain historical catastrophes or frustrations may be conducive to the release of such psychic energies, and to situations in which they can more readily be built into mass movements or political parties.” That is the idea that worries me. According to Hofstadter, part of the problem may be “a confrontation of opposed interests which are (or are felt to be) totally irreconcilable, and thus by nature not susceptible to the normal political processes of bargain and compromise.” Furthermore, when groups are shut out of “the political process,” even if their demands are “unrealistic” or “unrealizable,” “they find their original conception that the world of power is sinister and malicious fully confirmed. They see only the consequences of power—and this through distorting lenses—and have no chance to observe its actual machinery.” The idea is to at least be open to other views and seek compromise. Still, I am not sure what the response to being shut out and unable to observe the machinery should be. I can understand that some will argue the process itself is corrupt, and it may be. But I don’t think that submission to the paranoid style is the way to go. Nor is it enough simply to declare that the system will work. To riff on Hofstadter: if the paranoid style is on the rise and going mainstream for any issue, we should note it, and be open to the claims and facts. It may be that we missed a sea change, one with not only style but substance, often a dangerous substance, as happens in “an arena for angry minds.”


China, the Internet, and Sovereignty

China’s World Internet Conference is, according to its organizers, about:

“An Interconnected World Shared and Governed by All—Building a Cyberspace Community of Shared Destiny”. This year’s Conference will further facilitate strategic-level discussions on global Internet governance, cyber security, the Internet industry as the engine of economic growth and social development, technological innovation and philosophy of the Internet. It is expected that 1200 leading figures from governments, international organizations, enterprises, science & technology communities, and civil societies all around the world will participate the Conference.

As the Economist points out, “The grand title is misleading: the gathering will not celebrate the joys of a borderless internet but promote “internet sovereignty”, a web made up of sovereign fiefs, gagged by official censors. Political leaders attending are from such bastions of freedom as Russia, Pakistan, Kazakhstan, Kyrgyzstan and Tajikistan.”

One of the great things about being at GA Tech is the community of scholars from a wide range of backgrounds. This year colleagues in Public Policy hired Milton Mueller, a leader in telecommunication and Internet policy. I have known his work for some time, but it has been great getting to hang out and talk with Milton. Not surprisingly, Milton has a take on the idea of sovereignty and the Internet. I can’t share it, as it is still in the works. But as a teaser, keep your eye out for it.

As a general matter, it seems to me that sovereignty will be a keyword in coming Internet governance debates across all sectors. Whether the term works from a political science perspective, or any other, should be interesting. Thinking of jurisdiction, privacy, surveillance, telecommunication, cyberwar, and intellectual property, I can see sovereignty being asserted, perverted, and converted to serve a range of interests. Revisiting the core international relations theories to be clear about what sovereignty is and should be seems a good project for a law scholar or student as these areas evolve.


Centralizers: Uber vs the Others (Lyft, Didi Kuaidi, Ola, and GrabTaxi)

Uber is looking to raise more than $2 billion; Lyft, Didi Kuaidi, Ola, and GrabTaxi have formed a global alliance to counter Uber. Where oh where is the disruptive, scrappy tech savior? Answer: it existed briefly, and the next phase is upon us. In The New Steam: On Digitization, Decentralization, and Disruption I argued that “[t]his era of disruption and decentralization will likely pass and new winners, who will look much like firms of old, will emerge, if they have not already.” I was building on the ideas Gerard Magliocca and I explored in our work on 3D printing. Although some technologies have helped decentralize production and distribution, to think that centralized players would all go away, or that new ones would not emerge, is a mistake. I was focused on safety, stability, liability, and insights from Douglass North.

As I said in the paper:

Douglass North captures a paradox that goes with transaction costs. Greater specialization, division of labor, and a large market increase transaction costs, because the shift to impersonal transactions demands higher costs to: 1) measure the valuable dimensions of a good or service; 2) protect individual property rights; 3) enforce agreements; and 4) integrate the dispersed knowledge of society. Standardized weights and measures, effective laws and enforcement, and institutions and organizations that integrate knowledge emerge, but the “dramatic increase in the overall costs of transacting” is “more than offset by dramatic decreases in production costs.” Digitization forces us to revisit these issues.

Uber’s success and the response of the other players raise another point. Although I think that society will favor centralized players in the long run, because that allows for some regulation, the process of centralization may also occur for simpler reasons. When one big player starts to break away from the pack, the rest may cooperate or consolidate to keep pace. There may be one winner or a handful. Either way, as Seattle now allows Uber and Lyft drivers to unionize and calls for more regulation continue, the former disruptors will be seen as the new centralized power and treated as such. The reasons offered for that treatment are what draw my interest, and where legal theory has seen and will see some action.


BEER (and Brands)!! IPA, SOUR, Coors, Miller, STELLA!!!

It seemed quaint a few years ago when someone wanted a pumpkin brew for Halloween and asked my help in finding it. Pumpkin. How novel. But craft brewing is no longer novel. According to Fortune, “Craft beer volume represented just 1% of the overall beer industry in 1994 but stands at over 11% today.” Nonetheless, the recent merger action in beer makes the craft beer industry a bit nervous.

A key issue seems to be that the merger may cut off access to craft beers, because AB InBev has been buying up distributors. The fear is that at bars and retailers one would only have access to “Bud and Miller.” As Spencer Waller and I wrote in Brands, Competition, and the Law, branding allows businesses “to move beyond price, product, place, and position and create the idea that a consumer should buy a branded good or service at a higher price than the consumer might otherwise pay.” As Susan Strasser has explained, national manufacturers historically used branding to overcome the “strong loyalties [customers had] to the people with whom they did business, which might surpass their interest in nationally advertised products that they had not yet tried.” At the same time, local retailers knew that national goods cut into their profits and often refused to carry these new goods. Which brings us to today and some questions about beer, brands, and the law. Would changing the alcohol system help or hurt?

If consumers could buy directly from alcohol makers, would that blunt the force of a beer mega-merger? For that matter, what are the main markets for craft beers? Do distributors sell, say, a Georgia beer only within Georgia, or within a radius of the brewery? Would a craft beer maker even want a world without the three-tier system? Wine seems to do OK with direct sales and distribution, so I am thinking beer and even craft spirits might like that option. But I don’t know.

Also, it seems that the issue is not just about price. People may want to pay more for the craft beer but can’t get it. That seems like a perverse outcome. I am not a deregulate-everything-and-wonders-will-flow person, but I think this industry may be heading toward much flatter organization and less regulation.

To those who know about alcohol making and selling: I am all ears. Until then, I may have a beer and think on this one.

Complicating the Narrative of Legal Automation

Richard Susskind has been predicting “the end of lawyers” for years, and has doubled down in a recent book coauthored with his son (The Future of the Professions). That book is so sweeping in its claims—that all professions are on a path to near-complete automation—that it should actually come as a bit of a relief for lawyers. If everyone’s doomed to redundancy, law can’t be a particularly bad career choice. To paraphrase Monty Python: nobody expects the singularity.

On the other hand, experts on the professions are offering some cautions about the Susskinds’ approach. Howard Gardner led off an excellent issue of Daedalus on the professions about ten years ago. He offers this verdict on the Susskinds’ perfunctory response to objections to their position:

In a section of their book called “Objections,” they list the principal reasons why others might take issue with their analyses, predictions, and celebratory mood. This list of counter-arguments to their critique includes the trustworthiness of professionals; the moral limits of unregulated markets; the value of craft; the importance of empathy and personal interactions; and the pleasure and pride derived from carrying out what they term ‘good work.’ With respect to each objection, the Susskinds give a crisp response.

I was disappointed with this list of objections, each followed by refutation. For example, countering the claim that one needs extensive training to become an expert, the Susskinds call for the reinstatement of apprentices, who can learn ‘on the job.’ But from multiple studies in cognitive science, we know that it takes approximately a decade to become an expert in any domain—and presumably that decade includes plenty of field expertise. Apprentices cannot magically replace well-trained experts. In another section, countering the claim that we need to work with human beings whom we can trust, they cite the example of the teaching done online via Khan Academy. But Khan Academy is the brainchild of a very gifted educator who in fact has earned the trust of many students and indeed of many teachers; it remains to be seen whether online learning à la Khan suffices to help individuals—either professionals or their clients—make ‘complex technical and ethical decisions under conditions of uncertainty.’ The Susskinds recognize that the makers and purveyors of apps may have selfish or even illegal goals in mind. But as they state, “We recognize that there are many online resources that promote and enable a wide range of offenses. We do not underestimate their impact of threat, but they stand beyond the reach of this book” (p. 233).

Whether or not one goes along with specific objections and refutations, another feature of the Susskinds’ presentation should give one pause. The future that they limn seems almost entirely an exercise in rational deduction and accordingly devoid of historical and cultural considerations.

Experts with a bit more historical perspective differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%. Even the highly cited study by Carl Frey and Michael Osborne (The Future of Employment: How Susceptible Are Jobs to Computerisation?) placed attorneys in the “low risk” category when it comes to replacement by software and robots. They suggest paralegals are in much more danger.

But empirical research by economist James Bessen has complicated even that assumption: “Since the late 1990s, electronic document discovery software for legal proceedings has grown into a billion dollar business doing work done by paralegals, but the number of paralegals has grown robustly.” Like MIT’s David Autor, Bessen calls automation a job creator, not a job destroyer. “The idea that automation kills jobs isn’t true historically,” Steve Lohr reports, and it is still dubious. The real question is whether we reinforce policies designed to promote software and robotization that complement current workers’ skills, or slip into a regime of deskilling and substitution.

Is the Happiness Industry Creating Algorithmic Selves?

In a recent episode of the podcast “Thinking Allowed,” host Laurie Taylor covered two fascinating books: The Wellness Syndrome and The Happiness Industry. One author discussed a hedge fund that now manages what it calls “biorisk” by correlating traders’ eating, drinking, and sleeping habits with their earnings for the firm. Will Davies, author of The Happiness Industry, discussed less intrusive, but more pervasive, efforts to ensure that workers are fitter, happier, and therefore more productive. As he argues in the book,

[M]ood-tracking technologies, sentiment analysis algorithms and stress-busting meditation techniques are put to work in the service of certain political and economic interests. They are not simply gifted to us for our own Aristotelian flourishing. Positive psychology, which repeats the mantra that happiness is a personal ‘choice’, is as a result largely unable to provide the exit from consumerism and egocentricity that its gurus sense many people are seeking.

But this is only one element in the critique to be developed here. One of the ways in which happiness science operates ideologically is to present itself as radically new, ushering in a fresh start, through which the pains, politics and contradictions of the past can be overcome. In the early twenty-first century, the vehicle for this promise is the brain. ‘In the past, we had no clue about what made people happy – but now we know’, is how the offer is made. A hard science of subjective affect is available to us, which we would be crazy not to put to work via management, medicine, self-help, marketing and behaviour change policies.

The happiness industry thrives in a culture premised on an algorithmic model of the self. People (or “econs”) are seen as a bundle of inputs (data collection), algorithmic processes (data analysis), and outputs (data use). Since the demands of affect can only be extirpated in robots, the challenge for the happiness industry is to optimize some quantum of satisfaction for its human subjects, compatible with their maximum productivity. Objectively, the algorithmic self is no more (nor less) than the goods and services it uses and creates; subjectively, it strives to convert inputs of resources into outputs of joy, contentment–name your positive affect. As a “human resource,” it is simply raw material to be deployed to its most profitable use.

Audit culture, quantification (e.g., the quantified self), commensuration, and cost-benefit analysis all reflect and reinforce algorithmic selfhood. Both the Templeton Foundation and the Social Brain Centre in Britain are developing intriguingly countercultural alternatives to big-data-driven behaviorism. In highlighting the need for such alternatives, Davies deserves great credit for exposing the political economy behind corporate appropriations of positive psychology.

Taking Human Capital Theory Seriously: Simkovic on “The Knowledge Tax”

Graduate professional education in the US is facing a financing squeeze. Some argue that those learning to become doctors, nurses, engineers, lawyers, and the like should get no help from the federal government, because they tend to earn higher incomes than average. Others question that premise, arguing that past returns to graduate degrees are no guarantee of future performance; they believe an impending wave of defaults on federal student loans will raise the cost of federal credit programs.

Nevertheless, the two camps argue for policies with convergent outcomes. The “grad students will be rich” camp argues for curtailing federal loans, since it believes professionals can handle the higher interest rates of the private market. The “grad students will be poor” camp wants to raise rates on federal student loans to build up the already hefty surpluses the government is now making, in preparation for the putative future defaults. In the eyes of both, graduate students are the undeserving recipients of government largesse.

I’m not convinced by either: the “too rich” camp fails to value professional services properly, and the “too poor” camp relies on controversial accounting techniques. But until I read Mike Simkovic’s recent paper “The Knowledge Tax,” I’d never considered an even more fundamental distortion at work here: tax policy. Simkovic lays out the problem with characteristic clarity, considering a hypothetical college graduate deciding between (1) attending medical school and practicing medicine, or (2) purchasing a small vacant building and converting it into rental apartments.
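The shape of the distortion can be sketched with a few hypothetical numbers (mine, not Simkovic’s), assuming only that a building’s purchase price can be recovered through depreciation deductions while tuition cannot:

```python
# Stylized comparison of the two paths; every figure here is a
# hypothetical illustration, not a number from "The Knowledge Tax".

COST = 200_000          # upfront outlay: tuition, or the building's price
ANNUAL_RETURN = 30_000  # extra earnings from the degree, or net rent
YEARS = 30              # working life / holding period
TAX_RATE = 0.35         # one flat rate for both paths, to isolate depreciation

# Path 1: medical school. Tuition is neither deductible nor depreciable,
# so the full annual return is taxed every year.
degree_after_tax = ANNUAL_RETURN * (1 - TAX_RATE) * YEARS - COST

# Path 2: rental building. Depreciation deductions shrink taxable income
# each year, even though the pre-tax cash flows are identical.
annual_depreciation = COST / YEARS
taxable_rent = ANNUAL_RETURN - annual_depreciation
building_after_tax = (ANNUAL_RETURN - TAX_RATE * taxable_rent) * YEARS - COST

print(f"degree, after tax:   ${degree_after_tax:,.0f}")    # $385,000
print(f"building, after tax: ${building_after_tax:,.0f}")  # $455,000
```

Two identical pre-tax investments diverge by $70,000 here purely through tax treatment: that asymmetry between human and physical capital is what Simkovic’s hypothetical is built to expose.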