Author: Frank Pasquale

The Phantom Industrial Policy of the Beltway’s Favorite Health Cost Cutters

A few weeks ago, I spoke on artificial intelligence in health care at the AI Now Conference. I focused on the distinction between substitutive automation (which replaces human labor with software or robots) and complementary automation (which deploys technology to assist, accelerate, or improve humans’ work). I developed three cases where complementary automation ought to be preferred: where it produces better outcomes; in sensitive areas like targeting persons for mental health interventions; and to improve data gathering. Law and policy (ranging from licensure rules to reimbursement regs) could help assure that the health care sector pursued complementary automation where appropriate, rather than chasing the well-hyped narrative of robot doctors and nurses.

The pushback was predictable. Even if complementary automation is better now, shouldn’t our policy reward firms that try to eliminate ever more labor costs? Doesn’t *everyone* agree that the US spends too much on health care–and isn’t technology the best way of reducing that spending? Let me try to address each of these views, boiling down some perspectives from a longer, academic article.

A Policy at War with Itself

There is a troubling tension at the heart of US labor policy on health care and automation. Numerous high-level officials express grave concerns about the “rise of the robots,” since software is taking over more jobs once done by humans. Yet they also tend to treat growth in health care jobs as a problem. In an economy where automation is pervasive, one would think they would be thankful for new positions at hospitals, nursing homes, and EHR vendors. But they remain conflicted, anxious about maintaining some arbitrary cap on health spending.

Politico reporter Dan Diamond encapsulated this conflict in his recent article, “Obamacare: The Secret Jobs Program”–and he leaves no doubt about which side he thinks is right.

Social Media for Scholars

For about a year now, Nic Terry and I have been hosting “The Week in Health Law” podcast. (We did miss a few weeks–so we’re actually more like “This 8.3 Days in Health Law”–but we’re pretty reliable!) We interview law professors, social scientists, and other experts, mainly from the US, though with some international presence. We recently convened a “meta-podcast” with 3 past show guests (and the editor of Pharmalot, an influential pharma industry blog) on the importance of social media presence for engaged academics. Our show notes also link to some good guides from other scholars. Like the “No Jargon” podcast of the Scholars Strategy Network, we try to bring informed commentary on complex ideas (like agency guidance on wellness programs) to a broad audience. We’ve received positive feedback from around the world, and I’m often surprised by the range of people who are tuning in (from hospital administrators to bar leaders to general counsel).

I just wanted to add one cautionary note to the emerging commentary on engaged scholarship and social media. I often see participation in blogs, podcasts, or Twitter framed in corporate or neoliberal discourse–the need to “build a brand,” “increase citations,” “leverage a network,” and so on. Even I engage in that on the podcast when I discuss altmetrics. But at its core, the scholarly identity is very different from the metricized self of performance optimization. Our best conversations feature a critical distance from the topics at hand, and even from the ever more voluminous research apparatus around them. They highlight, rather than gloss over, the inevitable conflicts of values that emerge once one tries to apply banalities like the “triple aim” in specific settings. There is a deep interest in empirical research, and a sober awareness of its limits. (Our discussion with Scott Burris on policies like bike helmet laws is one very good example of this.)

The best moments of the podcast (contrasted with the impoverished neoliberal discourse often used to justify participation in engaged scholarship) highlight two very different meanings of “professionalism” now at work in our culture. The professionalized scholar is often a cite-generator and grant-grubber, more concerned with the external indicia of achievement than with the intrinsic value of the research those indicia are meant merely to validate or support. But if we consider the academy as a profession, we realize the extraordinary importance of its partial autonomy from both market and state. It exists to create a space for research and conversations that are impossible to monetize immediately (or maybe ever), and that have not been specifically approved by political institutions.

As the state increasingly becomes a cat’s paw of market forces, and market forces themselves are engineered by a shrinking and short-sighted financial elite, preserving the residual autonomy of the professions is more important than ever. I hope that future discussions of engaged scholarship focus more on its potential to advance solidarity among those committed to an independent academy–not an academy keen on ever more precise rankings of its members, or defensive about proving its value in economic terms that are themselves of questionable utility.

Scholarship and Mid-Career Self-Assessments: A Brief Reflection on Simkovic’s What Can We Learn from Credit Markets?

Chris J. Walker has written a very helpful series of posts for young professors on “how to become a voice in one’s field.” The last addressed one of the hardest issues: “Am I Asking the Right Questions?” Academic freedom at a professional school comes with serious responsibilities: to choose field(s), to apply methodology well, and to try to establish the importance of one’s findings among one’s peers and (increasingly) among educated publics, as an engaged academic. Both Walker and Michael Rich offer wise perspectives on the dilemmas that inevitably come up during thoughtful reflection on these responsibilities, focusing on a process of discernment.

I also think that we can learn a great deal from the content of successful scholars’ inquiry. Usually, researchers only undertake this type of self-reflection when applying for jobs and preparing research agendas (a mostly private process), or at the end of a career (when a long list of accomplishments may seem too daunting to be relatable to younger peers). But winners of the ALI Young Scholars Medal appear to get invited to give a public talk on their work at an earlier stage of inquiry. Mike Simkovic (whose work I’ve previously praised here) gave such an address in May.

The talk focuses on the questions that led Simkovic to research credit markets. His work helped explain some puzzling aspects of personal finance–for example, why the harsh restrictions on bankruptcy imposed in the mid-2000s did not lead to cheaper credit. His findings are revealing: consolidation in the credit card industry, as well as confusing contractual terms, helped dominant firms keep the resulting profits rather than compete them away. As of 2016, even The Economist has caught up to this challenge to laissez-faire orthodoxy–but when Simkovic first made it, complacent assumptions about market efficiency were dominant.

From that inquiry, Simkovic describes a chain of puzzles that led him to challenge widely held preconceptions in corporate law, education finance, and tax law. It is an engaging account of a particularly fruitful and insightful trajectory of inquiry.

I recently proposed a paper for the MLA’s annual conference, entitled “Beyond the False Certainties of Impact Factors, Altmetrics, and Download Counts: Qualitative & Narrative Accounts of Scholarship.” It arose out of my dissatisfaction with the metricization of accomplishment. As citation counts proliferate, accumulating the ersatz currency of reputational quantification threatens to overwhelm the real purpose of research–just as financialization has all too often undermined the productive functions of the economy.

Traditional modes of assessment (including tenure letters and festschrift tributes) are an alternative form of evaluation. And an essay like Simkovic’s is an example of a type of self-evaluation that should become more popular among scholars at certain career milestones (like tenure, promotion to full professor or senior lecturer, and, say, every five or ten years thenceforward). We need better, more narrative, mid-career assessments of the depth and breadth of scholarly contributions. Such qualitative modes of evaluation can complement the quantification-driven metrics now ascendant in the academy.

Facebook is More Like a Cable Network than a Newspaper

As I worried yesterday, Facebook’s defenders are already trying to end the conversation about platform bias before it can begin. “It’s like complaining that the New York Times doesn’t publish everything that’s fit to print or that Fox News is conservative,” Eugene Volokh states.

Eight years ago, I argued that platforms like Google are much more like cable networks than newspapers–and should, in turn, be subject to more governmental regulation. (The government can’t force Fox News to promote Bernie Sanders–but it can require Comcast to carry local news.) The argument can be extended to dominant social networks, or even apps like WeChat.

As I note here, to the extent megaplatforms are classifiable under traditional First Amendment doctrine, they are often closer to utilities or cable networks than to newspapers or TV channels. Their reach is far larger than that of newspapers or channels. Their selection and arrangement of links comes far closer to a cable network’s decisions about which channels to carry (such networks, by and large, do not create the content they choose to air) than to those of a newspaper, which mostly runs its own content and has cultivated an editorial voice. Finally, and most importantly, massive internet platforms must take the bitter with the sweet: if they want to continue avoiding liability for intellectual property infringement and defamation, they should welcome categorization as conduits for speech, rather than claiming speaker status.

Admittedly, if there is any aspect of Facebook where it might be said to be cultivating some kind of editorial voice, it is the Trend Box. It is ironic that the company has gotten into the most trouble for this service, rather than for the much more problematic newsfeed. But it invited this trouble with its bland and uninformative description of what the Trend Box is. Moreover, if the Trend Box is indeed treated as “media” (rather than a conduit for media), it could betoken a much deeper challenge to foundational media regulation like sponsorship disclosures–a topic I’ll tackle next week.

Platform Responsibility

Internet platforms are starting to recognize the moral duties they owe their users. Consider, for example, this story about Baidu, China’s leading search engine:

Wei Zexi’s parents borrowed money and sought an experimental treatment at a military hospital in Beijing they found using Baidu search. The treatment failed, and Wei died less than two months later. As the story spread, scathing attacks on the company multiplied, first across Chinese social networks and then in traditional media.

After an investigation, Chinese officials told Baidu to change the way it displays search results, saying they are not clearly labeled, lack objectivity and heavily favor advertisers. Baidu said it would implement the changes recommended by regulators, and change its algorithm to rank results based on credibility. In addition, the company has set aside 1 billion yuan ($153 million) to compensate victims of fraudulent marketing information.

I wish I could include this story in the Chinese translation of The Black Box Society. On a similar note, Google this week announced it would no longer run ads from payday lenders. Now it’s time for Facebook to step up to the plate, and institute new procedures to ensure more transparency and accountability.

A Social Theory of Surveillance

Bernard Harcourt’s Exposed is a deeply insightful analysis of data collection, analysis, and use by powerful commercial and governmental actors. It offers a social theory of both surveillance and self-exposure. Harcourt transcends methodological individualism by explaining how troubling social outcomes can be generated by personal choices that each seem rational at the time they are made. He also helps us understand why ever more of daily life is organized around the demands of what Shoshana Zuboff calls “surveillance capitalism”: intimate monitoring of our daily lives to maximize our productivity as consumers and workers.

The Chief Data Scientist of a Silicon Valley firm told Zuboff, “The goal of everything we do is to change people’s actual behavior at scale. When people use our app, we can capture their behaviors, identify good and bad behaviors, and develop ways to reward the good and punish the bad. We can test how actionable our cues are for them and how profitable for us.” Harcourt reflects deeply on what it means for firms and governments to “change behavior at scale,” identifying “the phenomenological steps of the structuration of the self in the age of Google and NSA data-mining.”

Harcourt also draws a striking, convincing analogy between Erving Goffman’s concept of the “total institution” and the ever-denser networks of sensors and training (in the form of both punishments and lures) that powerful institutions use to assure behavior occurs within ranges of normality. He observes that some groups are far more likely to be exposed to pain or inconvenience from the surveillance apparatus, while others enjoy its blandishments in relative peace. But almost no one can escape its effects altogether.

In the space of a post here, I cannot explicate Harcourt’s approach in detail. But I hope to give our readers a sense of its power to illuminate our predicament by focusing on one recent, concrete dispute: Apple’s refusal to develop a tool to assist the FBI’s effort to unlock the data on an encrypted iPhone. The history Harcourt’s book recounts helps us understand why the case has attracted so much attention—and how it may be raising false hopes.


The UK’s “Democratization” of the Professions: Case Studies

Read a few techno-utopian pieces on the future of US legal practice, and you’ll see, again and again, “lessons from Britain.” The UK “legal industry” is lauded for its bold innovation and deregulatory verve. Unfortunately, it appears that in its enthusiasm to make a neoliberal omelette, the green and pleasant land is breaking a few eggs:

Gap-year students are being recruited by the Home Office to make potentially life or death decisions on asylum claims, the Observer has learned. The students receive only five weeks’ training. . . . Immigration lawyers and asylum seekers have condemned the practice, pointing out that, after completing a degree, immigration lawyers undergo a further four years’ training. . . .

A health professional from west Africa who was granted refugee status last year said his claim was initially refused and it took four years of appeals to win his refugee status. “I attempted suicide after my asylum claim was refused because I knew my life would be in danger if I was forcibly returned home,” he said. “I became friendly with a family where the son had taken a gap year during his university degree to work as a Home Office decision-maker. I could not believe that he was making these life and death decisions about complex cases like mine. I am not sure that students are capable of the complex level of critical analysis required to make asylum decisions.”

Meanwhile, the British Health Secretary is telling parents that, hey, Dr. Google may be just as good as a regular physician. Expect to see the new “democratization of the professions” accelerate fastest among those without the resources to resist.

Is Eviction-as-a-Service the Hottest New #LegalTech Trend?

Some legal technology startups are struggling nowadays, as venture capitalists pull back from a saturated market. The complexity of the regulatory landscape is hard to capture in a Silicon Valley slide deck. Still, there is hope for legal tech’s “idealists.” A growing firm may bring eviction technology to struggling neighborhoods around the country:

Click Notices . . . integrates its product with property management software, letting landlords set rules for when to begin evictions. For instance, a landlord could decide to file against every tenant that owes $25 or more on the 10th of the month. Once the process starts, the Click Notices software, which charges landlords flat fees depending on local court costs, sends employees or subcontractors to represent the landlord in court (attorneys aren’t compulsory in many eviction cases).
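To see how little judgment such a trigger embeds, here is a minimal sketch of the kind of rule the passage describes–a dollar threshold and a calendar day. The names and fields are hypothetical, not Click Notices’ actual code:

```python
# Hypothetical sketch of a rule-driven eviction filing trigger.
# All names, fields, and thresholds are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class Tenant:
    name: str
    balance_owed: float  # unpaid rent, in dollars

def tenants_to_file_against(tenants, today: date,
                            min_balance: float = 25.0,
                            filing_day: int = 10):
    """Apply the landlord's rule: file against every tenant owing
    min_balance or more on the filing_day of the month."""
    if today.day != filing_day:
        return []
    return [t for t in tenants if t.balance_owed >= min_balance]

tenants = [Tenant("A", 336.0), Tenant("B", 25.0), Tenant("C", 0.0)]
print([t.name for t in tenants_to_file_against(tenants, date(2016, 5, 10))])
# -> ['A', 'B']
```

Note what such a rule cannot see: a payment withheld over mold, a check already in the mail, a disputed fee. The trigger fires on a balance and a date, and the filings follow automatically.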

I can think of few better examples of Richard Susskind’s vision for the future of law. As one Baltimore tenant observes, the automation of legal proceedings can lead to near-insurmountable advantages for landlords:

[Click Notices helped a firm that] tried to evict Dinickyo Brown over $336 in unpaid rent. Brown, who pays $650 a month for a two-bedroom apartment in Northeast Baltimore, fought back, arguing the charges arose after she complained of mold. The landlord dropped the case, only to file a fresh eviction action—this time for $290. “They drag you back and forth to rent court, and even if you win, it goes onto your record,” says Brown, who explains that mold triggers her epilepsy. “If you try to rent other properties or buy a home, they look at your records and say: You’ve been to rent court.”

And here’s what’s truly exciting for #legaltech innovators: the digital reputation economy can synergistically interact with the new eviction-as-a-service approach. Tenant blacklists can ensure that merely trying to fight an eviction leads to devastating consequences in the future. Imagine the investment returns for a firm that owned both the leading eviction-as-a-service platform in a city and the leading tenant blacklist. Capture about 20 of the US’s top MSAs, and we may well be talking unicorn territory.
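A toy sketch may clarify the mechanism the tenant describes above–“even if you win, it goes onto your record.” Many screening products index the mere existence of a court filing, not its outcome; everything here (names, fields, schema) is illustrative, not any vendor’s actual product:

```python
# Illustrative only: a tenant blacklist built from court filing records.
court_records = [
    {"tenant": "D. Brown", "case": "rent court", "outcome": "dismissed"},
    {"tenant": "D. Brown", "case": "rent court", "outcome": "dismissed"},
]

# The blacklist keys on filings alone; the "outcome" field is simply dropped.
blacklist = {record["tenant"] for record in court_records}

def screen(applicant: str) -> str:
    """What a prospective landlord sees: a flag, with no context."""
    return "FLAG: rent-court history" if applicant in blacklist else "clear"

print(screen("D. Brown"))  # -> flagged, despite two dismissed cases
```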

As we learned during the housing crisis, the best place to implement legal process outsourcing is against people who have a really hard time fighting back. That may trouble old-school lawyers who worry about ever-faster legal processes generating errors, deprivations of due process, or worse. But the legal tech community tends to think about these matters in financialized terms, not fusty old concepts like social justice or autonomy. I sense they will celebrate eviction-as-a-service as one more extension of technologized ordering of human affairs into a profession whose “conservatism” they assume to be self-indicting.

Still, even for them, caution is in order. Brett Scott’s skepticism about fintech comes to mind:

[I]f you ever watch people around automated self-service systems, they often adopt a stance of submissive rule-abiding. The system might appear to be ‘helpful’, and yet it clearly only allows behaviour that agrees to its own terms. If you fail to interact exactly correctly, you will not make it through the digital gatekeeper, which – unlike the human gatekeeper – has no ability or desire to empathise or make a plan. It just says ‘ERROR’. . . . This is the world of algorithmic regulation, the subtle unaccountable violence of systems that feel no solidarity with the people who have to use it, the foundation for the perfect scaled bureaucracy.

John Danaher has even warned of the possible rise of “algocracy.” And Judy Wajcman argues that “Futuristic visions based on how technology can speed up the world tend to be inherently conservative.” As new legal technology threatens to further entrench power imbalances between creditors and debtors, landlords and tenants, the types of feudalism Bruce Schneier sees in the security landscape threaten to overtake far more than the digital world.

(And one final note. Perhaps even old-school lawyers can join Paul Gowder’s praise for a “parking ticket fighting” app, as a way of democratizing advocacy. It reminds me a bit of TurboTax, which democratized access to tax preparation. But we should also be very aware of exactly how TurboTax used its profits when proposals to truly simplify the experience of tax prep emerged.)

Hat Tip: To Sarah T. Roberts, for alerting me to the eviction story.

Private Lenders’ Troubling Influence on Federal Loan Policy

Hundreds of billions of dollars are at stake in the upcoming reauthorization of the Higher Education Act (HEA). Like the confirmation of a new Supreme Court justice, it may be delayed into 2017 (or beyond) by partisan wrangling. But as that wrangling happens, Washington insiders are drafting “radical” proposals to change the federal government’s role.

Faculty at all institutions need to examine these proposals closely. The law and public finance issues raised by them are complex. But if we fail to understand them, and to weigh in against the worst proposals, we could witness developments that will fundamentally change (and perhaps end) the university as we know it. Moreover, even if universities find ways to shield themselves from change, some proposals will leave students vulnerable to worse financing terms and lower-quality programs.

In a series of posts over the next few weeks, I’ll be explaining the stakes of the HEA reauthorization. For now, I want to start with a thought experiment on how education finance may change, based on recent activities of large banks and digital lending services I’ve studied. What would be ideal, in terms of higher education finance, for them?

Financiers consider government a pesky and unfair competitor. While federal loans offer options to delay payments (like deferment and forbearance), and discharge upon a borrower’s death or permanent disability (with certain limitations), private loans may not offer any of these options. Private lenders often aim to charge subprime borrowers more than prime borrowers; federal loans offer generally uniform interest rates (though grad students pay more than undergrads, and Perkins loans are cheaper than average). Alternatively, private lenders may charge borrowers from wealthy families (or attending wealthy institutions) less. Rates might even fluctuate on the basis of grades: just as some students now lose their scholarships when they fail to maintain a certain GPA, they may face a credit hit for poor performance.*

Now, in conventional finance theory, that’s a good thing: the “pricier” loan sends a signal warning students that their course of study may not be as good an idea as they first thought. But the commitment to get a degree is not really analogous to an ordinary consumer decision. A simple Hayekian model of the “market as information processor” works well in a supermarket: if bananas suddenly cost far more than apples, that signal will probably move a significant number of customers to substitute the latter for the former. But education does not work like that. College degrees (and in many fields further education) are necessary to get certain jobs. The situation is not as dire as in health care, the best example of how the critical distinction between “needs” and “wants” upends traditional economic analysis. But it is still a much, much “stickier” situation than the average consumer purchase. Nor can most students simply “go to a cheaper school” without losing social networks, enduring high transition costs, and sacrificing program quality.

For financiers, a sliding scale of interest rates makes perfect sense as “calculative risk management.” But we all know how easily it can reinforce inequality. A rational lender would charge much lower interest rates than average to a student from a wealthy family attending Harvard. The lender would charge far more to a poorer student going to Bunker Hill Community College. “Risk-based pricing” is a recipe for segmenting markets, extracting more from the bottom and less from the top. The same logic promoted the tranching of mortgage-backed securities, restructuring housing finance to meet investor demands. Some investors wanted income streams from the safest borrowers only–they bought the AAA tranches. Others took more risk, in exchange for more reward. Few considered how the lives of the borrowers could be wrecked if the “bets” went sour.
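A toy amortization comparison makes the distributional point concrete. Every figure below is an assumption chosen for illustration, not any lender’s actual terms:

```python
# Purely illustrative: the same $30,000 loan under two "risk-based" rates.
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

for label, rate in [("wealthy family, elite school", 0.04),
                    ("poorer family, community college", 0.12)]:
    pmt = monthly_payment(30_000, rate, 10)
    print(f"{label}: ${pmt:,.2f}/month, ${pmt * 120:,.2f} over ten years")
# On these assumptions, the "riskier" borrower repays roughly $15,000 more
# on the same $30,000 principal.
```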

Now you might ask: what’s the difference between those predictable disasters and those arising out of defaults on federal loans? Federal loans, too, are very difficult to discharge in bankruptcy. But federal loans have income-based repayment (IBR) options. For loans made after 2007, borrowers in distress can opt into a payment plan keyed to their income level, which eventually forgives the debt. Private loans don’t offer IBR.

But IBR is not that great a deal, you may counter. And in many cases, you’re right, it isn’t! Interest can accumulate for 20 or 25 years. Then, when the debt is finally forgiven, the forgiven amount may be treated as taxable income. There is no IBR for the tax payment. Moreover, the impact of growing debt (even if it is eventually to be forgiven) on future opportunities is, at present, largely unknown. Many consumer scores may factor it in, without even giving the scored individual notice that they are doing so.
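A back-of-the-envelope sketch shows how these two problems–decades of accumulating interest, then a lump-sum tax–can compound. All the numbers are assumptions chosen for illustration, not official program terms:

```python
# Hypothetical illustration of the IBR "tax bomb."
balance = 100_000.0        # assumed initial federal loan balance
annual_rate = 0.06         # assumed interest rate
annual_payment = 4_000.0   # assumed income-based payment (below accrued interest)
years_to_forgiveness = 25
marginal_tax_rate = 0.25   # assumed tax rate applied to the forgiven amount

# Payments below accrued interest mean the balance grows for 25 years.
for _ in range(years_to_forgiveness):
    balance += balance * annual_rate - annual_payment

print(f"Forgiven after {years_to_forgiveness} years: ${balance:,.0f}")
print(f"Tax due in the forgiveness year: ${balance * marginal_tax_rate:,.0f}")
# On these assumptions: roughly $210,000 forgiven, and a one-time tax bill
# of roughly $52,000, with no IBR for the tax payment itself.
```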

So why keep up the federal role in higher ed finance? Because one key reason federal loans are so bad now is that private lenders have had such a powerful role in lobbying, in staffing the key loan-disbursing agency (the Department of Education), and in supporting (directly or indirectly) think tank or analyst “research” on higher ed finance. When government is your competitor, you use the regulatory process to make the government’s “product” as bad as possible, to make your own look better by comparison. And the more of the market private lenders take, the more money they’ll have to advocate for higher rates and worse terms on federal loans–or for getting rid of them altogether.

—————————-

*The CFPB has warned lenders that using institutional cohort default rates to price loans could violate fair lending laws, and that may have scared some big players away from doing too much risk-based pricing. However, with the rise of so many fringe and alternative lenders, and the opacity of algorithmic determinations of creditworthiness, the risk of disparate impact is still present.

The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely have envisioned the rise of cost-benefit analysis, and the comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial.

Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide.

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools are logical extensions of extant technologies of ranking, sorting, and evaluating–and they raise fundamental challenges to the rule of law:

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithms developed by university researchers use a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context.

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . .What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection. 

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 

We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)

Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.

Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions

Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not only “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.

For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 

Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 

And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to the world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Ann Franks) on systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.