The Continued Need for Technological Due Process

As my work on Technological Due Process explored, government increasingly uses automated systems to help human administrators make decisions about people’s important rights.  Sometimes the computers make the decisions themselves, with varying degrees of human oversight.  Government decision-making systems include data-matching programs, which compare two or more databases using an algorithmic set of rules to estimate the likelihood that two sets of personal identifying information refer to the same individual.
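As a rough illustration of how such a matching rule works, consider a program that computes a weighted similarity score across identity fields and flags record pairs above a threshold.  The field names, weights, and threshold below are hypothetical, my own stand-ins rather than anything drawn from a deployed government system:

```python
# Hypothetical sketch of a data-matching rule: weighted field
# similarity with a flagging threshold. All weights are invented.

def field_match(a: str, b: str) -> float:
    """Crude similarity: 1.0 for an exact match, else 0.0."""
    return 1.0 if a.strip().lower() == b.strip().lower() else 0.0

def match_score(rec1: dict, rec2: dict) -> float:
    # Weights reflect an assumed judgment that birth dates are more
    # identifying than names; a real system would tune these.
    weights = {"name": 0.4, "dob": 0.4, "zip": 0.2}
    return sum(w * field_match(rec1[f], rec2[f]) for f, w in weights.items())

THRESHOLD = 0.7  # pairs scoring above this are treated as "same person"

r1 = {"name": "John Gass", "dob": "1970-01-01", "zip": "02101"}
r2 = {"name": "John Glass", "dob": "1970-01-01", "zip": "02101"}
print(round(match_score(r1, r2), 2))  # 0.6
```

Everything turns on the weights and the threshold: set them loosely and two different people sharing a birth date and zip code can be scored as one person.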

Data-matching programs frequently misidentify individuals because they use crude algorithms that cannot distinguish between similar names.  Sometimes this accords with policy: better to have more false positives than false negatives when it comes to finding terrorists.  Other times, it’s a problem that humans resolve before anyone gets hurt.  Yet, time and again, human operators fall down on the job.
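To see how a crude name-matching rule misidentifies people, consider a matcher that treats any pair of names within one character edit as the same person.  This is a simplification of my own for illustration, not the algorithm any agency actually uses:

```python
# Illustrative only: a one-character edit distance is enough for a
# naive matcher to conflate two different people.

def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def same_person(name1: str, name2: str) -> bool:
    # A crude rule: names within one edit count as "the same";
    # false positives like Gass/Glass follow immediately.
    return levenshtein(name1.lower(), name2.lower()) <= 1

print(same_person("John Gass", "John Glass"))  # True: a false positive
```

"Gass" and "Glass" differ by a single inserted letter, so the rule collapses two distinct people into one, exactly the kind of error the prose above describes.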

Here’s a recent example.  An anti-terrorism facial recognition system scans databases of state driver’s license images to prevent terrorism, reduce fraud, and improve the accuracy of state-issued identification documents.  Massachusetts started using the software after receiving a $1.5 million grant from the U.S. Department of Homeland Security.  On March 22, Massachusetts resident John Gass received a letter from the state motor vehicles registry informing him that he had to cease driving because his license had been revoked.  From various news reports, it seems that the letter did not tell Mr. Gass why he lost his license.  Only after various calls and a hearing with motor vehicle officials did he learn that the system had identified his license as evidence of potential fraud.  The system flagged Gass because he looked like another driver, not because his image was used to create a fake identity.  The motor vehicles registry reinstated his license after ten days of wrangling “to prove he is who he says he is.”

Not surprisingly, Gass is not alone.  The system picked out more than 1,000 cases last year that resulted in investigations, and some of those flagged were guilty of nothing more than looking like someone else.  Another disturbing fact: neither the motor vehicle registry nor the state police keep tabs on the number of people wrongly identified by the system.

Now, this is what I call a “mixed system,” one that combines human decision making with automation.  The software identifies pairs of license pictures with a high score of depicting the same person.  Registry analysts review the licenses and check biographical information, criminal records, and driving histories to rule out cases with legitimate explanations.  But as is often the case, the analyst in Gass’s case signed off on the match, revoking his license.
At the hearing, Gass showed the hearing officer his birth certificate and Social Security card as proof of his identity, but the officer insisted that he provide documents with his current address.  His lawyers faxed the document two days later.  On April 14, Gass got word that he was cleared to drive, more than twenty days after he received the revocation notice.
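The mixed-system workflow described above, automated scoring followed by analyst sign-off, can be sketched as a simple pipeline.  The threshold and the review function here are my own hypothetical stand-ins for whatever the registry actually runs:

```python
# Sketch of a mixed decision system: software flags high-scoring
# matches, and a human analyst makes the nominally final call.
# The threshold and review logic are hypothetical.

FLAG_THRESHOLD = 0.9

def automated_screen(scored_pairs):
    """Yield only the pairs the matcher scores above the threshold."""
    for pair, score in scored_pairs:
        if score >= FLAG_THRESHOLD:
            yield pair, score

def analyst_review(pair, score, approve) -> str:
    # `approve` stands in for the human judgment; automation bias
    # predicts it will usually rubber-stamp the machine's flag.
    return "revoke" if approve(pair, score) else "clear"

scored = [(("Gass", "Glass"), 0.95), (("Smith", "Jones"), 0.12)]
flagged = list(automated_screen(scored))

# A rubber-stamping reviewer converts every machine flag into a revocation:
decisions = [analyst_review(p, s, lambda p, s: True) for p, s in flagged]
print(decisions)  # ['revoke']
```

The structural point: if the human check collapses into approving whatever the screen flags, the "mixed" system behaves exactly like a fully automated one.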

This case is not an outlier.  Human oversight of Government 2.0 decision-making routinely fails.  The cognitive systems engineering literature has found that human beings view automated systems as error-resistant.  Operators of decision-making systems tend to trust their answers.  As a result, human operators are less likely to credit information contradicting the computer’s findings.  Studies show that human beings rely on automated decisions even when they suspect malfunction.  The impulse to follow a computer’s recommendation flows from automation bias: the use of automation as a heuristic replacement for vigilant information seeking and processing.  Automation bias effectively turns a computer program’s suggested answer into a trusted final decision.  Thus, the practical distinction between fully automated systems and mixed ones should not be overstated.

The system offended basic norms of due process.  The notice failed to inform Gass of the basis for the registry’s revocation decision; it did not seem “reasonably calculated” to apprise him of the government’s claims.  Automation bias may have been at the root of both the initial failure to catch the problem and the demand for more proof at the hearing, even though Gass provided his Social Security card and birth certificate.  To protect individual rights, we need adequate notice and safeguards against the automation bias that can taint hearings.  Moreover, this particular system exemplifies the kind of mission creep that the anti-terrorism label invites.  Frank Pasquale and I have written about this in our forthcoming article in the Hastings Law Journal; Frank’s important book Black Box Society will extend those concerns to secret corporate rankings that have a profound impact on our lives.

H/T: Ryan Calo

2 Responses

  1. Orin Kerr says:

    If cognitive biases are causing people to rely too much on the answers produced by machines, isn’t the real problem human error — human error caused by cognitive bias — rather than automation?

  2. PrometheeFeu says:

    I am endlessly amused by the fact that they found a terrorist and all they did was revoke his driver’s license.

    As a computer programmer, I am not fond of such decisions being made by a computer with little human oversight. Even with oversight, we have the problem that the human operator is likely to defer to the computer’s judgment. Such systems should be set up so that the human actually makes the final decision instead of just giving humans a veto on the computer’s decision. For instance, if the computer finds a match, we should show the human operator a dozen or so pictures and ask them to pick out the match. If they can’t do it, the computer probably was wrong.