# Drug Sniffing Dogs and Quantifying Probable Cause, Redux

My first post on the reliability of drug detection dogs was linked to by EvidenceProf Blog, where Colin Miller makes an excellent point about why quantifying probable cause may be more feasible than quantifying proof beyond a reasonable doubt. Although artificially imposing a numeric value on the proof beyond a reasonable doubt standard would likely confuse jurors, Miller writes, we believe in many contexts that judges are better equipped to navigate complex evidentiary matters.

Since judges, not jurors, decide whether probable cause justifies a particular search, quantifying probable cause might not unleash the parade of horribles, the fear of which has imprisoned the probable cause standard in fuzziness for so long. We may need to quantify probable cause to decide whether a positive alert by a drug sniffing dog gives the police enough suspicion to conduct a full search of a person’s car, or suitcase, or of the person’s person.

Let me be clear. I am not arguing (at least for now) that every probable cause determination should be calculated according to a mathematical algorithm in which the evidence must yield a certain likelihood of criminal behavior (40%? 50%? 51%?) before probable cause is satisfied. However, if the Supreme Court affirms the Florida Supreme Court’s holding that evidence of a dog’s false positive rate must be admitted as part of the probable cause inquiry, courts will have to grapple with some quantification of probable cause. Unless the Supreme Court decides that a positive alert by a drug detection dog is never, on its own, sufficient to establish probable cause, courts will confront situations where a dog’s positive alert is the sole reason to suspect someone of possessing drugs. This could occur at an airport, at a drunk driving checkpoint, or anywhere drug dogs are permitted to routinely sniff individuals without suspicion. In that case, courts will have to decide whether the dog sniff alone is sufficient to establish probable cause.

As a result, courts may have to determine, as a matter of law, whether a particular false positive rate is too high to satisfy probable cause. This presents several problems, mentioned in the comments to my previous post on drug sniffing dogs. A false positive rate measures the likelihood that a dog will alert to drugs given that there are no drugs in the vicinity (probability of alert, given absence of drugs). To determine whether an alert is insufficient to satisfy probable cause, we’d have to know the likelihood that there are no drugs in the vicinity given a positive alert (probability of absence of drugs, given alert). These two probabilities are not the same; the former can be converted into the latter only if the police have information on the base rate of drug possession among individuals in a particular area.
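To make the conversion concrete, here is a minimal sketch in Python of the Bayesian step described above. The function name and all of the numbers are hypothetical illustrations of mine, not figures drawn from any actual dog-reliability study:

```python
def p_drugs_given_alert(base_rate, true_positive_rate, false_positive_rate):
    """Bayes' theorem: P(drugs | alert).

    base_rate: P(drugs) among the individuals being sniffed
    true_positive_rate: P(alert | drugs present)
    false_positive_rate: P(alert | no drugs)
    """
    # Total probability of an alert, over both possibilities
    p_alert = (true_positive_rate * base_rate
               + false_positive_rate * (1 - base_rate))
    return (true_positive_rate * base_rate) / p_alert

# Hypothetical numbers: a dog that alerts 90% of the time when drugs are
# present, 20% of the time when they are not, in a population where 10%
# of individuals carry drugs.
p = p_drugs_given_alert(base_rate=0.10,
                        true_positive_rate=0.90,
                        false_positive_rate=0.20)
print(f"P(drugs | alert) = {p:.2f}")  # prints "P(drugs | alert) = 0.33"
```

Under these assumed numbers, two out of three alerts occur when no drugs are present, even though the dog’s false positive rate is only 20%, which is exactly why the false positive rate alone, without the base rate, tells a court little.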

This does pose a bit of a problem, but it does not mean that we should simply presume that because a drug detection dog has received training, its positive alert automatically establishes probable cause. (I fear the Supreme Court may decide something to this effect in Florida v. Harris.) Nor do I think we should resign ourselves to the view that a positive alert can never satisfy probable cause. Instead, perhaps we can standardize dog training programs and create simulated situations where we know the base rate of drugs (because the police have created the simulation) and can convert the false positive rate into the probability that a given individual does not possess drugs, given a positive alert. At that point, this probability can be weighed directly against the probable cause standard, and courts will need to know what likelihood of criminality meets that standard.

Quantifying probable cause in this way does not mean abandoning the totality of the circumstances test. Police officers can still rely on their instincts when there are various indicators, in addition to a dog’s positive alert, that a suspect is engaging in criminal behavior. Courts can still weigh all of these indicators in conjunction with a positive alert, and, in that case, we might not need to quantify probable cause or decide whether a particular dog is too unreliable. But when the only indication of probable cause is an alert by a drug detection dog, courts need to decide whether probable cause means that an officer must be 40%, 50%, or 51% sure that a particular vehicle contains drugs.

### 10 Responses

1. Colin Miller says:

An interesting analogue can be found with polygraph results. It is true that courts consistently claim that we can’t quantify probable cause, but consider this statement from Bennett v. City of Grand Prairie, Tex., 883 F.2d 400, 405-06 (5th Cir. 1989):

“Polygraph exams, by most accounts, correctly detect truth or deception 80 to 90 percent of the time. We, therefore, see no reason to create a per se rule barring magistrates, who may already consider information like hearsay, from using their sound discretion to evaluate the results of polygraph exams, in conjunction with other evidence, when determining whether probable cause exists to issue an arrest warrant.”

Now, of course, on one level, the court is saying that probable cause is a totality of the circumstances determination, which is what courts often claim when they assert that probable cause cannot be quantified. But the fact that the court cites the 80-90% accuracy rate certainly seems to be quantifying probable cause in a certain sense by quantifying the type(s) of evidence that can be considered in creating it.

So, judges can consider polygraph evidence in deciding whether there is probable cause, but, pursuant to United States v. Scheffer, 523 U.S. 303 (1998), rules of evidence can be drafted to per se ban the admission of polygraph results at trial. Now, if we were going to quantify “reasonable doubt,” where would we put it? 5%? 10%? 20%? Certainly, we would think that the likelihood that a defendant’s car contains contraband (probable cause) should have to be higher than the likelihood that a defendant did not commit the crime charged (reasonable doubt). And yet, we allow rule makers to preclude jurors from considering the same evidence (polygraph results) that judges are allowed to use in finding probable cause.

And why? Well, according to Justice Thomas in Scheffer (although this wasn’t part of his majority opinion):

“Jurisdictions, in promulgating rules of evidence, may legitimately be concerned about the risk that juries will give excessive weight to the opinions of a polygrapher, clothed as they are in scientific expertise and at times offering, as in respondent’s case, a conclusion about the ultimate issue in the trial. Such jurisdictions may legitimately determine that the aura of infallibility attending polygraph evidence can lead jurors to abandon their duty to assess credibility and guilt.”

2. Erica Goldberg says:

Great point. The likelihood of reasonable doubt needed to exonerate via polygraph should be far lower than the probable cause standard. I also think that this fear of human beings (jurors) misusing numbers may be taken too far, but I’d need to see some empirical study on that!

3. Orin Kerr says:

Erica, maybe I’m just dense, but I’m not sure why you want courts to quantify probable cause. Are you seeing quantification as a pro-civil-liberties position, perhaps on the assumption that the choices are (a) an automatic rule that an alert is probable cause or else (b) a quantified standard in which some alerts are probable cause and some aren’t? Or do you see quantification as good because it could add clarity, and clarity is inherently a good thing? Or neither?

4. Erica Goldberg says:

Orin Kerr,

Thanks for asking. Let me explain my reasons more comprehensively, and hopefully they will be somewhat coherent.

At the highest level of generality, I believe that quantifying probable cause will add integrity and accountability to the jurisprudence. It’s hard for us citizens to determine if judges are making fair decisions about probable cause if we don’t even know how much suspicion is required. Proof beyond a reasonable doubt is less in need of quantification because it’s so far out there on the spectrum of proof. I think people intuitively understand what a reasonable doubt is because it is such an extremely high standard, but the probable cause decisions are all over the place. This makes them elusive to students, scholars, and citizens, who then lose faith in the decisions.

Below that, quantifying probable cause will give courts some understanding of how to determine if probable cause exists. Sure, there are precedents and rules for judges to apply. For example, running from the police is not enough on its own to constitute probable cause, but running plus some other suspicious behavior is sufficient. However, these rules are almost beside the point because they say nothing about what the actual standard is.

Below that, quantifying probable cause will give police something more standardized to shoot for, besides untethered rules, when making probable cause determinations.

Finally, because so much evidence has a quantitative component (DNA evidence, drug dog alerts, fingerprint matching), we can use numbers to assess the probative value of this evidence. Until now, the presumption has been that a positive alert automatically gives rise to probable cause, but we cannot know that unless we grapple with the numbers. This, in turn, will make quantifying probable cause inevitable in situations where all of the suspicious evidence has numerical error rates (false DNA matches, for example).

I think this quantification will be good for both civil liberties and for clarity. And I believe that clarity is inherently virtuous. Fairness requires that we apply the same rules to everyone, and that seems like an impossible task when the rules are fuzzy. Reasonable minds can differ on whether quantifying probable cause will provide only a false sense of clarity, but, at least in the context of drug sniffing dogs, this seems unlikely.

5. Orin Kerr says:

Got it, Erica. If you don’t mind me pressing you, what are the probable cause determinations that are all over the place? In my experience, probable cause is actually rather nicely settled. It’s a judgment that takes some time to “get,” but once you get it, it’s pretty consistently applied.

6. steph tai says:

Not exactly related to what you do (but related to what I do), an empirical study on the biasing effects of presenting quantified v. unquantified information:

http://www.mngt.waikato.ac.nz/ejrot/cmsconference/2005/proceedings/criticalaccounting/Mayper.pdf

Also citations to entry 21 (“Mode”) here: http://www.sims.monash.edu.au/staff/darnott/biastax.pdf

These studies have more to do with how people view quantitative v. qualitative data (versus how people apply quantitative v. qualitative “tests”), but I still think their insights may be useful in this situation. The concern is that information that looks “quantitative” will end up being given more weight by jurors *even if* the underlying methodology is questionable. By putting numbers on such information, lay people often make shallower inquiries. (I’ve found this in my own research on how judges approach quantitative v. qualitative information in certain environmental cases.)

7. Erica Goldberg says:

Orin Kerr,

Yes, good point. When I said that they’re all over the place, I meant that the rules established do not conform to many people’s notions about when probable cause should exist; the cases are hard to reconcile with each other, and the rules often seem arbitrary. The caselaw may have settled certain questions, but I find it very difficult to predict whether a magistrate will find probable cause in a particular scenario. How to deal with anonymous informants comes to mind: we know that courts apply a totality of the circumstances test that considers the Aguilar-Spinelli factors of credibility and basis for information, but I find it hard to predict when the requisite amount of corroboration of the informant’s tip has been met. Most of the cases I’ve read seem on the line, and if I had a probability number to target, it might help me. (Perhaps I do not yet “get” it.) This, I think, may partially explain why, besides cost, police seeking warrants have very high success rates (they try to produce more evidence than is strictly needed), whereas police making their own probable cause determinations have highly variable success rates, some of them very low. See Putting Probability Back Into Probable Cause, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1157111. You might find this article interesting generally. It argues that an officer’s success rate should be incorporated into the probable cause determination and presents arguments that, in some ways, oppose your intuitive judgment theory of warrant issuance. The author wants more relevant information included in the judicial analysis.

Additionally, given how deferential the review of probable cause determinations is, due to Illinois v. Gates, Leon, and the qualified immunity standard, it’s hard to tell in any case whether probable cause is lacking unless it is so obviously lacking, because the Court needn’t decide that issue. Quantifying probable cause would add clarity to a jurisprudence that I think is becoming increasingly bereft of clarity, but more than that, I think quantification is becoming inevitable given the types of evidence we’re seeing.

Also, I had a thought regarding Bayesian probabilities and false positives in light of the Myers article that you sent me in response to the previous post. Do we really want to incorporate the base rate of drug possession in a given population into our false positive analysis? I presume courts are not supposed to penalize people too much for being in a high crime area (it can be part of the probable cause inquiry, but cannot be sufficient independently), but that’s what incorporating base rate into the probable cause metric would do. If the base rate of drugs in cars in a particular area is high, it almost won’t matter how accurate the dogs are, and then police are just searching cars because, say, 90% of people in cars on a given street are carrying drugs.
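To illustrate that worry with hypothetical numbers (the function name and figures are mine, purely for illustration): under Bayes’ rule, once the base rate is high enough, even a barely discriminating dog produces a high probability of drugs given an alert, so the base rate, not the dog, is doing the work.

```python
def p_drugs_given_alert(base_rate, tpr, fpr):
    # Bayes' rule: P(drugs | alert), from the dog's alert rates and the base rate
    return (tpr * base_rate) / (tpr * base_rate + fpr * (1 - base_rate))

# A barely discriminating dog: alerts 60% of the time when drugs are
# present (tpr) and 50% of the time when they are not (fpr).
for base_rate in (0.10, 0.50, 0.90):
    posterior = p_drugs_given_alert(base_rate, tpr=0.60, fpr=0.50)
    print(f"base rate {base_rate:.0%} -> P(drugs | alert) = {posterior:.2f}")
```

With these assumed figures, a 90% base rate yields a posterior above 0.9 despite the dog being nearly uninformative, while a 10% base rate yields a posterior near 0.12 from the very same dog.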

Steph Tai,
Thanks for the link. Colin Miller at EvidenceProf Blog has a good discussion, linking to my posts, of why jurors perhaps should be trusted less with quantifiable information than judges. I’ll have to think about this more, though. http://lawprofessors.typepad.com/evidenceprof/

8. Orin Kerr says:

Thanks, Erica. I’ve read Minzner’s article, although his proposed rule strikes me as impossible to administer. Your point about the deference due under Gates and Leon is a fair one, although it seems more directed as a criticism of deference to probable cause determinations than of the notion of probable cause itself. Finally, since we can never actually know the base rate, I’m not sure why we would worry about whether formally incorporating the base rate would lead to undesirable results. Anyway, thanks for answering my questions; I agree that Harris is a fascinating case that raises fascinating issues.

9. Dennis Fetzer says:

As a detector dog trainer, former military law enforcement handler, and a County Jail Deputy Warden, my first concern is that many times when a dog indicates an alert for illicit drugs, it is actually indicating a location where drugs were concealed or located, so is using the term “false positive” correct? Obviously this isn’t true in all cases, but standardizing training and certification will not eliminate the fact that a good, properly trained drug detector dog will still indicate on locations where drugs were concealed up to days or weeks after the substance has been removed.

10. Erica Goldberg says:

Dennis Fetzer,

This is a very important point. Dogs may alert to drugs that are no longer in a given location, but were there at some point. This is not classically a “false positive” in the sense that a dog has alerted to the scent of drugs with absolutely no basis. However, for the probable cause inquiry, I think we have to consider this a false positive because, given that drugs are no longer there, it’s impossible to know to whom to attribute those drugs, or even if the drugs were ever there in the first place.

Technically, probable cause means a fair probability that an individual is engaged in, or will be engaged in, criminal activity. If a car owner, or suitcase owner, used to have drugs but doesn’t anymore, then sure, that person did engage in criminal activity. However, I think the inference is too attenuated to count, because perhaps the drugs belonged to someone else who used the car, or the suitcase, weeks ago. It becomes very speculative. So, we can assign a value to false positives that includes the possibility that the car contained drugs at some point, but perhaps it is not clear enough that the drugs belonged to the owner of the car for the alert to provide much evidence of probable cause. I am open to other thoughts, though, and I appreciate your input.