Should Fourth Amendment Law Pay Attention to What People Expect? If So, How?

In our previous post we responded to Orin Kerr’s argument that an originalist should vote to affirm the Sixth Circuit in Carpenter v. United States, the blockbuster case that will be argued in the Supreme Court on Wednesday. In this post we explain why we think his attack on the probabilistic model is misguided. Because all the published social science research suggests that average Americans regard warrantless access to historic cell site records as unexpected, the probabilistic model strongly suggests that Carpenter should win his case. As a matter of full disclosure, we were two of the principal drafters of the Empirical Fourth Amendment Scholars’ amicus brief filed in Carpenter. Orin takes issue with that brief’s approach beginning at page 25 of his own amicus brief and in this blog post. In the paragraphs that follow, we explain why his criticisms are not persuasive.

It is important to begin with a tip of the cap to Orin for his terrific and influential article, Four Models of Fourth Amendment Protection, which articulated a very persuasive descriptive claim: in deciding whether police conduct is a search, the Supreme Court has not applied a consistent methodology. Sometimes, Orin notes, the Court applies a probabilistic model, which asks how likely it is that a bystander would have learned the private information the defendant is trying to exclude if the police had not conducted the surveillance at issue. Sometimes the Court asks whether what the government did to gather the information would have violated the target’s legal rights, arising out of property law or perhaps privacy tort law. This is the positive law model. Sometimes the Court focuses on how sensitive the information sought and obtained was. This is the private facts model. And sometimes the Court engages in a cost-benefit analysis of the government’s surveillance. Orin calls this the policy model. In many instances, Orin notes, the Court employs more than one methodology at once. And the Court has at various times approached “search” cases in inconsistent ways, embracing methodologies it had attacked in earlier cases and attacking methods it would embrace in later ones. To many readers of Orin’s seminal article, Fourth Amendment law sounds like a mess.

This messiness has prompted some scholars to argue that the Supreme Court could make Fourth Amendment law more coherent and much more predictable by sticking with a single model. For example, Will Baude and James Stern have argued on both originalist and pragmatic grounds that the Court should adopt the positive law model in Fourth Amendment search cases. See also Will’s recent blog post. Chris Slobogin has argued that something like the probabilistic model should predominate, though aspects of the policy model also work their way into Slobogin’s balancing framework. And various other scholars, including one of us, have embraced a Fourth Amendment jurisprudence in which the probabilistic model looms large.

Orin draws very different implications from his paper. Whereas other scholars see maddeningly inconsistent and unpredictable Fourth Amendment law, Orin thinks four models are better than one, arguing that it is optimal for the Supreme Court to mix and match different models to different circumstances. Orin argues that there are patterns that help explain why the Court usually favors one model or another, but the Court seems unaware of these patterns and does not consistently adhere to them. This was true when Orin published his article a decade ago, and it remains true today, even though Orin’s article has already been cited nine times by state and federal courts. We think it’s fair to say that while Orin’s descriptive claim (there are four models) has convinced many Fourth Amendment scholars, his normative claim (there ought to be four models) has won fewer converts.

We think one reason is straightforward. If everyone agrees that a single model determines whether surveillance constitutes a Fourth Amendment search, then the fight is limited to what results that model dictates. By contrast, if there is a fight over both which model applies and what outcome results from applying it, then Fourth Amendment results will be quite unpredictable. That unpredictability makes it harder for police officers to figure out ex ante what they can do without a warrant and increases the temptation for judges to reach results consistent with their ideological priors.

Another reason we aren’t persuaded by Orin’s normative argument is that hard Fourth Amendment cases often involve areas of overlap between two different models. For example, Orin argues that “the private facts model appears particularly often in cases involving new technologies” (pg. 543) and that “the positive law model tends to govern physical access to houses, packages, letters, and automobiles” (pg. 544). So what is the Court to do in a case like Jones, which involved new technology used to track an automobile? Orin’s framework doesn’t provide a clear answer, and the opinions in that case articulated a mix of private facts arguments, probabilistic arguments, and positive law arguments, the last of which proved decisive for the majority. Or consider the various cases involving shared access to a home or car. Are those cases where the positive law model should apply, or the probabilistic model, whose cases Orin says “mostly surface in investigations that occur in group settings” (pg. 544)? The Court gets to choose, and its choice will determine the outcome. A choice of four models and no clear rule about which applies makes Fourth Amendment precedent a less meaningful constraint on judges.

The two of us have somewhat differing instincts about the merits of the probabilistic model. That said, we agree with Orin that the Supreme Court applies the probabilistic model rather regularly, though not exclusively, and has done so since Katz. Our view here is straightforward. If the Supreme Court is to apply the probabilistic model, it should do so rigorously, rather than in a pseudoscientific way. To that end, we have conducted empirical research asking whether people know that their cell phone geolocations are being tracked by their cell phone providers. It turns out that most people do not know this. And we (and other scholars) have conducted empirical research asking whether Americans expect that the government can obtain cell phone company records about where their phones have been without a warrant. It turns out that most people do not expect such surveillance, and they think that if such surveillance is to occur the government must obtain a warrant beforehand. The problem this scholarship is designed to address is real. Judges and their law clerks have much better information than lay respondents do about how law enforcement gathers evidence and how technologies might leave an evidence trail. So by assessing the expectations of ordinary American citizens, scholars can help ensure that the probabilistic model is keyed to the expectations of the broader citizenry rather than those of legal elites.

Orin’s attacks on efforts to make the probabilistic model more rigorous misapprehend the nature of this research. In his amicus brief, Orin makes the following points:

The Empirical Scholars look for answers in public opinion polls and surveys. See Amicus Brief of Empirical Fourth Amendment Scholars at 2-10. They envision the Katz test as protection against the unexpected: Surprising disclosures to the government should require a warrant because they violate the expectations of ordinary people. See id. at 10-16. That has never been the law. “The concept of an interest in privacy that society is prepared to recognize as reasonable is, by its very nature, critically different from the mere expectation, however well justified, that certain facts will not come to the attention of the authorities.” United States v. Jacobsen, 466 U.S. 109, 122 (1984) … [I]t would be difficult for courts to implement a survey-based approach to what disclosures of information should be a search. Public opinion changes, and judges are not empiricists who are trained to compare and critique new scholarly research. Empirical studies can be useful in some contexts within Fourth Amendment law. But they cannot provide the nondisclosure line-drawing that Carpenter needs.

We have a number of responses. First, the citation to Jacobsen notwithstanding, there are several cases cited in Orin’s Four Models paper, such as Bond v. United States, Minnesota v. Olson, and O’Connor v. Ortega, in which the Supreme Court is clearly looking to what ordinary people expect in determining the scope of the Fourth Amendment. And while it is true that Supreme Court decisions don’t often cite representative surveys, until very recently rigorous evidence of that sort was hard to come by. That kind of evidence has gotten much easier to obtain in recent years, and experimental techniques have gotten more sophisticated too. So lower court judges in Texas, Massachusetts, and Ohio have begun to cite the new and rigorous survey-based research. We find it hard to believe that judges can rely on their intuitive judgments when applying the probabilistic model but cannot take judicial notice of published academic research.

Orin argues that the probabilistic model does not apply to Carpenter because it’s a third party case – one where Carpenter has shared information with his cell phone provider – rendering social expectations irrelevant as a matter of law. Let us assume, contrary to the evidence, that Carpenter voluntarily disclosed his location information to a third party and thereby “consented” to a partial loss of privacy. We still don’t think the law disregards societal expectations. Consider the Supreme Court’s 2006 opinion in Georgia v. Randolph, which involved the question of whether the police can search shared living quarters when one occupant agrees to a search and the other objects. In the context of that case, the Court wrote: “The constant element in assessing Fourth Amendment reasonableness in the consent cases, then, is the great significance given to widely shared social expectations, which are naturally enough influenced by the law of property, but not controlled by its rules.” Now, we are not saying that Randolph and Carpenter are identically situated. But in Randolph the issue is whether the target’s consent to live with someone else entails consent to let that someone else authorize police entry into the home. And in Carpenter the issue is whether a cell phone user’s consent to share information with a cell phone carrier entails consent to let that carrier share the information with the police. If “widely shared social expectations” were decisive in one Fourth Amendment context (Randolph), it seems like a stretch to regard them as irrelevant in a related context (Carpenter).

Orin points out that Congress is “designed to reflect public opinion” and is the best entity to incorporate popular sentiment into the law. That’s an argument with real force, and in this instance Congress has weighed in, leaving law enforcement with the option of getting a warrant or just meeting the more lax standard of “reasonable grounds to believe” that the cell site records “are relevant and material to an ongoing criminal investigation.” 18 U.S.C. § 2703(d). But it is emphatically the Court’s duty to say what the law is, and to set the constitutional floor below which Congress cannot go. If societal expectations of privacy are to play a role in setting the minimum constitutional standard, as Orin has elsewhere endorsed, then courts should assess those expectations as accurately as they can. Empirical studies can provide accurate and detailed information about societal expectations, not just popular opinion.  It is the former that can help guide courts in determining the scope of the Fourth Amendment under Katz.

Orin then argues that it makes no sense to regard unexpected surveillance as a search for Fourth Amendment purposes, and he says that what people expect the police to do is “based on what they read on the Internet.” Unexpected surveillance is a problem because society wants people to take appropriate but not excessive precautions when communicating sensitive information or engaging in sensitive acts. The greater the fit between what people expect Fourth Amendment law to be and what Fourth Amendment law actually is, the more appropriate their precautions will be. Conducting a wiretap is a Fourth Amendment search, and people generally expect privacy in their telephonic communications. If people weren’t able to intuit that it is hard for the police to conduct a lawful wiretap, people might stop saying what they mean when speaking to loved ones on the phone, or they might begin meeting in person in public parks rather than talking on the phone, or they might engage in various forms of self-help that are commonly practiced in police states. Orin does not provide any support for his empirical assertion that what people expect law enforcement to do is based on what they read on the Internet. Ordinary lay people aren’t poring over Concurring Opinions, Lawfare, or SCOTUSblog. (As it happens, one of us has collected recent data showing that television coverage of public events seems to loom much larger in shaping popular expectations than social media exposure – when people had heard about a 2014 Supreme Court case (Riley), they were almost three times more likely to have heard about it via television than via Internet news sites and blogs.) While the social science literature is not yet large enough to permit conclusive judgments, the existing evidence suggests that social norms and expectations about surveillance are not particularly responsive to changes in Fourth Amendment doctrine. As anyone who has read Bob Ellickson’s work might expect, ordinary people do not pay much attention to formal law, and they bring with them a set of robust, common-sense intuitions that are generally hard for legal actors to dislodge.

In a follow-up post, Orin doubles down on this critique of the probabilistic approach. He says that expectations of privacy are circular because even if people do not understand that their geolocation is being tracked by their cell phone providers at the time Carpenter is decided, the Carpenter case itself is likely to generate enough attention to teach the public how cell site information is stored and used. This is a variant of the circularity claim that one of us casts significant doubt on in a University of Chicago Law Review article that has gone to press and will be published in final form within the next month. More broadly, there is a political science literature suggesting that the broader American public is largely ignorant about the Supreme Court’s work. (On some of the interesting methodological challenges, see this paper.) Other than Roe v. Wade, there evidently isn’t a single case that most Americans have heard of, and when big Fourth Amendment developments do break through a news cycle (as Riley v. California briefly did), public knowledge about those cases dissipates quickly. To the extent that people hear anything about a Supreme Court case, they are likely to digest the result rather than factual discussions buried in the opinion. To take an example, most Americans understood that in Roe the Supreme Court struck down Texas’s prohibition on abortion, but the political science literature gives us reason to doubt that Roe changed most Americans’ understanding of the fetal viability timetable or the history of abortion regulation, even though Justice Blackmun’s opinion discussed those topics at length. Orin’s hypothesis, in short, is hard to square with the available social science evidence about the role of the Supreme Court in American life.

The probabilistic model is a paradigmatic Katz approach, the one most consistent with the language of Katz, and the approach taken by every court of appeals to have addressed the cell phone location tracking issue raised in Carpenter. Orin’s arguments against considering societal expectations of privacy, and against using empirical evidence to help courts assess those expectations more reliably, face serious problems. That becomes even more apparent when one considers the alternative test he proposes for deciding Carpenter—which will be the subject of our next post.


4 Responses

  1. Tom Donahue says:

    An excellent suggestion – let us rigorously determine the degree of ignorance among the population regarding how technology works and use that to determine our Constitutional rights. The greater the ignorance – the stronger our rights!

  2. Pauly says:

    The court uses one consistent model for separation-of-church-and-state cases. Maybe it’s time they do the same for search-and-seizure cases?

  3. Orin Kerr says:

    Thanks again for the engagement. There’s a lot here to respond to, and I can’t get to all of it, but here are a few thoughts:

    1) I think the biggest conceptual problem with your use of the probabilistic model — a conceptual problem that you don’t, as far as I can tell, address — is that except in a few specific situations the probabilistic model doesn’t have an obvious connection to a plausible theory of Fourth Amendment history, text, or purpose. In particular, your approach is really big on making the Fourth Amendment scientific and rigorous. But it seems to come at a cost of losing touch with the goal of all of that science and rigor. In particular, you say that most people don’t know how cell phones work, and that most people are surprised by the Stored Communications Act rule. The technology and law comes as news to them. But you don’t seem to have a theory for why that should matter as a matter of Fourth Amendment law. The Fourth Amendment has a text, a history, and different views of its purpose. But you don’t seem to tie your probabilistic view into them. Instead, if I understand you correctly, you go with a single phrase from the doctrine — reasonable expectation of privacy — and then you suggest that the true law is a particular literal probabilistic reading of that phrase. But missing from that approach, as far as I can tell, is a theory for why that particular reading of a caselaw test — a reading that conflicts with other cases — is achieving any ends of Fourth Amendment law. Why does it matter that things surprise people? Why does it matter that people don’t know the law, or don’t know the technology? My apologies if I’m missing it, but it doesn’t seem clear to me.

    2) Under your view of the Fourth Amendment, I gather the Fourth Amendment changes in response to world events? For example, say, God forbid, that a major 9/11-level terrorist attack occurs in New York City. InfoWars and Breitbart start reporting that the government is secretly collecting everyone’s cell site records without a warrant. President Trump starts tweeting about it — rest assured, Trump says, they are making everyone safe by collecting all cell site records. The public, unsure of what to think, concludes that if the President says it’s true, then presumably it is. At that point, as the public now knows what cell-site records are, and expects that they are being collected, would you say that cell-site records are no longer protected under the Fourth Amendment?

    3) One final thought, on the coexistence of the four models. Having four models doesn’t mean the police don’t know what the law is. The police care about what the rules are, not what conceptual frameworks are used to get to the rules. So long as the rules are clear, that’s what the police care about. I have called this the difference between the “principles layer” of Fourth Amendment law and the “application layer” of Fourth Amendment law. The principles of the law require judgment and may be murky. But when courts hand down rules, the rules are at the application layer, telling the police what they can and can’t do. Clarity at the application layer is essential, but clarity at the principles layer is not.

  4. Lior Strahilevitz & Matthew Tokson says:

    Thanks, Orin, for a fun back-and-forth.

    On point one, that’s actually what we were getting at in paragraph 11, the paragraph that begins “Orin then argues…” Now I’m (Lior) speaking just for myself, and relying on scholarship I wrote with Matthew Kugler that gets into a lot more depth (especially on pages 226-229). http://www.journals.uchicago.edu/doi/abs/10.1086/686204 You were kind enough to read a first draft of that paper, which is when I think you raised this concern, and we tried to do a better job of addressing that question in the final draft. Basically, we think that it’s beneficial for the law to match what people expect the law to be so that people take appropriate precautions with sensitive information. Mismatches between formal law and expectations also provide law enforcement with opportunities to exercise undue leverage over the citizenry, and they make the democratic checks on law enforcement activity less robust. (Voters respond to the law as they perceive it rather than the law as it really is.) We also have a working hypothesis that it’s easier to generate reliable and consistent measures of popular expectations than it is to engage in the kind of welfarist analysis that the policy model requires or even the kind of legal analysis that the positive law model requires. (To be clear, it’s just a hypothesis, but that’s where we are coming from.) Making the law as predictable as possible ex ante is good because it lets the police invest law enforcement resources appropriately.

    I’ll concede that these are all pragmatic arguments for the probabilistic model. We don’t claim that the original public meaning of the Fourth Amendment embraced our approach; I rather doubt it did, but it’s hard to know for reasons you stated in response to a different post. We think there’s a good “common law constitutionalism” argument that 4th Amendment law has been moving in this direction since Katz, though not uniformly. And we think that the 4th Amendment is an area where thanks to an accumulation of precedents the meaning of the Constitution has shifted over time. The Supreme Court seems committed to Katz. Maybe that was a wrong move initially. But given the strength of the commitment we want to make Katz as coherent as possible and we think social science data can help with that.

    On point two, you are correct, as I indicate above. We are comfortable with the Fourth Amendment’s meaning changing as the world changes. But we think that is true of every Fourth Amendment theory. Certainly we should expect to see such changes if the courts apply the policy model – major terrorism threats will change the cost-benefit calculus. And the private facts model builds in some change as people’s sense of what is and isn’t sensitive changes (though Chris Slobogin’s work gives us some reason to think that these changes aren’t particularly dramatic over time.) Maybe the positive law model is most resistant to change but of course the common law and statutes do change in response to threats like the ones you identify. And even if something is a search, changing world events will inform the reasonableness inquiry. So yes, we think the meaning of the Constitution changes over time to the extent that people’s expectations change. But we don’t think that’s a big problem and in any event it isn’t unique to our proposal. I think this is only a distinct problem for us if you believe that popular expectations change more (or change less predictably) than policy judgments or sensitivity assessments or private law. We are gathering some data that sheds light on that question but it’s an ongoing and challenging empirical project.

    On point three, what you wrote is helpful, and I don’t disagree. Note that our post also talks about unconstrained judges as a problem. But note further that survey research has gotten really cheap and easy to do well, and various scholars are doing it for free and publishing their results in widely accessible journals. So if the law leaned on the probabilistic model more, there’s the possibility for the law to become radically transparent to police departments. They can just read a paper by Chao et al. or Scott-Hayward et al. and find out what is / isn’t unexpected. The major concern I think you’d have is whether the research is being done in a way that biases the responses or is results-oriented. I think academic incentives like tenure committees, grant committees, peer review, scholarly reputation, etc. do a pretty good (albeit imperfect) job of separating the work of scholars from the work of hacks. But if you’re skeptical about that claim then you should be skeptical about our version of the probabilistic model.

    All that said, you argue in your Stanford article that the courts should sometimes apply the probabilistic model. So I guess I’d ask you: If they are going to apply that model and treat it as decisive in some cases, wouldn’t you rather they rely on a well-designed study by Matt Tokson than on the judges’ bare intuitions about what ordinary Americans expect?
    -Lior
