Can You Sue If a Computer Reads Your E-mail?

Thanks Dan for the welcome, and I’m excited to be guest-blogging at Concurring Opinions again. I had intended my first post to be a continuation of the discussion Dan and I were having in the comments last week about heightened review for subpoenas to unmask anonymous actors on the internet, but events have overtaken me. Orin Kerr over at the Volokh Conspiracy has put up a post querying whether network-level filtering for copyright-infringing materials would violate the Wiretap Act; Orin appears to believe that it would, at least without consent from every potential sender of material that was scanned. This merges two of my areas of interest, copyright and electronic privacy law.

First of all, the report is a little sketchy, but it looks to me like the topic came up, possibly as an off-the-cuff remark or an answer to a question, at the CES conference in Las Vegas. It doesn’t appear that anyone is proposing implementing this right away. But the idea seems to be that network intermediaries — either ISPs serving individual subscribers, such as Comcast or Verizon, or perhaps ISPs closer to the Internet backbone, such as Level 3 or Sprint — may be able to use fingerprinting technologies to detect and block copyrighted content transiting the network as a way of preventing infringement.
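
To make the idea concrete, here is a minimal sketch of what such a filter might look like, assuming a hypothetical hash-based fingerprint database (a real system would presumably use perceptual fingerprints that survive re-encoding, but the structure would be the same):

    import hashlib

    # Hypothetical fingerprint database, to be supplied by rights holders.
    # A plain SHA-256 digest is used only for brevity; it would match only
    # bit-identical copies of a work.
    KNOWN_FINGERPRINTS: set[str] = set()

    def fingerprint(payload: bytes) -> str:
        """Reduce a reassembled file to a fixed-length fingerprint."""
        return hashlib.sha256(payload).hexdigest()

    def should_block(payload: bytes) -> bool:
        """Return True if the payload matches a known copyrighted work.

        Note the design constraint discussed below: the content is
        inspected transiently to make a block/pass decision; no copy is
        retained, and nothing is reported to a human.
        """
        return fingerprint(payload) in KNOWN_FINGERPRINTS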

There might be all sorts of practical problems with this. How would a filter distinguish between authorized and unauthorized downloads, for example? But that’s not what intrigues me right now. The question I want to focus on is, would this violate the Wiretap Act? It’s arguable, but I don’t think it would. I don’t believe an automated scan of communications, where no permanent copy is made, violates the Act.

Of course, as a cautious lawyer (perhaps a redundant description), I’d certainly advise any telecommunications company to be wary before proceeding here. The ECPA, including the Wiretap Act, is a convoluted statute with a lot of unclear terminology. In essence, the Wiretap Act prohibits intentional interception of an electronic communication. There’s an exception for consent — that’s why receiving an email is not a violation of the Act — but Orin’s already indicated why consent might be hard to obtain here from everyone. Could telecommunications companies do this kind of filtering without consent?

I agree with Orin that it doesn’t seem that the exceptions allowing service providers to intercept communications for business-related reasons — Sections 2510(5)(a)(ii) and 2511(2)(a)(i) — would be of much help. In order to take advantage of the first of these exceptions, the service provider would need to be able to claim that filtering traffic for files infringing on the rights of others is “the ordinary course of its business.” Perhaps that will become the ordinary course of business someday, but it doesn’t seem to be right now. The second provision cited above specifically rules out “utiliz[ing] service observing or random monitoring” except for quality control, so that’s no help either.

Nevertheless, I think there may be room in the Act for automated filtering. It all hinges on the definition of the term “intercept.” The central provision of the Wiretap Act makes any person who “intentionally intercepts … any wire, oral, or electronic communication” liable. “Intercept” is defined as “the aural or other acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device.” So, in order to violate the Act, one has to (1) intentionally (2) use a device to (3) acquire (4) the contents of a communication.

What does it mean to “acquire” the contents of a communication? That has always been a little unclear. Here’s what I wrote in a chapter on civil applications of the ECPA in the PLI treatise, Proskauer on Privacy:

The issue of what qualifies as “acquisition” has proven more difficult. “Acquisition” is not defined in the act, nor is its interpretation necessarily straightforward. For example, are the contents of a communication that is routed somewhere other than the intended destination, but not listened to or recorded, “acquired” for purposes of the act? What about a communication that is recorded but not listened to? Or a communication that is recorded pursuant to an exception, such as by a party, but later acquired and listened to by someone else?

Courts have struggled with the answers to these questions ever since the Wiretap Act was adopted. For example, a telephone conversation may be intercepted by attaching a wire to a telephone line and stringing that wire to a speaker where the conversation is converted back to sound and overheard by a third party. At what point has interception occurred? One theory is that the interception occurs at the moment the signal in the line branches off to the wire installed by the wiretapper. The newly installed wire itself is the “device,” and the diverted signal is the “acquisition,” even if no speaker is attached at the other end. An alternative theory is that the interception occurs when the signal is converted back to sound at the speaker attached to the wire; the speaker is the relevant “device,” and the reconversion to a human-perceptible form is the “acquisition.” A third alternative is that the interception only occurs if a human listener hears the sound waves produced by the speaker. The speaker is still the “device,” but acquisition does not occur unless a human listener is there to overhear the conversation.

In most cases involving live surveillance of the sort just described, the dividing line between wire, speaker, and listener will not be of critical importance, since all three events will occur nearly simultaneously, and it will likely be the case that the same person or group of people attached the wire and the speaker and are using the apparatus. But interception can also be accomplished by recording a communication for later playback. In such a case, does the interception occur

(a) when the signal is diverted;

(b) when the recording is made; or

(c) when the recording is listened to?

One early case to resolve this issue looked at a tape recording that had been made by one participant in a drug transaction. United States v. Turk, 526 F.2d 654 (5th Cir. 1976). When the police searched his car, they found the tape and listened to it. The other person on the tape, Frederick Turk, was then charged with perjury for having lied to the grand jury. When the police listened to the tape, was that an interception in violation of the Act? The Fifth Circuit said no — the first acquisition occurred when the recording was made, with the recorder serving as the “agent of the ear.” Turk’s colleague intercepted the conversation by recording it, but he did so with consent — his own. The police then acquired a lawfully intercepted recording. Most courts have followed Turk — an acquisition occurs no later than the point some device records the conversation, even if the recording is destroyed without anyone ever listening to it. As the Turk court put it, “In a forest devoid of living listeners, a tree falls. Is there a sound? The answer is yes, if an active tape recorder is present, and the sound might be thought of as ‘aurally acquired’ at (almost) the instant the action causing it occurred.”

OK, so copying a communication is enough for a violation, even if no human ever reads it or listens to it. But what about the situation where no recording is made and no human is present to read or listen to the content at issue? For example, suppose a wire communication is tapped, and the tap goes to a speaker in an empty room, where it goes unheard. Is that still an “aural or other acquisition”? Turk waffled on that point, and there have been very few cases that have looked at it. One was the Fourth Circuit’s decision in Sanders v. Robert Bosch Corp., 38 F.3d 736 (4th Cir. 1994), a case premised in part on the somewhat dubious conclusion that recording incoming calls to help capture bomb threats is not use “in the ordinary course of business.” In another part of the opinion, the court reached the issue of whether conversations that were picked up by a microphone in a security office and, unbeknownst to everyone, were directed to a speaker in another area of the plant that apparently was set to a very low volume, had been “aurally or otherwise acquired” under the Act. The court held that it was “satisfied” that no acquisition had occurred. A district court in New Jersey reached a similar conclusion, holding that acquisition occurs when a device either directs a conversation to a human or when it is “permanently memorialized, a feat impossible for a wire to perform.” Pascale v. Carolina Freight Carriers Corp., 898 F. Supp. 276, 280 n.1 (D.N.J. 1995).

I think these decisions are a reasonable interpretation of “acquisition.” Acquisition means enabling a human to perceive the contents of a communication, either by bringing that communication to a place where humans are present, or by recording it for future perception. If that is the correct interpretation of “acquisition,” then automatic scanning of the contents of a communication by a computer is not “acquisition.” It neither carries those contents to a human for perception, nor does it capture them for later perception. So programs like Google’s Gmail service, which automatically scans email content for advertising keywords, would be fine even without consent on this view. So would the ISP filtering at issue in Orin’s post, so long as no contents from the communication are recorded or transmitted to humans. Indeed, given that qualification, it’s hard to see what the privacy harm from such automatic scanning would be. Assuming nonsentient computers, who cares if a computer reads your email and never tells anyone about it?


20 Responses

  1. Paul Ohm says:

    Is it likely that the ISPs would deploy a no doubt imperfect technology that blocks packets in this way without any type of accounting whatsoever? Isn’t it much more likely they would keep a log of the traffic that had been blocked, so that they could investigate future complaints, for example?

    This is critical to the wiretap question, because once ISPs start keeping logs that preserve the “substance, purport, or meaning” (the definition of “contents” under section 2510) of messages on the network, your analysis might not apply.

    I’d say the same thing about Gmail. I’ve always assumed that Google has been keeping statistics about the contextual Gmail ads they display. In fact, their advertisers probably demand it. Those statistics themselves might constitute wiretaps. That’s why Google is wise to try to deal with this through consent.

    Based on what I’ve heard so far, I’d advise the ISPs to think long and hard about wiretap liability before deploying these filters.

  2. Cathy says:

    I tend to disagree with you on whether such interception runs afoul of the Wiretap Act. I wrote my note on whether these fingerprinting devices could be used by universities and ultimately concluded “no.”

    Catherine R. Gellis, CopySense and Sensibility: How the Wiretap Act Forbids Universities from Using P2P Monitoring Tools, 12 B.U. J. Sci. & Tech. L. 340 (2006), available on SSRN or my blog.

    I’m with you that the definitions of “interception,” et al. are a mess, but later cases (see, e.g., US v. Councilman) seem to want to apply the general Fourth Amendment protection principles more broadly. Which is good news, because otherwise you end up with a situation where traditional telephonic communications would have protections but ones made over the Internet wouldn’t (see, e.g., VoIP — it’s clear that if you called someone over a traditionally-switched telephone network you’d have protection, so why shouldn’t you also have privacy in your identical voice calls that happen to be packet-switched over the Internet?)

    Also, see Deal v. Spears, 980 F.2d 1153, 1158 (8th Cir. 1992). Some business owners suspected an employee was an accomplice in a robbery of their business and decided to listen in on all of her phone calls, regardless of whether they related to their business interests, and the court cried foul. The business couldn’t listen to everything: once they ascertained that a call did not relate to business purposes, they no longer had any right to eavesdrop.

    Also see U.S. v. Jones, 542 F.2d 661, 673 n.24 (6th Cir. 1976) (“…there is a vast difference between overhearing someone on an extension and installing an electronic listening device to monitor all incoming and outgoing telephone calls”). Default 24/7 monitoring of the content of every packet transmitted would therefore seem to be inconsistent with anything that might be permissible under the act.

  3. student says:

    Assuming nonsentient computers, who cares if a computer reads your email and never tells anyone about it?

    Consider the exploit discussed at New cracks in Google mail (Dan Goodin, The Register, 28 Sep 2007).

    Are you saying that this doesn’t violate the wiretap act if no one actually collects the diverted email?

  4. Frank says:

    Fascinating post. I just have one tangential recommendation of a resource that might be of interest:

    Chopra and White, Privacy and Artificial Agents, or, Is Google Reading My Email?, at

    http://www.sci.brooklyn.cuny.edu/~schopra/choprawhite497.pdf

    I also vaguely recall Larry Lessig’s discussion of the “worm” in Code which harmlessly inspected computers. From an Amazon review: “What about a computer worm that can search every American’s PC for top-secret NSA documents? It sounds obviously unconstitutional but the worm code can’t read your letters, bust down your door, scare you or arrest anyone innocent. If you’re not guilty, you won’t even know you were searched.”

  5. Orin Kerr says:

    Very interesting post, Bruce.

    The difficulty, it seems to me, is that the point of the monitoring for copyrighted material would be to act on the contents. That is, the results of the filter are presumably given to a person, who is alerted as to the presence of a copyrighted file and can take action on that. If I’m right about that, it sure seems like an intercept to me. I don’t see a difference between (a) having a person listen in to a call, as in a traditional telephone tap, and (b) having a computer listen in and then indicate to a person the contents of the communication. Indeed, all wiretapping of electronic communications is a form of (b); the computer “listens” to the zeros and ones and then reports back when particular strings signaling different letters and numbers are found.

  6. Bruce Boyden says:

    Wow, thanks everyone for these comments.

    Paul, you’re right that it all depends on the construction of the system. I’m sure content owners would prefer that the filter not only blocked traffic, but sent a follow-up e-mail: “Dear Mr. Lucas, 198.222.0.5 just tried to download The Empire Strikes Back!” Naming the infringing file would probably come too close to the “substance, purport, or meaning” of the communication, however. But I don’t think a filter system would need to transmit any information at all in order to be useful (again, assuming the practical difficulties can somehow be overcome). Also, I don’t think logging an IP address, plus an indication that a file was blocked, would be acquisition of the “substance, purport, or meaning” of a message, so probably at least that could be done, for whatever good it would do. An IP address seems more like a telephone number than the content of the communication.
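
    To illustrate what I have in mind, a content-free log record might look something like this minimal sketch (the field names are invented for illustration):

        import datetime
        import json

        def log_block(src_ip: str) -> str:
            """Record that something was blocked for this IP, and nothing more."""
            record = {
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "src_ip": src_ip,    # analogous to a telephone number
                "event": "blocked",  # the fact of a block, not what was blocked
                # Deliberately omitted: filename, title, fingerprint ID,
                # payload bytes; anything conveying the "substance,
                # purport, or meaning" of the communication.
            }
            return json.dumps(record)

        print(log_block("198.222.0.5"))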

    Re: Gmail, I don’t think I agree there either. I don’t see how a record that 54 unnamed people sent e-mails containing the word “catfish” today acquires the “substance, purport, or meaning” of any communication. Obviously if you start stringing those results together and identifying them with particular messages you might at some point get the contents of a message, but I think Google could maintain at least some records without consent.

    Cathy, just to be clear, I’m not arguing for any difference between traditional telephone and VOIP. Sanders and Pascale both involved regular phone lines. I think that hooking up a wire that doesn’t lead to a speaker or some other way of producing human-audible content is not “acquisition” of a wire communication, either. And there have been cases that have held 24/7 monitoring to be permissible in some circumstances, at least under the business extension exception — see Arias v. Mut. Cent. Alarm Serv., Inc., 202 F.3d 553 (2d Cir. 2000). Since I’m maintaining that automated scanning is not even acquisition, of course, any limits on the business extension exception would be inapplicable.

    Student, the Google exploit as I understand it would forward a copy of a message to some other location — that’s like making a recording of a telephone call. Most courts have held that even unlistened-to recordings are “acquisitions,” and as I mentioned in the post, that strikes me as a logical conclusion. So the Google exploit would be an intercept, or perhaps a violation of the Stored Communications Act, 18 U.S.C. s 2701. As well as a violation of the Computer Fraud & Abuse Act, 18 U.S.C. s 1030.

    Frank, along the same lines as the Lessig worm hypo is a very interesting note written by a friend of mine, Michael Adler, Cyberspace, General Searches, and Digital Contraband: The Fourth Amendment and the Net-Wide Search, 105 Yale L.J. 1093 (1996).

  7. Bruce Boyden says:

    Orin, I agree with you that your (a) and (b) are pretty similar. The parallel I’ve been drawing is between a “tap” that goes nowhere, either to an inoperable speaker or perhaps to an empty room, and a computer scan that does *not* report the contents to any human. I thought you were about to say that the computer taking action was enough to make it an acquisition; that would be a distinction between the two cases, but I don’t think blocking would equal acquiring.

    I admit that my entire analysis misses the point if the only way to implement such a filter is to have the results reported to a human. The NYT Bits post doesn’t help us out too much here, since it’s pretty vague. But I had been assuming that the most feasible way to implement network-level filtering, given the speed and amount of traffic, would be to have some sort of automated process to detect and block certain files.

  8. student says:

    The Google exploit as I understand it would forward a copy of a message to some other location — that’s like making a recording of a telephone call.

    You didn’t answer the question I asked, though. I probably phrased it badly, and didn’t provide enough context.

    It’s usual for a provider to close email dropboxes when they’re discovered.

    Suppose that the exploit code directed mail to a Yahoo dropbox. The malicious code is discovered in the wild (say it was used for a domain hijacking). Yahoo is notified, closes the dropbox.

    But that doesn’t necessarily mean that the exploit isn’t still spreading (Google has by now patched this particular vuln). Nor does it necessarily mean that users’ Gmail accounts are clean (users have been urged to check their Gmail filters for this exploit).

    So, I probably shouldn’t have compressed all that into “if no one actually collects the diverted email?” I’ll try again: What if the dropbox is closed?

  9. CDeBoe says:

    I agree with Orin Kerr. The key is whether an action is taken on the intercepted material, not whether a human takes the action. When I send and receive email, I give consent only for transmission, not for the ISP to add, delete, or alter the transmission. If I send a racy email to my wife and my ISP accidentally sends it to 100,000 people, isn’t the ISP going to be accountable for that, even though the problem was a computer setting rather than a human’s deliberate action?

    Further, if I remember my copyright law class correctly (which I may not, it’s been 20 years), any information fixed in a tangible medium of expression is copyrighted. So the photo I download is copyrighted, whether it’s from an ad agency or my brother. And how about if I use that photo as the background image in a spreadsheet? What if I photoshop it? How is my ISP going to tell?

  10. clazy says:

    Who is the ISP to enforce copyright law? Do they have any standing to decide that some giant corporation owns a copyright rather than me? Don’t I at least have the right to contest their claim?

    As for the key issue being the meaning of acquisition, it seems to me that the key issue would be what is content, and to my mind, any information at all relating to the email would comprise content, including whether it carries a file that appears to be copyrighted.

  11. Paul Ohm says:

    There doesn’t seem to be a lot of disagreement here. Liability depends on what the ISP does with the packets that match the signatures. At one extreme end (do nothing) there is no liability. At the other extreme end (send a nasty letter to the user) there is clear liability and no immunity.

    But the devil is in the details, and that’s why your original post — which assumes away the practical complexities — could have given non-experts the misimpression that the ISPs were without risk here. The risk seems pretty significant, and if I were advising the ISPs, I would tell them to act very, very cautiously.

  12. Stephen says:

    It seems to me that whatever program is put in place to listen to or scan messages is acting as an agent of a human. It doesn’t seem legitimate to allow a program to scan communications and report whether criminal activity was discussed, even if it doesn’t report the content of the message.

    If a human was listening to phone calls and only reported that the participants were discussing a burglary, without recording or repeating the actual conversation, we’d still find that a breach.

    So, any program that does that should be considered a breach. Just because they are scanning for marketing information doesn’t make it okay. Embarrassing marketing info could be used coercively by an unethical firm.

  13. Gene Hoffman says:

    What is the substantive difference between:

    The network filter device flags this packet as copyrighted material and blocks its transmission.

    And

    The network filter device flags this packet as (Tiananmen Square/Supportive of the opposition party/A petition to redress grievances) and blocks its transmission?

    As such, it seems pretty clear that performing even an automated action without human intervention is to acquire some essence of the communication and to do something “actionable” with it, outside the intent of the sender.

    -Gene

  14. Bruce Boyden says:

    Student, I’m not sure I understand your question. Is the question whether there’s wiretap liability for the person making use of an exploit, if there’s no dropbox? I.e., the forwarded messages all bounce or something. That to me seems like the wire attached to a phone line that doesn’t lead anywhere productive — so under my analysis, no, that wouldn’t be a wiretap (assuming for the moment the Wiretap Act applies and not Section 2701). Naturally, any messages successfully received in the dropbox prior to its closing WOULD be “intercepted,” and under the majority of court decisions, that would be true even if the hacker never read them. And in any event the hacker is likely liable under the CFAA no matter what the situation is with the dropbox, just for exploiting the flaw.

    Paul, not to get all worked up about it, but it sounds from your second paragraph like you think my initial post was too glib. I don’t see how. In any event, in case it wasn’t clear, I reiterate my warning in the post to “any telecommunications company to be wary before proceeding here,” particularly given, as I discussed, the confused state of the law on this point and the paucity of cases supporting the distinction I want to make. It’s also worth noting that, as we discussed last weekend, even where the law is clear courts screw up the ECPA all the time. Certainly any telecom people reading this exchange should note that I’m responding to Orin’s post, and they’d be idiots to ignore his conclusion on the matter.

    Second, you’re right that I did assume some practical difficulties away, but I’m not sure why that would give rise to any misimpression that proceeding here would be “without risk.” For one thing, I explicitly assumed that network-level filtering is feasible. If it’s not, then Orin’s post and my post and the original Bits blog post are all just idle speculation, and it’s trivially true that proceeding is without risk because no one will proceed. Plus, I’m not an expert on the technology, but it doesn’t strike me as intuitively obvious that network-level filtering involving humans would be any *more* feasible than automated filtering. In any event, as I said in the post, the situation I intended to analyze was the one where there is “automated filtering,” and “no contents from the communication are recorded or transmitted to humans.” If that’s not how network-level filtering would actually be constructed, then I agree my analysis does not apply, but I think that’s obvious. And if it *is* how it would be constructed, then I think ISPs *should be* “without [Wiretap Act] risk”, subject to all of the appropriate caveats about untested arguments, the vagaries of litigation, and statutes and risk factors not discussed in this post (e.g., public relations).

  15. mrsizer says:

    The interesting technical issue (I’m not a lawyer): ISPs _already_ do this – they must. They read the packets to various levels in order to route them – or throw them away.

    What’s the difference between:

    a) sending packets to the “bit bucket” based on IP data (e.g. try sending packets from a 192.168.0.0 network address to a valid destination – they will vanish)

    b) throwing them away based on protocol (e.g. “we don’t allow ftp”)

    c) throwing them away based on content type (e.g. “we don’t allow transmission of photos”)

    d) throwing them away based on content value (e.g. “we don’t allow porn – and we’ll analyze your pictures to find it”).

    It’s all the same thing. It’s just a matter of how much analysis you’re doing on the packets (although trying to analyze content value would probably require re-assembling them, and they might not all be going through your network).
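
    A minimal sketch of those four levels, assuming a hypothetical per-packet inspection hook (the Packet type and the content classifier are invented for illustration):

        import ipaddress
        from dataclasses import dataclass

        @dataclass
        class Packet:
            src_ip: str
            protocol: str      # e.g. "tcp", "ftp"
            content_type: str  # e.g. "image/jpeg", parsed from the payload
            payload: bytes

        def looks_like_porn(payload: bytes) -> bool:
            """Level (d) stand-in: a real content classifier would go here."""
            return False

        def drop_packet(pkt: Packet) -> bool:
            # (a) IP-level: private (RFC 1918) source addresses don't route.
            if ipaddress.ip_address(pkt.src_ip).is_private:
                return True
            # (b) Protocol-level: policy bans a protocol outright.
            if pkt.protocol == "ftp":
                return True
            # (c) Content-type-level: requires parsing the payload's headers.
            if pkt.content_type.startswith("image/"):
                return True
            # (d) Content-value-level: requires reassembling the stream and
            #     analyzing the content itself, the deepest inspection.
            return looks_like_porn(pkt.payload)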

  16. mrsizer says:

    P.S. I did get the distinction between simply throwing stuff away and “intercepting”.

  17. Ted McClure says:

    Having spent some time in the intelligence business before and after law school, I’m puzzled why there is any confusion over the word “acquire” in this context. We used it to mean “obtain [a flow of information] so as to be able to monitor it.” Whether action was ever taken or whether any human ever sensed it was not relevant. If we were intercepting voice radio transmissions, we “acquired” the signal as soon as we could detect it clearly enough to translate it. We used “acquire” similarly for radar and telemetry intercepts, electronic countermeasures, imagery, and by extension visual observation.

    In the wiretap context, this means that when the signal is diverted, when the recording is made, and when the recording is listened to are not relevant. The question is, when is the information in the signal able to be meaningfully monitored? The answer for internet monitoring is as soon as the IP packets can be read.

    I suspect the difficulty with this expression arose from the different experiences of those who drafted the statute (who probably had some familiarity with the law enforcement and intelligence communities) and those who have been called upon to apply it in the real world.

  18. student says:

    First, I should note for the record that there is no evidence — none — that the cross-site request forgery (XSRF) written up in The Register’s September article is the actual exploit used to inject the malicious filter used in the domain hijacking written up in the December article. Instead, that appears to be pure speculation by the victim. The Register’s John Leyden agreed with that guess, and reported that that particular injection vector had been closed by Google. But, actually, all we really know is that that particular XSRF vulnerability was one feasible way for a third party to install a Gmail filter. And we know that the domain-hijack victim discovered a Gmail filter intercepting his email.

    In short, it’s just a guess that Google has patched the XSRF vulnerability exploited in the wild. I repeat that users have been urged to check their Gmail filters.

    Is the question whether there’s wiretap liability for the person making use of an exploit, if there’s no dropbox? I.e., the forwarded messages all bounce or something. That to me seems like the wire attached to a phone line that doesn’t lead anywhere productive — so under my analysis, no, that wouldn’t be a wiretap […].

    That answered my question—at least kinda, sorta.

    Let me step back. What I was hoping was that you would apply your understanding of the wiretap act to one class of hypothetical Gmail incidents “where there is ‘automated filtering,’ and ‘no contents from the communication are recorded or transmitted to humans.’”

    To continue along that line:

    Internet email does not guarantee instantaneous delivery. In fact, it doesn’t guarantee delivery at all. (E)SMTP is simply a best-effort service.

    RFC 2821 documents 4yz “Transient Negative Completion repl[ies]”, colloquially known as “Try Again” responses.
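
    For a concrete picture, here is a minimal sketch of how a sending script would see a 4yz reply using Python’s smtplib (the host and addresses are invented, so this won’t actually connect anywhere):

        import smtplib

        try:
            with smtplib.SMTP("mx.example.com") as smtp:
                smtp.sendmail("victim@example.com", ["dropbox@example.net"],
                              b"Subject: diverted\r\n\r\n(forwarded body)")
        except smtplib.SMTPRecipientsRefused as err:
            for rcpt, (code, msg) in err.recipients.items():
                if 400 <= code < 500:
                    # A 4yz reply, e.g. 452 "insufficient system storage"
                    # for a full mailbox: delivery failed for now, and the
                    # sender is expected to queue the message and retry.
                    print(f"transient failure for {rcpt}: {code} {msg!r}")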

    Take the Gmail filter exploit, and suppose again that the Yahoo dropbox hasn’t been discovered, but instead that there is a temporary error preventing delivery. (Perhaps the Yahoo email quota has been exceeded.)

    Would you say the email interception violates the wiretap act during the time the email isn’t being delivered to the dropbox because of a temporary error condition?

  19. fishbane says:

    Just to say this upfront, I am not a lawyer, but rather a techie with a serious interest in the law.

    I realize that the law around this area is opaque and complicated, and to some extent based on analogizing new forms of communication to older forms.

    Just to take a different tack, how does, for instance, my router refusing to forward packets based on a signature not implicate me in the same way that I would be implicated in, say, setting a trap that harms someone?

    Setting aside contracts for now, if I boobytrap a door that then harms someone, I am liable for that harm, because my intention was to harm someone who did something (open the door) that I didn’t want them to do.

    If a person has a legitimate interest in their communications arriving at the destination, it seems to me that the intention of a person/carrier that interferes by installing a mechanism that selectively disrupts that communication is what is important, not that a human wasn’t directly involved in choosing whether or not to pass that packet. They preemptively made the decision, with deterministic results.

    Obviously, I’m not trying to compare the seriousness of dropping BitTorrent downloads with wiring a shotgun to a door handle, but the human agency involved in both do seem comparable to me.

    I forget where, but I saw a similar argument that a motion sensor on a video camera did not constitute surveillance, because a human was only involved once motion was detected. Since motion was considered suspect, at that point surveillance was justified. This strikes me as incredibly facile reasoning – obviously, the intent is to surveil, and a legal fiction that “only” a machine is watching until something suspicious happens simply begs expansion.

  20. A.J. Sutter says:

    It’s amazing to me that everyone is so tightly focused on the technical legal issues without questioning, even in passing, the social values implied by broad surveillance for copyright-violative material. (Clazy’s comment at 2008/01/11/13:56 comes close, but ultimately is focused more on the question of burden of proof.) Namely, that it’s OK for the interests of copyright owners to be deemed superior to the privacy interests of millions of individuals. Seems to me that if current law does permit such indiscriminate scanning, that should be fixed. And if it’s such a close call, then the protections for individuals should be strengthened.