DRM for Privacy: Part 2

In my previous post, I talked about the problem of online tracking and some of the solutions on offer.  In this post, I will propose a potential legislative model drawn from copyright law.  Several scholars (e.g., Pam Samuelson) have argued that intellectual property holds lessons for privacy.  Others have specifically explored whether copyright in particular might hold such lessons.

I am not aware of any argument that the legal protection afforded efforts at digital rights management should be applied to efforts to safeguard one’s web surfing behavior.  People with long institutional memories were able to point me to some great technical papers (e.g., this one and this one) applying DRM techniques to safeguarding personal data.  It may be that I simply missed the lawyerly side of the argument, although its absence would make some sense given DRM’s status as a persona non grata in the cyberlaw community.[1]

To be clear: when it comes to copyright, I am not a fan of DRM or anti-circumvention either, for many of the reasons Julie Cohen and others identify.  And yet I believe the model holds promise as applied to consumers and their web-surfing habits, and I offer it up here for purposes of discussion.

President Clinton signed the Digital Millennium Copyright Act into law in 1998.  The DMCA does a number of things, including implementing two treaties and immunizing Internet platforms against the copyright violations of their users, subject to notice-and-takedown requirements.  Here I want to focus on Section 1201, known as the anti-circumvention provision.  This provision creates a cause of action against those who circumvent technological measures taken to protect a right of a copyright owner.[2]  It also prohibits creating, distributing, or otherwise trafficking in a technology designed “primarily” for this purpose.  The obvious goal of the Section is to support one side over the other in the arms race over the use of copyrighted digital content, and to incentivize and safeguard self-help measures by content owners.

There are a number of problems with Section 1201, discussed in detail by others.  For instance, the statute does not say how strong the DRM has to be before circumventing it triggers a violation.  It is not circumvention to access an unencrypted file sitting on one’s computer.  But one court found that a CAPTCHA on a website (just a challenge posed to the user to help ensure that the user is a person rather than automated software) triggered Section 1201 protection.  Meanwhile, there is no obligation to give users notice that a technological measure is in place, as Samuelson and Jason Schultz at UC Berkeley have explored.

A second problem is that companies (it is usually companies that have the sophistication to wrap something in DRM) can overprotect content, making it impossible to do things users are lawfully, even constitutionally, permitted to do.  A third problem, explored by Cohen and others, is that DRM can threaten consumer privacy.  Interestingly, the anti-circumvention provision itself does not apply if the technology the user circumvents collects personally identifiable information.  But various DRM techniques report back on the user’s activities and can chill the very intellectual curiosity that privacy would otherwise safeguard.

DRM, as deployed by companies against users, has all these problems and more.  How would Section 1201 fare if the tables were turned?  That is, what if the same mechanism were used to protect not companies against users with respect to copyright, but consumers against online advertisers with respect to privacy?  Say, a hypothetical cause of action with the following elements:

  1. a consumer uses a “technological measure”—including opting out of tracking, blocking cookies, using an incognito mode in a browser, adding a domain to a Tracking Protection List, or installing some other software to hide her online activities; and
  2. an online advertiser knows or has reason to know that the consumer has taken this step; and
  3. the online advertiser uses some other method to defeat the technological measure without first securing the consumer’s affirmative consent.

Here are the advantages of this model as I see it.  First, it fits roughly within the notice-and-consent paradigm currently dominating online privacy.  It is hard to imagine how anyone who takes an affirmative step to hide or erase their online activities can be said to consent to the tracking of those activities, whatever the privacy policies they do not read may say.  And yet, as a technical matter, companies respawn deleted cookies, construe opt-outs narrowly, and switch to alternative tracking measures without necessarily violating any law.  Under a privacy anti-circumvention rule, these companies would be liable.
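To make the respawning point concrete, here is a minimal sketch in Python (using the Flask web framework) of the kind of server-side logic at issue.  Everything here is a hypothetical simplification for illustration: the route, the cookie name, and the crude fingerprint are my inventions, and real respawning typically pairs the HTTP cookie with a secondary store such as a Flash local shared object or browser localStorage.

    # Hypothetical sketch of server-side cookie "respawning" (illustration only).
    from flask import Flask, request, make_response
    import uuid

    app = Flask(__name__)

    # Hypothetical side channel: a fingerprint-to-ID table built up on past visits.
    fingerprint_to_id = {}

    @app.route("/ad")
    def serve_ad():
        # Crude stand-in for a real browser fingerprint.
        fingerprint = request.headers.get("User-Agent", "unknown")
        tracking_id = request.cookies.get("track_id")
        if tracking_id is None:
            # The cookie is gone -- perhaps the user deleted it.  Quietly
            # re-issue ("respawn") it if we recognize the browser.
            tracking_id = fingerprint_to_id.get(fingerprint, uuid.uuid4().hex)
        fingerprint_to_id[fingerprint] = tracking_id
        resp = make_response("ad content")
        resp.set_cookie("track_id", tracking_id)
        return resp

Under the hypothetical cause of action above, it is this quiet re-issuance, after the consumer has deliberately cleared the cookie, that would trigger liability.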

Second, the protection may incentivize individuals and entities to create and adopt privacy-enhancing technologies because, like DRM, they will have the force of law behind them.  Third, the model shifts the burden to the online advertiser, but not with respect to every single consumer: only where there is evidence that the consumer has taken a technological measure to protect their privacy.  I am not suggesting that online advertisers violate this model every time they use a new tracking technique.  Rather, they violate it when they adopt a new technique precisely because they have determined that one or more users have blocked the old one.  Companies concerned that a consumer may have taken such a step have two options: they can wait to be sued and argue the matter before a court, or they can ask the consumer.  Asking is not a step generally available to content owners worried about their copyrighted digital material.  Yet online advertisers, who after all are in the business of putting content before users, can do it.

Clearly there are other questions and potential problems.  One is whether a Do Not Track header, without more, counts as a technological measure to control access to one’s online activities.  I can see arguments either way.  As far as I (and smarter people I asked) know, there has never been a Section 1201 case squarely addressing whether a pop-up or other warning of copyright infringement constitutes a technological measure.  Which makes sense: such a measure does not fit easily within Section 1201(a)(3)(B), which defines effectively controlling access as “requir[ing] the application of information, or a process or a treatment, with the authority of the copyright owner, to gain access to the work.”  But one great advantage of DNT is that it represents an unequivocal statement of a desire not to be tracked.
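For reference, the Do Not Track signal itself is technically trivial, which is part of why its status as a “technological measure” is debatable.  A minimal sketch using Python’s third-party requests library (the URL is a placeholder):

    # The Do Not Track preference travels as a single HTTP request header: "DNT: 1".
    import requests

    response = requests.get(
        "https://example.com/",   # placeholder URL
        headers={"DNT": "1"},     # the user's stated wish not to be tracked
    )
    print(response.status_code)

The header blocks nothing by itself; it only declares a preference, which is exactly why it fits awkwardly within the statutory language about applying “a process or a treatment” to gain access.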

Then there is the problem of scope, that is, just what a consumer can hide and expect to be able to sue over when the information is uncovered by other means.  At a minimum, I believe that a consumer should be able to hide which websites she has visited from any online entity but her Internet service provider and the website itself.  Another example outside of behavioral targeting might be steps users take to hide content from the social networks it is posted on (I’m thinking here of Reputation.com’s uProtectIt app).  Yet another is Daniel Howe and Helen Nissenbaum’s TrackMeNot, which obscures a user’s search history.  Unlike in copyright, where constitutional rights are in play,[3] I am comfortable being a little overbroad and allowing the advertising industry to make its case that tracking some activities should not permit recovery.  I think there is a good case to be made that analytics and click-fraud detection, for instance, should not fall within the rule.  And I am probably more comfortable than many with building these exceptions into the statute itself.
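To illustrate the obfuscation approach, here is a toy sketch in Python of the TrackMeNot idea: issuing decoy queries at irregular intervals so that real queries hide in the noise.  The word list, endpoint, and timing are placeholders of mine, not TrackMeNot’s actual behavior.

    # Toy sketch of search-history obfuscation (not TrackMeNot's real code).
    import random
    import time
    import requests

    DECOY_TERMS = ["weather", "pasta recipes", "local news", "sports scores"]

    def issue_decoy_query():
        term = random.choice(DECOY_TERMS)
        # Placeholder endpoint; a real tool would target actual search engines.
        requests.get("https://search.example.com/", params={"q": term})

    if __name__ == "__main__":
        while True:
            issue_decoy_query()
            time.sleep(random.uniform(30, 300))  # irregular gaps look more human

A consumer running something like this has plainly taken an affirmative technological step; under the proposed rule, an advertiser that devised a way to filter out the decoys and recover the real history would need her consent first.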

There is one final advantage that may be worth mentioning.  In my last post, I talked about the Battlestar Galactica nature of debates over online privacy (“This has all happened before”).  I believe we should not let the current moment go by without passing some legislation that helps tip the scales toward consumers.  I am very impressed with the privacy innovations of many companies, including online advertisers, over the last year, and happy to see privacy emerge as a competitive differentiator in social networking and elsewhere.  But not everyone is raising their game.  At a minimum, consumers who take affirmative steps to safeguard their privacy should be able to avail themselves of the same legal protections we afford an MP3 file.

Thanks to Jason Schultz for some very helpful background information.


[1] In the Stanford Law Review’s 2000 symposium devoted to cyberlaw and privacy, Jonathan Zittrain explores the converse question in another context: he examines whether trusted computing methods might supplement health privacy law as a means to give patients more control over their medical information.

[2] The provision does not on its face prohibit circumventing technology meant to stop the copying of copyrighted materials where fair use or another doctrine permits it. But of course, you need to be able to access something to copy it. There are also exceptions in the statute for research and certain other purposes.

[3] I don’t think the Supreme Court’s recent decision in Sorrell v. IMS Health carves out a First Amendment exception for targeting. In Sorrell, the doctors’ prescribing information was publicly available. I also happen to think the case was wrongly decided: if the government can force doctors to supply their prescribing information, it should also be able to condition that supply on access controls.


5 Responses

  1. Adam Shostack says:

    Hi Ryan,

    Some colleagues and I have a somewhat related paper from the first Workshop on the Economics of Information Security (full citation below). We considered the “traditional” proposed use of DRM protections for privacy, such as wrapping identifiers in crypto. Your posts strike an interesting middle ground. The technologist in me has a hard time considering these things as Technical Protections under the DMCA, but that statement applies to many things which, as you point out, are Technical Protections.

    Joan Feigenbaum, Michael J. Freedman, Tomas Sander and Adam Shostack, “Economic Barriers to the Deployment of Existing Privacy Technologies.” http://www.homeport.org/%7Eadam/econbar-wes02.pdf

  2. Ryan Calo says:

    Thanks for your comment, Adam, and for the very interesting position paper. Also: great Twitter pic.

  3. Chris Soghoian says:

    Instead of describing this as DRM, you may want to consider describing it as “anti-circumvention protections for privacy.” That phrasing is far more accurate, and it avoids the knee-jerk negative response of the many people who hate DRM with a passion.

  4. Bruce Boyden says:

    It’s not a case, but I don’t think a mere warning of copyright protection would qualify as a technological measure under Section 1201(c)(3), the “no-mandate” clause. I describe this to my students as a provision that clarifies that there is no requirement in 1201 to look for and respond to what I call “flags” — bits of information that signify the protected status of the content, but do not themselves block or restrict access to the content. The reason that provision was inserted was that computer manufacturers did not want to have an obligation to scan all data coming into the computer to see if it carried this or that flag in it somewhere; rather, all they would have to do is not actively circumvent technological measures that blocked or restricted access, without authorization. There is a possible caveat here for “flags” that are required by some sort of law or regulation, and thus mandated elsewhere; perhaps “Do Not Track” would fall under that category. But it’s an untested argument.

  5. Steve Mathews says:

    In the Standards Committee ISO/IEC JTC1/SC32 WG1 (a bit of a mouthful, but that’s labelling for you), we have always taken the approach that privacy can only occur where the sender of information is effectively able to constrain the use of the data they provide in furtherance of some transaction. Whatever else you may think about DRM, that is exactly the function it SHOULD provide (I’m not sure about the monitoring and tracking stuff). There is a whole standard (ISO/IEC 15944 Part 8), available free of charge, that addresses this requirement.

    But what you have to take on board is that no commercial party wants to have anything less than the total ability to do what they want with your data, howsoever they have obtained it. So far, what you do have in regulation is a ‘sort of’ requirement to encrypt data at rest because some folk in California had their vehicle licensing data stolen.

    Now, you may not care for DRM; that’s not my problem. But absent the ability for a user to determine the onward use of their data, you have the free-for-all so ably demonstrated by all and sundry, not just the social networking sites.