Category: Privacy (Medical)


New Privacy Law Reference Book: Privacy Law Fundamentals

Professor Paul Schwartz (Berkeley School of Law) and I recently published a new book, PRIVACY LAW FUNDAMENTALS.  This book is a distilled guide to the essential elements of U.S. data privacy law. In an easily digestible format, the book covers core concepts, key laws, and leading cases.

The book explains the major provisions of the major privacy statutes, regulations, and cases, including state privacy laws and FTC enforcement actions. It provides numerous charts and tables summarizing the privacy statutes (e.g., statutes with private rights of action, preemption, and liquidated damages, among other things). Topics covered include: the media, domestic law enforcement, national security, government records, health and genetic data, financial information, consumer data and business records, government access to private-sector records, data security law, school privacy, employment privacy, and international privacy law.

This book provides a concise yet comprehensive overview of the field of privacy law for those who do not want to labor through lengthy treatises.  Paul and I worked hard to keep it under 200 pages — our goal was to include a lot of information yet do so as succinctly as possible.   PRIVACY LAW FUNDAMENTALS is written for those who want a handy reference, a bird’s-eye view of the field, or a primer for courses in privacy law.

We wrote this book to be a useful reference for practitioners — ideally, a book they’d keep at the corner of their desks or in their briefcases.

We also think it can serve as a useful study aid for students taking privacy law courses.

You can check it out here, where you can download the table of contents.

Can Suspicious Activity Reports Trigger Health Data Gathering?

In an article entitled “Monitoring America,” Dana Priest and William Arkin describe an extraordinary pattern of governmental surveillance. To be sure, in the wake of the attacks of 9/11, there are important reasons to increase the government’s ability to understand threats to order. However, the persistence, replicability, and searchability of the databases now being compiled for intelligence purposes raise very difficult questions about the use and abuse of profiles, particularly in cases where health data informs the classification of individuals as threats.

Online Health Data in Employers’ and Insurers’ Predictive Analytics

Did you know that buying generics instead of brands could hurt your credit? Or that a subscription to Hang Gliding Monthly could scare off life insurers? Or that certain employers’ access to electronic health records could lead them to classify you as “high-risk” or “high-cost”?

In all these cases, firms use “predictive analytics” to maximize profits. Consumers are the guinea pigs for these new “sciences” of the human. As Scott Peppet argues, it becomes more difficult to opt out of analytics systems as more people use them. What type of world are they leading us to?

Credit Analytics: Should Frugality be Punished?

One credit analytics company determined that buyers of cheap automotive oil were “much more likely to miss a credit-card payment” than those who paid for a brand-name oil. Spending on therapy sessions may also be a red flag. Appearing too frugal, too anxious, too spendthrift—all might lead to higher interest rates or lower credit limits. One R&D head at a credit analytics firm bragged that they consider over 300 characteristics to discover delinquency risk. He was not nearly as forthcoming about how the data is aggregated. Analyzing millions of transactions, the companies observe customers as a gardener might observe a rose garden: weeding out unpromising specimens, and giving a boost to incipient flourishers.

Many have complained about inaccuracy in these new forms of profiling, and consumers’ inability to review and correct digital dossiers collected about them. But let’s just assume that this profiling is correct, and choosing a generic really does correlate with increased credit risk. What’s the social value of this discovery? Maybe credit card companies can reduce rates infinitesimally (and increase profits) by burdening the generic buyers. But I’d be willing to bet that, for every few people whose generic purchases indicate financial trouble, there is another shopper who’s wisely frugal and increasing her chances of successfully repaying all her loans. It seems odd to penalize the financially responsible merely because they happen to engage in an activity shared by the distressed.
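The worry about pooled correlations can be made concrete with a toy calculation. The sketch below assumes, purely for illustration (none of these numbers come from the article), that generic-oil buyers are a mix of financially distressed shoppers and simply frugal ones; pricing everyone off the pooled rate overcharges the frugal majority.

```python
# Hypothetical illustration: generic buyers as a mix of two groups.
# All numbers are invented for the sketch, not drawn from any real data.

distressed = {"n": 400, "default_rate": 0.10}  # assumed: at-risk buyers
frugal     = {"n": 600, "default_rate": 0.02}  # assumed: wisely frugal buyers

total = distressed["n"] + frugal["n"]
expected_defaults = (distressed["n"] * distressed["default_rate"]
                     + frugal["n"] * frugal["default_rate"])
pooled_rate = expected_defaults / total  # the rate a lender actually observes

print(f"Pooled default rate for generic buyers: {pooled_rate:.1%}")
# A blanket rate hike priced off the pooled rate burdens the 600 frugal
# buyers, whose individual risk is far lower than the group average.
```

The point of the sketch is only that a single "generic buyer" signal cannot distinguish the two subpopulations, which is exactly the penalty-on-the-responsible problem described above.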


The Quantified Self: Personal Choice and Privacy Problem?

“The trouble with measurement is its seeming simplicity.” — Author Unknown

“Only the shallow know themselves.” — Oscar Wilde

Human instrumentation is booming. FitBit can track the number of steps you take a day, how many miles you’ve walked, calories burned, your minutes asleep, and the number of times you woke up during the night. BodyMedia’s armbands are similar, as is the Philips DirectLife device. You can track your running habits with RunKeeper, your weight with a WiFi Withings scale that will Tweet to your friends, your moods on MoodJam or what makes you happy on TrackYourHappiness. Get even more obsessive about your sleep with Zeo, or about your baby’s sleep (or other biological) habits with TrixieTracker. Track your web browsing, your electric use, your spending, your driving, how much you discard or recycle, your movements and location, your pulse, your illness symptoms, what music you listen to, your meditations, your Tweeting patterns. And, of course, publish it all — plus anything else you care to track manually (or on your smartphone) — on Daytum or mycrocosm or me-trics or elsewhere.

There are names for this craze or movement. Gary Wolf & Kevin Kelly call this the “quantified self” (see Wolf’s must-watch recent TED Talk and Wired articles on the subject) and have begun an international organization to connect self-quantifiers. The trend is related to physiological computing, personal informatics, and life logging.

There are all sorts of legal implications to these developments. We have already incorporated sensors into the penal system (e.g., ankle bracelets & alcohol monitors in cars). How will sensors and self-tracking integrate into other legal domains and doctrines? Proving an alibi becomes easier if you’re real-time streaming your GPS-tracked location to your friends. Will we someday subpoena emotion or mood data, pulse, or other sensor-provided information to challenge claims and defenses about emotional state, intentions, mens rea? Will we evolve contexts in which there is an obligation to track personal information — to prove one’s parenting abilities, for example?

And what of privacy? It may not seem that an individual’s choice to use these technologies has privacy implications — so what if you decide to use FitBit to track your health and exercise? In a forthcoming piece titled “Unraveling Privacy: The Personal Prospectus and the Threat of a Full Disclosure Future,” however, I argue that self-tracking — particularly through electronic sensors — poses a threat to privacy for a somewhat unintuitive reason.


Health Privacy Paradigm Shift: From Consent to Reciprocal Transparency

Computational innovation may improve health care by creating stores of data vastly superior to those used by traditional medical research. But before patients and providers “buy in,” they need to know that medical privacy will be respected. We’re a long way from assuring that, but new ideas about the proper distribution and control of data might help build confidence in the system.

William Pewen’s post “Breach Notice: The Struggle for Medical Records Security Continues” is an excellent rundown of recent controversies in the field of electronic medical records (EMR) and health information technology (HIT). As he notes,

Many in Washington have the view that the Health Insurance Portability and Accountability Act (HIPAA) functions as a protective regulatory mechanism in medicine, yet its implementation actually opened the door to compromising the principle of research consent, and in fact codified the use of personal medical data in a wide range of business practices under the guise of permitted “health care operations.” Many patients are not presented with a HIPAA notice but instead are asked to sign a combined notice and waiver that adds consents for a variety of business activities designed to benefit the provider, not the patient. In this climate, patients have been outraged to receive solicitations for purchases ranging from drugs to burial plots, while at the same time receiving care which is too often uncoordinated and unsafe. It is no wonder that many Americans take a circumspect view of health IT.

Privacy law’s consent paradigm means that, generally speaking, data dissemination is not deemed an invasion of privacy if it is consented to. The consent paradigm requires individuals to decide whether or not, at any given time, they wish to protect their privacy. Some of the brightest minds in cyberlaw have focused on innovation designed to enable such self-protection. For instance, interdisciplinary research groups have proposed “personal data vaults” to manage the emanations of sensor networks. Jonathan Zittrain’s article on “privication” proposed that the same technologies used by copyright holders to monitor or stop dissemination of works could be adopted by patients concerned about the unauthorized spread of health information.

RFID Tags for Nurses, then Everybody?

As an opinion piece by Theresa Brown explains, maintaining proper staffing levels in hospitals is becoming increasingly difficult. Surveillance systems are offering one way to address the problem; work can be performed more intensively and efficiently as it is recorded and studied. But such monitoring has many troubling implications, according to Torin Monahan (in his excellent book, Surveillance in a Time of Insecurity):

The tracking of people [via Radio Frequency Identification Tags] represents a . . . mechanism of surveillance and social control in hospital settings. This includes the tagging of patients and hospital staff. . . . When administrators demand the tagging of nurses themselves, the level of surveillance can become oppressive. . . . [because nurses face] labor intensification, job insecurity, undesired scrutiny, and privacy loss. . . . To date, such efforts at top-down micromanagement of staff by means of RFID have met with resistance. . . . One desired feature for nurses and others is an ‘off’ switch on each RFID badge so that they can take breaks without subjecting themselves to remote tracking. (122)

Like the “nannycam” employed by many a wary parent, the nurse-cam may be seen as a way to protect vulnerable patients (and perhaps increase the accuracy of evidence in malpractice cases). On the other hand, inserting a watchful electronic eye to monitor what is already an extremely stressful job may create many unintended consequences, or deter people from going into nursing altogether. Even advocates of pervasive surveillance recognize these difficulties.

The increasing pressure to monitor what happens inside hospitals reminds me of a recent article by Thomas Goetz in Wired (no link yet) on Google co-founder Sergey Brin’s quest to find a cure for Parkinson’s disease. As Goetz describes it, a new form of “high-speed science” depends on rapid accumulation of as much data as possible:

In Brin’s way of thinking, each of our lives is a potential contribution to scientific insight. We all go about our days, making choices, eating things, taking medications, doing things—generating what is inelegantly called data exhaust. . . . With contemporary computing power, that data can be tracked and analyzed. “Any experience that we have or drug that we may take, all those things are individual pieces of information. Individually, they’re worthless, they’re anecdotal. But taken together they can be very powerful.” In computer science, the process of mining such large data sets for useful associations is known as a market-basket analysis.

Goetz has promoted this as a new way to “do science in the petabyte age.”
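The “market-basket analysis” Goetz mentions can be sketched in a few lines: count how often items co-occur across many records, then flag the frequent pairs. The toy sketch below uses invented health-flavored transaction data (the item names are hypothetical, not from the article); real tools such as the Apriori algorithm add pruning and confidence measures on top of this counting.

```python
# Minimal sketch of market-basket (association) analysis: tally item
# pairs across records and report each pair's "support" -- the fraction
# of records containing both items. Data below is invented.
from itertools import combinations
from collections import Counter

transactions = [
    {"drug_a", "symptom_x"},
    {"drug_a", "symptom_x", "symptom_y"},
    {"drug_b", "symptom_y"},
    {"drug_a", "symptom_x"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

n = len(transactions)
for pair, count in pair_counts.most_common(3):
    print(pair, f"support={count / n:.2f}")
# The pair ("drug_a", "symptom_x") appears in 3 of 4 records (support 0.75),
# which is the kind of association such mining surfaces.
```

This is the sense in which individually “worthless, anecdotal” records become powerful in aggregate: no single basket says anything, but the pair counts do.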

I had a few responses to these ideas.


Contracts and Privacy

Sunlight Disinfects, Unless You Wear Shades

What is the relationship between public policy and contract damages?  A few days back, I blogged about the curious case of Canadian Gabriella Nagy.  Nagy, as you may recall, has sued her cellphone company Rogers Communications for $600,000 (Canadian), alleging “invasion of privacy and breach of contract.”  According to Nagy, Rogers consolidated her cellphone bill into a global family statement without notifying her.  This consolidation led her spouse to see she was calling another man with inordinate frequency, and she was forced to confess an affair.  The marriage dissolved, and Nagy blamed the cellphone company.

I think the breach of contract lawsuit, if filed in an American court applying fairly ordinary domestic contract principles, would be a loser.  Here are some reasons why.

The common law generally dislikes punishing breach with liability or damages when the inevitable consequence of performance is to motivate socially wrongful conduct, and nonperformance to retard it.  Though public policy is famously an “unruly horse,” it is settled law that the morality of the underlying conduct to be protected bears a significant relationship to the ability to seek relief at law (whether in terms of liability or damages).  Consider a lovely case I teach in the first year, Shaheen v. Knight, 11 Pa. D. & C.2d 41 (1957).  In Shaheen, plaintiff contracted with defendant for guaranteed sterility following a vasectomy. When a child resulted, the snipped but still-virile Shaheen sued for breach.  Though the vasectomy contract was not itself void – since family planning and private control are social goods – the court believed that to allow damages “for the normal birth of a normal child is foreign to the universal public sentiment of the people.” That is, the availability of damages turns on whether the plaintiff has been subjected to a harm (executory or otherwise) that society seeks to validate as legitimate.  The easy example is a contract between A and B to commit a crime or violate a statute.  Even if the contract weren’t void on its face, you can’t get damages (nor, often, restitution).  A little further down the line are transactions over the means to unlawful conduct.  Imagine a seller and a buyer enter into a sales contract, where the buyer is going to promptly relabel the goods for fraudulent resale. The seller, learning of the plan, refuses to deliver, and the buyer sues the seller, seeking the difference in value between what he expected (delivery price) and what he got (presumably, market price to cover). Can the buyer recover this remedy? Generally not, unless the seller knew of the improper purpose at the time of the contract, in which case the seller might have to disgorge something.

What about cases where A and B contract not to disclose some fact X, and the nondisclosure will create harm for innocent third parties?  These contracts are often enforced (every confidentiality clause probably shelters some fact with the potential for third-party harm).  But the degree to which the nonbreaching party can recover ought to turn on what’s being kept secret: if the secret is particularly socially harmful (oozing toxic sludge!), we might believe that the hiding, non-breaching party doesn’t get to recover for breach.  Thus, you sometimes see cases where fraud-revealing employees are protected from the consequences of nondisclosure agreements by (effectively) common law whistleblower doctrines.

Where the third-party harm relates to marriage, the law appears to be more categorical.  Public policy concerns about contracting and third-party harm are strongest in agreements touching on issues of family life and infidelity.  This is evidenced (of course) by the skepticism that common law courts traditionally had toward premarital contracts, especially those that purported to limit post-divorce support obligations.  The theory was that such provisions encouraged divorce, and thus were not contracts that society wanted.  See generally Farnsworth’s Fourth Edition, § 5.4.  So, for example, imagine that two parties made a private contract to hide evidence of adultery from their respective spouses.  One party, overcome with conscience, decides to fess up. The “nonbreaching” adulterous party sues the “breaching” adulterous party, seeking benefit-of-the-bargain damages.  I think there is little chance that the non-breaching adulterer could recover any damages in court. Cf. Jim Lindgren, Unraveling the Paradox of Blackmail, 84 Colum. L. Rev. 670, 681 n.58 (1984) (“[N]either a threat to do an immoral act (expose damaging information) nor an offer to breach a public duty (hide criminality) can be the subject of a legal contract.”)



The Havasupai Indians, Genetic Research and the Problem of Informed Consent

Researchers can gain significant genetic information by studying indigenous and preferably isolated populations. Although both researchers and indigenous populations can gain from this collaboration, the two groups often do not see eye to eye.  This was the case with the collaboration between the Havasupai Indians and researchers from Arizona State University, which resulted in a long legal fight. The Havasupai Indians were suffering from a high prevalence of diabetes and agreed to give their blood samples for genetic research on diabetes. The members of the tribe were infuriated when they found out later that their blood samples were used for other purposes, among them genetic research on schizophrenia.

The New York Times reported yesterday that this conflict resulted in a settlement in which Arizona State University agreed to pay $700,000 to the tribe members and also return the blood samples. The Havasupai Indians’ main legal claim was a violation of informed consent. Informed consent requires that patients and research subjects receive full information that will enable them to decide whether to adopt a certain medical treatment plan or participate in research. Here, the Havasupai Indians argued that the informed consent principle was violated because they were told that their blood samples would be used for one purpose while, in fact, they were used for another.

No doubt, the Havasupai Indians’ informed consent argument resulted in their victorious settlement. But the harder question is whether the informed consent principle can feasibly be applied in the area of genetics.  Genetic information is not just individual information; it also provides information about groups and families. For example, assume there is a tribe in which some members agree to participate in genetic research investigating manic depression.  Other members of the tribe refuse because they are concerned that a result showing that there is a prevalent genetic mutation for manic depression among them could stigmatize them and even lead to discrimination against the tribe. The researchers collect samples only from the members of the group who agree to the research. But the results still provide genetic information on all members of the tribe, even those who refused to participate, because of their genetic connection to those who participated.

The result in the Havasupai settlement cannot, then, be seen as a victory for the principle of informed consent in the area of genetics. Restricting genetic researchers to using samples only for the purpose for which they were collected only partly resolves the informed consent problem. The group nature of genetic information makes the application of informed consent to genetic research much more complicated than that.


Mainstreaming Privacy Torts

Much as crushed hands and burns were defining accidents of the Industrial Age, information disclosures and other privacy problems are characteristic hazards of the Information Age.  Despite the prevalence of privacy injuries that can be far worse than those of the past, modern privacy torts often fail to address them.  I recently posted on SSRN a draft of my article Mainstreaming Privacy Torts (forthcoming in California Law Review), which offers strategies for ensuring privacy tort law’s continued efficacy.  I would love comments on the piece.  Here is the abstract:

In 1890, Samuel Warren and Louis Brandeis proposed a privacy tort and seventy years later, William Prosser conceived it as four wrongs. In both eras, privacy invasions primarily caused psychic and reputational wounds of a particular sort. Courts insisted upon significant proof due to those injuries’ alleged ethereal nature. Digital networks alter this calculus by exacerbating the injuries inflicted. Because humiliating personal information posted online has no expiration date, neither does individual suffering. Leaking databases of personal information and postings that encourage assaults invade privacy in ways that exact significant financial and physical harm. This dispels concerns that plaintiffs might recover for trivialities.

Unfortunately, privacy tort law is ill-equipped to address these changes. Prosser built the modern privacy torts based on precedent and a desire to redress harm. Although Prosser’s privacy taxonomy succeeded in the courts because it blended theory and practice, it conceptually narrowed the interest that privacy tort law sought to protect. Whereas Warren and Brandeis conceived privacy tort law as protecting a person’s right to develop his “inviolate personality” free from unwanted publicity and access by others, Prosser saw it as addressing specific emotional, reputational, and proprietary injuries caused by four kinds of activities prevalent in the twentieth century. Courts have too often rigidly interpreted the four privacy torts, further confining their reach. As a result, Prosser’s privacy taxonomy often cannot address the privacy interests implicated by networked technologies.

The solution lies in taking the best of what Prosser had to offer – his method of borrowing from doctrine and focusing on injury prevention and remedy – while ensuring that proposed solutions are transitional and dynamic. Any updates to privacy tort law should protect the broader set of interests identified by Warren and Brandeis, notably a person’s right to be free from unwanted disclosures of personal information so that he can develop his personality. While leaking databases and certain online postings compromise that interest, we should invoke mainstream tort remedies to address them, rather than conceiving unattainable new privacy torts. In addition to supplementing privacy tort law with traditional tort claims, courts should consider the ways that the internet magnifies privacy harms to ensure law’s recognition of them.


Is Disclosing a 911 Call to the Public a Privacy Violation?

Whenever there’s a story these days about an emergency 911 call, the call is often disclosed to the public.  Recently, there was news of yet another public disclosure of a 911 call, this time a call by a woman who witnessed the suicide of Marie Osmond’s son.

I’ve long thought that the public disclosure of 911 calls violates the privacy of the callers.  Many 911 calls involve people calling for medical reasons, and matters about their physical or mental health are discussed in the call.  Doctors and nurses are under a duty of confidentiality, so why not 911 call centers, especially when people are revealing medical information?

The call about Osmond’s son was by a witness.  But suppose a person who attempted suicide called 911 and asked for an ambulance.  This would reveal highly sensitive medical information about the person and the fact that the person attempted suicide.

Recently, the Associated Press ran a story on the issue of public disclosure of 911 calls:

Linda Casey dialed 911 and screamed, “Oh, God!” over and over again into the phone after finding her daughter beaten to death in the driveway of their North Carolina home.

Later that day, she heard the 911 recording on the local news and vomited.

“This was not only the most painful thing I have ever been through, it should have been the most private,” she said in an e-mail.

Because of situations like Casey’s, lawmakers in Alabama, Ohio and Wisconsin are deciding whether to bar the public release of 911 calls.

Missouri, Pennsylvania, Rhode Island and Wyoming already keep such recordings private. But generally, most states consider emergency calls public records available on request, with exceptions sometimes made for privacy reasons or to protect a police investigation.

AP, States Eye Ban on Public Release of 911 Calls (Feb. 23, 2010).

Since I blogged recently about the constitutional right to information privacy, it readily comes to mind in this context.  In Whalen v. Roe, 429 U.S. 589 (1977), the Supreme Court held that the right to privacy protects not only “independence in making certain kinds of important decisions” but also the “individual interest in avoiding disclosure of personal matters.”  This latter interest — the constitutional right to information privacy — is recognized by most federal circuit courts.
