The Quantified Self: Personal Choice or Privacy Problem?

“The trouble with measurement is its seeming simplicity.” — Author Unknown

“Only the shallow know themselves.” — Oscar Wilde

Human instrumentation is booming. FitBit can track the number of steps you take a day, how many miles you’ve walked, calories burned, your minutes asleep, and the number of times you woke up during the night. BodyMedia’s armbands are similar, as is the Philips DirectLife device. You can track your running habits with RunKeeper, your weight with a WiFi Withings scale that will Tweet to your friends, your moods on MoodJam, or what makes you happy on TrackYourHappiness. Get even more obsessive about your sleep with Zeo, or about your baby’s sleep (or other biological) habits with TrixieTracker. Track your web browsing, your electricity use, your spending, your driving, how much you discard or recycle, your movements and location, your pulse, your illness symptoms, what music you listen to, your meditations, your Tweeting patterns. And, of course, publish it all — plus anything else you care to track manually (or on your smartphone) — on Daytum or mycrocosm or me-trics or elsewhere.

There are names for this craze or movement. Gary Wolf and Kevin Kelly call it the “quantified self” (see Wolf’s must-watch recent TED talk and his Wired articles on the subject), and they have begun an international organization to connect self-quantifiers. The trend is related to physiological computing, personal informatics, and life logging.

There are all sorts of legal implications to these developments. We have already incorporated sensors into the penal system (e.g., ankle bracelets and in-car alcohol monitors). How will sensors and self-tracking integrate into other legal domains and doctrines? Proving an alibi becomes easier if you’re streaming your GPS-tracked location to your friends in real time. Will we someday subpoena emotion or mood data, pulse, or other sensor-provided information to challenge claims and defenses about emotional state, intentions, or mens rea? Will we evolve contexts in which there is an obligation to track personal information — to prove one’s parenting abilities, for example?

And what of privacy? It may not seem that an individual’s choice to use these technologies has privacy implications — so what if you decide to use FitBit to track your health and exercise? In a forthcoming piece titled “Unraveling Privacy: The Personal Prospectus and the Threat of a Full Disclosure Future,” however, I argue that self-tracking — particularly through electronic sensors — poses a threat to privacy for a somewhat unintuitive reason.

I do not worry that sensor data will be hacked (although it could be), nor that the firms creating such sensors or web-driven tracking systems will share it underhandedly (although they could), nor that their privacy policies are weak (although they probably are). Instead, I argue that these sensors and tracking systems are creating vast amounts of high-quality data about people that have previously been unavailable, and that we are already seeing ways in which sharing such data with others can be economically rewarding. For example, car insurance companies now offer discounts if you install an electronic monitor in your car that reports your driving habits to the insurer, and employers can use DirectLife devices to incentivize employees to participate in fitness programs (thereby reducing health insurance costs).

Such quantified, sensor-driven data become part of what I call the “Personal Prospectus.” The Personal Prospectus is a metaphor for the increasing array of verified personal information that we can share about ourselves electronically. Want to price my health insurance premium? Let me share with you my FitBit data. Want to price my car rental or car insurance? Let me share with you my regular car’s “black box” data to prove I am a safe driver. Want me to prove I will be a diligent, responsible employee? Let me share with you my real-time blood alcohol content, how carefully I manage my diabetes, or my lifelong productivity records.

All of this seems like merely (quirky) personal choice at first, particularly for those with “good” information who begin the trend by self-quantifying and then using that data to personal advantage (through discounts, etc.). But personal choice begets privacy issues if these information markets begin to unravel. Unraveling occurs because when a few people with “good” information can verifiably measure, track, and share information, everyone (even those with “bad” information) may ultimately find they have little choice but to follow suit. If all candidates for a job are willing to wear a blood alcohol monitor and you’re not, the negative inference drawn about you is obvious. If all the safe drivers quickly sign up for “discounts” that require electronic monitoring of their driving, those who refuse will quickly find themselves paying what amounts to a penalty. (For my recent post on unraveling as corporate strategy, see here.)
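The mechanics are easy to see in a toy model. Here is a minimal simulation sketch of the unraveling dynamic (the uniform risk scores, the pricing rule, and every other detail are my illustrative assumptions, not anything drawn from an actual insurance market): drivers whose verifiable risk beats the average of the undisclosed pool disclose to earn a discount, the pool average worsens, and the next tier of drivers is pushed to disclose in turn.

```python
# Toy model of market unraveling (illustrative assumptions throughout).
# Each driver has a true risk score in [0, 1]; lower is safer.
# Pricing rule assumed here: non-disclosers are charged the average risk of
# the remaining (undisclosed) pool, so anyone safer than that average
# gains by disclosing.
import random

random.seed(0)
scores = sorted(random.random() for _ in range(1000))  # drivers' true risk
disclosed = [False] * len(scores)

rounds = 0
while True:
    pool = [s for s, d in zip(scores, disclosed) if not d]
    pool_avg = sum(pool) / len(pool)  # price charged to every non-discloser
    # Any driver safer than the pool average now prefers to disclose.
    newly = [i for i, (s, d) in enumerate(zip(scores, disclosed))
             if not d and s < pool_avg]
    if not newly:
        break  # equilibrium: no one left who gains by disclosing
    for i in newly:
        disclosed[i] = True
    rounds += 1
    print(f"round {rounds}: pool average risk {pool_avg:.3f}, "
          f"{sum(disclosed)}/{len(scores)} drivers now disclose")
```

Run as written, the pool collapses in roughly ten rounds until only the single riskiest driver remains undisclosed: each round the safest remaining drivers peel off, the residual pool looks worse, and the next tier faces the same pressure. That is the “no real choice” dynamic described above.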

There are harms here beyond the pressure to consent. If you were somewhat horrified by the first paragraphs of this post — if you thought “why would anyone want to track so much data about themselves?” — the unraveling threat may particularly bother you. As Anand Giridharadas recently observed in a (short and worth-watching) discussion of the quantified self movement, taken together these devices “imply an approach to life that may be something different than what we want life to be about … Because we have these things we’re just doing them, without thinking about whether we want to become the kind of people who do them.”

Your choice to quantify your self (for personal preference or profit) thus has deep implications if it necessitates my “choice” to quantify my self under the pressure of unraveling. What if I just wasn’t the sort of person who wanted to know all of this real-time data about myself, but we evolve an economy that requires such measurement? What if quantification is anathema to my aesthetic or psychological makeup; what if it conflicts with the internal architecture around which I have constructed my identity and way of knowing? Is “knowing thyself” at this level, and in this way (through these modalities), autonomy-enhancing or destroying, and for whom? What sorts of people — artists? academics? writers? — will be most denuded or excluded by such a metric-based world?

For anyone who has read Gary Shteyngart’s Super Sad True Love Story, it’s not hard to see a future in which obsessive measurement — of ourselves, others, everything — may leave some feeling reduced immeasurably by the hegemony of the measurable. Because of the unraveling effect, these reluctant late adopters may not have a choice; as many choose to quantify the self, all may have no real choice but to follow …

3 Responses

  1. A.J. Sutter says:

    I just finished reading Super Sad True Love Story, and while I didn’t find it particularly super, it is definitely sad to see that it’s coming true. Thanks for this post, I guess.

  2. Frank says:

You’re right about all these pressures, Scott. I think one of the few ways to stop the process is to forbid employers, etc., from asking for these types of profiles. EMR expert Sharona Hoffman has recently warned that “Employers or their hired experts may develop complex scoring algorithms based on EHRs to determine which individuals are likely to be high-risk and high-cost workers.” It’s a really worrisome trend.

  3. Frank Pasquale says:

    Oh, and a few other items that might articulate cognate discomforts:

1) You might like this book: You Are Not a Gadget: A Manifesto by Jaron Lanier, nicely reviewed by Zadie Smith here:

    http://www.nybooks.com/articles/archives/2010/nov/25/generation-why/?pagination=false

Smith’s key point is below:

    “Lanier is interested in the ways in which people “reduce themselves” in order to make a computer’s description of them appear more accurate. “Information systems,” he writes, “need to have information in order to run, but information underrepresents reality” (my italics). In Lanier’s view, there is no perfect computer analogue for what we call a “person.” In life, we all profess to know this, but when we get online it becomes easy to forget. In Facebook, as it is with other online social networks, life is turned into a database, and this is a degradation, Lanier argues, which is based on [a] philosophical mistake…the belief that computers can presently represent human thought or human relationships. These are things computers cannot currently do.”

So, perhaps, people might maximize “quantified health” in ways that don’t really do much for their real health status. Or they might maximize “mental health” along the lines suggested in Radiohead’s OK Computer. Or they might write lots of short articles on controversial topics to get more downloads from SSRN. Both “downloads” and “links” strike me as very reductive measures of a work’s quality.

2) Here’s another interesting take on technology, from The New Atlantis:
    http://www.thenewatlantis.com/publications/romance-in-the-information-age

    “The other destructive tendency our technologies encourage is over-sharing—that is, revealing too much, too quickly, in the hope of connecting to another person. The opportunities for instant communication are so ubiquitous—e-mail, instant messaging, chatrooms, cell phones, Palm Pilots, BlackBerrys, and the like—that the notion of making ourselves unavailable to anyone is unheard of, and constant access a near-requirement. As a result, the multitude of outlets for expressing ourselves has allowed the level of idle chatter to reach a depressing din.”