Shepardizing the Academy

Having received the guest blogger’s dreaded “here’s your hat, what’s your hurry” from Dan Solove, I thought I’d sign off (and make way for my colleague Rachel Godsil, who will undoubtedly be far more interesting than I) with a final entry on easing the plight of the scholar.

A perennial complaint of the legal scholar is the difficulty of keeping up with the literature. (Admittedly, the non-academic world, especially the part of it that moves heavy things, is not likely to be sympathetic to our travails, but we’re speaking within the club here.)

One time-honored solution, of course, is to redefine one’s field into smaller and smaller fragments, thus excluding ever larger amounts of material from that about which one must know. This has its limitations, however. I don’t mean logical limitations because, like particle physics, any field can apparently be reduced to progressively smaller parts. The limits are mostly loss of credibility among colleagues and students when a supposed expert really doesn’t know much about the next quark over.

Anyhow, the problem, as I see it, is that we don’t have a simple device to help us decide what’s worth reading. Our cousins at the bench and bar have such a mechanism for the tools of their trade: Shepards for Lexis and whatever West calls its imitation. But there’s no similar labor-saving device for scholarship. Think of how much easier our lives would be with some version of this for our articles:

[image: a mock Shepards citation report for a law review article]

Of course, one can jury-rig available tools to come up with some approximation of a Shepards, but present technology is too limited. It’s easy to see how often a particular article has been cited, but not so easy to see if it’s been string-cited, actually discussed at any length, and ultimately approved or disapproved by other scholars. Various rankings, of course, periodically attempt to redress this, but often do so only for the upper tier. SSRN downloads are a measure, but maybe only of how interesting the abstract (as opposed to the article) actually is, and then there’s the pesky problem of how to assess success once an article has been published and scholars are (presumably) accessing it on Lexis or Westlaw or (gasp!) even in hardcopy.
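
To see why the distinction matters, consider how little code it would take to start on the string-cite problem. Here is a minimal sketch in Python; the heuristic and its three-sentence cutoff are entirely my own invention, not anyone’s actual methodology:

```python
import re

def treatment_depth(citing_text: str, cited_name: str) -> str:
    """Crudely guess whether a citing article string-cites a work or
    actually discusses it, by counting the sentences that mention it.
    The three-sentence cutoff is an invented, illustrative threshold."""
    sentences = re.split(r"(?<=[.?!])\s+", citing_text)
    mentions = [s for s in sentences if cited_name.lower() in s.lower()]
    if not mentions:
        return "not cited"
    if len(mentions) >= 3:
        return "discussed"
    return "string-cited"

# A lone mention buried in a dump of authorities:
print(treatment_depth("See Smith, supra; Jones, supra; Doe, supra.", "Smith"))
# -> "string-cited"
```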

One partial solution is so obvious it’s amazing no one has thought of it before: a Shepards for scholarship. (OK, someone probably has; the closest I’ve seen is Michael Madison, who suggests a theoretically better but practically more complicated method of post-publication review by scholars “tagging” others’ work. Thanks for the citation to my colleague Frank Pasquale, who thinks a lot about “Information Overload Externalities.”)

Naysayers will at once object that Shepardizing scholarship would lack the crisp tools of case law. It’s true that a scholar rarely “overrules” himself and, of course, cannot be “reversed” by anyone else. More’s the pity. But a moment’s thought reveals that much of the work done by Shepards is not limited to critical moments in a precedent’s life (or death). Rather, much of Shepards is devoted to tracking who cited whom and, using what I assume are algorithms rather than human judgment, determining whether a particular authority has been “questioned” or “disapproved.” And I’m pretty confident that some algorithm often explains the ultimate summary signal that a case is bad law (Lexis uses a stop sign) or dubious (Lexis uses a question mark).
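
For the curious, that summarizing step is easy enough to caricature in code. The sketch below (Python; the treatment labels, the stop-sign rule, and the one-quarter cutoff are all hypotheticals of mine, not Lexis’s actual algorithm) rolls individual citing treatments up into a single signal:

```python
from collections import Counter

# Hypothetical treatment labels; a real citator's vocabulary differs.
NEGATIVE = {"disapproved", "questioned", "criticized"}

def summary_signal(treatments: list[str]) -> str:
    """Roll per-citation treatments up into one summary signal.
    The labels and cutoffs are illustrative guesses, not Lexis's
    (or anyone's) actual algorithm."""
    if not treatments:
        return "no signal"      # nobody has cited it yet
    counts = Counter(treatments)
    if counts["disapproved"]:
        return "stop sign"      # squarely rejected somewhere
    negative = sum(counts[t] for t in NEGATIVE)
    if negative / len(treatments) > 0.25:
        return "question mark"  # enough skeptics to flag it
    return "green light"

# Mostly favorable treatment, one skeptical citer out of four:
print(summary_signal(["followed", "approved", "questioned", "followed"]))
# -> "green light" (one skeptic in four doesn't cross the invented 25% cutoff)
```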

That’s all I want: some system designed to tell me who cited what, when, and how favorably.

Now, granted, this is not a panacea. There are those who find the signals misleading, and I admit to having been taken aback when I discovered that Brown v. Board of Education was assessed by Lexis with its orange question mark.

Who knew? Maybe the Court has shifted further to the right than I’d noticed.

But all tools have their limitations, and a Scholarly Shepards (or maybe a version of this on SSRN) would go a long way toward filling a gap. Not to mention the fun of developing the correct algorithms to decode the academic language by which scholars express their true opinions of prior work.
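
A first pass at that decoding might be nothing fancier than a phrase table. Here is an illustrative Python sketch; every entry in the phrasebook is a guess of mine about footnote idiom, offered as a placeholder rather than a validated mapping:

```python
# Illustrative only: a made-up mapping from the polite idiom of
# law-review footnotes to the verdict it usually conceals.
PHRASEBOOK = [
    ("but see",              "questioned"),
    ("unpersuasive",         "disapproved"),
    ("for a contrary view",  "questioned"),
    ("the seminal article",  "approved"),
    ("as has been shown",    "followed"),
]

def decode(passage: str) -> str:
    """Guess how a citing passage treats the cited article."""
    lowered = passage.lower()
    for phrase, verdict in PHRASEBOOK:
        if phrase in lowered:
            return verdict
    return "string-cited"  # mentioned, but nothing more

print(decode("But see Smith, supra note 4, finding this account unpersuasive."))
# -> "questioned" (the first matching phrase in the table wins)
```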

The biggest problem with this proposal is the obvious one: it would be of little use in deciding what new scholarship to read since, by definition, new work hasn’t been around long enough to trigger reactions from other scholars. Truly, there’s no rest for the weary.
