Setting the Bar, and the Limits of Empirical Research
Larry Ribstein and Jonathan Wilson are debating the merits of a strong, exclusionary, state bar.
Wilson’s position is pro-Bar:
Deregulating lawyers as punishment or retribution for a profession that has lost its way would be a recipe for disaster. Deregulating the practice of law would open the floodgates to fraud of every conceivable variety and would only compound the problems that the readers of these pages see in our civil justice system.
Ribstein, naturally, is pro-market:
Big law firms provide a strong reputational “bond” . . . Lawyers can be certified by private organizations, including existing bar associations, which can compete with each other by earning reputations for reliability. . . . We could have stricter pleading rules, or require losers to pay winners’ fees. Or how about this: let anybody into court, but adopt a loser pays rule for parties that come into court represented by anything less than a lawyer with the highest possible trial certificate . . . Even if only licensing would effectively deal with this problem, the licensing scheme should be designed specifically to protect the courts. Instead of requiring the same all-purpose license to handle a real estate transaction and to prosecute a billion-dollar class action, we could have a special licensing law for courtroom practice, backed by tight regulation of trial lawyers’ conduct – something like the traditional barrister/solicitor distinction in the UK.
Josh Wright has picked up the thread of the discussion at TOTM, and suggests that empirical evidence would inform this debate. Unfortunately, as both Larry and he note, there is a paucity of useful studies on point:
If I recall, the Federal Trade Commission has recently been involved in some advocacy efforts in favor of limiting the scope of unauthorized practice of law statutes. My sense is that a number of states must have relaxed unauthorized practice of law restrictions (I think Arizona is one), or similarly relaxed restrictions on lawyer licensing, such that one could directly test the impact of these restrictions on consumers in terms of prices and quality of service. There must be work on this somewhere.
Generally, I like Josh’s intuition. It would be quite useful to look to Arizona, or to other natural experiments, to help us answer the question of the utility of the Bar Exam and other licensing barriers. Surely, there is no reason in the abstract to preserve an ancient system that keeps lawyer fees artificially high, diverts millions of dollars from law students to Barbri, and causes no end of mental anguish simply because it provides a new jurisprudential lens!
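For what it’s worth, the natural-experiment design Josh has in mind could be sketched as a simple difference-in-differences comparison: the change in fees in a deregulating state, net of the change in a state that changed nothing. The states and every number below are invented placeholders, not real data:

```python
# A minimal difference-in-differences sketch on hypothetical legal-fee data.
# All figures and state labels are invented placeholders, not measurements.

# Mean hourly fees (hypothetical) before and after one state relaxes its
# unauthorized-practice restrictions, alongside a non-relaxing control state.
fees = {
    "treated": {"before": 200.0, "after": 180.0},  # e.g., a deregulating state
    "control": {"before": 190.0, "after": 195.0},  # a state that changed nothing
}

def diff_in_diff(fees):
    """Change in the treated state minus the change in the control state."""
    treated_change = fees["treated"]["after"] - fees["treated"]["before"]
    control_change = fees["control"]["after"] - fees["control"]["before"]
    return treated_change - control_change

print(f"Estimated effect of deregulation on fees: {diff_in_diff(fees):+.2f}/hour")
# prints "Estimated effect of deregulation on fees: -25.00/hour"
```

Measuring the price side really is this mechanical; the hard part, as the rest of the post argues, is the quality side.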
But I’m quite skeptical that this is an answerable question, at least in the short term. My thinking is informed somewhat by Malcolm Gladwell’s new New Yorker essay about basketball. Although Gladwell extols the virtues of statistical analysis (instead of anecdote, judgment, and valuing the joy of watching Allen Iverson triumph despite his height), the lesson I took from the piece was this:
Most tasks that professionals perform . . . are surprisingly hard to evaluate. Suppose that we wanted to measure something in the real world, like the relative skill of New York City’s heart surgeons. One obvious way would be to compare the mortality rates of the patients on whom they operate—except that substandard care isn’t necessarily fatal, so a more accurate measure might be how quickly patients get better or how few complications they have after surgery. But recovery time is a function as well of how a patient is treated in the intensive-care unit, which reflects the capabilities not just of the doctor but of the nurses in the I.C.U. So now we have to adjust for nurse quality in our assessment of surgeon quality. We’d also better adjust for how sick the patients were in the first place, and since well-regarded surgeons often treat the most difficult cases, the best surgeons might well have the poorest patient recovery rates. In order to measure something you thought was fairly straightforward, you really have to take into account a series of things that aren’t so straightforward.
I know how I would test the direct cost of legal services in Pennsylvania, and I’ve no doubt that it would go down if I abolished the state bar by fiat. But I have no good idea of how we could measure lawyer “quality.” To take something as obvious as criminal defense: some really good public defenders will lose every case for a year, but take comfort in never having lost on the top count of a single indictment. Calling a public defender who went 0-for-50 in 2005 a less “good” attorney than a prosecutor who went 50-0 would be a real problem. Facts drive litigation, and that makes quantitative empirical investigation of lawyer quality hard. And that is for attorneys who perform in public. How do you evaluate the relative strength of deal counsel on a gross level? Count the typos in the documents? Talk with the business folks and ask who got in the way less? [Obviously, deal counsel can be very good or very bad; the point is that we need metrics that are easily coded by, say, research assistants.]
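The public-defender example can be made concrete. Here is a toy illustration, with wholly invented case records, of how the easy-to-code metric (raw win rate) and an arguably better one (acquittal on the top count) can point in opposite directions for the same lawyer:

```python
# A toy illustration of metric choice, with invented case records.
# Each record: (won_outright, acquitted_on_top_count).
# Our hypothetical public defender lost every case all year,
# but beat the top count of the indictment every time.
defender_cases = [(False, True)] * 50

def win_rate(cases):
    """Fraction of cases won outright -- the metric that is easy to code."""
    return sum(won for won, _ in cases) / len(cases)

def top_count_acquittal_rate(cases):
    """Fraction of cases with an acquittal on the top count."""
    return sum(top for _, top in cases) / len(cases)

print(win_rate(defender_cases))                # 0.0 -- looks like the "worst" lawyer
print(top_count_acquittal_rate(defender_cases))  # 1.0 -- arguably excellent work
```

The coding problem isn’t the arithmetic; it’s that deciding which column a research assistant should tally is itself the contested quality judgment.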
So here is the question for our readers. Can you design an empirical project that measures both litigation and transactional practice quality as a function of licensing?