# The Weight of Probability, the Cloud of Uncertainty, and More

This is my second post responding to comments on *Probably?*. You can view the first post here.

A.J. Sutter asks whether all tax probabilities are created equal. He suggests that for very basic advice (for example, the permissibility of a home office deduction), the advisor’s opinion might be based on how many times his clients have gotten into trouble when audited, but for more exotic structures, it would be harder to estimate the chance that a court would uphold the transaction, because there wouldn’t be as much experience for estimating prior probabilities. I think this is right, but I’m not sure it means that we necessarily *know* the probability of success of the more common transaction, in a frequentist sense. Rather, I think Sutter’s point gets at the Keynesian idea of the “weight” of a given probability: that an argument has more “weight” than another when it is based on more evidence. His point also highlights one of the problems with relying on tax advisors for your probability estimates: you’re not just getting a probability estimate, you’re also getting the tax advisor’s level of certainty about his estimate; and because the tax advisor will be liable if he really messes up, the probability number may also be shaded by his risk aversion. And you, the taxpayer, have no way of disaggregating those numbers.
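
The “weight” idea translates naturally into Bayesian terms, and a minimal sketch (my illustration, not anything from Sutter’s comment; the audit counts are hypothetical) may make it concrete: two advisors can report roughly the same probability of success, but the one whose estimate rests on more observed outcomes has a much tighter posterior — more “weight”:

```python
from math import sqrt

def beta_summary(successes, failures):
    """Posterior mean and standard deviation of a success probability,
    under a uniform Beta(1, 1) prior updated with observed outcomes."""
    a, b = 1 + successes, 1 + failures
    mean = a / (a + b)
    sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# Both advisors say "about a 70% chance of being upheld," but one
# has seen 10 audits resolved and the other has seen 1,000.
light = beta_summary(7, 3)      # mean ~0.67, sd ~0.13
heavy = beta_summary(700, 300)  # mean ~0.70, sd ~0.014
```

The means are nearly identical; the standard deviations differ by an order of magnitude. A bare “70%” from an advisor collapses that distinction, which is exactly the disaggregation problem described above.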

Sutter also points out that some taxpayers might have more uncertainty aversion than others do, and, more importantly, some *types* of taxpayers might have more uncertainty aversion than other *types*. This certainly seems possible, and it would be great to see some empirical work on this. I want to be careful, though, that we distinguish between *risk* aversion and *uncertainty* aversion. Even if a wealthy, tax-shelter-loving taxpayer likes to “roll the dice,” he still might not like to play if he doesn’t know what the odds are–that is, he might like risk and not like uncertainty. One more small point on Sutter’s comment: most ambiguities are *not* likely to be resolved, at least not as long as the standard is what a court would do if the transaction were reviewed. The vast majority of transactions are never reviewed by a court, because most audits settle.

Finally, billb is concerned that high levels of uncertainty are likely to inhibit both legitimate and illegitimate transactions. First, let’s get a better sense of what “high levels of uncertainty” means. We can imagine uncertainty as a sort of plus/minus around a particular probability of success–a cloud of probability, of sorts. For example, we might imagine a highly uncertain transaction to be a transaction about which a tax advisor can say only, “Well, I think there is somewhere between a 10% chance and a 50% chance that a court would uphold that transaction, but I can’t be any more specific than that.” (One study that examined how subjects reacted to increased vagueness about whether a tax deduction would be disallowed created conditions of uncertainty by including a statement from an accountant that “I am very unsure and hesitate to guess” about the probability that the deduction would be disallowed; to create risk, it included a statement that the given probability that a deduction would be disallowed was “exact.”)
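
To see why that kind of cloud can deter, here is a minimal sketch (my own illustration; the dollar figures and the 10%–50% interval are hypothetical, the latter borrowed from the example above) of the range of expected payoffs a taxpayer faces when the probability of success is known only to lie in an interval:

```python
def expected_payoffs(p_low, p_high, win, lose):
    """Best- and worst-case expected payoffs when the probability that
    a court upholds the transaction lies somewhere in [p_low, p_high]."""
    ev = lambda p: p * win + (1 - p) * lose
    return ev(p_low), ev(p_high)

# Hypothetical stakes: keep $100k of tax savings if upheld,
# owe $40k in tax and penalties if struck down.
worst, best = expected_payoffs(0.10, 0.50, 100_000, -40_000)
# worst case: roughly a $26k expected loss; best case: roughly a $30k gain
```

An uncertainty-averse taxpayer who acts on the worst case declines the transaction even though, at the top of the interval, it looks attractive; a merely risk-averse taxpayer with a single point estimate might not. That asymmetry is what makes uncertainty a potential deterrence tool.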

At any rate, we might be able to manipulate levels of uncertainty so that we didn’t deter legitimate transactions, depending on what we mean by legitimate. For example, if the IRS doesn’t want anyone engaging in transactions that have a less than 50% chance of being upheld, and if it turned out that individuals who were engaging in transactions likely to be struck down were uncertainty averse, the IRS could increase uncertainty on the “bad end” of the compliance spectrum, but reduce uncertainty when the chance of success was higher. (For an interesting article about how we might apply uncertainty aversion in law, albeit non-tax law (you know it kills me to recommend a non-tax article), see Tom Baker, Alon Harel, & Tamar Kugler, *The Virtues of Uncertainty in Law: An Experimental Approach*, 89 Iowa Law Review 443 (2004); it presents the results of a non-tax experiment that suggests that uncertainty with regard either to the size of a sanction or the probability of detection increases deterrence.)

Thanks for your reply. As for your point about settlement rather than adjudication, yes, of course. I also agree that we wouldn’t know the probability of success of a home office deduction or other common transaction in a frequentist sense. For that reason, though, I’m not sure I buy your distinction between risk and uncertainty aversion in the case of fancy tax shelters either. To say that a wealthy taxpayer might not want to roll the dice “if he doesn’t know what the odds are” begs the question of whether the odds are knowable. But since most rich folks probably don’t read this blog, maybe what matters more is whether they *feel confident* that they “know the odds”; they probably don’t know that the odds are unknowable a priori. In that case “risk” would be a subjective state too.

If it were me getting the tax advice, and I were thinking in such highfalutin terms at all, I’d probably consider the adviser’s basis for estimating prior probabilities — i.e., I’d be thinking in a more Bayesian vocabulary. (You hint at this with your Keynesian characterization.) I’d also be considering other chunks of information to form my own subjective probability estimates of whether the adviser is trustworthy: e.g., how many years’ experience he has, whether he’s wearing a pinkie ring, etc.

As I mentioned in a comment to an earlier post, the notion of “risk” seems like a bit of a straw man or foil, a point of comparison for contrasting an excessively theoretical approach with a real-life one favored by the person making the distinction. It’s worth noting that even Kolmogorov, who created the modern measure-theoretic foundations of probability theory (and who himself favored a frequentist interpretation), waved his hands about the application of probability to reality, at least in situations other than highly controlled, laboratory conditions. See, e.g., Jan von Plato’s _Creating Modern Probability_ (Cambridge UP 1994) at 219–221.

I admit I’m being something of a stickler in insisting on the rarity of true risk in everyday life. In an earlier post, you defend the distinction by saying that “Some things operate more like known probabilities, and some things operate more like unknown probabilities — that is, some things are more risky (e.g., coin flipping), and some things are more uncertain (e.g., presidential elections).” You make it sound like a fuzzily defined difference in degree, rather than in kind; that might be OK for informal, colloquial use. And the way Frank Knight used it may have been fine, given his polemical purpose. But this raises the question of whether Knight’s distinction is really as profound as people make out when they try to use it in other contexts. You don’t need his distinction to distinguish frequentism from subjectivism, or to see the subjectivity of many probability estimates. On the other hand, his distinction can be misleading if it becomes reified and interpreted as something more rigorous than it is.