Computer Crime Law Goes to the Casino

Wired’s Kevin Poulsen has a great story whose title tells it all: Use a Software Bug to Win Video Poker? That’s a Federal Hacking Case. Two alleged video-poker cheats, John Kane and Andre Nestor, are being prosecuted under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. Theirs is a hard case, and it is hard in a way that illustrates why all CFAA cases are hard.

House of Video Games

Kane found the bug in Game King video poker machines manufactured by IGT. He and Nestor used it to take casinos in Nevada and Pennsylvania for hundreds of thousands of dollars. Here’s how Poulsen describes their technique (for more details, see “Exhibit 1” here):

Kane began by selecting a game, like Triple Double Bonus Poker, and playing it at the lowest denomination the machine allows, like the $1.00 level. He kept playing, until he won a high payout, like the $820 at the Silverton.

Then he’d immediately switch to a different game variation, like straight “Draw Poker.” He’d play Draw Poker until he scored a win of any amount at all. The point of this play was to get the machine to offer a “double-up”, which lets the player put his winnings up on a simple high-card-wins draw. …

At that point Kane would put more cash, or a voucher, into the machine, then exit the Draw Poker game and switch the denomination to the game maximum — $10 in the Silverton game.

Now when Kane returned to Triple Double Bonus Poker, he’d find his previous $820 win was still showing. He could press the cash-out button from this screen, and the machine would re-award the jackpot. Better yet, it would re-calculate the win at the new denomination level, giving him a hand-payout of $8,200.
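The reported sequence can be sketched as a toy state machine. This is a hypothetical reconstruction of the behavior Poulsen describes, not IGT’s actual code; the class name and structure are invented for illustration. The flaw, on this reading, is that a win is stored in coins while the payout is computed at whatever denomination is selected at cash-out time:

```python
# Hypothetical reconstruction of the reported Game King flaw -- not IGT's
# actual code. A prior win, stored in units of coins, is re-awarded at
# whatever denomination is currently selected.

class GameKing:
    def __init__(self):
        self.denomination = 1.00       # dollars per coin
        self.pending_win_coins = 0     # last win, stored in coins, not dollars

    def win_hand(self, coins):
        # A win is recorded in coins at the moment of play.
        self.pending_win_coins = coins

    def switch_game(self):
        # The bug: switching games fails to clear or lock the pending win.
        pass

    def set_denomination(self, dollars):
        self.denomination = dollars

    def cash_out(self):
        # Payout is computed from the *current* denomination.
        payout = self.pending_win_coins * self.denomination
        self.pending_win_coins = 0
        return payout

machine = GameKing()
machine.win_hand(820)            # the $820 win at the $1.00 level
machine.switch_game()            # the double-up/game-switch sequence
machine.set_denomination(10.00)  # raise the stakes to the $10 maximum
print(machine.cash_out())        # prints 8200.0: the $820 win re-paid as $8,200
```

The sketch is only meant to show why the casinos saw a bug while the players saw a payout: nothing in the machine’s observable behavior marks the second award as a mistake.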

They were charged under paragraph (a)(4) of the CFAA, which punishes:

Whoever—knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value … (emphasis added)

The sticking point in the charge is whether they “exceeded authorized access.” The government’s theory is that they exceeded their authorization by using the double-up switch to increase the payout from the first win. Their response, as summarized by Poulsen, is that they “played by the rules imposed by the machine.”

The Usual Statutes

There are, broadly speaking, two ways that a computer user could “exceed[] authorized access.” The computer’s owner could use words to define the limits of authorization, using terms of service or a cease-and-desist letter to say, “You may do this, but not that.” Or she could use code, by programming the computer to allow certain uses and prohibit others.

The conventional wisdom is that word-based restrictions are more problematic. Take the infamous Lori Drew case. She created a MySpace account for a fictional teen, “Josh Evans,” to flirt with and then cruelly reject Megan Meier, a thirteen-year-old neighbor who then committed suicide. A federal prosecutor charged Drew under the CFAA, for violating the MySpace terms of service, which prohibited providing false information in the sign-up process. Drew behaved reprehensibly, but if she was a computer criminal, then so are the millions of Americans who routinely violate terms of service. As explained by Judge Kozinski in the recent case of United States v. Nosal:

Or consider the numerous dating websites whose terms of use prohibit inaccurate or misleading information. Or eBay and Craigslist, where it’s a violation of the terms of use to post items in an inappropriate category. Under the government’s proposed interpretation of the CFAA, posting for sale an item prohibited by Craigslist’s policy, or describing yourself as “tall, dark and handsome,” when you’re actually short and homely, will earn you a handsome orange jumpsuit. (citations omitted)

The scholarly consensus is similar. The leading article is Orin Kerr’s Cybercrime’s Scope from 2003, which argues that reading the CFAA to encompass terms of service violations “grants computer network owners too much power to regulate what Internet users do, and how they do it.” In contrast, argue Kerr and many others, the CFAA should be reserved for real hacking cases: as he puts it, the “circumvention of code-based restrictions.”

Unfortunately, it’s surprisingly hard to decide whether a user has gone beyond a code-based access barrier of the sort that should trigger the CFAA. Take this blog post in which Kerr tries to sort out authorized from unauthorized access using six hypotheticals about code-based restrictions. He argues that guessing a user’s password is “one of the paradigmatic forms of unauthorized access” but that guessing a unique URL is not, because “you can’t post stuff on the web for anyone to see and then just hope that only the right people happen to look at the right pages.”

This is a fine distinction indeed. According to Kerr, a user who views a confidential document by typing “eOH7KvedHxS3iYRa” into a text box on a webpage is a computer criminal, but a user who views a confidential document by typing “?pw=eOH7KvedHxS3iYRa” into the browser’s URL bar should go free. It’s the same information, being used for the same purpose, in almost the same way.
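The point can be made concrete at the protocol level. Assume a hypothetical page at example.com whose form submits its field via GET (many password forms use POST instead, which would move the token into the request body, but the information conveyed is the same). The “text box” route and the “URL bar” route then produce byte-for-byte identical requests on the wire:

```python
# Illustration with a hypothetical host and the token from the example above:
# a GET-method form field and a hand-typed query string are indistinguishable
# to the server.

token = "eOH7KvedHxS3iYRa"

# Route 1: the user types the token into a form's text box, and the
# browser serializes the form into a query string.
form_request = (
    f"GET /document?pw={token} HTTP/1.1\r\n"
    "Host: example.com\r\n\r\n"
)

# Route 2: the user types the query string straight into the URL bar.
url_request = (
    f"GET /document?pw={token} HTTP/1.1\r\n"
    "Host: example.com\r\n\r\n"
)

print(form_request == url_request)  # prints True: identical on the wire
```

Whatever distinguishes the criminal from the innocent user here, it is not anything the computer itself can observe.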

The Australian Job

To get a sense of why these cases can be so difficult, consider an Australian case, Kennison v. Daire, in which the defendant was convicted of larceny for stealing AU$ 200:

He was the holder of an Easybank card which enabled him to use the automatic teller machine of the Savings Bank of South Australia to withdraw money from his account with that bank. … Before the date of the alleged offence, the appellant had closed his account and withdrawn the balance, but had not returned the card. On the occasion of the alleged offence, he used his card to withdraw $200 from the machine at the Adelaide branch of the bank. He was able to do so because the machine was off-line and was programmed to allow the withdrawal of up to $200 by any person who placed the card in the machine and gave the corresponding personal identification number. When off-line the machine was incapable of determining whether the card holder had any account which remained current, and if so, whether the account was in credit.

But Kennison raised a fascinating defense. He argued that the bank had “consented” to the withdrawal by programming the ATM to pay out money without checking the account balance when offline. He had a point; the bank had indeed programmed the ATM that way. It wasn’t as though he’d used a blowtorch to cut a hole in the side of the ATM, or pointed a gun at a teller.
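The machine’s decision logic, as the court describes it, fits in a few lines. This is a hypothetical reconstruction, not the bank’s actual firmware; the function and limit names are invented for illustration:

```python
# Hypothetical sketch of the ATM logic described in Kennison v. Daire.
# Offline, the machine cannot consult account records, so it pays out
# up to a fixed limit on the strength of card and PIN alone.

OFFLINE_LIMIT = 200  # AU$200, per the facts of the case

def withdraw(account_balance, amount, online):
    """account_balance is None if the account has been closed."""
    if online:
        if account_balance is None or account_balance < amount:
            return 0          # closed account or insufficient funds: refuse
        return amount
    # Offline: no way to check the account; pay up to the limit.
    return amount if amount <= OFFLINE_LIMIT else 0

print(withdraw(account_balance=None, amount=200, online=True))   # prints 0
print(withdraw(account_balance=None, amount=200, online=False))  # prints 200
```

Kennison’s argument, in effect, was that the second branch is the bank speaking: the program says yes, so the bank said yes.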

Once you see the consent argument, you can’t unsee it. Perhaps MySpace “consented” to Lori Drew’s fake account by letting her create it. Perhaps IGT “consented” to Kane’s winning plays by programming the Game King to give him money. And so on. In any CFAA case, the defendant can argue, “You say I shouldn’t have done it, but the computer said I could!”

But Kennison lost. The High Court of Australia brushed off his consent argument:

The machine could not give the bank’s consent in fact and there is no principle of law that requires it to be treated as though it were a person with authority to decide and consent. … It would be quite unreal to infer that the bank consented to the withdrawal by a card holder whose account had been closed.

What this means, in other words, is that the “authorization” conferred by a computer program—and the limits to that “authorization”—cannot be defined solely by looking at what the program actually does. In every interesting case, the defendant will have been able to make the program do something objectionable. If a program conveys authorization whenever it lets a user do something, there would be no such thing as “exceeding authorized access.” Every use of a computer would be authorized.

Interpretive Layer Cake

Is it possible to salvage the code-based theory of authorization? Arguably, yes. We could say that Kennison knew that he no longer had an account with the bank, that we ordinarily use ATMs to withdraw money that we have previously deposited, that the ATM would not have let him withdraw the money if it had been online, and that if a teller had been able to observe the transaction it would have been vetoed. We could call these social norms, or background facts, or context, but by whatever name, they suggest that a reasonable person in Kennison’s position would have recognized the offline withdrawal as an unauthorized exploit rather than an authorized disbursement of money.

This analysis is normatively and legally plausible. But notice what the approach requires. It requires us to ask what a person in the defendant’s position would have understood the computer’s programmers as intending to authorize. What the program does matters, not because of what it consents to, but because of what it communicates about the programmer’s consent.

In other words, both word-based and code-based theories of authorization require an act of interpretation. To convict a defendant under a word-based theory, we must interpret terms of service; to convict a defendant under a code-based theory, we must interpret the code. This is not “interpretation” in the computer-science sense of running the program and seeing what happens. This is “interpretation” in the literary sense of ascribing a meaning to a text. Computer programs are texts, and in this context they convey meaning to human interpreters as well as to electronic ones.

Kennison’s case, then, involves an ambiguous text. The ATM that lets card-holders withdraw money from closed accounts when offline is susceptible to multiple meanings. It could be interpreted to authorize such withdrawals; it could be interpreted to prohibit them. The court resolved the ambiguity against Kennison, using some of the same interpretive devices it would apply to a statute or a contract. Indeed, we could say that the court quickly reached the limits of interpretation and exhausted the program’s linguistic meaning. It was forced to resort to an act of construction in determining the legal consequences of the ATM’s programming.

The same will be true in any other case of a code-based access restriction. The text—the program—will be capable of supporting at least two meanings. It can have the meaning that corresponds to its behavior: for example, paying out when the user switches games rather than doubling up. Or it can have another meaning, one that the programmer says corresponds to her true intentions: for example, not paying when the user switches games rather than doubling up. The ubiquity of bugs demonstrates that these two meanings will frequently diverge.

The distinction between “bug” and “feature” on which all of these cases turn is a social fact, not a technical one. That’s why Kerr can draw his line between text boxes and URLs. In his experience, text boxes for passwords are used to signal a level of security and confidentiality that complicated URLs are not. The distinction is plausible. It is also profoundly contingent on the habits of programmers — and it is far from clear that we should expect users to know about this line.

Three Game Kings

It should now be clear why code-based CFAA cases can be so puzzling. Consider Kane’s trick with the Game King. Were his jackpots “authorized?” In hindsight, IGT and the casinos would say “no”: IGT promptly released a patch once it realized how the double-up switch worked; the casinos installed it. But there is a difference between later regretting letting someone gamble in a particular way and prohibiting it at the time. Casinos regret letting card-counters play blackjack, but it’s only illegal if you use a device to keep count. So the casinos’ private intentions are irrelevant; what matters is what they communicated to the reasonable video poker player.

There are two different sets of rules at work on a video poker machine: the rules of the game of chance being simulated, and the rules of the software that simulates it. The two must correspond: it’s illegal for a casino to deploy a machine that isn’t actually random. The best argument that Kane violated the software’s rules is that his big jackpots didn’t correspond to a legal play according to the rules of Triple Double Bonus Poker. You can’t change the stakes after you’ve won the hand but before you rake in the pot.

But wait. Triple Double Bonus Poker exists only on Game King machines; why shouldn’t it work that way? And perhaps gambling software is different than other kinds of software, since the underlying game is adversarial. In offline poker, players are expected to take full and ruthless advantage of their opponents’ mistakes. Deciding whether Kane was “authorized” to play as he did requires passing judgment not just on technical questions about how the Game King works, but also on social and normative questions about the experience of regulated gambling in America.

I don’t want to deny the possibility of reaching convincing answers in this and other unauthorized-access cases. I just want to point out that by making the CFAA turn on authorization, we have committed ourselves to a messy, fact-laden inquiry, one that cannot be resolved solely by reference to the technical facts of the software in question. We have to ask how people generally use computers, and how we want them to use computers. And this messy, fact-laden inquiry is in significant tension with the goal of making easily-understood laws that draw clear lines around what is and is not allowed.

This tension may sound familiar. It is one of the problems scholars have identified with word-based restrictions on computer use. Orin Kerr has argued in his scholarship and convinced the judge in the Lori Drew case that a terms-of-service-based theory of unauthorized access is unconstitutionally vague, because it is too hard for reasonable people of ordinary prudence to learn about and obey such terms. But the same could be said about rules of conduct embedded in software. Code is law, unfortunately.

The irony runs deeper still. Words don’t have to be vague or ambiguous. Craigslist sent 3Taps a cease-and-desist letter telling it to stop scraping Craigslist’s listings. The resulting lack of authorization was crystal-clear. Words work for saying things; that’s why we use them.

In contrast, code is a terrible medium for communicating permission and prohibition. Software is buggy. It doesn’t explain itself. Not even programmers themselves can draw a bright line between features and bugs. If only there were some way for users to know, in so many words, what they are and aren’t allowed to do …

The Sting in the Tail

If we are concerned about terms of service liability under the CFAA, we should be even more concerned about code-based liability. The problem with the CFAA is not some recent mutation of a law that has outgrown its original purpose. The problem was there all along; it was inherent in the very project of the CFAA. The call is coming from inside the statute.

If we as a society care about online banking fraudsters, email eavesdroppers, botnet barons, and unrepentant spammers, then we will need to continue to declare some uses of computers off-limits, on penalty of prison. But as a basis on which to do it, “without authorization or exceeding authorized access” is remarkably unhelpful.

The basic task of an anti-hacking law is to define hacking. You had one job, CFAA. One job.

(This post is based on notes I have been making towards an article on the legal interpretation of software; it was spurred by an exchange with Tim Lee on Twitter yesterday.)

23 Responses

  1. Orin Kerr says:

    James, glad you’re focusing on these issues. They’re great fun to play with, I think.

    You’re right that this is messy and fact-laden in the marginal cases. But isn’t that equally true of the analogous trespass concepts in the physical world? For example, is it a physical trespass to enter someone else’s home? What if the door is wide open? What if the door is wide open and the home is having an open house because it’s for sale? What if it’s Halloween night? What if a homeowner invites you in thinking you’re a door-to-door salesman, but actually you plan to rob the home?

    These are the kinds of questions that courts grapple with in trying to distinguish permitted entry from entry without permission in the context of physical trespass laws. And they’re pretty fine distinctions. But we don’t generally say that the difficulty of these issues render trespass statutes inherently problematic. We don’t throw up our hands and say that no one knows when it’s okay to go inside someone else’s house. Instead, we just recognize that we’re dealing with the marginal cases, where the lines may be blurry, and that lines have to be drawn between entries that are okay and entries that aren’t. So we look for sensible ways to draw those lines.

    Given how much more complex the ways people use computers are than the ways they enter homes — and how much newer the problem is — it shouldn’t surprise us that there are equally (or even more) hard and fact-specific questions that come up in trying to draw lines between entries to computer systems that are okay and those that aren’t.

    As for Kennison v. Daire, it’s worth noting that no one claimed that using the ATM was unauthorized access. Rather, the issue was whether the bank had authorized the issuing of the money to the defendant. The charge was theft after the computer had been accessed — that is, the taking of property belonging to another — not unauthorized access to the computer.

  2. Orin Kerr says:

    Oh, and I blogged my thoughts on US v. Kane back in 2012, when the magistrate judge’s opinion came out:

  3. Thanks, Orin. I’m in broad agreement with you. We do want “sensible ways to draw these lines.” And I think your diagnosis of the CFAA’s core problem — that “without authorization” has come to bear more and more weight as the other elements have been broadened out of existence — is absolutely right. What I want to suggest, though, is that your code-based reading of “authorization” is (1) only a partial solution, and (2) a way of importing some social judgments about what kinds of conduct should be punished.

    Given that, Congress could do three things that would significantly help the courts draw fair and sensible lines. First, it could do as you suggest and pay more attention to harm elements of the CFAA, particularly in grading offenses. Second, it could do as David Thaw suggests and tighten up the mens rea elements, which would help mitigate many of the notice concerns. And third, it could say more about which forms of “access” are problematic. That may be my biggest critique of Cybercrime’s Scope: your broad reading of “access” means that “authorization” has to do more work, but some of the work you ask it to do might be clearer and easier to implement under the heading of “access.”

    As for Kennison, yes, it wasn’t strictly an unauthorized-access case. But for precisely the reasons you suggest when talking about consent to trespass, authorization was a critical question in the case. And that authorization was conveyed, if at all, through the code of the ATM. So it might be more accurate to say that the use of computers changes (and complicates) how authorization can be provided to a range of conduct, not just that it’s access to computers themselves that matters. Indeed, similar questions come up for all kinds of other computer-mediated access: access to copyrighted works under the DMCA, access to stored communications under the SCA. And without too much stretching, they also come up in trespass to chattels, browsewrap contract formation, implied copyright licenses under robots.txt, etc. Part of what I want to do when I get to the article is complicate the story of computer-mediated consent — by server owners and by users — in all of these contexts. It’s one of the three or four Big Issues that define the field of Internet law.

  4. Orin Kerr says:


    I entirely agree it’s only a partial solution, and that it imports some social judgments. In particular, I think it’s important to have narrower liability for felonies under the Act. See here:

    I tend to disagree with David Thaw’s focus on mens rea, as most of the CFAA already uses the highest and narrowest mens rea standard, that of intent. You can’t really tighten up the standard from intent, at least unless you want to use a willful mens rea (which would be a bad idea here for many reasons). In my view, you need to change the underlying element of what the person is intentionally doing.

    As for your idea that it would be clearer and easier to do some of the work under the access prong rather than the authorization prong, can you fill in why you think it is easier and clearer that way? As you know from my article, I couldn’t come up with a good way to limit access, and I instead ended up concluding that it was really all best understood as an authorization problem. But maybe that’s wrong; would love to hear your take at some point (whether here or in a future article).

  5. David’s point that “intent” should require that “the actor’s intent be specifically that their actions would violate the given restriction” strikes me as well-taken because it helps shift the focus to what the computer owner actually communicated to users about permission, rather than what the computer owner meant to permit.

    As for “access,” your contrast between password-protected webpages and obfuscated URLs strikes me as a question that might be clearer under access rather than authorization. One difference is that the former is two-step — the user goes to two webpages — and the latter is single-step — the user goes to one webpage. That’s potentially a plausible line at which to draw an “access” threshold.

    Another way of putting it is that there is an essential nexus between an “access” and the lack of “authorization” for that particular access. If we were more willing to say that an unauthorized access required crossing an access threshold from an authorized side to an unauthorized side, that would help narrow the ambiguities significantly. It would immediately eliminate cases about the impermissible use of information after an initially authorized access, for example.

  6. Great post. That must have been some twitter conversation 140 characters at a time.

    I share the concerns and thoughts – as I noted recently with concerns about scraping, for example. I wonder whether we can get any benefit from the DMCA anti-circumvention provision (which has its own issues). There, a common defense is that the measures were not effective protection, but such defenses are usually rejected because the measures are effective in the ordinary course of usage.

    So, the question under that standard would be whether the ordinary course of usage would allow the access/use. I think that probably helps the gamblers, but not the ATM withdrawal. It also might allow for different parsing of things like URL guessing.

  7. Orin Kerr says:

    James writes:

    David’s point that “intent” should require that “the actor’s intent be specifically that their actions would violate the given restriction” strikes me as well-taken because it helps shift the focus to what the computer owner actually communicated to users about permission, rather than what the computer owner meant to permit.

    I disagree. The problem is that this already *is* the intent standard. If you believe that violating TOS is a crime, then the intent standard requires that the person knows that they are violating the TOS and acts intentionally to do so. Actual notice is already required. But who cares? Actual notice that you’re violating a TOS has nothing to do with any actual harms.

    James next writes:

    As for “access,” your contrast between password-protected webpages and obfuscated URLs strikes me as a question that might be clearer under access rather than authorization. One difference is that the former is two-step — the user goes to two webpages — and the latter is single-step — the user goes to one webpage. That’s potentially a plausible line at which to draw an “access” threshold.

    Another way of putting it is that there is an essential nexus between an “access” and the lack of “authorization” for that particular access. If we were more willing to say that an unauthorized access required crossing an access threshold from an authorized side to an unauthorized side, that would help narrow the ambiguities significantly. It would immediately eliminate cases about the impermissible use of information after an initially authorized access, for example.

    I disagree. First, it doesn’t clarify things just to switch doctrinal boxes: It just takes a conclusion and rearranges it. Position A is this: “A person is authorized to visit a public webpage, but he is not authorized to then enter in a password belonging to someone else.” Position B is this: “A person doesn’t access a computer when they visit a public webpage, but he does access it without authorization if he then enters in a password for someone else’s account.” What’s the difference? You have to deal with the line drawing in one box or another. And as I argue in the NYU article, making the access prong do the work then sets up hard puzzles outside the application of the Web. For example, does sending a virus “access” the computer? You need to come up with definitions of access for each kind of Internet application, which seems pretty complicated.

  8. You’re starting to convince me on the intent point. I would say I want to go back and look at how some of the cases parse out “intent,” but in view of whom I’m having this conversation with, I’m inclined to take your word for it.

    I agree that from a functional perspective, the pile of dirt has to end up under one rug or the other. But it’s the same pile of dirt. Any complications that ensue from needing different definitions of access for different applications will also ensue from trying to determine the meaning of “authorization” for different applications. That’s the hidden issue with the Morris “intended function” test: the process of determining what the finger program’s “intended” function is (an “authorization” question) is isomorphic to the process of determining how the program ordinarily works (an “access” question). I think it’s clearer to call it the latter, because in this subset of cases the focus tends to be on how the program works when users access it in what the DMCA calls “the ordinary course of its operation.”

  9. Orin Kerr says:

    I should add that there is very little caselaw on how the intent standard applies in this setting. United States v. Carlson, 209 Fed. Appx. 181 (3d Cir. 2006), is probably the leading case on what intent means in the CFAA, but it’s dealing with intent in 1030(a)(5)(A), not in the context of intentional unauthorized access. But the issue is rarely litigated because what intent means is entirely dependent on what authorization means. If the authorization line is TOS, then the intent standard requires notice of the TOS. If the authorization line is breaching code-based restrictions, then the intent standard is notice of breaching code-based restrictions. If the authorization line is doing whatever the computer owner doesn’t like, then the intent standard is notice of doing whatever the computer owner doesn’t like. The meaning of intent is all about what authorization means; the latter essentially governs the former.

    Re your point about Morris, I look at it differently. Gaining access contrary to the way the program ordinarily works is not an access issue; it’s a classic authorization issue. Consider a physical analogy. Imagine someone enters a home by jumping down the chimney, Santa Claus style. They land at the bottom, dust themselves off, and are arrested for trespass. “Trespass?!”, they respond, “But I entered through the open chimney! I was invited to enter!” That would seem ridiculous to us because entering a home through a chimney is contrary to the intended function of a chimney. An open chimney is a way for smoke to exit a house, not a way for people to enter. But this is largely a question of social understandings and ordinary usage: If a Martian landed on earth and heard this dispute, he might think that it is a very fine distinction indeed to say it is authorized to enter through an open door but not an open chimney. But we find the line intuitive because we intuitively understand that authorization to enter a home is partially about social expectations as to what ways of entering a home are intended ones. I see that as a question of authorization, not access. Entering a home by jumping down the chimney and ending up in the living room is very much still an access into the home; it’s just an access that is unauthorized.

  10. Thanks, Orin, the chimney analogy is extremely helpful. I think it clears up how close our points of view are, and where we still disagree.

    It appears we’re in complete agreement on the idea that there’s a meaningful difference between entering via the chimney and entering via the door. We agree that the difference is small from an external Martian perspective. We agree that this puzzle can be resolved by taking an internal perspective that understands that there are different social expectations about doors and chimneys. My point in the post is that this difference can be understood in terms of what chimneys and doors communicate to visitors.

    Here’s what I see as the sticky part for your theory of “authorization.” When you say that only “the circumvention of code-based restrictions” should count as unauthorized access, you introduce a second element to the test. First, we have to decide whether the access was authorized or unauthorized. Second, we need to decide whether the access was unauthorized because it involved the circumvention of code-based restrictions. It’s this second element that strikes me as an access test in disguise. Someone whose theory of unauthorized access encompasses violating terms of service or other contract- or word-based restrictions doesn’t have to draw such a line.

    So, in your example, not every civil trespass is a crime. The offline architecture-based equivalent to your code-based test would be a law that criminalizes only breaking and entering, not entering without permission. We can say that someone who dives down the chimney lacks authorization, but what makes him a dangerous criminal is not the lack of authorization per se but the entry via the security hole in the roof of the house. Yes, a court can infer lack of authorization from the means of entry, but if that’s the only permissible source from which the court is allowed to infer lack of authorization, it’s also an access test.

  11. Orin Kerr says:

    James, I think you’ve lost me with your second paragraph. What is the “second element” of the test? To clarify, in my view, when you hook up a computer to a network and use a publicly-accessible platform to let others communicate with your machine, access to that open area of the computer is presumptively authorized. You necessarily authorize the access to that data by setting up the machine so that others can use it. On the other hand, access becomes unauthorized when you erect a code-based restriction designed to thwart that user from gaining initial or additional access that the user manages to successfully circumvent. Bypassing the code-based restriction renders that access to that data unauthorized. So yes, it’s akin to a breaking and entering idea in the physical world, but it’s still fundamentally a question of authorization because use of open architecture (like a public URL) necessarily makes access authorized.

  12. Bruce Boyden says:

    “To clarify, in my view, when you hook up a computer to a network and use a publicly-accessible platform to let others communicate with your machine, access to that open area of the computer is presumptively authorized.” That’s sort of the whole question isn’t it? That is, if a page owner can fairly be characterized as “letting” members of the public communicate with the machine, and the page itself is fairly characterized as “open,” then access is authorized pretty much by definition. On the other hand, if the page is “hidden,” and access “restricted” to those in possession of the magic word (the “nonpublic” URL, let’s say), then that sounds more like unauthorized access. But the “public/nonpublic,” or “restricted/open” distinction, just seems to reduce to a difference of opinion over whether unauthorized access can constitute something other than circumvention of a scheme of limited-distribution passwords entered in text boxes and backed by a properly implemented refusal to respond to other requests.

  13. James and Orin: Your list of hypotheticals is incomplete.

    The bugs you describe aren’t really the bugs that hackers exploit. The principle you describe is that computers do what the programmers say, but not necessarily what they want. That’s the source of a lot of ambiguity in the law.

    But the bugs that hackers exploit work on a different principle. They don’t simply access data; they cause code written by the hacker to run on the victim’s machine. Specifically, I’m talking about “buffer overflows” and “SQL injection”. You know how you are constantly patching Windows, Adobe, or Java? Those patches usually fix buffer-overflow bugs. You know the major website breaches that hit the news? Those are usually SQL injection attacks.
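    To make the SQL-injection idea concrete, here is a minimal sketch (the table name, column, and input are hypothetical, illustrative only) of how attacker-supplied text becomes SQL code that the server executes:

```python
# Illustrative only: how naive string concatenation turns attacker
# input into SQL code. The table/column names are hypothetical.
user_input = "x' OR '1'='1"  # attacker-chosen text typed into a form field

# Unsafe: the input is pasted directly into the query text...
unsafe_query = "SELECT * FROM users WHERE name = '" + user_input + "'"
# ...so the server now executes SQL the attacker wrote:
#   SELECT * FROM users WHERE name = 'x' OR '1'='1'

# Safe: a parameterized query keeps the input as data, not code.
safe_query = "SELECT * FROM users WHERE name = ?"
params = (user_input,)
```

    The point of the sketch is the distinction Robert draws: the first query runs attacker-authored code; the second merely passes attacker-authored data.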

    A typical buffer-overflow bug, when used in a URL, looks something like this:
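    (The specifics vary by target; this is a hypothetical reconstruction, with made-up byte values standing in for the injected machine code:)

```python
# Hypothetical sketch of how a buffer-overflow URL is assembled.
# All values are illustrative; this is not a working exploit.
padding = "A" * 256                     # overruns the server's fixed-size buffer
machine_code = "%90%90%90%31%C0%50%68"  # percent-encoded bytes standing in for attacker-chosen x86 instructions
url = "http://victim.example/search?q=" + padding + machine_code
```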

    That jumble of data in the URL is x86 machine code. The original programmers can see where the buffer overflow exists, but they can’t predict what happens next, because it depends upon the code the hacker has injected.

    This gives us a clear line between authorized and unauthorized access. A public website that authorizes everyone to “access data” clearly still does not authorize anyone to “run code” on the server. When a hacker creates a URL like the one above, they are intentionally/purposefully doing something they know is not authorized.

    What you are discussing confuses us coders/hackers/nerds. We have a clear idea of the line between authorized and unauthorized. I feel like we are the Martians in your discussion above. We come to Earth, and see that you guys have come up with a completely different set of rules about what is authorized and unauthorized. Your decisions seem arbitrary to us.

  14. lucia says:

    Robert David Graham

    This gives us a clear line between authorized and unauthorized access.

    This gives us a clear line in a specific instance where a clear line is obvious. But the example doesn’t clarify the line for other situations.

    Consider this semi-hypothetical (semi because it springs from something someone is actually doing; see Samuel Clay’s comment: “… but Craigslist is rate limiting NewsBlur. The reason insta-fetch works is because I obscure some things (and force a cache buster). I may just hard-code a cache buster for Craigslist since that’s the only thing that’ll fix it.”)

    I’m not entirely sure what Clay is doing and for all I know the coding is getting around a bug or something. But it does motivate my hypothetical:

    1) Person A runs a public-facing utility (e.g. Craigslist) that ordinarily people can read (e.g. the feed and the website). Loading that feed to read is “authorized”; loading the website is authorized. In fact, Craigslist wants people to do both.

    2) A writes TOS which include an “UNAUTHORIZED ACCESS AND ACTIVITIES” section. To the uninitiated non-legal scholar, non-computer nerd, these TOS appear to describe a huge range of things as unauthorized. For example:

    Any copying, aggregation, display, distribution, performance or derivative use of craigslist or any content posted on craigslist whether done directly or through intermediaries (including but not limited to by means of spiders, robots, crawlers, scrapers, framing, iframes or RSS feeds) is prohibited. As a limited exception, general purpose Internet search engines and noncommercial public archives will be entitled […]

    […] Circumvention of any technological restriction or security measure on craigslist or any provision of the TOU that restricts content, conduct, accounts or access is expressly prohibited. […]

    3) A second party, “B”, signs up for a service from “C”. The service from C is to provide B with content from A in some format that B evidently prefers.

    4) A third party, “C”, operates a system that visits A’s site, copies the content from a page, and saves it to C’s own server. That page might be a “feed” or it might be “the regular old site”. The copied material is then displayed to B, who ‘subscribed’ and who may or may not have paid a fee to view the material in the format C presents. The material might in fact be displayed to anyone and everyone who loads the proper page at C’s site.

    5) Now suppose it turns out that B, who is C’s customer, complains that content from A is not appearing expeditiously. C explains that A appears to be “rate-limiting” B’s visits, but B has attempted to implement methods to get around this and intends to implement more.
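    (To make step 5 concrete: a minimal sketch, with hypothetical names, of the simplest workaround of this sort, a “cache buster” like the one Clay mentions, which makes each request look unique to any cache or naive rate limiter keyed on the exact URL:)

```python
import random

def cache_busted(url):
    """Append a throwaway query parameter so each request for the same
    page looks distinct to caches and URL-keyed rate limiters."""
    sep = "&" if "?" in url else "?"
    return url + sep + "_cb=" + str(random.randint(0, 10**9))

# e.g. cache_busted("https://a.example/feed")
# yields something like "https://a.example/feed?_cb=<random number>"
```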

    Since this is about authorized access, I won’t ask questions like “Is C’s copying to his server and display from his server a copyright violation?” or “Is B’s request a copyright violation?” Instead, I want to know:

    Given these facts, in your view:

    * If the rate limiter did not kick in to limit visits, would you consider B authorized to visit for the purpose of copying and displaying to his subscriber C?

    * Once any ordinary user with modest coding skills (say D) encounters the rate limiter, would they be authorized to make bumbling efforts to visit more frequently than the rate limiter ordinarily permits?

    * Does the fact that a user like B has more advanced coding skills and can efficiently evade the coding restriction in a non-bumbling way change the answer to the previous question? That is: are users who can overcome rate limitations authorized to overcome something like a rate limiter?

    * Without regard to the rate limiter, if A’s TOS prohibits people from using a service like B to view their content, is C authorized to request that B access A’s content for the purpose of displaying it to C? Is the answer affected by whether or not B read the TOS? Or by whether A detected behavior they didn’t like, discovered B’s identity, and specifically informed them of the TOS?

    Note that none of these involve uploading a script to A’s machine, turning A’s machine into a zombie drone, or injecting code into A’s machine. But I’d say, as a non-attorney and a non-expert coder, that it’s pretty clear that if a written TOS says a particular behavior is not authorized, that behavior is not authorized. If a coding restriction has been put in place to inhibit a behavior, that behavior is not authorized. If the restriction has become apparent to B and, moreover, they seem to recognize that it is intended to be a restriction and not just a bug, then B knows the sort of access associated with that behavior is not authorized by A. Moreover, in my opinion, some sort of legal penalties for B’s behavior are warranted.

    As for C who merely subscribed to B’s service: I’m a bit perplexed. I would say that C’s behavior is not authorized by A. But given the way a subscription might be set up and described by B, C may easily be unaware of A’s TOS, not understand them, have no idea what actions B is taking and so on. So, I don’t think C ought to find himself brought up on charges.

    But it seems to me that on VC I read some people who represented themselves as computer nerds who know all about this insist that B’s access remains authorized if it is possible for a skilled programmer to code around a measure that would otherwise block their access, either at a particular instant in time or fully. (I don’t remember the name of the commenter who took this position, but s/he was quite insistent.)

    So, getting back to your point: as I see it, while a bright line may exist for cases involving uploading of scripts that take over a server, it doesn’t exist in all cases. Do you think your bright line helps above? If so, can you describe why it’s helpful?

  15. Orin Kerr says:

    Robert, a buffer overflow attack is a classic circumvention of a code-based restriction; it’s one of the standard examples of what we’re talking about. It works by exploiting a flaw that allows the actor to insert code where it is not supposed to go, allowing the actor to execute code he is not supposed to execute and thereby gain access to information that he is not supposed to have. In the language of the Morris decision, it gains access by using a program in a way contrary to its intended function.

  16. Orin, yeah, but I’m trying to refute the claim in the first paragraph that “all CFAA cases are hard”. Cases involving buffer overflows are easy.

    I was also trying to refute the idea that there is only one kind of computer code. Programmers write in high-level languages like “C”, but the machine interprets machine code. What’s interesting about a buffer overflow is that the programmer does not know, cannot know, how the machine will interpret the programmer’s code so as to allow execution of the hacker’s code (and likewise cannot predict the hacker’s code). You cannot interpret a buffer overflow as the programmer granting access, because the programmer doesn’t have enough knowledge about the machine code that results from the high-level code.

    I was also trying to refute his interpretation of your post that guessing URLs is always legal. I think virtually all guessing of URLs should be legal (because it’s too confusing to figure out what “authorization” means), except for very narrow cases of buffer-overflows, SQL injection, and password guessing — cases where the hacker intentionally (in the mens rea sense) gains unauthorized access.

  17. Orin Kerr says:

    Ah, got it, Robert. Thanks for the explanation, and I’m glad we agree.

  18. Robert, I agree that buffer overflows are at the easiest end of the spectrum of CFAA cases. My point in saying that all CFAA cases are hard is that they depend on a series of assumptions about how programs work and what programmers intended to allow. It is possible to imagine a buffer overrun attack which the programmer intended to allow — as part of a security course, for example — it is just overwhelmingly unlikely in most cases, for reasons that depend on the facts. It’s precisely facts like the ones you list that help us reach this conclusion; my post was designed to help bring them out into the open.

  19. Good questions, Lucia. I’m a bit confused by the relationship between B and C in your example, since C seems to be loading the content from A, but B is the one taking the steps to get around the rate-limiter. But these cases are in a much harder part of the spectrum, because there is an attempt to restrict access by code, but not a fully comprehensive one. These are hard cases on anyone’s theory of the CFAA, because the line has to be drawn somewhere. I’m not familiar with this very insistent VC commenter, but the “Could a skilled programmer defeat this system?” test isn’t and couldn’t be the law, because then even buffer overruns might be considered authorized.

  20. jon stanley says:

    With some irony … we might have come full circle to the very first EF Cultural opinion … and the concept of the “reasonable expectations of the website owner.”

  21. David Thaw says:

    I am jumping into the thread a bit late, so apologies for re-opening an earlier point. I would like to turn back to James’ and Orin’s exchange re: intent.

    James writes:

    David’s point that “intent” should require that “the actor’s intent be specifically that their actions would violate the given restriction” strikes me as well-taken because it helps shift the focus to what the computer owner actually communicated to users about permission, rather than what the computer owner meant to permit.

    Orin writes (responsive to James):

    I disagree. The problem is that this already *is* the intent standard. If you believe that violating TOS is a crime, then the intent standard requires that the person knows that they are violating the TOS and acts intentionally to do so. Actual notice is already required. But who cares? Actual notice that you’re violating a TOS has nothing to do with any actual harms.

    With respect to Orin’s position, I and (I think) the Ninth Circuit disagree. In Nosal (en banc), Chief Judge Kozinski addresses the 1030(a)(2)(C) distinction, describing it as “the broadest provision [of the CFAA], which makes it a crime to exceed authorized access of a computer connected to the Internet without any culpable intent. Were we to adopt the government’s proposed interpretation, millions of unsuspecting individuals would find that they are engaging in criminal conduct.”

    I do not find the Ninth Circuit’s language interpreting the current (a)(2)(C) intent as equivalent to the standard I propose (“specifically that [they expect and desire] their actions would violate the given restriction” — restating James’ formulation of my proposal). Quite the contrary — I read this language to be the court asserting that (a)(2)(C) has a nearly tautological intent element — if you engaged in the action, you therefore intended any/all possible results therefrom.

    This is, in my mind, a rather absurd result for an intent standard. It obliterates many of the (important!) intent distinctions drawn in the criminal law. For example, in crimes-against-persons, the difference between: 1) swinging my arm with the intent of slamming shut my (heavy) car door and *accidentally* striking someone’s face; and 2) swinging my arm with the intent of striking someone in the face (to cause them physical injury).

    Kozinski’s opinion then goes on to note:

    Minds have wandered since the beginning of time and the computer gives employees new ways to procrastinate, by gchatting with friends, playing games, shopping or watching sports highlights. Such activities are routinely prohibited by many computer-use policies, although employees are seldom disciplined for occasional use of work computers for personal purposes. Nevertheless, under the broad interpretation of the CFAA, such minor dalliances would become federal crimes. While it’s unlikely that you’ll be prosecuted for watching Reason TV on your work computer, you could be. Employers wanting to rid themselves of troublesome employees without following proper procedures could threaten to report them to the FBI unless they quit. [6] Ubiquitous, seldom-prosecuted crimes invite arbitrary and discriminatory enforcement.

    [6] footnote six is particularly important because it describes the fact that this employer-response threat is not hypothetical: see Lee v. PMSI, Inc., No. 8:10–cv–2904–T–23TBM, 2011 WL 1742028 (M.D.Fla. May 6, 2011). The fact that this case was dismissed does not, in my mind, at all lessen the probability that aggressive employers will (and do) engage in such threats.

    Actual notice is, to me, at the *core* of intent. In the Drew opinion, and if I recall correctly, in Orin’s Minn. L. Rev. piece on Vagueness Challenges to the CFAA, the concept of “fair notice” is essential to surviving a void-for-vagueness challenge. The Federal Trade Commission (and others) have repeatedly criticized lengthy Terms of Use, Privacy, and other Policies that are beyond the practical readability of the average user. At the same time, there is good empirical work (which I cite in my J. of Crim. L. & Criminology piece, and am happy to post links to here) on layered notices and other methods of providing effective notice to users upon which a theory of criminal liability might be based. Civil liability may still result from the underlying “deep” terms of the full contract — I do not (in this work) take a position on that point — but criminal liability, in my mind, requires a higher degree of notice.

    Finally, I note that the intent-based approach to CFAA reform still leaves a substantial amount of “wiggle room” for unusual results, hence why the second element of my proposal requires that the act in question *also* be either:

    1) in furtherance of something on a list of activities Congress specifically has identified as impermissible; or

    2) in furtherance of another act otherwise criminalized by state or federal law (essentially “glomming on” to the state statutes, as my colleague Rebecca Bolin suggested).

    The full draft of the proposal/paper is linked on my website for folks who are interested.

  22. Orin Kerr says:


    You are misreading Judge Kozinski’s Nosal opinion.

    When Kozinski writes that 1030(a)(2) “makes it a crime to exceed authorized access of a computer connected to the Internet without any culpable intent,” I’m pretty sure he means just that there is no requirement beyond the intentional unauthorized access. He’s comparing (a)(2) to (a)(4), which has the added elements that the unauthorized access must have an intent to defraud and must “further[] the intended fraud.” The government argued in its briefs in Nosal that the Court didn’t need to get into the overbreadth of (a)(2) because Nosal involved (a)(4), which required intent to defraud. In that passage, Kozinski was just noting that the same language applies to another part of the statute that does not require intent to defraud. It’s true that Kozinski uses the phrase “intent,” but the comparison to (a)(4) suggests that he just means that there isn’t an intent-to-defraud requirement. Thus, critically for our discussion, he’s not making a comment on what intent means when it is the mens rea associated with the unauthorized-access prong.

    Oh, and it might be of interest to readers (if there still are any) that the government moved to dismiss the CFAA counts in the Kane video poker case that is discussed in the main post. The court then dismissed the counts on the government’s motion.

  23. lucia says:

    since C seems to be loading the content from A, but B is the one taking the steps to get around the rate-limiter.

    Here’s how it works
    • C joins B’s service and clicks “subscribe to A”.
    • B then goes and collects A’s content, makes copies and stores those copies on B’s servers. B makes these visible to the public.
    • C then visits B’s server, where he loads B’s copies of A’s content. So, C is reading content original to A (possibly copyrighted content), but that content was fetched by B and stored by B.

    If you think of B as being something like a “feedreader”, then C could be a person who subscribed to the feed. A is the content originator (e.g. Craigslist, NY Times, etc.). But bear in mind: more than the “feed” is being collected, and the TOS at least seem to say this shouldn’t be done. (Let’s assume for the hypothetical that the TOS really say this shouldn’t be done.)

    There can be many “C”s in this system, and they don’t necessarily know the details of what B does.