Search Engine Objectivity

(This is a guest post from Professor Mark R. Patterson of Fordham Law School. As someone who has participated in panels on antitrust with Prof. Patterson, I thought our readers would be interested in his perspective. –Frank Pasquale.)

“Search is inherently subjective: it always involves guessing the diverse and unknown intentions of users. Regulators, however, need an objective standard to judge search engines against.”

The two claims above, from an essay by James Grimmelmann, are at the center of the conflict over regulation of search engines. Some argue that Google is a powerful gatekeeper for competing firms’ access to customers, so that it must operate in an objective or neutral manner to preserve a level competitive playing field. Those who make this argument necessarily assume that we can assess objectivity or neutrality in this context. Others, like Grimmelmann, support the first statement above, arguing that there is no objective, neutral means of assessing search results, so that there is no way to regulate search engines.

The European Commission (EC), having investigated Google’s practices and concluded that there are “competition concerns,” is apparently on the pro-regulation side, because it is entertaining proposed commitments from Google to address those concerns. (The U.S. F.T.C. conducted its own investigation and closed it without action, concluding that there was insufficient evidence to support the claim that Google’s practices lacked a legitimate business justification.) Google proposed a first set of commitments to the EC in April, but the Commission received “very negative” feedback from a market test of those commitments, so it asked Google for an improved proposal. Last month, Google proposed a second set of commitments. This new proposal was not put to a market test. Instead, the EC sent private inquiries to the complainants in the case and other market participants. Nevertheless, the proposal was leaked, and it offers much food for thought.

Google’s second proposal throws light on the two sides of the search-engine debate. One of the EC’s four concerns is “[t]he favourable treatment, within Google’s web search results, of links to Google’s own specialised web search services as compared to links to competing specialised web search services.” Google’s proposal would address this concern by displaying what it calls “Rival Vertical Search Sites” near Google’s own results. The critical question, of course, is which Rival Vertical Search Sites? Here is where it gets interesting. The choice of sites to be displayed is to be based on an auction, with each rival’s bid multiplied by its “position-independent predicted click-through-rate (pCTR),” i.e., a prediction of the percentage of those viewing the link who click on it, corrected for the position of the link on the page (because more prominent locations get a greater percentage of clicks). The pCTRs would be “calculated using solely a machine-learning regression model that will rely only on objective and verifiable explanatory features and will follow standard industry practices for such models as described in the scientific literature.”
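To make the mechanism concrete, here is a minimal sketch of the kind of auction the proposal describes, in which slots are awarded by bid multiplied by pCTR. The site names, bids, and pCTR values are entirely hypothetical; the actual model is Google’s and is not public.

```python
# Hypothetical sketch of the proposed auction: each Rival Vertical Search
# Site bids, and display slots are awarded by bid * pCTR (the
# position-independent predicted click-through rate). All names and
# numbers are invented for illustration.

def rank_rival_sites(bids, pctrs, slots=3):
    """Rank candidate sites by bid * pCTR and return the top `slots`."""
    scores = {site: bids[site] * pctrs[site] for site in bids}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:slots]

bids = {"siteA": 1.00, "siteB": 0.50, "siteC": 2.00}   # hypothetical bids per click
pctrs = {"siteA": 0.10, "siteB": 0.30, "siteC": 0.04}  # hypothetical predicted CTRs

print(rank_rival_sites(bids, pctrs))  # siteB (0.15) > siteA (0.10) > siteC (0.08)
```

Note that a site with a high bid but low pCTR (siteC) can lose to a low bid with a high pCTR (siteB): clicks, not payments alone, drive the ordering, which is why the quality of the pCTR estimate matters so much.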

Google has no doubt made an “objective and verifiable” proposal in response to EU competition law’s rule that even conduct that is prima facie abusive may be permissible if it has an “objective business justification.” But what does the proposal tell us about the two statements in the epigraph above? Maybe Google’s proposal is evidence against the search-is-subjective position, and shows that effective regulation is possible. Or maybe it shows no such thing. Maybe Google’s offer says nothing about the feasibility of regulation, because maybe pCTR is actually an ineffective, even if “objective,” means of addressing the competition problem with which the EC is concerned. That is, maybe the pCTR approach is objective, but would be an ineffective remedy because it does not provide a level playing field. (There are also objections to Google’s proposed remedy because it requires rivals to pay for their placement as Rival Vertical Search Sites, but here my concern is only with the pCTR mechanism.)

What do we want from a remedy, assuming that Google’s conduct poses a competition problem? Does a remedy need to be objective? In what sense? Certainly not in the sense that it is universally agreed to be the best, or even a good, remedy. Few remedies in antitrust, or at least few behavioral remedies, would satisfy that criterion. But “objective” does not mean “correct.” Instead, it means “not influenced by personal feelings or opinions.” Google’s offer seems to satisfy that definition. Is that enough?

The offer could at least allow the Monitoring Trustee provided for in the proposal to assess whether Google is complying with its commitments, because the Trustee would receive information about how the pCTR mechanism works (though the proposal is not very clear about how much information Google would be required to provide). In that sense, the objectivity is valuable: it provides a degree of transparency, though only for the Rival Vertical Search Sites remedy, that at least allows the Trustee to determine whether Google is complying with its commitment.

The objectivity of the proposed remedy does not, however, tell us (or the Trustee) whether the remedy is correct in some other sense. In that respect, it is not clear that the EC asked the right question in its inquiry to market participants regarding the proposal: “In your opinion, is this mechanism objective, neutral and non-discriminatory or can it be subject to manipulation?” The pCTR approach could be “objective, neutral and non-discriminatory” but still not deliver the most appropriate and relevant results. Sites can produce click-throughs, and thus presumably high pCTRs, without being sites that consumers really want to visit. We all have clicked on sites that turn out not to serve our purposes. That is why Google uses a Quality Score in its AdWords advertising program, yet the proposed remedy does not include such a quality measure (though it does include two different, somewhat odd “quality-protection thresholds”). Nevertheless, the proposal states that “[t]he sole purpose of the machine-learning regression model shall be to calculate the pCTR as a means to evaluate the expected quality of a particular Rival Link.”

Regardless of whether the pCTR is a good approach to ranking, though, the very fact that Google has offered it as one raises interesting possibilities. For one thing, it should allow the Trustee to compare Google’s pCTR rankings to Google’s organic results. If the pCTR approach produces rankings that are very different from the organic results, that should raise concerns. The difference in the organic results might suggest that Google itself finds the pCTR rankings inadequate and corrects for their flaws. In that case, the remedy would arguably be shown to be inadequate, even on Google’s terms.
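The comparison suggested above could be made quantitative. One standard measure of agreement between two orderings is Spearman rank correlation; the sketch below applies it to two hypothetical orderings (the site names and rankings are invented for illustration, not drawn from any actual Google data).

```python
# Sketch of the proposed check: how closely does a pCTR-based ordering
# track Google's organic ordering? Spearman rank correlation gives a
# value near 1 when the orderings broadly agree and near -1 when they
# are reversed. All data here are hypothetical.

def spearman_rho(order_a, order_b):
    """Spearman correlation between two orderings of the same items."""
    n = len(order_a)
    rank_b = {site: i for i, site in enumerate(order_b)}
    d2 = sum((i - rank_b[site]) ** 2 for i, site in enumerate(order_a))
    return 1 - 6 * d2 / (n * (n**2 - 1))

organic = ["siteA", "siteB", "siteC", "siteD", "siteE"]  # hypothetical organic order
by_pctr = ["siteB", "siteA", "siteC", "siteE", "siteD"]  # hypothetical pCTR order

print(round(spearman_rho(organic, by_pctr), 2))  # 0.8
```

A correlation near 1 would suggest the two rankings broadly agree; a low or negative value would support the concern that Google itself does not trust the pCTR ordering.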

However, it is unclear whether, if the remedy were inadequate, the Trustee would have the authority to improve it. The proposal states that “[t]he Commission, upon advice from the Monitoring Trustee, may require changes to the detailed mechanism for calculating pCTRs or to the level of the quality-protection thresholds, if these elements . . . , without objective justification, discriminate against or exclude Rival Vertical Search Sites.” But it is not clear that changes to the “detailed mechanism” would be sufficient. It could be that the pCTR mechanism is fundamentally inadequate. The commitments should provide for the possibility of more extensive changes, or even termination, if the Trustee concludes that the pCTR mechanism is unsatisfactory.

Suppose, though, that the pCTR ranking matched Google’s organic results fairly well. Would that show that it is a good ranking approach? Not necessarily. The organic results might not be a satisfactory benchmark, because even if the pCTR rankings produce poor results, Google might use a similar approach in its organic results. After all, the sites at issue are Google competitors, so there seems no particular reason why Google would object to those competitors being downgraded in the organic results as well as the remedial proposal. If that happened, would we know? Maybe we wouldn’t have to. There may be ways in which the Trustee could use the data to provide valuable information, even if the remedy is inadequate.

First, presumably the Trustee could apply the pCTR approach to Google’s own sites and compare their rankings to Google’s rivals’. If the pCTR of rivals’ sites is better than Google’s own, then if the pCTR ranking is a valid one, the rivals’ sites presumably deserve similar prominence in Google’s display, perhaps even without having to pay for that placement. It would be awkward, it seems, for Google to simply respond that the pCTR approach is invalid, since it provided it and stated that it was intended to reveal the “expected quality” of sites. At the least, Google should have to offer an “objective justification” for displaying sites with higher pCTRs than its own sites’ pCTRs less prominently.

Second, perhaps we could rely on the market, as those who object to regulation of Google often argue. Not everyone clicks on sponsored links, either to Google’s own sites or to its rivals’. Some look only to the organic results. Google might therefore be unwilling to provide poor organic results, so that it might in fact be appropriate to rely on similarity between the pCTR rankings and the organic results as an indication of Google’s neutrality. This might not be true, as suggested above, because Google might be willing to allow its organic results to deteriorate, with the idea that doing so could just push consumers to the sponsored results. But the Trustee should be able to assess that possibility by comparing clicks from the organic results and from the sponsored ones. Moreover, if sponsored clicks predominate, the suggestion in the previous paragraph, of comparing the relationship of the pCTRs of Google and its rivals to the display of those sites, would be all the more important.

The fundamental point here is that Google has offered an objective ranking system, and by doing so has presumably represented it as one that is reasonable. It is not perfect, of course, and maybe it is not even good, but it provides a benchmark from which to develop techniques for evaluating search rankings. It provides such a benchmark because Google’s proposal of the mechanism would make it difficult for Google to simply reject comparisons based on it. It would still be perfectly valid for Google to explain why any comparisons to the pCTR benchmark were invalid, but those objections should start from the basis that the proposal itself is objective and neutral. In that sense, the proposal moves the ball forward, and it moves it forward in the direction favored by those who argue that regulation of search engines is feasible.

All this depends, of course, on the effectiveness of the Monitoring Trustee. For the proposal to provide value, the Trustee must receive sufficient information to make assessments like those suggested above. The Trustee also must have the authority to make these sorts of assessments and to force changes, perhaps significant changes, to the remedial proposals, or even to conclude that the commitments as a whole are inadequate to address the Commission’s concerns. The proposal presents some concerns in these respects. The Trustee’s powers appear to be limited to monitoring Google’s compliance with the proposed commitments, rather than to exploring the validity of those commitments. The proposal explicitly states that the Trustee “shall have no decision-making powers or powers of investigation of the kind vested in the Commission” and that “the Monitoring Trustee’s functions shall not include the power to review or resolve individual complaints relating to the ranking of websites in Google’s Search Results.” In this respect, the proposal may be unsatisfactory.

These evaluations could be performed by private parties, of course, if the pCTR model were made public, but the proposal, not surprisingly, does not contemplate public disclosure. Some value is certainly lost if only the EC and the Monitoring Trustee are privy to the results of any evaluations of this kind. But it is also true that in novel circumstances like this, there is considerable value in the opportunity for enforcement authorities to gather information about alternative means of evaluating possibly anticompetitive practices by search engines. Although competition law recognizes the importance of non-price competition like that for the quality of search results, the law has few techniques for assessing the effectiveness of such competition.

In sum, Google’s proposed commitments offer an “objective” remedy. It may or may not be an effective remedy, but the fact that Google has proposed it can be taken as accepting it as a valid starting point for considering alternative ways of evaluating the validity of search results. To advance beyond that starting point, though, the EC should require Google to allow it a reasonable scope for evaluating the validity of the remedy, rather than just providing for a narrow monitoring of whether Google is complying with it. If Google’s proposal provides that scope, it will go at least some way to providing the Commission with what Grimmelmann says it needs, “an objective standard to judge search engines against.”

2 Responses

  1. James Grimmelmann says:

    Great post, Mark; this really digs into the proposed remedy’s design. Your thinking echoes some of mine in Speech Engines (forthcoming in February). What users want is subjective, but the search engine’s own standardized assessments of what they want can provide an objective baseline from which to measure harmful or anticompetitive deviations. In the article, to keep the analysis to a reasonable length, I focus on suits by poorly ranked websites; I like your extension of similar ideas to competing services. And like you, I think that a remedy trying to protect competitors ultimately requires some kind of more general oversight of the organic search results. (This problem even bedevils Google’s settlement with the FTC.)

    I’m skeptical of the EU remedy for other reasons. Foundem makes a good point: because of the auction, many of the benefits of this system inure to Google. If you think the organic rankings are problematic because they promote Google over competitors, as the EU seems to, why not insist on modifying the organic rankings? Even if you use an auction or a pCTR to allocate slots on the results page as between competitors, then the revenue from that auction should not be flowing to Google, because the theory of inserting the links in the first place was that Google reaps an unfair advantage from control of the page. And shifting more of each results page from organic to paid does not strike me as being particularly pro-consumer.

  2. Mark Patterson says:

    Thanks very much for the comment, James. I do think our views have similarities, but they also have differences. They are similar, most importantly, in that we agree it is critical to have a baseline for assessing search engine performance. And we share, I think, a willingness to derive that baseline from Google itself. We differ, perhaps, in three ways.

    One is in what we expect from Google. You say in “Speech Engines” that Google “is free to establish its own criteria for measuring and describing quality.” I don’t think I agree, given Google’s power. I turn to Google for the baseline not because of any unwillingness to apply an external baseline, but because of an inability to identify one. That is, I think we can hold Google to its pCTR proposal not because it reflects the criteria that Google applies, but because it has represented the pCTR approach as “a means to evaluate the expected quality of a [link].”

    Two, I think we differ in how specific a baseline must be. With your focus on the subjective user experience, I think you might require that a baseline be user-specific. I am willing to start from Google’s pCTR proposal, which I suppose could be user-specific but seems not to be, because the proposal refers to an “ad-query-site combination.”

    Third, I think we differ in what we would define as bad conduct. In “Speech Engines,” you would require “subjective falsity.” (I really like your explanation of how this accounts for the F.T.C.’s apparent purpose-based approach.) I would not. Instead, I think Google could be required to justify any way in which its results differ from those of the pCTR approach, which it has represented as an objective quality measure. Interestingly, I think we then have complementary problems. I’m not sure how I would respond to Google saying, “You know, we thought that pCTR approach was a good quality measure, but it isn’t. Sorry.” And I’m not sure how you would respond to Google saying, “Our criteria for measuring and describing quality have changed. We now comply with our new criteria (which could change any minute).” I think I’m happier with my problem, because I would be happier requiring Google to justify a change from what it previously said was objectively valid than in justifying a change from its own previous subjective views.