Consumer surveys have long been relied on in trademark infringement cases. Recently, courts have noted that surveys have become “de rigueur in patent cases” as a tool to evaluate and quantify damages relating to alleged infringement. Surveys have also been increasingly used in class certification matters and antitrust cases.
Despite this trend, some remain skeptical of the “probative significance” of survey evidence in litigation. To counter these concerns and to capitalize on the evidence that primary research can yield, attorneys can turn to reputable survey experts for the application of academically rigorous and unbiased methodologies. Adhering to best practices in survey design and implementation, and developing confirmatory evidence, can strengthen the weight given to survey results.
More specifically, in their litigation work, these experts should consider three questions.
Experts must demonstrate that the appropriate questions are asked clearly, that respondents understand the survey questions as intended, and that respondents can complete the survey without fatigue.
An appropriate and admissible survey should be grounded in academically rigorous and unbiased methodologies. Once the key questions are identified, the survey expert should consider the most appropriate approach. For example, to assess the impact of particular product logos or claims in advertising in a trademark or consumer confusion matter, a test and control experimental design is often the best choice, as it can isolate whether there is a causal link between the logos or claims and consumer behavior. Such a method can be used to isolate a causal influence on consumer perceptions and preferences of an element of a product, advertisement, or other marketing material.
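As a purely illustrative sketch, the logic of such a test and control design can be expressed in a few lines: one cell of respondents sees the stimulus with the disputed element, a second cell sees it without, and the difference in response rates estimates the element's causal effect. All figures and cell sizes below are invented for illustration.

```python
# Hypothetical test/control analysis: the test cell sees an ad with the
# disputed logo; the control cell sees the same ad with the logo removed.
# The difference in "likely to purchase" rates estimates the logo's
# causal effect. All numbers are invented.
from math import sqrt

test_yes, test_n = 180, 400        # test cell: saw the logo
control_yes, control_n = 120, 400  # control cell: logo removed

p_test = test_yes / test_n
p_control = control_yes / control_n
effect = p_test - p_control        # estimated causal lift

# Two-proportion z-test: is the lift statistically distinguishable from zero?
p_pool = (test_yes + control_yes) / (test_n + control_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / control_n))
z = effect / se

print(f"lift = {effect:.3f}, z = {z:.2f}")
```

Because the two cells differ only in the presence of the tested element, a statistically significant lift can be attributed to that element rather than to other features of the advertisement.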
If the task is to evaluate the relative importance or value of various attributes to consumer choice in, for example, a patent infringement case, a conjoint study – a market research technique used to determine how people value the features that make up a product or service – or other choice-based method may be helpful.
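A minimal sketch of the conjoint idea, using an invented two-feature example (the feature names, design, and ratings are hypothetical, and real conjoint studies use far richer choice-based designs): with an orthogonal full-factorial design, each feature's part-worth utility can be read off as the mean rating of profiles with the feature minus the mean rating without it.

```python
# Illustrative ratings-based conjoint over two binary features of a
# hypothetical device (a camera and GPS). Profiles and ratings are invented.
profiles = [
    # (has_camera, has_gps, mean respondent rating on a 1-10 scale)
    (0, 0, 3.0),
    (1, 0, 6.0),
    (0, 1, 4.0),
    (1, 1, 7.0),
]

def part_worth(feature_index):
    """Mean rating with the feature minus mean rating without it."""
    with_f = [r for *f, r in profiles if f[feature_index] == 1]
    without = [r for *f, r in profiles if f[feature_index] == 0]
    return sum(with_f) / len(with_f) - sum(without) / len(without)

camera_worth = part_worth(0)  # camera adds ~3 rating points
gps_worth = part_worth(1)     # GPS adds ~1 rating point

# Relative importance: each feature's share of total estimated utility
total = camera_worth + gps_worth
importance = {"camera": camera_worth / total, "gps": gps_worth / total}
```

The relative importance shares are the kind of output that can then be tied to damages questions, such as how much of a product's appeal is attributable to an allegedly infringing feature.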
A survey will have greater probative value if the expert can document and support the choice of sample, questions, and method, while minimizing the possibility or appearance of bias.
Survey evidence, like most expert-presented evidence, is generally sponsored by a party in litigation. To avoid biases, the right people must be asked the right survey questions in the right way. This encompasses multiple design choices and requires the expert to demonstrate that the survey does not drive results in a particular direction.
Critically, the expert must define, target, and sample from the segment of the population whose beliefs are relevant to the issues in the case. Even if every other step is taken appropriately, surveying the wrong people makes the results likely to be irrelevant, and the data may be excluded by the court.
Recent court opinions also indicate that transparency regarding the design process can be critical to admissibility. To demonstrate the relevance of particular design decisions, for example, the survey may be pretested before a full launch to increase the likelihood that questions are clear and to minimize the possibility of unintended implications, such as a respondent’s ability to guess the sponsor or purpose of a study. Further, it may be helpful to demonstrate that potential biases have been minimized by conducting surveys and experiments in a manner that is “double-blind,” thus eliminating the chance that the interviewer could influence the results. The survey expert’s decision to use open-ended or closed-ended questions can also have implications in terms of relevance, analysis, and perceived bias.
Analysis Group and affiliate Ravi Dhar of the Yale School of Management assisted AT&T in its acquisition of DIRECTV. Professor Dhar, supported by a team led by Managing Principals T. Christopher Borek and Rebecca Kirk Fair and Vice President Kristina Shampanier, developed, conducted, and analyzed a survey study examining consumer attitudes toward bundled Internet and television services. AT&T and DIRECTV cited the outcome of the study in their applications to the Federal Communications Commission (FCC), pointing to the benefit to consumers from bundled services. The FCC and U.S. Department of Justice approved the acquisition.
If survey results are confirmed with other data, the convergent results may help to strengthen the survey’s evidentiary weight and support distinctions between the survey and the marketplace.
To demonstrate that the results of a survey are consistent with other data or economic theory, survey experts and their teams can also provide complementary evidence. For example, surveys and market research conducted in the normal course of business by the parties in suit or by third parties may support (or refute) the findings of a survey conducted in a litigation context. Similarly, data analyses may provide results consistent with those found in a survey. For example, if a conjoint design is used to evaluate several product features, and the market price for one or more of the tested features can be determined from transaction data, comparisons can be drawn to confirm or scale survey results to match with historic pricing.
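The scaling step described above can be illustrated with a toy calculation (all dollar figures and feature names are invented): if transaction data reveal the market premium for one tested feature, the ratio of that premium to the survey-implied value provides a calibration factor for the remaining features.

```python
# Hypothetical calibration of conjoint results to transaction data.
# Survey-implied dollar values for two features; the market premium for
# feature_a is observable in sales data. All figures are invented.
survey_value = {"feature_a": 30.0, "feature_b": 12.0}  # conjoint-implied $
market_premium_a = 20.0  # observed price premium for feature_a

# Ratio of observed premium to survey-implied value calibrates the scale
scale = market_premium_a / survey_value["feature_a"]
calibrated_b = scale * survey_value["feature_b"]  # market-scaled value, ~$8
```

Agreement between the calibrated values and any independently observed prices lends convergent support to the survey; a large divergence, by contrast, may flag a design problem.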
Fact witnesses, deposition testimony, and the evidentiary record – as well as economic theory – can also corroborate survey results. For example, communication between customers and manufacturers, or third-party product reviews, may indicate that particular features are of importance in a purchase decision. But if these features appear irrelevant in the survey, one might conclude that the survey design was flawed.
Surveys have been shown in some circumstances to be a useful method for gathering evidence, and can be particularly valuable when other sources of data are not available. Nonetheless, courts have been and may remain skeptical of surveys – and methodological flaws can hurt both admissibility and evidentiary weight. Recent decisions relating to “gatekeeping” and survey evidence, along with other high-profile litigation outcomes, highlight the necessity of adherence to best practices at every step. ■
Adapted from “3 Questions To Ask When Using Surveys In Litigation,” published in Law360, May 15, 2015.
Rebecca Kirk Fair is a Managing Principal in the Boston office. Laura O’Laughlin is a Senior Economist in the Montreal office.
From Analysis Group Forum: 2015 Year in Review