Is fear of science costing your firm? - Part 2
In the first part of this article, we introduced the role of science and prediction in litigation and gave an overview of the jury research industry. In part two, we examine why predictive validity matters and how it is achieved to provide actionable insight.
Establishing Qualifications and Maximizing Predictive Validity
Before proceeding, let's define one more term: qualifications. In a field as rich and complex as litigation, scientific credentials alone cannot be sufficient; the simple concept of experience is also a key variable. As we shall discuss momentarily, simulation of actual trial conditions is a key component of obtaining predictive validity. In jury research, therefore, a practitioner must have a competent grasp of what such conditions are like and how they affect juror conduct. The myriad ways in which things can "go bump in the night" in litigation can only be resolved through experience if the research is to yield reliable inferences about trial outcomes.
So qualifications must entail the right kind of scientific training, but they must also include the right kind of experience in the trenches. However, these criteria are rarely checked when vetting prospective jury research practitioners. Those who purchase jury research – let's call them consumers – are almost invariably trained as lawyers and typically do not have scientific backgrounds. As a result, trial teams are simply not equipped to evaluate the soundness or rigor of scientific research, and so they do not evaluate potential jury research practitioners against scientific research training criteria.
I once watched an engaging young person be hired to conduct jury selection in a $100 million patent case. She had worked in litigation for about one year and had never picked a jury in her life. Perhaps unbelievably, in many cases litigators do not even ask about experience, let alone credentials. I cannot document these assertions except from my personal observation of actual trial teams, but I would have no reason to make them were they not in keeping with what I have actually seen.
Instead of qualifications, jury research practitioners are most often hired on other criteria: primarily relationships, liking, and word of mouth. Real-world examples of poor predictive validity in existing jury research are unfortunately common, but easily understandable given how the industry is structured and regulated (or rather, unstructured and unregulated). For example, I was recently contacted by a major electronics firm, and I asked the trial team, "Are you picking a litigation consultant based on qualifications or relationships?" The reply was "relationships." The firm chose a different jury research practitioner, and the actual jury eventually assessed damages of $23 million against this defendant, while the co-defendants had already settled out for far less. In other words, the company had the least strategically effective position of any party in the case, despite having conducted jury research – with a practitioner chosen on the basis of "relationships."
If prediction is the highest level of science, it is certainly not reasonable to expect results that rise to such a level when there is no requirement for, or investigation of, experience, qualifications, or credentials. It should be emphasized, however, that this observation is not intended to disparage those who acquire and use these services; as mentioned previously, most people are not even aware that prediction of behavior is a specialized area of research in psychology, nor that such expertise exists, is available, and is indeed effective in the field of litigation.
In conversations with litigators, the vast majority do not expect the research to be predictive. The most common viewpoint is that "jury research is more art than science." What seems to be missing is the realization that there is a choice here – it can be more science than art, if the consumers of these services understand that science is indeed available. Given the astounding diversity of training among jury research practitioners, it makes sense that scientists would become involved at some point, especially given the stakes involved. After more than thirty years of the industry's evolution, enough has been learned from a scientific perspective that guidelines can be drawn as to what makes the research predictive.
What Can Science Add?
Research-based prediction will never, of course, be perfect: Complicating factors such as uncontrollable judges and aberrant rulings, intractable witnesses, and the luck of the draw in jury selection will always represent wild cards that can lead actual trials astray. The position taken here, however, is simply this: Under optimal conditions, when research is designed and implemented by practitioners with appropriate qualifications, the accuracy of the results far exceeds intuition and conjecture, and the research in fact provides information that, in the end, saves trial teams enormous amounts of money through maximally effective trial tactics and settlement strategies (Speckart 2008, 2010).
Jury research may take many different forms, but outside of post-trial juror interviews, the process typically involves (1) recruitment of mock jurors; (2) a trial simulation containing presentations of each side; and (3) the implementation of psychological measurement and/or other research methods. Validity – that is, whether the research accurately reflects real-world conditions – is determined by these three "legs of a stool," and "breaking" any one of these legs will cause the structure to collapse.
With regard to prediction for an individual, let's say a person (on a defense trial team) says prior to jury selection, "I think we need to get rid of labor union members." So we test the question:
Are you or have you ever been a labor union member? 1. Yes. 2. No.
It turns out that the statistical relationship between this Yes/No response and later verdict preference is insignificant; that is, the item is not predictive.
So we then refine the measurement. In response to the same question, potential answers are now:
1. Yes, and I attended meetings or held an official position
2. Yes, but did not attend meetings nor hold an official position
3. Never had a chance to join a labor union
4. Had a chance to join but declined
Statistical analysis of these response options does show a significant relationship with verdict: option 1 corresponds to a plaintiff verdict, option 4 to a defense verdict, and options 2 and 3 predict neither.
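For readers who want to see the mechanics, below is a minimal sketch, in Python, of the kind of significance test involved. The counts are invented for illustration (the article reports no raw data), and a chi-square test of independence is one standard way to check whether response option and verdict preference are related; it is not necessarily the specific analysis used in practice.

```python
# Chi-square test of independence between the refined union-membership
# item and verdict preference. All counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: the four response options; columns: (plaintiff, defense) leanings.
observed = [
    [34, 12],  # 1. Yes, attended meetings or held an official position
    [20, 18],  # 2. Yes, but did not attend meetings nor hold a position
    [22, 21],  # 3. Never had a chance to join a labor union
    [11, 30],  # 4. Had a chance to join but declined
]

chi2, p_value, dof, _expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

# A small p-value (conventionally < .05) means the item is statistically
# related to verdict preference; the same test on the original Yes/No
# item would, per the example above, show no significant relationship.
```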
What has been accomplished? Through refinement of psychological measurement, we have started the process of identifying a juror profile that predicts verdict. In practice, of course, more information than labor union history alone is needed, but this exemplifies how psychological measurement enables prediction of behavior, starting from a single item and building toward a general profile by progressively adding more.
Contrast this process with what normally occurs on trial teams, in which members use experience to attempt to verify predictive relationships. In actual jury selection, trial team members most commonly rely on general subjective impressions ("I like him") rather than statistical relationships. Subjective bases for evaluating jurors should not be denigrated or dismissed as useless, but adding scientific criteria to jury selection substantially increases the power to create a useful profile.
At the group level, scientific research is cost-effective in making settlement decisions, since on average, research-derived damages estimates are far more accurate than "intuition," "guessing," or "hunches." As discussed previously (Speckart 2008, 2010), scientific rigor is required to obtain maximally precise estimates of what a jury would award. The resulting information can then be used to determine where to draw the line in settlement negotiations. If the estimate of damages is inaccurate, tremendous waste can occur through over- or under-payment to settle the case.
Traditional means of settling cases based on “experience” have been shown repeatedly to be far more costly than carrying out the research in a scientific manner to estimate the true value of the case in terms of what a jury will actually award. Numerical analyses of settlements in multiple cases with and without scientific research as a guide have shown unequivocally that not only is guessing more expensive than implementing the research, it is far more expensive.
For example, a claims adjuster was about to write a check to settle a case for $750,000 when his vice president ordered him to conduct a mock trial first. He objected, citing the fact that the exposure was under $1 million, but his supervisor ordered the project anyway. When three mock juries came in at $150,000 to $250,000, the company ultimately settled the case for $400,000 – a savings of $350,000 and a return on investment (relative to mock trial costs) of about 800%. That year the insurance company came in $83 million under budget against its loss reserves in just one department. The cost-effectiveness of scientific research methods has been documented repeatedly, but putting these lessons into practice is quite another matter.
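For readers who want to check the arithmetic, here is a back-of-the-envelope sketch. The mock trial cost shown is an assumption chosen to be consistent with the roughly 800% return-on-investment figure, since the article does not state the actual project cost.

```python
# Back-of-the-envelope reconstruction of the claims-adjuster example.
initial_settlement = 750_000  # the check the adjuster was about to write
final_settlement   = 400_000  # settlement reached after the mock trials
mock_trial_cost    = 43_750   # assumed; the article does not state it

savings = initial_settlement - final_settlement  # $350,000
roi_pct = savings / mock_trial_cost * 100        # simple savings/cost ROI

print(f"savings: ${savings:,}")   # savings: $350,000
print(f"ROI: {roi_pct:.0f}%")     # ROI: 800%
```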
Conclusions
Often we hear people say, "We will use jury research if the case does not settle," but this perspective misses one of the most valuable functions of the research – knowing how much to settle for. Knowing in advance what a jury would do with the case, through properly designed research, is a key factor in this determination, yet accurate research is typically neglected as a source of savings here because of a lack of faith in its predictive validity. This lack of faith, as we have seen, can at least in part be ascribed to a ubiquitous failure to establish qualifications. On dozens of occasions, for example, we have heard insurance claims adjusters refuse to pay for jury research because "it's not predictive." The foregoing discussion of the nature of the industry explains precisely how this conclusion became prevalent in the first place.
Ultimately, the emergence of predictive research is not an academic issue but a pragmatic one: Once it is known that science is available and that it generally works, the key is to use it in the right manner, and to insist on science where it is legitimately needed, to avoid the inevitable waste involved in guessing at damages. In the Exxon Valdez case, for example, our research predicted an award of $5.2 billion; the actual amount was $5.0 billion. Exxon's stock went up after the verdict because Wall Street had pegged the award at $10 to $15 billion – yet these are the types of estimates used to settle cases in many instances.
In addition, a vital function of the research is to induce actual trial outcomes that are better than those predicted – that is, to improve the tactical position of the trial team. One common question that arises when discussing predictive validity is, "If the research is designed to inform the litigator, should not the actual trial results be better than those observed in the research?" Compilation of our outcome statistics comparing actual and mock trial results confirms that the answer is typically "yes": actual trial results are more favorable than mock trial (research) results nearly 50% of the time, while satisfactory levels of predictive validity – results predicting either a defense verdict or damages within 5% of those in the actual trial – hover around 40%. The remaining 10% represent actual trial results that are worse than the research outcomes (a verdict reversal, or damages diverging by more than 5% in an unfavorable direction), typically as a result of the complicating factors mentioned previously.
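Purely as an illustration, the sketch below encodes this classification rule. The function name, the defense-side orientation, and the treatment of verdict reversals are assumptions made here for concreteness, not the article's actual scoring method; only the 5% damages criterion comes from the text.

```python
# Classify an actual trial outcome against the mock trial (research)
# result using the 5% divergence criterion described above.
def classify_outcome(mock_damages: float, actual_damages: float,
                     defense_side: bool = True) -> str:
    if mock_damages == 0 and actual_damages == 0:
        return "predictive"  # defense verdict predicted and obtained
    if mock_damages > 0 and abs(actual_damages - mock_damages) / mock_damages <= 0.05:
        return "predictive"  # damages within 5% of the research result
    improved = (actual_damages < mock_damages if defense_side
                else actual_damages > mock_damages)
    return "better" if improved else "worse"

# Per the compiled statistics above, roughly 50% of cases come out
# "better", about 40% "predictive", and the remaining 10% "worse".
print(classify_outcome(5_000_000, 0))          # better (defense verdict)
print(classify_outcome(5_200_000, 5_000_000))  # predictive (within ~4%)
```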
The absence of a scientific foundation in jury research turns out to be more serious than one might think. We have seen millions of dollars wasted as a result of "research" conducted by unqualified practitioners. For smaller plaintiff law firms investing their own money, the results of unreliable research can be devastating (we have seen more than one such firm destroyed by poor research). Thus, ethical implications arise: Even when large corporate defendants are the clients, whose money is being thrown away by paying more – or, for plaintiffs, accepting less – than a jury would award in order to resolve a case (excluding adjustments for risk, nuisance factors, etc.)? How much financial waste results from relying on research that lacks scientific rigor? To what extent have unqualified practitioners damaged the reputation of the research, causing yet more reliance on guesses and more over- or under-payment in poorly estimated settlements?
Fortunately, the remedy is simple: Guidelines can easily be proposed to prevent substandard research. There is no need to abandon previously used criteria ("Who have you worked with?"), but the following are minimum requirements that should be added:
- What research background do you have in psychological measurement and prediction of behavior?
- In what area of specialization did you receive your Ph.D.?
- What is the track record of accuracy in your research?
- How many years of experience do you have in this industry? How many years of courtroom experience do you have?
While many users of jury research services profess, somewhat fatalistically, that "it's more art than science," the position of the present article is that if you want science, you merely have to look for it. If a trial team is willing to roll up its sleeves, obtain qualified help, and take on the extra costs of utilizing science and experience, the economic benefits typically far outweigh the costs.
Jury research costs about as much as a fine car, say an Infiniti or Lexus. How much research would one normally conduct in choosing a car? I would submit that at least as much should be expended in choosing a jury research practitioner. Unlike the automobile purchase, however, the choice of jury research can have financial implications that extend far beyond the immediate purchase at hand.
References

Speckart, George, "Identifying the Plaintiff Juror: A Psychological Analysis," For the Defense, vol. 42, no. 9, 2000.

Speckart, George, "Trial by Science," Risk & Insurance, vol. 19, no. 13, October 2008.

Speckart, George, "Do Mock Trials Predict Actual Trial Outcomes?" In House, vol. 5, no. 13, Summer 2010.