The Use of Evidence and Cost Effectiveness by the Courts: How Can It Help Improve Health Care?

By David M. Eddy, Kaiser Permanente Southern California

This special issue contains articles from the expert meeting "Evidence: Its Meanings and Uses in Law, Medicine, and Health Care."

What Can Health Care Do to Address These Problems?

These issues raise grave problems for health care, and it is going to take every bit of intelligence, honor, and courage to address them. Both conceptual and practical steps need to be taken.

 

Conceptual Steps

The first and most important conceptual step is to resolve the issue of conflicting philosophies and standards of care. This does not necessarily require that one position be agreed on and the others discarded. To be sure, agreement on a single vision for health care would be desirable for the simplicity, consistency, and public confidence it would create. But if that is not possible, which appears likely to be the case for at least the next several years, this conceptual step only requires that each position be recognized and respected as ethical and legitimate. The key is for all the actors in health care to acknowledge that there can be different settings in which health care is delivered, and that the standards of care can be different in those different settings.

To understand what this might look like let us use the two possible settings introduced earlier as examples. In the setting of limited budgets, costs would be considered as critical variables in determining the standards of care, and cost-effectiveness would be used when data and methods permit. Because ineffective treatments waste money and expose patients to unnecessary risks, in this setting let us also imagine that there would be a requirement for good evidence before a new treatment is covered or an old treatment is affirmatively recommended. Old treatments that are not supported by good evidence would still be covered but would not be affirmatively recommended through guidelines or as a standard of care. In the other setting, which has an open-ended budget, costs would not enter medical decisions or determinations of standards of care. In this setting we can also imagine that the standards of evidence might be looser. A new treatment might be covered if it is possibly effective and/or it is the patient's last hope. An old treatment that is not supported by good evidence might be made the standard of care if it is "time honored" and there is a general consensus about its appropriate role.

Each of us might have a personal belief about which of these two settings is preferable, but our immediate need is not to force agreement on a single position, but to acknowledge that multiple settings can exist, and the standards of care can appropriately be different in the different settings. In fact, the existence of the two settings and the balance between them should ultimately be determined by people and patients: the ones who actually receive the treatments, live with their outcomes, and pay their costs. People who are willing to pay any costs and want to receive investigational treatments (and are willing to pay the higher premiums to have them covered) will choose plans that operate with an open-ended budget, with a relatively loose standard of evidence. People who balk at paying higher costs and only want to pay for treatments that are known to provide benefit will choose plans that operate under a limited budget, with a tight standard of evidence. Both settings and their respective standards of care are correct and ethical, provided they accurately reflect what people want and are willing to pay for.

This step of acknowledging the existence of more than one setting and different standards of care should greatly improve the ability of the courts to resolve our disputes. If experts respect the existence of different settings and different standards of care, and if they are careful to provide testimony that is appropriate to the settings in which the disputes arose, they can concentrate on the matters at hand and spare the courts confusing and conflicting testimonies that reflect differences in philosophies more than differences in the facts or evidence.

The second step is to acknowledge that there is no such thing as a community standard. We can still talk of a "standard of care" or "standards of care," with all that that implies about coverage and malpractice. What we need to drop are the assumptions that the majority of physicians are currently following a single standard of care, and that one can learn the standard of care by observing the practices of a community of physicians. This step by itself will be enormously useful in everyone's attempts to reduce inappropriate care and waste. Currently, many physicians claim to feel that a particular practice is inappropriate, and that they would personally prefer not to do it but are compelled to do it out of fear that they will be compared to a community standard. The result is that they all do it, which in turn makes it the community standard, which further entrenches the practice. This conceptual step will break that chain.

The third conceptual step is to reaffirm that ultimately truth is learned from empirical evidence. This may seem like a trivial step to take, given that the scientific revolution has been under way for more than half a millennium, but there are still vestiges of the thought that truth is what great authorities say it is. A practical implication of this step is that we should all be humble, even skeptical, about our ability to form accurate beliefs subjectively. We should challenge ourselves and consciously hold back on locking in a belief until we have seen a systematic review of the evidence.

A fourth conceptual step is to acknowledge that the subjective beliefs of experts or a consensus of subjective beliefs cannot necessarily be taken at face value. This does not imply that all experts are wrong. Indeed, it is probable that most are right most of the time. This step only states that an expert or consensus of experts is not necessarily right, and there is no easy way to determine when they are and when they are not (without looking at the actual empirical evidence). It is also important to understand that this step does not mean we do not need experts. They will always be needed to survey and interpret the evidence. It is only to say that when experts offer conclusions about the standard of care, they need to describe the underlying evidence.


 

Practical Steps

These conceptual steps lead to several practical steps. The first has already been introduced: we need to be much more careful about anchoring beliefs and claims to empirical evidence. The ideas here were first described in the context of "evidence-based guidelines" (Eddy 1990) and have subsequently blossomed into "evidence-based medicine" (Guyatt 1991; Sackett and Guyatt 1992; Sackett et al. 1997). For the development of guidelines and standards of care, "the evidence based approach explicitly describes the available evidence that pertains to a guideline and ties the guideline to evidence. . . . It consciously anchors a guideline, not to current practices or the beliefs of experts, but to experimental evidence. The usual question is whether the practice under consideration has been shown to be effective in improving the most important outcomes. Merely providing evidence as background material or peppering a guideline with occasional references to support particular positions do not count" (Eddy 1990: 1272). The most commonly cited definition of evidence-based medicine is less directive about the balance between subjective reasoning and empirical evidence but still emphasizes the role of evidence: "The conscientious, explicit and judicious use of current best evidence in making clinical decisions about the care of individual patients" (Sackett et al. 1996: 71).

The second practical step is for those who fund and conduct clinical research to improve the collection and interpretation of empirical evidence. The problem is not just the paucity of good evidence but the mixed quality of the evidence that does exist. Empirical evidence can be very difficult to collect and interpret. It is easy to be misled, especially if a misinterpretation coincides with one's prior beliefs and self-interest. While a full description of experimental methods would obviously be inappropriate here, two things are especially important. One is the need for controls to minimize the effects of patient selection biases. The other is the need to track a treatment's effect all the way to health outcomes to avoid being misled by changes in biological outcomes. A high proportion of the evidence collected and reported by researchers, and cited by proponents of treatments, is at best worthless, and at worst misleading. This needs to be corrected.

None of the previous steps will be fully effective unless plans, purchasers, and patients communicate much more precisely about what a plan is expected to cover. The main vehicle for accomplishing this is the benefits language in contracts. Thus the third practical step is to improve the contracts that describe what plans will cover. This should involve the following:

  • Include explicit descriptions of (1) the standard of evidence that will be used to determine coverage and to design guidelines, and (2) the role that costs and cost-effectiveness analysis will play in coverage determinations and guidelines.
  • Describe any methods and processes that will be used to make specific determinations.
  • Include examples to illustrate the principles, methods, and processes.
  • List specific treatments that are likely to be controversial.
  • Make the language as clear as possible. This includes using big print, paying attention to different language needs, discussing the coverage with people face to face, providing counselors, and avoiding any advertising or other public statements that may mislead people about the extent of coverage. The goal is to achieve truly informed consent that will not only create accurate expectations but that will hold up in court.
  • Create an appeals process that is fair to patients but is consistent with the terms of the contract. The process will involve a review by external impartial experts, but the instructions to the experts will be different depending on the setting and the terms of the contract. For example, in a setting that is committed to practicing evidence-based medicine, the question to reviewers should be: What is the evidence and does it meet the standard described in the contract? In a setting that wants a looser approach to evidence, the question can be: In your opinion, is this treatment appropriate?

It is difficult to overemphasize the importance of this step. This is where plans can make clear the type of setting in which they are practicing, the role of costs, the approach to evidence, and the implications of all these for coverage and the standards of care. And this is where people will make their choices about the type of coverage and care they want to receive and the premiums they want to pay. If people prefer a different setting than is being offered by a particular plan, they can and should seek another plan that offers the setting, care, and costs they prefer. But once the plan has made its commitment in the contract, and once a person accepts that commitment, both parties should be held to it by the courts.


 

What Can the Courts Do to Help?

If we imagine for a minute that "the courts" is an entity that can respond to social needs and make changes, there are several things it can do to help support the conceptual and practical steps health care must take.

The first is to make a similar commitment to empirical evidence. Experts will continue to play the critical role of introducing evidence to the court, but it is the expert's role as interpreter of the evidence rather than holder of a belief that is desired. When an expert expresses an opinion, the court can and should ask to see the evidence behind that opinion.

The second is that the court should not assume that the validity of an expert's testimony is correlated with the expert's training, affiliations, or other indirect measures, even experience. In fact, these characteristics may well correlate better with personal and professional biases than with validity. For example, the principal investigator of the world's largest research project demonstrating the value of some new treatment arguably has the world's biggest incentive to have that treatment covered. Furthermore, credentials and experience in some medical specialty do not necessarily mean that an expert is trained in the formal methods of interpreting evidence, even in that specialty. As a reminder, experts in high-dose chemotherapy, including the principal investigators of some of the most prominent trials, testified vehemently that the treatment was effective, only to be proved wrong. To the greatest extent possible, the credibility of an expert should be based on the credibility of the evidence he or she presents.

The third thing courts can do is honor the contract. Specifically, to the extent that a plan's contract describes the standards of evidence it will apply, and the role of costs in coverage decisions and guidelines, the court should respect what the contract says and ensure that the case is evaluated within the standards and methods agreed to in the contract. If the contract specifies a strict requirement for evidence of effectiveness before covering a new treatment, the court should look for evidence that meets that standard. If the contract says that costs will be considered and cost-effectiveness will be used, the court should accept and apply the results of cost-effectiveness analyses. If the contract implies that a treatment will be considered noninvestigational and covered if the patient's personal physician believes it is in the patient's best interests, then that should be the test applied by the court. Similarly, patients should be expected to make reasonable attempts to read and abide by the contracts they sign. In deMeurer v. HealthNet, Mr. deMeurer admitted he "threw it in a pile with all the other papers" without reading it (Larson 1996). That should not invalidate the contract.

A fourth step is to strive for consistency and predictability. There is no way that health care can reduce variations in practice patterns and high rates of inappropriate care (two of the biggest contributors to bad quality and waste) without making decisions that will be controversial. (Notice that every instance of inappropriate care is initiated by someone who will argue it is appropriate.) In order for individual physicians, groups of physicians, or plans to make controversial decisions, they have to be confident that they will be treated fairly if they end up in court. A big part of the burden of achieving consistency rests on the plan's contract; court decisions cannot be consistent until contracts are clear. But the courts themselves, even setting issues of ERISA aside, also have a responsibility for increasing consistency. Judgments that bewilder outsiders can destroy confidence that the courts can address difficult cases fairly.

The importance of this problem calls for some examples, all drawn from HDC/ABMT for breast cancer. In one case the court ruled in favor of the plaintiff on a finding that the following statement in the coverage policy was ambiguous: "Autologous bone marrow transplants or other forms of stem cell rescue (in which the patient is the donor) with high dose chemotherapy and irradiation are not covered" (Bailey v. Blue Cross & Blue Shield of Virginia, No. 94-2531 [4th Cir. Oct. 11, 1995]). In stark contrast, a different court found the following criterion for excluding coverage to be sufficiently precise to find for the defendant: A treatment is "experimental or of a research nature" if it is not "generally accepted medical practice" (Peruzzi v. Summa Medical Plan, No. 98-0069 [6th Cir. Feb. 27, 1998]). In an especially puzzling case, to avoid any possibility of bias, HealthNet asked a group of cancer experts from UCLA, UCSF, and Scripps to define for themselves the indications for high-dose chemotherapy for breast cancer. A case of breast cancer occurred that the experts agreed did not meet the indications they themselves had defined. HealthNet held the line, only to lose the case, have to pay $1.3 million in damages, be castigated by the court for "extreme and outrageous behavior [that] exceeded all bounds usually tolerated in a civilized society," and be featured on the front cover of Time (Larson 1996). Needless to say, we need more order than this if we are to successfully address the issues of evidence and cost-effectiveness.

A final recommendation is for the court to use court-appointed, neutral experts. It is difficult enough for impartial people who specialize in the interpretation of evidence to make sense of the existing research. It can be hopeless for people who have less experience with numbers. Reliance on competing experts in an adversarial mode is more likely to confuse and distort than clarify. Some of the legal issues raised by this "gatekeeper" model are discussed by Daniel W. Shuman in this issue.


 

Precedents and Prospects

This is a long list of difficult recommendations. Some of the precedents are hopeful; some are discouraging; and some are blatantly naive. One of the encouraging areas is the potential of the court to press experts for the empirical evidence that supports their beliefs. Certainly there are many precedents for demanding to know the empirical evidence behind an expert's opinion. A forensic expert who specializes in footprints is not just asked: "Was the accused at the scene of the crime?" He or she is grilled on the evidence behind any answer to a question like that. In health care cases, the courts appear to be more deferential to medical experts and require less explanation or empirical evidence. But the precedent to ask much more is there.

The precedents for applying or accepting cost-effectiveness analysis (CEA) are less encouraging to me. As discussed by Peter D. Jacobson and Matthew L. Kanna elsewhere in this issue, the use of CEA in other types of cases is mixed. In some cases an appeal to cost-effectiveness provides a successful defense; in others it kills a defendant's case. If there is a pattern at all, it appears to be that cost-effectiveness analysis is used when it can support the little guy against the big guy. To the extent that this is true, it is not very encouraging to health plans. From the perspective of health care, the most promising case law is the reasoning of Judge Learned Hand in United States v. Carroll Towing Co. (159 F.2d 169 [2d Cir. 1947]): there is negligence when the cost of preventing an accident is less than the probability of the accident multiplied by the gravity of the resulting injury. Judge Hand's formula can be adapted nicely to the types of problems seen in health care. But there appears to be considerable variability in the extent to which Judge Hand's formula is actually used, and its transferability to health care is unclear.
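Judge Hand's formula is simple enough to state directly. The sketch below, with purely hypothetical numbers, is one way to make the adaptation to health care concrete: omitting a precaution (or a treatment) is negligent when its burden is less than the expected loss it would have prevented.

```python
def negligent_to_omit(burden, probability, loss):
    """Judge Hand's Carroll Towing test: failing to take a precaution is
    negligent when its burden (cost) is less than the expected loss,
    i.e., the probability of the injury times the gravity of the injury."""
    return burden < probability * loss

# Hypothetical numbers: a $500 precaution against a 1-in-100 chance of a
# $100,000 injury (expected loss $1,000) is negligent to omit ...
print(negligent_to_omit(500, 0.01, 100_000))   # True

# ... but omitting the same precaution against a 1-in-1,000 chance of the
# same injury (expected loss $100) is not.
print(negligent_to_omit(500, 0.001, 100_000))  # False
```

The formula's appeal for health care is that it already weighs cost against probability and magnitude of harm, which is exactly the trade-off a budget-limited setting must make; the open questions the text raises are about how consistently courts actually apply it.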

The most encouraging new development is the U.S. Supreme Court's recent unanimous ruling in Pegram v. Herdrich (120 S. Ct. 2143 [2000]). Herdrich, a member of the physician-owned Carle Care HMO, had an inflamed abdominal mass. The physician, Pegram, determined that it did not constitute an emergency and, instead of ordering an immediate ultrasound at a nearby hospital, scheduled an ultrasound at a Carle Care hospital eight days later. In the meantime, Herdrich's appendix ruptured, requiring emergency surgery. Because Carle Care rewards its physician owners with a year-end bonus for controlling costs, Herdrich argued that the financial incentives compromised the medical care she received and constituted an inherent breach of an ERISA fiduciary duty. The case is a good example of the type of decision that has to be made in a budget-limited setting. A physician chose a course of treatment that has a slightly higher risk, because the magnitude of the risk was judged to be too small to justify the cost. It has the added feature that the physician making the decision would share in the savings.

The Court found for the physician. The immediate reason was that the plaintiff had failed to state a claim under ERISA. But Justice David Souter's opinion for the Court went out of its way to include a policy discussion of the use of rationing and cost-cutting incentives by HMOs. His opinion made it clear that it is appropriate for HMOs to ration care, control costs, and even use physician financial incentives to accomplish these goals. "Imposing federal liability for efforts to reduce costs—the entire purpose of managed care—could destroy HMOs altogether." He continued: "No HMO organization could survive without some incentive connecting physician reward with treatment rationing. . . . Inducement to ration care goes to the very point of any HMO scheme, and rationing necessarily raises some risks while reducing others." The Court did not specifically address the use of CEA, but the inclusion of cost-effectiveness in benefit contracts and the use of formal methods to determine cost-effectiveness should find support in Pegram.

At this point it is worth observing that the methods of CEA are imperfect and easily abused (see Drummond Rennie's essay in this issue), and often the necessary data are unavailable or untrustworthy. But before we reject such analyses, we need to note that virtually every other aspect of medical decision making is also imperfect and suffers from incomplete data. Some examples from this discussion are the use of experts by courts, benefits language in contracts, references to community standards, clinical research, and the estimation of a treatment's outcomes. Furthermore, the options that remain if one does not use formal CEA, such as ignoring costs or trying to guess a treatment's cost-effectiveness, are also imperfect. At the least, formal CEA provides an accountable method for addressing costs that is as good as any other approach. Limitations of the methods and data should not cause us to discard it altogether but to limit its use to cases where the methods and available data permit.
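The core computation of a CEA is itself simple; the difficulty lies in the quality of the underlying cost and outcome data, as noted above. The sketch below shows the standard incremental cost-effectiveness ratio (ICER); the numbers are purely illustrative, not drawn from any actual analysis.

```python
def icer(cost_new, cost_std, effect_new, effect_std):
    """Incremental cost-effectiveness ratio: the extra cost of a new
    treatment per extra unit of health outcome it delivers, commonly
    expressed as dollars per quality-adjusted life year (QALY)."""
    if effect_new == effect_std:
        raise ValueError("equal effectiveness; ICER is undefined")
    return (cost_new - cost_std) / (effect_new - effect_std)

# Hypothetical numbers: a new treatment costing $60,000 vs. a standard
# treatment costing $20,000, yielding 4.0 vs. 3.0 QALYs per patient.
print(icer(60_000, 20_000, 4.0, 3.0))  # 40000.0 dollars per QALY gained
```

A plan that had committed in its benefits contract to a cost-effectiveness threshold could then compare such a ratio against the threshold, which is the kind of formal, accountable decision rule the text argues is at least as defensible as ignoring costs or guessing.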


 

One Final Thought

Considering all the problems, the necessary steps for health care, the help needed from the courts, and the prospects for accomplishing it all, it is easy to be discouraged. Indeed, we can expect turmoil for many years to come. This leads to a final thought. The best way for the health care system to interact with the courts is to stay out of them. We in health care should do everything we can to prevent misunderstandings and conflicts, and to address them outside the courts through such things as counseling, mediation, and binding arbitration. In the end, the most important lessons are to be clear in our thinking, to be precise in our communications, to be fair and consistent in our applications, and to be respectful and helpful when disagreements do arise.


 

References

 

Eddy, D. M. 1990. Practice Policies: Where Do They Come From? Journal of the American Medical Association 263:1265-1275.

 

Guyatt, G. H. 1991. Evidence-Based Medicine. ACP Journal Club (Annals of Internal Medicine 114 [suppl. 2]): A-16.

 

Larson, Erik. 1996. The Soul of an HMO. Time Jan. 22. Available on-line at www.time.com/time/magazine/archive/1996/dom/960122/cover.html.

 

Sackett, D., and G. H. Guyatt. 1992. Evidence-Based Medicine: A New Approach to the Teaching of Medicine. Journal of the American Medical Association 268:2420-2425.

 

Sackett, D. L., W. S. Richardson, W. Rosenberg, and R. B. Haynes, eds. 1997. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone.

 

Sackett, D. L., W. Rosenberg, J. A. Muir Gray, R. B. Haynes, and W. S. Richardson. 1996. Evidence-Based Medicine: What It Is and What It Isn't. British Medical Journal 312:71-72.


Current as of April 2001
Internet Citation: The Use of Evidence and Cost Effectiveness by the Courts: How Can It Help Improve Health Care? By David M. Eddy, Kaiser Permanente Southern California. April 2001. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/research/findings/evidence-based-reports/jhppl/eddy2.html