Summary of the Presentations

Expanding Research and Evaluation Designs to Improve the Science Base for Health Care and Public Health Quality Improvement Symposium

On September 13-15, 2005, AHRQ and several partner organizations convened a meeting to examine health care and public health quality improvement interventions.

Description of the Meeting Format

The format of the meeting was as follows:

  1. An opening night dinner with keynote remarks designed to frame the meeting.
  2. Presentations on quality improvement projects at a variety of levels of the health care and public health systems followed by commentaries and discussion periods.
  3. Breakout sessions in which all attendees were encouraged to make recommendations to improve quality improvement intervention research and evaluation.
  4. Reports of the recommendations from the facilitators of the breakout sessions.

This symposium was designed to be interactive and to elicit the input of the approximately 120 attendees, whose backgrounds spanned quality improvement, health care, public health, research, training, and patient advocacy.

Tuesday, September 13, 2005

Welcoming Remarks

Thomas Chapel, M.A., M.B.A. (Facilitator)
Senior Health Scientist, Office of the Director, Office of Strategy and Innovation, Centers for Disease Control and Prevention

Mr. Chapel welcomed participants to the symposium and thanked them for attending. The symposium was sponsored by the Agency for Healthcare Research and Quality (AHRQ); the Centers for Disease Control and Prevention (CDC); the National Institutes of Health's (NIH) Office of Disease Prevention, National Cancer Institute (NCI), and National Heart, Lung, and Blood Institute (NHLBI); the Department of Veterans Affairs (VA) Quality Enhancement Research Initiative (QUERI); and the Robert Wood Johnson Foundation (RWJF). Denise Dougherty, Ph.D., Senior Advisor, Office of Extramural Research, Education, and Priority Populations (OEREP), AHRQ, introduced Dr. Clancy.


Expanding Research and Evaluation Designs for Quality Improvement Interventions

Carolyn Clancy, M.D.
Director, Agency for Healthcare Research and Quality

Several years ago, the Institute of Medicine (IOM) published the report Crossing the Quality Chasm, which discusses the gap that persists between the best possible health care available and the care that most patients actually receive. Dr. Clancy emphasized that progress in improving health care in this country still has a long way to go. The question is: Are we making any progress? AHRQ's second annual reports on the state of health care quality and disparities in America showed overall improvement and identified areas still in need of improvement, including pervasive disparities related to race, ethnicity, and socioeconomic status. Dr. Clancy highlighted some of the findings in these reports, including the following:

  • Health care quality continues to improve at a modest pace across most measures of quality, with an overall rate of improvement of 2.8%. This improvement covers a range of areas, including prenatal care, hip fracture, and alcohol dependence.
  • Health care quality improvement (QI) is variable, with notable areas of high performance in patient safety, which showed a 10.2% improvement, and in Medicare's quality improvement organization (QIO) measures, which showed a 9.2% improvement.

A survey performed the previous fall by AHRQ, in conjunction with the Kaiser Family Foundation and the Harvard School of Public Health, showed that the proportion of the American public dissatisfied with health care increased from 44% in 2000 to 55% in 2004, and that 40% of those surveyed believed the quality of health care in this country had gotten worse. Dr. Clancy believes that more people are beginning to recognize that disparities in care do exist in relation to race, ethnicity, income, education, and other factors. She noted the need for collaboration between research groups that address quality and those examining disparities in health care.

AHRQ's mission is to improve the quality, safety, efficiency, and effectiveness of health care for all Americans. To achieve that goal, the Agency must find answers that are valid, timely, convincing, and practical. However, this is a great challenge, and there is reason to be humble about the rate at which funded research is turned to the benefit of patient care. According to Balas12, it takes 17 years for 14% of original research to be turned to the benefit of patient care. Many clinical treatments have gone by the wayside, such as hormone replacement therapy to prevent cardiovascular disease or sulfuric acid to treat scurvy. Like clinical interventions, quality improvement interventions need to be evaluated, so the methods need to be right, and if we want our results to have an effect in the short term, they have to be practical. Dr. Clancy added that there are great opportunities to build on what we know and on the methods we have in order to develop the methods we need. Funding and performing randomized trials will not provide all the answers because of the characteristics of quality improvement interventions. Randomized trials are sometimes not appropriate for these studies because:

  • The targets of quality improvement interventions (QIIs) are not individual patients.
  • QIIs are complex and sometimes change over time.
  • The setting is an essential component of the question and QII.

Dr. Clancy noted that we can learn important lessons when things do not go well. According to an AHRQ-funded study at a major urban teaching hospital, the computerized physician order entry systems that are expected to significantly reduce medication errors must be implemented thoughtfully to avoid facilitating certain types of errors. Implementation problems can be minimized through testing and adaptations to meet the needs of individual clinical settings.

Dr. Clancy listed some currently important questions in the field of quality improvement interventions:

  • Can a regional health information organization improve interoperability of health information technology systems and improve patient safety and quality of care?
  • Can pay for performance improve quality?
  • Do changes in hospital culture reduce medical errors?
  • What quality improvement (QI) strategies work for reducing disparities?

Dr. Clancy noted that to develop the area of QII evaluation designs and methods, we can draw upon the evaluation designs used in medical, clinical, and health services research, such as the randomized controlled trial, group-randomized trial, and case-control study, as well as those used in the behavioral/social science fields, such as the interrupted time series and qualitative methods, to give just a few examples.

Dr. Clancy concluded by suggesting that we need to look at how we use these methods and build on these methods to get to the next phase of quality improvement intervention and evaluation.


An Integrated Model for Improvement: Implications for Study Design

Frank Davidoff, M.D.
Editor Emeritus, Annals of Internal Medicine
Executive Editor, Institute for Healthcare Improvement

Dr. Davidoff noted that the goals of this symposium are, first, to find better ways of knowing whether QI in health care works and, second, to find ways to increase the impact of that knowledge. To accomplish those goals, three questions must be answered:

  • What kind of evidence do QI studies produce?
  • What are the strengths and limitations of that evidence?
  • How can QI and the dissemination of its results help one another?

It is the role of this group to answer these questions. Dr. Davidoff said that his job was to create a context in which these questions can be answered.

Why is it so hard to improve health care?

 

Dr. Davidoff noted that providing health care that is both science-based and humane (caring) is an extraordinarily complex and demanding undertaking. He presented a "basic model" for the delivery of scientifically grounded medical care (Figure 1a). The model involves the judicious application of a large body of established, generalizable knowledge to the idiosyncratic needs of patients and families, as well as to micro- and macro-systems, to reach the desired outcomes, all within a demanding and multi-layered local socio-technical environment. Dr. Davidoff noted that the element missing from this model is improvement: although the care provided under the model may be optimal at any given moment, it will not improve over time unless mechanisms for improvement are built into the process.

 

Dr. Davidoff emphasized that there is an enormous gap between medicine's potential and what it actually delivers, i.e., the "knowledge-performance gap." However, several recent, important performance initiatives have been designed to help close this gap. The first is the practice of evidence-based medicine (as distinct from the concept of evidence-based medicine), which is largely directed at the level of care of individual patients. The second is the quality movement, which is directed at the level of systems of care, of communities, and of the public health. Drawing on elements of these initiatives, and building on the basic model, Dr. Paul Batalden and Dr. Davidoff have developed an integrated model of improvement (Figure 1b). The integrating concept underlying this model is that improvement is fundamentally a learning process. The model is driven by three kinds of learning: a) scientific discovery, which is learning about "what is"; b) experiential discovery, which is "learning about learning" or, in effect, learning about how learning works; and c) experiential learning, which is learning "how to" do something. Unless newly discovered scientific knowledge, which enters the "reservoir" of generalizable knowledge, is translated into practice, it has no effect.

Dr. Davidoff posed the question, "Where does experiential learning enter in?" We see it in the "performance elements": locating, acquiring, and evaluating established knowledge; adapting evidence to local circumstances; redesigning practices; executing changes; and measuring outcomes and using those data to modify care accordingly. However, another element is needed. There is learning to be done about what goes on in each of those steps, and that learning is "experiential discovery." Scientific discovery and experiential discovery enter the integrated model as inputs to the knowledge reservoir. Knowledge translation then draws on that reservoir to redesign care delivery; the changes are executed and measured, and the resulting feedback is used to modify care delivery again.

Dr. Davidoff compared the three kinds of learning listed above. Experiential learning is the source of most of our knowledge, not just in medicine but in general; it is driven by experience, not by randomized trials. Despite its ubiquity, however, experiential learning has largely been ignored in academic medicine. It necessarily takes place in concrete, local, real-world contexts, which means that variables are difficult or impossible to control. The product of this learning is concrete know-how and competence. The learning is also described as "reflexive," because its product or outcome is intended to change the very thing one is learning; as a result, the effects of experiential learning are intrinsically unstable because of this continual feedback on performance.

Dr. Davidoff contrasted experiential learning with scientific discovery and experiential discovery. The setting for scientific discovery, as in clinical research, is often artificial and highly protocol-driven in order to control contextual variables. Its product is abstract, conceptual knowledge, and that knowledge is unchanged by the act of discovering it; for example, discovering that an antibiotic is effective for certain diseases does not change its effectiveness. Originality is the hallmark of scientific discovery, because its purpose is to discover what is unknown or not understood.

How does experiential learning work? To describe it simply, it is learning by doing. The generic cycle of experiential learning consists of four elements:

  • Experiencing something fully and openly.
  • Questioning what has just occurred.
  • Conceptualizing: trying to relate the questions the experience has raised to other models or processes and reflecting on how to do it better.
  • Going back and trying again.

All four elements are essential. Without questioning and conceptualizing, there is stagnation. Without experience, there is pedantry and inaction.

Why is experiential learning particularly relevant to quality improvement? Its characteristics (it is "real world" and reflexive, produces know-how, is unstable, and applies what is known) make it particularly well suited to dealing with the problems of everyday health care delivery, which is messy, nonlinear, and takes place in complex adaptive systems. These same characteristics also present challenges, because experiential learning is not easily amenable to traditional hypothesis-testing research methods. This points to the importance of experiential discovery, the science of learning about experiential learning, a discipline that is just beginning to coalesce. Experiential discovery can provide the link between the gritty, messy experiential world of local know-how and the orderly, scholarly world of conceptual knowledge.

Dr. Davidoff presented some thoughts on the nature and role of the link between experiential learning and experiential discovery. The traditional experiential learning cycle (experience, question, conceptualize, retry) describes the informal learning of individuals, but it does not capture the formal work organizations do to improve their performance. To capture that work, the cycle needs an additional planning step, with the questioning and conceptualizing steps collapsed into a single study step. The result is the familiar "plan, do, study, act" cycle.

This formalizing of experiential discovery brings it a step closer to the way scientific discovery works, because it lends an element of formality and planning. However, the two still differ in important ways. E.O. Wilson13 has stated that in science a discovery does not exist until it has been reviewed and is safely in print; in other words, publication is an integral part of the scientific process. Dr. Davidoff stressed the importance of published findings, because proof and the absence of disproof lie at the heart of the logic of science, and only full and open publication provides the access and transparency needed to exercise that logic. Thus, the action cycle in scientific discovery and experiential discovery would be "plan, do, study, publish." What is missing from the experiential learning action cycle, because experiential learning is performance- and action-oriented, is the publishing step. What are the implications of this difference for quality improvement? It suggests the need for new action cycles. A new generic cycle might be "plan, do, study, act/disseminate." For experiential learning, it might be "plan, do, study, act/teach/coach." Finally, for experiential discovery, to learn what works better in a more formal way, it might be "plan, do, study, act/publish/discuss." The failure to publish reports of new knowledge from experiential learning could have the following consequences:

  • Makes establishing repeatability difficult.
  • Prevents public scrutiny and accountability.
  • Reduces the incentives and opportunities to clarify thinking, verify observations, and justify inferences.
  • Slows the spread of known improvements.
  • Inhibits the discovery of innovations.
  • Ethical issue: fails to give information or a product back to the public.
  • Limits the influence of publications on QI.

We usually think of scientific discovery shaping publication, with editors and journals as passive recipients of discoveries, but, as Dr. Davidoff explained, publication also shapes scientific discovery. He described several examples, in addition to peer review, of the publication process's reciprocal influence on discovery. There are new requirements for authors to have control of the data and of the decision to publish, and there is a new proposal that randomized clinical trials be registered in formal, nonprofit registries. Another way publication "pushes back" is through the development of publication guidelines, such as the general guidelines developed by the International Committee of Medical Journal Editors; their "Uniform Requirements for Manuscripts Submitted to Biomedical Journals" establishes that there can be a standardized way of representing one's work in print. More recently, specific guidelines have been developed for particular study designs or content areas, the first being the CONSORT (Consolidated Standards of Reporting Trials) guidelines for reporting randomized trials. Additional functions have flowed from these guideline documents: explication of the CONSORT guidelines has an educational effect, not only on writing up research but also on planning and conducting it, and reporting articles in a consistent way may make it easier to aggregate studies into higher levels of analysis.

The next question is: is there a parallel way that publication can shape quality improvement? Recently, a set of guidelines for quality improvement reports was developed by Richard Thompson and Fiona Moss and published in Quality and Safety in Health Care; these guidelines were adopted by BMJ and are particularly relevant for case-report types of articles. For more formal and complex types of studies, Drs. Batalden and Davidoff developed a new version of publication guidelines for quality improvement reports. (These were included in the symposium binder and have since been published in Quality and Safety in Health Care 2005 Oct;14(5):319-325.) These guidelines follow the IMRaD format, with the learning cycles that take place during the study reported in the results section. Dr. Davidoff noted that the guidelines have limitations in that they are appropriate only for certain kinds of quality improvement work. Their strengths are that they may have educational value as well as a positive influence on the planning, funding decisions, and editorial processes for QI studies.

To summarize, science-based health care is extraordinarily complex, and trying to improve it is like trying to change the tire on a car while the car is running in the Indianapolis 500; nonetheless, the process of improvement needs to be built into health care. Because improvement is so difficult, it will be very challenging to make it work and to sustain it unless it becomes an integral part of the health care process. Drs. Davidoff and Batalden have proposed an integrated model involving three kinds of learning: scientific discovery, experiential discovery, and experiential learning, with the last playing a key role. Experiential learning is uniquely suited to quality improvement; however, it is difficult to study using traditional methods. Experiential discovery, or learning about learning and publishing the results, is one important way to link experiential learning to the body of scientific discovery. Finally, dissemination of results is an essential element of scientific discovery, so it must also be part of improvement, whether through teaching and coaching or through publication and discussion. Publication guidelines can help shape quality improvement through their reciprocal interaction with the study methods used, the completeness of reports, funding decisions, and the potential for aggregating results.

Discussion

Lawrence Green, Dr.P.H., Adjunct Professor, School of Medicine and Comprehensive Cancer Center, University of California at San Francisco, noted how well Dr. Davidoff had set the stage for the rest of the symposium. In particular, Dr. Davidoff described the complexity of the environment in which quality improvement interventions are conducted. Dr. Green posed the question, "How 'out of control' can we let controlled trials become?" We cannot impose so much control that trials are no longer representative of the environments we hope to understand. Dr. Green noted that his mantra is "if we want more evidence-based practice, we need more practice-based evidence"; this means we need to find a place for practice-based quality improvement studies that will inform evidence-based practice. To the set of tools that Dr. Davidoff recommended, Dr. Green would add engaging practitioners in participatory research, putting scientists in practice settings, and engaging patients in the research enterprise.

In the discussion following Drs. Clancy, Davidoff, and Green's remarks, participants made a number of cogent comments:

  • Several areas of expertise that could be brought to bear on this discussion include the study of knowledge management and tacit knowledge as well as social network research. The potential for partnerships with science agencies is great and should be actively pursued.
  • QI and performance improvement are going to occur in the health care system whether or not they are studied. The key component is not the researchers or scientists, but the organizations and the people they serve.
  • Health services research (HSR) brings in nothing of the phenomenon being studied, in that the principal theory used is statistical theory: we send subjects down two arms of a study and record a few measures at the end. This research model misses the role of experience and its effect on performance. It will be important to grapple with a different way of learning throughout this symposium's sessions.
  • Experience must be linked to reflection and feeding back on performance.
  • Experience can lead to discovery or learning, but learning from one experience could be termed superstition. We need to be careful about experiential learning that draws an inaccurate conclusion and thus does not generate knowledge.
  • There are many ways that experiential learning could be amenable to empirical study. One could use the critical incident technique used in nursing and salesmanship, or the technique of eliciting theories in practice from highly expert people in their fields. Classification of phenomena is, of course, part of science.
  • We need to figure out what the appropriate tradeoffs are for the stakeholders, to think about research in the context of the people who use that research, and to pursue many different ways of doing research. Dr. Davidoff responded that we can short-circuit the dilemma posed by the long time span between a study's findings and their publication. He had been emphasizing peer-reviewed publication, but there have always been alternatives: when all publication was in print, these alternatives were called the "gray literature," and there are venues for works in progress, such as meeting abstracts. Today, with electronic publishing on the Internet and people putting out working papers, white papers, and other kinds of reports, it is a totally different universe, although a better term for this activity is dissemination rather than publication.
  • A participant asked how dissemination relates directly to organizations. Organizations are concerned about spread: how to spread a change concept within the organization, even just to get an idea from one department to another. How can that be integrated into Dr. Davidoff's model so that ideas get back to the organization and are not just known among researchers? Dr. Davidoff responded that the publication process can enter in, even as internal publication, but "spreading" means getting the word out, and it involves social processes among other things; that process is much more complicated than carrying out individual projects. He acknowledged that he does not know that literature well, but noted that there is a literature on spread within the VA system describing the social, administrative, and management dimensions of the process that do not exist in smaller, community-based improvement projects. Ultimately, the improvement process involves making plans, doing trials, learning from experience, and modifying the efforts; some organizations are simply bigger and slower to move because of their scale.
  • We could learn from the treatment of pediatric cancer. Pediatric oncologists enroll every single patient in either a randomized trial or an observational study, so that every patient's treatment is recorded in a massive database from which knowledge is constantly generated. If we used this same concept in QI projects, we would have a powerful tool for sharing experiential learning across institutions.
  • The publication of negative QI projects is very important, as we have learned from clinical trials, where negative findings are rarely published. A significant portion of experiential learning comes from failure. An important question for QI is: what is the gold standard for knowing whether a technique is yielding the truth?
  • In managed care quality improvement, most projects face no publication pressure, and spread may occur only within an institution. This is where most of the unproven and unpublished knowledge exists. The question is: how do we change this? The quality improvement directors who could say whether something worked or not are not in attendance at this symposium. We need to put dissemination pressure on the system to bring out this existing knowledge so that it can be peer-reviewed, disseminated, and used by the rest of the health care system. Dr. Davidoff responded that this point gets to the issues of incentives, rewards, and intellectual gratification.

