Elias Brandt, Douglas H. Fernald, Bennett Parnes, Wilson Pace, David R. West
We describe the use of electronic health record (EHR) data to develop, distribute, follow up on, and analyze results in a study that developed and tested sustainable, guideline-consistent treatment strategies for community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA). Among the many applications of EHRs is their enhanced capability for collecting patient-specific data within practice-based research networks (PBRNs). Historically, PBRNs used physical "card" studies to collect information from clinicians at the point of care. Using the EHR-linked datasets available through eNQUIRENet, cases of certain skin or soft tissue infections (SSTIs) were identified on a nightly basis. The research team then used internal email systems to invite clinicians to take part in evaluations of the cases; the email included links to internal Web sites and online evaluations. Evaluation data were then linked back to de-identified EHR data. To study clinical decisionmaking related to SSTIs, we successfully developed a reliable, Health Insurance Portability and Accountability Act of 1996 (HIPAA)-compliant method of using EHR data to guide tailored data collection. We showed that data could be collected by clinicians and patients and be linked electronically to researchers in remote locations. Electronic card studies offer a new and useful tool for performing enhanced clinical research and for developing and launching practice-based guidelines. Removing clinician error in initiating the survey and in selecting the proper survey improves results. Response rates are also better understood than in traditional card studies, because all cases can be identified and providers do not have to remember to initiate data collection.
Community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) is a significant public health concern; it has the potential to develop quickly into an invasive skin infection and cause other life-threatening complications. Despite the relatively low prevalence of skin and soft tissue infections (SSTIs) in primary care,1 S. aureus is the most common pathogen causing SSTIs.2 In response to this public health problem, the Centers for Disease Control and Prevention (CDC) convened an expert panel and published recommendations and a clinical algorithm/flow sheet for outpatient management of CA-MRSA.3 The feasibility and uptake of the CDC guidelines in busy primary care settings were previously unknown. Thus, a request for task order was commissioned by the Agency for Healthcare Research and Quality (AHRQ) and awarded to SNOCAP-USA to further the understanding of CA-MRSA and to develop and test real-world, sustainable strategies consistent with the CDC guidelines, using a practical trial approach.4,5
Although the CDC guidelines for treatment of SSTI have been widely disseminated, they are primarily based on expert opinion because there are few empirical data on which to base treatment decisions. It is still not the norm for SSTI treatment to account for CA-MRSA3 or to follow CDC recommendations. Such findings may be interpreted as demonstrating substandard care from a clinician who is either not familiar with or not following the established guidelines or may represent logical clinical decisionmaking in an understudied area. Practice-based research networks (PBRNs) have been studying this kind of phenomenon since their inception, often discovering that the "evidence-based guidelines" were not well-suited for primary care or that they were not applicable to specific patients in primary care due to other comorbidities, life expectancy, patient preferences, and other factors.4,6,7
Overall, the CA-MRSA project sought to develop and evaluate sustainable, guideline-consistent treatment strategies for CA-MRSA by working with the primary care clinicians and infectious disease consultants in two health care organizations. Specifically, we sought to evaluate MRSA management decisions for specific SSTI cases of interest in real time. To evaluate whether providers were able to provide care concordant with the CDC guideline for MRSA,8 encounter-related data were extracted from electronic health records (EHRs) and used to inform an online survey that was presented to the clinicians—an approach that could be generalized to other areas of clinical inquiry and expanded to better inform policy and clinical decisionmaking.
The method was based on the traditional PBRN study method of "card studies," which are observational studies that collect discrete, patient-level survey data at the point of care.9 Card studies are particularly suitable for collecting patient-level information for disease-specific conditions that may be uncommon. In a standard card study, the data collection instrument is a long, thin card designed to fit in a pocket and be carried around and completed in less than 60 seconds as the clinician provides care. Card studies have been a popular tool in practice-based research for many years.10 Although the original card studies were typically simple convenience samples, subsequent designs have progressed in terms of the scope of the data collected, the sources of data, and the collection methods. Card studies remain a popular research tool and are often used in PBRNs to understand clinical decisionmaking around selected conditions, to gain a better understanding of the incidence and prevalence of conditions in primary care, and to provide pilot data for subsequent studies. Card studies have played a significant role in developing new standards of care for conditions including miscarriages, headaches, and otitis media, and they provide a better understanding of care processes.4,6,7,11 The ability to extract existing clinical data from EHRs offers an opportunity to rethink the traditional card study approach.
To learn about the clinician decisionmaking process, we designed a survey using EHR data to inform the data collection process and to incorporate the data collection into the clinicians' workflow. The result was a novel "electronic card study" informed by data from the EHR. The study used multiple data sources including EHRs, manual chart audits, patient-reported outcomes, and provider-reported clinical decisionmaking processes. Data from each source were stripped of patient identifiers, with the exception of dates of service, and linked together for analysis. This paper describes the development, use, and potential benefits and limitations of an electronic means to collect feedback from clinicians about their clinical decisionmaking near the point of care, an approach that could be applied to the study of many other infectious diseases or other clinical care processes.
The CA-MRSA study was conducted in eNQUIRENet (formerly DARTNet), a federated network of standardized EHR data and other clinical information from multiple organizations across the United States.12,13 The EHRs and clinical information systems reside in member practices and are linked through a secure Web-based system so that they can be searched and queried as one large database while maintaining the privacy and confidentiality of patient data. The CA-MRSA study was designed to collect data through data extraction from the participating health systems' EHRs. Our intent was to extract EHR data to see if clinicians were providing guideline-concordant care and then to ask them about the care they provided as close to the point of care as possible. We linked clinicians' decisionmaking considerations with a limited dataset extracted from each organization's EHR system. For example, the dataset contained information about which medications were prescribed and whether a procedure was performed. The rationale for why the clinicians chose the prescribed medications and the reasoning behind the decision to perform a procedure were collected via the electronic card study. Data elements from these sources were merged using a common random identifier to form a comprehensive record of the encounter that included which treatments were chosen and why.
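The merge described above can be sketched in a few lines. This is an illustrative reconstruction only, not the study's actual software: the field names (`encounter_id`, `rx`, `procedure`, `rationale`) and values are hypothetical stand-ins for the limited-dataset and card-study fields.

```python
# Hypothetical sketch: join a de-identified EHR extract to card-study
# responses using a shared random encounter identifier, producing one
# record per encounter that holds both what was done and why.
ehr_extract = [
    {"encounter_id": "a1f3", "rx": "TMP-SMX", "procedure": "I&D"},
    {"encounter_id": "b7c9", "rx": "cephalexin", "procedure": None},
]
card_responses = [
    {"encounter_id": "a1f3", "rationale": "abscess had spontaneously drained"},
]

def merge_on_encounter(ehr_rows, card_rows):
    """Combine EHR-derived treatment data with clinician-reported
    rationale, keyed by the common random encounter identifier."""
    by_id = {row["encounter_id"]: row for row in card_rows}
    merged = []
    for row in ehr_rows:
        combined = dict(row)          # start from the EHR record
        combined.update(by_id.get(row["encounter_id"], {}))  # add rationale if present
        merged.append(combined)
    return merged

records = merge_on_encounter(ehr_extract, card_responses)
```

Because the identifier is random and carries no patient information, the merged record remains a limited dataset while still tying each treatment choice to the clinician's stated reasoning.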
Two health care systems independently reviewed the CDC guidelines for the treatment of SSTIs, including ambulatory treatment for CA-MRSA, and developed interventions to promote guideline-concordant care. The intervention included a ready-made tray/kit for incision and drainage procedures, a patient information handout, provider MRSA education, and patient home care instructions.3 SSTIs were identified and care processes were tracked using EHR data. During the intervention phase, both organizations contacted patients seen for SSTIs 2 weeks after the index visit, to collect patient-reported outcomes concerning resolution or the need for further care outside the clinical organization. Directed chart audits were used to determine guideline-concordant care for selected conditions that could not be extracted as discrete data elements from the EHR (for instance, extent of erythema around an abscess). To better understand clinicians' decisionmaking, a total of 19 primary care clinicians from 16 primary care practices in the two organizations were asked to participate in an electronic card study activity. Figure 1 shows the data flow for the electronic card study.
A limited dataset containing information about SSTI encounters from both participating organizations was sent via secure FTP to the research team each night for analysis. The limited dataset was generated by QED Clinical, Inc. (dba CINA, Dallas, TX), a software company that provides clinical decision support and registry services to clinics irrespective of which EHR system they use. Every night, at each location, an automated process extracted data from the client's EHR and other data sources (such as practice management systems), standardized the data, and placed them into a proprietary clinical data repository housed on a server behind the client's firewall. For this study, CINA set up a second automated process to search the clinical data repository nightly for encounters during which an ICD-9 code for SSTI was used (680.x, 681.x, 682.x). For encounters where such a diagnosis code was used, a limited dataset was generated and sent via secure FTP to the research team. The limited dataset included patient age, diagnosis codes, culture records, procedure records, and prescription records, but no direct patient identifiers.
Note: CINA is the decision support software vendor; SSTI = skin and soft tissue infection.
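The nightly search amounts to filtering encounters whose diagnosis codes fall in the SSTI ranges 680.x, 681.x, and 682.x. A minimal sketch of that filter follows; the record layout and example codes are illustrative assumptions, not CINA's actual schema.

```python
# Hypothetical sketch of the nightly SSTI filter over extracted encounters.
SSTI_ROOTS = ("680", "681", "682")

def is_ssti_code(icd9: str) -> bool:
    """True when an ICD-9 code is in the 680.x, 681.x, or 682.x range."""
    root = icd9.split(".")[0]
    return root in SSTI_ROOTS

def select_ssti_encounters(encounters):
    """Keep encounters with at least one SSTI diagnosis code."""
    return [e for e in encounters
            if any(is_ssti_code(code) for code in e["dx_codes"])]

nightly_batch = [
    {"encounter_id": 1, "dx_codes": ["682.3"]},            # cellulitis: kept
    {"encounter_id": 2, "dx_codes": ["401.9"]},            # hypertension: skipped
    {"encounter_id": 3, "dx_codes": ["250.00", "680.5"]},  # carbuncle among codes: kept
]
hits = select_ssti_encounters(nightly_batch)
```

Matching records would then be stripped to the limited-dataset fields (age, diagnoses, cultures, procedures, prescriptions) before secure transfer to the research team.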
Encounter Categorization and Survey Invitation
Each morning, the research team reviewed the new encounter records to determine whether the provider of record should be invited to complete an electronic card and, if so, which evaluation option the provider would receive. An encounter was eligible for evaluation if the provider of record had consented to participate in the study, had no evaluations outstanding, and the encounter was the patient's first with an SSTI ICD-9 code in the last 30 days. The research team then categorized each eligible encounter into one of 16 different clinical scenarios, defined by the combinations of four yes/no factors, with each scenario evaluated differently:
- Child (yes or no).
- Culture performed (yes or no).
- Procedure performed (yes or no).
- Antibiotics prescribed (yes or no).
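The four yes/no factors above yield 2&#8308; = 16 scenarios. The eligibility rule and the scenario categorization can be sketched as follows; function and field names are illustrative, not the study's actual software.

```python
from datetime import date, timedelta

def eligible(provider_consented, evaluations_outstanding,
             last_ssti_visit, encounter_date, window_days=30):
    """An encounter qualifies when the provider consented, has no
    outstanding evaluations, and the patient had no SSTI-coded visit
    within the preceding window (i.e., this is a new episode)."""
    new_episode = (last_ssti_visit is None or
                   encounter_date - last_ssti_visit > timedelta(days=window_days))
    return provider_consented and evaluations_outstanding == 0 and new_episode

def scenario_key(is_child, culture_done, procedure_done, antibiotics_given):
    """Map the four yes/no factors to one of 16 scenario labels,
    here encoded as a 4-bit string."""
    return "".join("1" if flag else "0"
                   for flag in (is_child, culture_done,
                                procedure_done, antibiotics_given))

# A new adult episode, consented provider, nothing outstanding:
ok = eligible(True, 0, None, date(2014, 3, 1))
key = scenario_key(False, True, True, False)  # adult, cultured, I&D, no antibiotics
```

The scenario key would then determine which of the 16 tailored question sets the provider received.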
After encounter eligibility was determined and the encounter was categorized, an invitation to complete an electronic survey was sent to the provider via email. Invitation emails containing a hyperlink to an intranet Web site were sent to providers of record 1–3 days after the encounter.
Internal Web Site
The hyperlink in the invitation email took providers to a Web site running on their organization's server. The Web site tapped into clinical information stored in the clinical data repository to display information intended to refresh the provider's memory regarding the encounter to be evaluated. It displayed patient identifiers related to the encounter, including name and date of birth—enough information for the provider to go into the EHR to review the patient's records. A unique random encounter identification number generated by CINA for this study was embedded in the invitation email hyperlink and used to link to data about the clinical encounter and display information about the patient to the provider. This Web site was behind the organization's firewall and was therefore inaccessible to anyone outside the organization. Logic behind the internal Web site read the information embedded in the hyperlink to generate a customized link to the specific online evaluation for the selected clinical scenario.
The hyperlink on the internal Web site took the provider to the online evaluation tool and contained information about the clinical scenario (e.g., a child with an SSTI who had a procedure performed and antibiotics prescribed, but no culture done). The information embedded in the link was used by the survey (set up using CheckBox [Watertown, MA]) to prepopulate certain fields in the data table and to determine which set of questions to present to the clinician. The unique random encounter identification number was also transmitted via the link so that the responses to the evaluation could be linked back to the limited dataset.
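The essential trick in the hyperlink is that the random encounter identifier and the scenario code travel as URL parameters. A minimal sketch of building and reading such a link is shown below; the domain, path, and parameter names (`eid`, `scenario`) are assumptions for illustration, not CheckBox's actual interface.

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_survey_link(base_url, encounter_id, scenario):
    """Embed the random encounter ID and the clinical-scenario code in
    the survey URL so responses can be prepopulated and later linked
    back to the limited dataset."""
    return base_url + "?" + urlencode({"eid": encounter_id,
                                       "scenario": scenario})

link = build_survey_link("https://survey.example.org/ssti", "a1f3", "0110")

# The survey side reads the same parameters back out of the URL.
params = parse_qs(urlparse(link).query)
```

Because only the random identifier crosses the firewall, the external survey tool never sees patient identifiers; the linkage to clinical data happens on the research team's side.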
We successfully developed a reliable, HIPAA-compliant method of using EHR data to guide tailored data collection concerning clinical decisionmaking related to SSTIs. Encounters were categorized into one of 16 different clinical scenarios, and the research team decided whether or not to send the invitation to evaluate the encounter via the card study. An invitation was sent when the clinical data indicated a new SSTI episode and the provider of record had consented to participate and had no outstanding invitation awaiting a response. The research team triggered 157 electronic cards, which resulted in a response rate of 70.7 percent. Upon receipt of the invitation email, the clinicians clicked on the link, which took them to an intranet Web site that displayed specific information about the encounter and a link to the electronic card study. On average, the clinicians completed the evaluations in 3.5 days.
The electronic card study helped to clarify several areas of interest from the electronic data pulls. For instance, the rate of incision and drainage procedures seemed low, 10.3 percent at baseline and 4.7 percent during the intervention across organizations. This was partly explained through chart audit, where it was apparent that not all procedures were billed. Greater understanding, however, came from the card study, where it became clear that in many cases the abscess had already spontaneously drained and the need for a procedure had been alleviated. Another area where the card study helped elucidate the quantitative findings was related to obtaining cultures. Again, the culture rates appeared low from the electronic data, 17.1 percent across organizations at baseline and 14.2 percent during the intervention period. In this case, a number of clinical decisions in combination helped clarify the findings, including the disease processes that fell into specific diagnostic codes; the lack of material to culture, such as in smaller, spontaneously drained lesions; and the overall lack of clinical utility in culture results. In fact, in followup interviews driven by the card study results, responding clinicians could identify no instances of culture results altering clinical care. Thus, there was little evidence to encourage greater use of cultures for SSTIs.
Although traditional card studies have been successful in large part because of their simplicity, they also have significant limitations that would have made their use impossible in the CA-MRSA study. Electronic card studies have the potential to address many of these limitations and to offer advantages that are not possible using a paper card methodology. Paper card studies require provider recall and initiation based on patient-eligibility criteria, which can be complex or apply to low-prevalence conditions, so not all cases are captured. Since initiation of the electronic card is done by the researcher or preset software, this issue is reduced. In addition, traditional cards cannot be tailored to different clinical scenarios. Electronic card methodology allows the research team to customize the cards sent to the clinicians, collecting only the data appropriate for the clinical scenario. Also, all cases can be captured because researchers are no longer reliant on providers to select the right survey or to complete the survey. Another advantage of the electronic card study is that it can be directly linked to the patient's record: providers are shown details of the patient's visit before completing the card, to help refresh their memory of the case, whereas paper cards rely on the provider to accurately match the card to the specific patient. Speed is a further advantage. Electronic cards can be sent to the researchers for review on a daily basis and do not require the practice to mail the cards.
In the present study, without information from clinicians near the point of care, many of the findings would most likely have been interpreted as indicative of very poor guideline concordance. These findings might have been attributed to poor underlying knowledge and overall clinical care. Our understanding of the decision processes could have been supplemented by interviews, where hypothetical situations were posited, but it would be difficult to know how well the answers tracked with actual patient experiences. Using the near-point-of-care electronic card study approach, clinicians were asked to respond based on an actual clinical case. We believe these answers are more likely to represent the breadth of the decision processes that occur when translating guidelines into clinical care.
Although the advantages of electronic card studies are numerous and far outweigh the limitations, there are limitations that warrant discussion. One major disadvantage is the additional upfront set-up time required by the use of EHR linkages. It takes a significant amount of time to develop the data extraction, transfer, and preliminary analysis processes and to set up the data collection tool. Once the system is in place, however, it can be easily modified and reused for future studies at minimal cost. Another concern involves the accuracy and timeliness of the data capture itself, where several issues were discovered. For example, culture records were slow to appear in the EHR. Certain procedures were not captured consistently in the EHR and were difficult to locate within the system. In addition, some antibiotic prescriptions were still handwritten and were not captured reliably in the EHR. However, the electronic card study system detected that the data were missing and asked the provider, in the evaluation, to verify the missing information.
There is great potential for future electronic card studies. Lessons learned from this study confirm that the use of the EHR to capture data allows clinicians and researchers to perform real-time automated card studies using a much larger sample size than was previously thought possible. Further, electronic cards can be tailored to match very specific criteria designed by researchers. Logical areas where this approach could add to current clinical knowledge include reasons for low rates of human papilloma virus (HPV) immunizations and reasons for not including HPV testing during cervical cancer screening; reasons antibiotics were prescribed for acute (most likely viral) infections; antibiotic use and choices in treating otitis media; reasons for low rates of screening for human immunodeficiency virus; and many other concepts. Electronic cards can even be generated for completion by patients for clinical purposes and then be used secondarily for research. The approach developed for this study can be generalized and expanded to more areas of inquiry and more clinics to even better inform policy decisions.
This project was funded under contract no. HHSA 290 2007 10008, Task Order no. 4, from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services, as part of the Primary Care Practice-based Research Network (PBRN) Master Contract. The findings and conclusions in this document are those of the authors, who are responsible for its content, and do not necessarily represent the views of AHRQ. No statement in this report should be construed as an official position of AHRQ or the U.S. Department of Health and Human Services.
AAFP National Research Network, Leawood, KS (EB, WP). University of Colorado School of Medicine, Aurora, CO (DHF, BP, WP, DRW). DARTNet Institute.
Address correspondence to: David R. West, Ph.D., Director, SNOCAP USA, Director, Colorado Health Outcomes Program, School of Medicine, University of Colorado, 13199 Montview Blvd, Suite 300, Aurora CO 80045; Email: email@example.com.
1. Price CS, Savitz LA. Improving the measurement of surgical site infection risk stratification/outcome detection. Final report (Prepared by Denver Health and partners under Contract No290-2006-00-20); AHRQ Publication No. 12-0046-EF. Rockville, MD: Agency for Healthcare Research and Quality; August 2012. Available at http://psnet.ahrq.gov/resource.aspx?resourceID=25139. Accessed March 24, 2014.
2. Rubin MA, Jones M, Huntington JL, et al. Screening for surgical site infections by applying classification trees to electronic data. In Advances in the Prevention and Control of HAIs (AHRQ Publication No. 14-0000). Rockville, MD: Agency for Healthcare Research and Quality; May 2014.
4. Opportunities for advancing delivery system research. In: AHRQ Expert Meeting on the Challenge and Promise of Delivery System Research: Meeting summary. February 16-17, 2011; Sterling, VA. Rockville, MD: Agency for Healthcare Research and Quality; 2012. Available at http://www.ahrq.gov/professionals/systems/system/delivery-system-initiative/index.html. Accessed March 25, 2014.
8. AHRQ activities using community-based participatory research to address health care disparities. AHRQ Publication No. 09-P012. Rockville, MD: Agency for Healthcare Research and Quality; September 2009.
9. Creswell JW, Klassen AC, Plano VL, et al. Best practices for mixed methods research in the health sciences. Rockville, MD: National Institutes of Health, Office of Behavioral and Social Sciences Research; August 2011. Available at http://obssr.od.nih.gov/mixed_methods_research/. Accessed March 26, 2014.
11. Brossette SE, Hacek DM, Gavin PJ, et al. A laboratory-based, hospital-wide, electronic marker for nosocomial infection: the future of infection control surveillance? Am J Clin Pathol 2006;125(1):34-9. PMID 16482989
12. Gaynes RP, Culver DH, Horan TC, et al. Surgical site infection (SSI) rates in the United States, 1992-1998: the National Nosocomial Infections Surveillance System basic SSI risk index. Clin Infect Dis 2001 Sep;33(Suppl 2):S69-77. PMID 11486302