A Critical Analysis of Patient Safety Practices
Evidence Report/Technology Assessment Number 43
Making Health Care Safer: A Critical Analysis of Patient Safety Practices

Prepared for:
Agency for Healthcare Research and Quality
U.S. Department of Health and Human Services
2101 East Jefferson Street
Rockville, MD 20852
www.ahrq.gov
Contract No. 290-97-0013

Prepared by:
University of California at San Francisco (UCSF)–Stanford University Evidence-based Practice Center

Editorial Board
Kaveh G. Shojania, M.D. (UCSF)
Bradford W. Duncan, M.D. (Stanford)
Kathryn M. McDonald, M.M. (Stanford)
Robert M. Wachter, M.D. (UCSF)

Managing Editor
Amy J. Markowitz, JD

Project Director
Robert M. Wachter, MD

UCSF–Stanford EPC Coordinator
Kathryn M. McDonald, MM
AHRQ Publication 01-E058 July 20, 2001 (Revised printing, indexed)
Preface

The Agency for Healthcare Research and Quality (AHRQ), formerly the Agency for Health Care Policy and Research (AHCPR), through its Evidence-based Practice Centers (EPCs), sponsors the development of evidence reports and technology assessments to assist public- and private-sector organizations in their efforts to improve the quality of health care in the United States. The reports and assessments provide organizations with comprehensive, science-based information on common, costly medical conditions and new health care technologies. The EPCs systematically review the relevant scientific literature on topics assigned to them by AHRQ and conduct additional analyses when appropriate prior to developing their reports and assessments.

To bring the broadest range of experts into the development of evidence reports and health technology assessments, AHRQ encourages the EPCs to form partnerships and enter into collaborations with other medical and research organizations. The EPCs work with these partner organizations to ensure that the evidence reports and technology assessments they produce will become building blocks for health care quality improvement projects throughout the Nation. The reports undergo peer review prior to their release.

AHRQ expects that the EPC evidence reports and technology assessments will inform individual health plans, providers, and purchasers as well as the health care system as a whole by providing important information to help improve health care quality.

We welcome written comments on this evidence report. They may be sent to: Director, Center for Practice and Technology Assessment, Agency for Healthcare Research and Quality, 6010 Executive Blvd., Suite 300, Rockville, MD 20852.

John M. Eisenberg, M.D.
Director
Agency for Healthcare Research and Quality
Douglas B. Kamerow, M.D.
Director, Center for Practice and Technology Assessment
Agency for Healthcare Research and Quality
The authors of this report are responsible for its content. Statements in the report should not be construed as endorsement by the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services of a particular drug, device, test, treatment, or other clinical service.
Structured Abstract

Objectives: Patient safety has received increased attention in recent years, but mostly with a focus on the epidemiology of errors and adverse events, rather than on practices that reduce such events. This project aimed to collect and critically review the existing evidence on practices relevant to improving patient safety.

Search Strategy and Selection Criteria: Patient safety practices were defined as those that reduce the risk of adverse events related to exposure to medical care across a range of diagnoses or conditions. Potential patient safety practices were identified based on preliminary surveys of the literature and expert consultation. This process resulted in the identification of 79 practices for review. The practices focused primarily on hospitalized patients, but some involved nursing home or ambulatory patients. Protocols specified the inclusion criteria for studies and the structure for evaluation of the evidence regarding each practice. Pertinent studies were identified using various bibliographic databases (e.g., MEDLINE, PsycINFO, ABI/INFORM, INSPEC), targeted searches of the Internet, and communication with relevant experts.

Data Collection and Analysis: Included literature consisted of controlled observational studies, clinical trials, and systematic reviews found in the peer-reviewed medical literature, relevant non-health care literature, and “gray literature.” For most practices, the project team required that the primary outcome consist of a clinical endpoint (i.e., some measure of morbidity or mortality) or a surrogate outcome with a clear connection to patient morbidity or mortality. This criterion was relaxed for some practices drawn from the non-health care literature. The evidence supporting each practice was summarized using a prospectively determined format. The project team then used a predefined consensus technique to rank the practices according to the strength of evidence presented in practice summaries. A separate ranking was developed for research priorities.

Main Results: Practices with the strongest supporting evidence are generally clinical interventions that decrease the risks associated with hospitalization, critical care, or surgery. Many patient safety practices drawn primarily from nonmedical fields (e.g., use of simulators, bar coding, computerized physician order entry, crew resource management) deserve additional research to elucidate their value in the health care environment. The following 11 practices were rated most highly in terms of strength of the evidence supporting more widespread implementation:
• Appropriate use of prophylaxis to prevent venous thromboembolism in patients at risk;
• Use of perioperative beta-blockers in appropriate patients to prevent perioperative morbidity and mortality;
• Use of maximum sterile barriers while placing central intravenous catheters to prevent infections;
• Appropriate use of antibiotic prophylaxis in surgical patients to prevent postoperative infections;
• Asking that patients recall and restate what they have been told during the informed consent process;
• Continuous aspiration of subglottic secretions (CASS) to prevent ventilator-associated pneumonia;
• Use of pressure relieving bedding materials to prevent pressure ulcers;
• Use of real-time ultrasound guidance during central line insertion to prevent complications;
• Patient self-management for warfarin (Coumadin™) to achieve appropriate outpatient anticoagulation and prevent complications;
• Appropriate provision of nutrition, with a particular emphasis on early enteral nutrition in critically ill and surgical patients; and
• Use of antibiotic-impregnated central venous catheters to prevent catheter-related infections.
Conclusions: An evidence-based approach can help identify practices that are likely to improve patient safety. Such practices target a diverse array of safety problems. Further research is needed to fill the substantial gaps in the evidentiary base, particularly with regard to the generalizability of patient safety practices heretofore tested only in limited settings and to promising practices drawn from industries outside of health care.
This document is in the public domain and may be used and reprinted without permission except those copyrighted materials noted for which further reproduction is prohibited without the specific permission of copyright holders.

Suggested Citation: Shojania KG, Duncan BW, McDonald KM, et al., eds. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Evidence Report/Technology Assessment No. 43 (Prepared by the University of California at San Francisco–Stanford Evidence-based Practice Center under Contract No. 290-97-0013), AHRQ Publication No. 01-E058, Rockville, MD: Agency for Healthcare Research and Quality. July 2001.
CONTENTS

Summary ..... 1
Evidence Report ..... 9

PART I. OVERVIEW ..... 11
Chapter 1. An Introduction to the Report ..... 13
  1.1. General Overview ..... 13
  1.2. How to Use this Report ..... 18
  1.3. Acknowledgments ..... 20
Chapter 2. Drawing on Safety Practices from Outside Health Care ..... 25
Chapter 3. Evidence-based Review Methodology ..... 29

PART II. REPORTING AND RESPONDING TO PATIENT SAFETY PROBLEMS ..... 39
Chapter 4. Incident Reporting ..... 41
Chapter 5. Root Cause Analysis ..... 51

PART III. PATIENT SAFETY PRACTICES & TARGETS ..... 57
Section A. Adverse Drug Events (ADEs) ..... 58
Chapter 6. Computerized Physician Order Entry (CPOE) with Clinical Decision Support Systems (CDSSs) ..... 59
Chapter 7. The Clinical Pharmacist’s Role in Preventing Adverse Drug Events ..... 71
Chapter 8. Computer Adverse Drug Event (ADE) Detection and Alerts ..... 79
Chapter 9. Protocols for High-Risk Drugs: Reducing Adverse Drug Events Related to Anticoagulants ..... 87
Chapter 10. Unit-Dose Drug Distribution Systems ..... 101
Chapter 11. Automated Medication Dispensing Devices ..... 111
Section B. Infection Control ..... 118
Chapter 12. Practices to Improve Handwashing Compliance ..... 119
Chapter 13. Impact of Barrier Precautions in Reducing the Transmission of Serious Nosocomial Infections ..... 127
Chapter 14. Impact of Changes in Antibiotic Use Practices on Nosocomial Infections and Antimicrobial Resistance – Clostridium Difficile and Vancomycin-resistant Enterococcus (VRE) ..... 141
Chapter 15. Prevention of Nosocomial Urinary Tract Infections ..... 149
  Subchapter 15.1. Use of Silver Alloy Urinary Catheters ..... 149
  Subchapter 15.2. Use of Suprapubic Catheters ..... 155
Chapter 16. Prevention of Intravascular Catheter-Associated Infections ..... 163
  Subchapter 16.1. Use of Maximum Barrier Precautions During Central Venous Catheter Insertion ..... 165
  Subchapter 16.2. Use of Central Venous Catheters Coated with Antibacterial or Antiseptic Agents ..... 169
  Subchapter 16.3. Use of Chlorhexidine Gluconate at the Central Venous Catheter Insertion Site ..... 175
  Subchapter 16.4. Other Practices ..... 178
Chapter 17. Prevention of Ventilator-Associated Pneumonia ..... 185
  Subchapter 17.1. Patient Positioning: Semi-recumbent Positioning and Continuous Oscillation ..... 185
  Subchapter 17.2. Continuous Aspiration of Subglottic Secretions ..... 190
  Subchapter 17.3. Selective Digestive Tract Decontamination ..... 193
  Subchapter 17.4. Sucralfate and Prevention of VAP ..... 198
Section C. Surgery, Anesthesia, and Perioperative Medicine ..... 204
Chapter 18. Localizing Care to High-Volume Centers ..... 205
Chapter 19. Learning Curves for New Procedures – the Case of Laparoscopic Cholecystectomy ..... 213
Chapter 20. Prevention of Surgical Site Infections ..... 221
  Subchapter 20.1. Prophylactic Antibiotics ..... 221
  Subchapter 20.2. Perioperative Normothermia ..... 231
  Subchapter 20.3. Supplemental Perioperative Oxygen ..... 234
  Subchapter 20.4. Perioperative Glucose Control ..... 237
Chapter 21. Ultrasound Guidance of Central Vein Catheterization ..... 245
Chapter 22. The Retained Surgical Sponge ..... 255
Chapter 23. Pre-Anesthesia Checklists To Improve Patient Safety ..... 259
Chapter 24. The Impact Of Intraoperative Monitoring On Patient Safety ..... 265
Chapter 25. Beta-blockers and Reduction of Perioperative Cardiac Events ..... 271
Section D. Safety Practices for Hospitalized or Institutionalized Elders ..... 280
Chapter 26. Prevention of Falls in Hospitalized and Institutionalized Older People ..... 281
  Subchapter 26.1. Identification Bracelets for High-Risk Patients ..... 284
  Subchapter 26.2. Interventions that Decrease the Use of Physical Restraints ..... 287
  Subchapter 26.3. Bed Alarms ..... 291
  Subchapter 26.4. Special Hospital Flooring Materials to Reduce Injuries from Patient Falls ..... 293
  Subchapter 26.5. Hip Protectors to Prevent Hip Fracture ..... 295
Chapter 27. Prevention of Pressure Ulcers in Older Patients ..... 301
Chapter 28. Prevention of Delirium in Older Hospitalized Patients ..... 307
Chapter 29. Multidisciplinary Geriatric Consultation Services ..... 313
Chapter 30. Geriatric Evaluation and Management Units for Hospitalized Patients ..... 323
Section E. General Clinical Topics ..... 331
Chapter 31. Prevention of Venous Thromboembolism ..... 333
Chapter 32. Prevention of Contrast-Induced Nephropathy ..... 349
Chapter 33. Nutritional Support ..... 359
Chapter 34. Prevention of Clinically Significant Gastrointestinal Bleeding in Intensive Care Unit Patients ..... 368
Chapter 35. Reducing Errors in the Interpretation of Plain Radiographs and Computed Tomography Scans ..... 376
Chapter 36. Pneumococcal Vaccination Prior to Hospital Discharge ..... 385
  36.1. Vaccine Effectiveness ..... 386
  36.2. Vaccine Delivery Methods ..... 388
Chapter 37. Pain Management ..... 396
  Subchapter 37.1. Use of Analgesics in the Acute Abdomen ..... 396
  Subchapter 37.2. Acute Pain Services ..... 400
  Subchapter 37.3. Prophylactic Antiemetics During Patient-controlled Analgesia Therapy ..... 404
  Subchapter 37.4. Non-pharmacologic Interventions for Postoperative Pain ..... 406
Section F. Organization, Structure, and Culture ..... 411
Chapter 38. “Closed” Intensive Care Units and Other Models of Care for Critically Ill Patients ..... 413
Chapter 39. Nurse Staffing, Models of Care Delivery, and Interventions ..... 423
Chapter 40. Promoting a Culture of Safety ..... 447
Section G. Systems Issues and Human Factors ..... 458
Chapter 41. Human Factors and Medical Devices ..... 459
  Subchapter 41.1. The Use of Human Factors in Reducing Device-related Medical Errors ..... 459
  Subchapter 41.2. Refining the Performance of Medical Device Alarms ..... 462
  Subchapter 41.3. Equipment Checklists in Anesthesia ..... 466
Chapter 42. Information Transfer ..... 471
  Subchapter 42.1. Information Transfer Between Inpatient and Outpatient Pharmacies ..... 471
  Subchapter 42.2. Sign-Out Systems for Cross-Coverage ..... 476
  Subchapter 42.3. Discharge Summaries and Follow-up ..... 479
  Subchapter 42.4. Notifying Patients of Abnormal Results ..... 482
Chapter 43. Prevention of Misidentifications ..... 487
  Subchapter 43.1. Bar Coding ..... 487
  Subchapter 43.2. Strategies to Avoid Wrong-Site Surgery ..... 494
Chapter 44. Crew Resource Management and its Applications in Medicine ..... 501
Chapter 45. Simulator-Based Training and Patient Safety ..... 511
Chapter 46. Fatigue, Sleepiness, and Medical Errors ..... 519
Chapter 47. Safety During Transport of Critically Ill Patients ..... 534
  Subchapter 47.1. Interhospital Transport ..... 536
  Subchapter 47.2. Intrahospital Transport ..... 538
Section H. Role of the Patient ..... 544
Chapter 48. Procedures For Obtaining Informed Consent ..... 546
Chapter 49. Advance Planning For End-of-Life Care ..... 556
Chapter 50. Other Practices Related to Patient Participation ..... 569

PART IV. PROMOTING AND IMPLEMENTING SAFETY PRACTICES ..... 573
Chapter 51. Practice Guidelines ..... 575
Chapter 52. Critical Pathways ..... 581
Chapter 53. Clinical Decision Support Systems ..... 589
Chapter 54. Educational Techniques Used in Changing Provider Behavior ..... 595
Chapter 55. Legislation, Accreditation, and Market-Driven and Other Approaches to Improving Patient Safety ..... 601
PART V. ANALYZING THE PRACTICES ..... 611
Chapter 56. Methodology for Summarizing the Evidence for the Practices ..... 613
Chapter 57. Practices Rated by Strength of Evidence ..... 619
Chapter 58. Practices Rated by Research Priority ..... 627
Chapter 59. Listing of All Practices, Categorical Ratings, and Comments ..... 633

Appendix: List of Contributors ..... 649
Index ..... 655
Summary

Overview

Patient safety has become a major concern of the general public and of policymakers at the State and Federal levels. This interest has been fueled, in part, by news coverage of individuals who were the victims of serious medical errors and by the publication in 1999 of the Institute of Medicine’s (IOM’s) report To Err is Human: Building a Safer Health System. In its report, IOM highlighted the risks of medical care in the United States and shocked the sensibilities of many Americans, in large part through its estimates of the magnitude of medical-error-related deaths (44,000 to 98,000 deaths per year) and other serious adverse events. The report prompted a number of legislative and regulatory initiatives designed to document errors and begin the search for solutions. But Americans, who now wondered whether their next doctor’s or hospital visit might harm rather than help them, began to demand concerted action.

Three months after publication of the IOM report, an interagency Federal government group, the Quality Interagency Coordination Task Force (QuIC), released its response, Doing What Counts for Patient Safety: Federal Actions to Reduce Medical Errors and Their Impact. That report, prepared at the President’s request, both inventoried on-going Federal actions to reduce medical errors and listed more than 100 action items to be undertaken by Federal agencies. An action promised by the Agency for Healthcare Research and Quality (AHRQ), the Federal agency leading efforts to research and promote patient safety, was “the development and dissemination of evidence-based, best safety practices to provider organizations.” To initiate the work to be done in fulfilling this promise, AHRQ commissioned the University of California at San Francisco (UCSF)–Stanford University Evidence-based Practice Center (EPC) in January 2001 to review the scientific literature regarding safety improvement.
To accomplish this, the EPC established an Editorial Board that oversaw development of this report by teams of content experts who served as authors.
Defining Patient Safety Practices

Working closely with AHRQ and the National Forum for Quality Measurement and Reporting (the National Quality Forum, or NQF)—a public–private partnership formed in 1999 to promote a national health care quality agenda—the EPC began its work by defining a patient safety practice as a type of process or structure whose application reduces the probability of adverse events resulting from exposure to the health care system across a range of diseases and procedures. This definition is consistent with the dominant conceptual framework in patient safety, which holds that systemic change will be far more productive in reducing medical errors than will targeting and punishing individual providers. The definition’s focus on actions that cut across diseases and procedures also allowed the research team to distinguish patient safety activities from the more targeted quality improvement practices (e.g., practices designed to increase the use of beta-blockers in patients who are admitted to the hospital after having a myocardial infarction). The editors recognize, however, that this distinction is imprecise.
This evidence-based review also focuses on hospital care as a starting point because the risks associated with hospitalization are significant, the strategies for improvement are better documented there than in other health care settings, and the importance of patient trust is paramount. The report, however, also considers evidence regarding other sites of care, such as nursing homes, ambulatory care, and patient self-management. The results of this EPC study will be used by the NQF to identify a set of proven patient safety practices that should be used by hospitals. Identification of these practices by NQF will allow patients throughout the nation to evaluate the actions their hospitals and/or health care facilities have taken to improve safety.
Reporting the Evidence

As is typical for evidence-based reviews, the goal was to provide a critical appraisal of the evidence on the topic. This information would then be available to others to ensure that no practice unsupported by evidence would be endorsed and that no practice substantiated by a high level of proof would lack endorsement.

Readers familiar with the state of the evidence regarding quality improvement in areas of health care where this has been a research priority (e.g., cardiovascular care) may be surprised, and even disappointed, by the paucity of high-quality evidence for many patient safety practices in other areas of health care. One reason for this is the relative youth of the field. Just as there had been little public recognition of the risks of health care prior to the first IOM report, there has been relatively little attention paid to such risks – and strategies to mitigate them – among health professionals and researchers.

Moreover, there are a number of methodologic reasons why research in patient safety is particularly challenging. First, many practices (e.g., the presence of computerized physician order entry systems, modifying nurse staffing levels) cannot be the subject of double-blind studies because their use is evident to the participants. Second, capturing all relevant outcomes, including “near misses” (such as a nurse catching an excessive dosage of a drug just before it is administered to a patient) and actual harm, is often very difficult. Third, many effective practices are multidimensional, and sorting out precisely which part of the intervention works is often quite challenging. Fourth, many of the patient safety problems that generate the most concern (wrong-site surgery, for example) are uncommon enough that demonstrating the success of a “safety practice” in a statistically meaningful manner with respect to outcomes is all but impossible.
Finally, establishing firm epidemiologic links between presumed (and accepted) causes and adverse events is critical, and frequently difficult. For instance, in studying an intuitively plausible “risk factor” for errors, such as “fatigue,” analyses of errors commonly reveal the presence of fatigued providers (because many health care providers work long hours and/or late at night). The question is whether or not fatigue is over-represented among situations that lead to errors. The point is not that the problem of long work-hours should be ignored, but rather that strong epidemiologic methods need to be applied before concluding that an intuitive cause of errors is, in fact, causal.

Researchers now believe that most medical errors cannot be prevented by perfecting the technical work of individual doctors, nurses, or pharmacists. Improving patient safety often involves the coordinated efforts of multiple members of the health care team, who may adopt strategies from outside health care. The report reviews several practices whose evidence came
from the domains of commercial aviation, nuclear safety, and aerospace, and the disciplines of human factors engineering and organizational theory. Such practices include root cause analysis, computerized physician order entry and decision support, automated medication dispensing systems, bar coding technology, aviation-style preoperative checklists, promoting a “culture of safety,” crew resource management, the use of simulators in training, and integrating human factors theory into the design of medical devices and alarms. In reviewing these practices, the research team sought to be flexible regarding standards of evidence, and included research evidence that would not have been considered for medical interventions. For example, the randomized trial that is appropriately hailed as the “gold standard” in clinical medicine is not used in aviation, as this design would not capture all relevant information. Instead, detailed case studies and industrial engineering research approaches are utilized.
Methodology

To facilitate identification and evaluation of potential patient safety practices, the Editorial Board divided the content for the project into different domains. Some cover “content areas,” including traditional clinical areas such as adverse drug events, nosocomial infections, and complications of surgery, but also less traditional areas such as fatigue and information transfer. Other domains consist of practices drawn from broad (primarily nonmedical) disciplines likely to contain promising approaches to improving patient safety (e.g., information technology, human factors research, organizational theory).

Once this list was created—with significant input from patient safety experts, clinician–researchers, AHRQ, and the NQF Safe Practices Committee—the editors selected teams of authors with expertise in the relevant subject matter and/or familiarity with the techniques of evidence-based review and technology appraisal. The authors were given explicit instructions regarding search strategies for identifying safety practices for evaluation (including explicit inclusion and exclusion criteria) and criteria for assessing each practice’s level of evidence for efficacy or effectiveness in terms of study design and study outcomes.

Some safety practices did not meet the inclusion criteria because of the paucity of evidence regarding efficacy or effectiveness but were included in the report because an informed reader might reasonably expect them to be evaluated or because of the depth of public and professional interest in them. For such high profile topics (such as bar coding to prevent misidentifications), the researchers tried to fairly present the practice’s background, the experience with the practice thus far, and the evidence (and gaps in the evidence) regarding the practice’s value.

For each practice, authors were instructed to research the literature for information on:

• prevalence of the problem targeted by the practice;
• severity of the problem targeted by the practice;
• the current utilization of the practice;
• evidence on efficacy and/or effectiveness of the practice;
• the practice’s potential for harm;
• data on cost, if available; and
• implementation issues.
The report presents the salient elements of each included study (e.g., study design, population/setting, intervention details, results), and highlights any important weaknesses and
biases of these studies. Authors were not asked to formally synthesize or combine the evidence across studies (e.g., perform a meta-analysis) as part of their task. The Editorial Board and the Advisory Panel reviewed the list of domains and practices to identify gaps in coverage. Submitted chapters were reviewed by the Editorial Board and revised by the authors, aided by feedback from the Advisory Panel. Once the content was finalized, the editors analyzed and ranked the practices using a methodology summarized below.
Summarizing the Evidence and Rating the Practices

Because the report is essentially an anthology of a diverse and extensive group of patient safety practices with highly variable relevant evidence, synthesizing the findings was challenging, but necessary to help readers use the information. Two of the most obvious uses for this report are: 1) to inform efforts of providers and health care organizations to improve the safety of the care they provide, and 2) to inform AHRQ, other research agencies, and foundations about potentially fruitful investments for their research support. Other uses of the information are likely. In fact, the National Quality Forum plans to use this report to help identify a list of patient safety practices that consumers and others should know about as they choose among the health care provider organizations to which they have access. In an effort to assist both health care organizations interested in taking substantive actions to improve patient safety and research funders seeking to spend scarce resources wisely, AHRQ asked the EPC to rate the evidence and rank the practices by opportunity for safety improvement and by research priority. This report, therefore, contains two lists. To create these lists, the editors aimed to separate the practices that are most promising or effective from those that are least so on a range of dimensions, without implying any ability to calibrate a finely gradated scale for those practices in between. The editors also sought to present the ratings in an organized, accessible way while highlighting the limitations inherent in their rating schema. Proper metrics for more precise comparisons (e.g., cost-effectiveness analysis) require more data than are currently available in the literature. Three major categories of information were gathered to inform the rating exercise:

• Potential Impact of the Practice: based on prevalence and severity of the patient safety target, and current utilization of the practice;
• Strength of the Evidence Supporting the Practice: including an assessment of the relative weight of the evidence, effect size, and need for vigilance to reduce any potential negative collateral effects of the practice; and
• Implementation: considering costs, logistical barriers, and policy issues.
For all of these data inputs into the practice ratings, the primary goal was to find the best available evidence from publications and other sources. Because the literature has not been previously organized with an eye toward addressing each of these areas, most of the estimates could be improved with further research, and some are informed by only general and somewhat speculative knowledge. In the summaries, the editors have attempted to highlight those assessments made with limited data. The four-person editorial team independently rated each of the 79 practices using general scores (e.g., High, Medium, Low) for a number of dimensions, including those italicized in the
section above. The editorial team convened for 3 days in June 2001 to compare scores, discuss disparities, and come to consensus about ratings for each category. In addition, each member of the team considered the totality of information on potential impact and support for a practice to score each of these factors on a 0 to 10 scale (creating a “Strength of the Evidence” list). For these ratings, the editors took the perspective of a leader of a large health care enterprise (e.g., a hospital or integrated delivery system) and asked the question, “If I wanted to improve patient safety at my institution over the next 3 years and resources were not a significant consideration, how would I grade this practice?” For this rating, the Editorial Board explicitly chose not to formally consider the difficulty or cost of implementation. Rather, the rating simply reflected the strength of the evidence regarding the effectiveness of the practice and the probable impact of its implementation on reducing adverse events related to health care exposure. If the patient safety target was rated as “High” impact and there was compelling evidence (i.e., “High” relative study strength) that a particular practice could significantly reduce (e.g., “Robust” effect size) the negative consequences of exposure to the health care system (e.g., hospital-acquired infections), raters were likely to score the practice close to 10. If the studies were less convincing, the effect size was less robust, or there was a need for a “Medium” or “High” degree of vigilance because of potential harms, then the rating would be lower. At the same time, the editors also rated the usefulness of conducting more research on each practice, emphasizing whether there appeared to be questions that a research program might have a reasonable chance of addressing successfully (creating a “Research Priority” list).
Here, they asked themselves, “If I were the leader of a large agency or foundation committed to improving patient safety, and were considering allocating funds to promote additional research, how would I grade this practice?” If there was a simple gap in the evidence that could be addressed by a research study, or if the practice was multifaceted and implementation could be eased by determining the specific elements that were effective, then the research priority was high. (For this reason, some practices are highly rated on both the “Strength of the Evidence” and “Research Priority” lists.) If the area was one of high potential impact (i.e., a large number of patients at risk for morbid or mortal adverse events) and a practice had been inadequately researched, then it would also receive a relatively high rating for research need. Practices might receive low research scores if they held little promise (e.g., relatively few patients are affected by the safety problem addressed by the practice, or a significant body of knowledge already demonstrates the practice’s lack of utility). Conversely, a practice that was clearly effective, low cost, and easy to implement would not require further research and would also receive low research scores. In rating both the strength of the evidence and the research priority, the purpose was not to report precise 0 to 10 scores, but to develop general “zones” or practice groupings. This caveat is important because better methods for making comparative ratings do exist when the necessary data inputs are available; here, the relative paucity of the evidence dissuaded the editors from using a more precise and sophisticated, but ultimately unfeasible, approach.
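The grouping of independent 0 to 10 rater scores into coarse “zones” can be illustrated with a small sketch. This is purely hypothetical: the report does not specify a formula (the editors reached their groupings by discussion and consensus), and the practice names, scores, and cutoffs below are invented for illustration only.

```python
from statistics import median

# Hypothetical independent 0-10 ratings from four raters for a few
# practices; names and numbers are illustrative, not from the report.
ratings = {
    "VTE prophylaxis": [9, 8, 9, 9],
    "Perioperative beta-blockers": [8, 8, 7, 9],
    "Bar coding": [3, 4, 2, 3],
}

def zone(scores, high=7, medium=4):
    """Collapse a set of 0-10 rater scores into a coarse zone.

    The median is used so that one outlying rater cannot dominate;
    the cutoffs (7 and 4) are assumed values, not the report's.
    """
    m = median(scores)
    if m >= high:
        return "High"
    if m >= medium:
        return "Medium"
    return "Low"

for practice, scores in ratings.items():
    print(f"{practice}: median={median(scores)}, zone={zone(scores)}")
```

The point of such a scheme is exactly the caveat above: with sparse evidence, only the broad zone (not the precise score) carries meaning.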
Clear Opportunities for Safety Improvement

The following 11 patient safety practices were the most highly rated (of the 79 practices reviewed in detail in the full report and ranked in the Executive Summary Addendum, AHRQ Publication No. 01-E057b) in terms of strength of the evidence supporting more widespread implementation. Practices appear in descending order, with the most highly rated practices listed first. Because of the imprecision of the ratings, the editors did not further divide the practices, nor indicate where there were ties.

• Appropriate use of prophylaxis to prevent venous thromboembolism in patients at risk;
• Use of perioperative beta-blockers in appropriate patients to prevent perioperative morbidity and mortality;
• Use of maximum sterile barriers while placing central intravenous catheters to prevent infections;
• Appropriate use of antibiotic prophylaxis in surgical patients to prevent perioperative infections;
• Asking that patients recall and restate what they have been told during the informed consent process;
• Continuous aspiration of subglottic secretions (CASS) to prevent ventilator-associated pneumonia;
• Use of pressure relieving bedding materials to prevent pressure ulcers;
• Use of real-time ultrasound guidance during central line insertion to prevent complications;
• Patient self-management for warfarin (Coumadin™) to achieve appropriate outpatient anticoagulation and prevent complications;
• Appropriate provision of nutrition, with a particular emphasis on early enteral nutrition in critically ill and surgical patients; and
• Use of antibiotic-impregnated central venous catheters to prevent catheter-related infections.
This list is generally weighted toward clinical rather than organizational matters, and toward care of the very ill rather than the mildly or chronically ill. Although more than a dozen practices considered were general safety practices that have been the focus of patient safety experts for decades (i.e., computerized physician order entry, simulators, creating a “culture of safety,” crew resource management), most research on patient safety has focused on more clinical areas. The potential application of practices drawn from outside health care has excited the patient safety community, and many such practices have apparent validity. However, clinical research has been promoted by the significant resources applied to it through Federal, foundation, and industry support. Since this study went where the evidence took it, more clinical practices rose to the top as potentially ready for implementation.
Clear Opportunities for Research

Until recently, patient safety research has had few champions, and even fewer champions with resources to bring to bear. The recent initiatives from AHRQ and other funders are a promising shift in this historical situation, and should yield important benefits. In terms of the research agenda for patient safety, the following 12 practices rated most highly:

• Improved perioperative glucose control to decrease perioperative infections;
• Localizing specific surgeries and procedures to high volume centers;
• Use of supplemental perioperative oxygen to decrease perioperative infections;
• Changes in nursing staffing to decrease overall hospital morbidity and mortality;
• Use of silver alloy-coated urinary catheters to prevent urinary tract infections;
• Computerized physician order entry with computerized decision support systems to decrease medication errors and adverse events primarily due to the drug ordering process;
• Limitations placed on antibiotic use to prevent hospital-acquired infections due to antibiotic-resistant organisms;
• Appropriate use of antibiotic prophylaxis in surgical patients to prevent perioperative infections;
• Appropriate use of prophylaxis to prevent venous thromboembolism in patients at risk;
• Appropriate provision of nutrition, with a particular emphasis on early enteral nutrition in critically ill and post-surgical patients;
• Use of analgesics in the patient with an acutely painful abdomen without compromising diagnostic accuracy; and
• Improved handwashing compliance (via education/behavior change; sink technology and placement; or the use of antimicrobial washing substances).
Of course, the vast majority of the 79 practices covered in this report would benefit from additional research. In particular, some practices with longstanding success outside of medicine (e.g., promoting a culture of safety) deserve further analysis, but were not explicitly ranked due to their unique nature and the present weakness of the evidentiary base in the health care literature.
Conclusions

This report represents a first effort to approach the field of patient safety through the lens of evidence-based medicine. Just as To Err is Human sounded a national alarm regarding patient safety and catalyzed other important commentaries regarding this vital problem, this review seeks to plant a seed for future implementation and research by organizing and evaluating the relevant literature. Although all those involved tried hard to include all relevant practices and to review all pertinent evidence, inevitably some of both were missed. Moreover, the effort to grade and rank practices, many of which have only the beginnings of an evidentiary base, was admittedly ambitious and challenging. It is hoped that this report provides a template for future clinicians, researchers, and policy makers as they extend, and inevitably improve upon, this work. In the detailed reviews of the practices, the editors have tried to define (to the extent possible from the literature) the associated costs—financial, operational, and political. However, these considerations were not factored into the summary ratings, nor were judgments made regarding the appropriate expenditures to improve safety. Such judgments, which involve complex tradeoffs between public dollars and private ones, and between saving lives by improving patient safety versus doing so by investing in other health care or non-health care practices, will obviously be critical. However, the public reaction to the IOM report, and the media and legislative responses that followed it, seem to indicate that Americans are highly concerned about the risks of medical errors and would welcome public and private investment to decrease them. It seems logical to infer that Americans value safety during a hospitalization just as highly as safety during a transcontinental flight.
Evidence Report
PART I. OVERVIEW Chapter 1. An Introduction to the Report 1.1. General Overview 1.2. How to Use this Report 1.3. Acknowledgments Chapter 2. Drawing on Safety Practices from Outside Health Care Chapter 3. Evidence-based Review Methodology
Chapter 1. An Introduction to the Report

1.1. General Overview

The Institute of Medicine’s (IOM) report, To Err is Human: Building a Safer Health System,1 highlighted the risks of medical care in the United States. Although its prose was measured and its examples familiar to many in the health professions (for example, the studies estimating that up to 98,000 Americans die each year from preventable medical errors were a decade old), the report shocked the sensibilities of many Americans. More importantly, the report undermined the fundamental trust that many previously had in the health care system. The IOM report prompted a number of legislative and regulatory initiatives designed to document errors and begin the search for solutions. These initiatives were further catalyzed by a second IOM report entitled Crossing the Quality Chasm: A New Health System for the 21st Century,2 which highlighted safety as one of the fundamental aims of an effective system. But Americans, who now wondered whether their next health care encounter might harm rather than help them, began to demand concerted action. Making Health Care Safer represents an effort to determine what might be done to improve the safety of patients. In January 2001, the Agency for Healthcare Research and Quality (AHRQ), the Federal agency taking the lead in studying and promoting patient safety, commissioned the UCSF-Stanford Evidence-based Practice Center (EPC) to review the literature as it pertained to improving patient safety. In turn, the UCSF-Stanford EPC engaged 40 authors at 11 institutions around the United States to review more than 3000 pieces of literature regarding patient safety practices.
Although AHRQ expected that this evidence-based review would have multiple audiences, the National Quality Forum (NQF)—a public-private partnership formed in the Clinton Administration to promote a national quality agenda—was particularly interested in the results as it began its task of recommending and implementing patient safety practices supported by the evidence.

A Definition of “Patient Safety Practices”

One of our first tasks was to define “patient safety practices” in a manner that would allow us and our reviewers to assess the relevant evidence. Given our task—producing a full report in less than six months—a complete review of all practices associated with improving health care quality was both impossible and off-point. Working closely with AHRQ and NQF, we chose the following definition:

A Patient Safety Practice is a type of process or structure whose application reduces the probability of adverse events resulting from exposure to the health care system across a range of diseases and procedures.

A few elements of the definition deserve emphasis. First, our focus on processes and structure allowed us to emphasize changing the system to make it safer rather than targeting and removing individual “bad apples.” We recognize that when individuals repeatedly perform poorly and are unresponsive to education and remediation, action is necessary. Nevertheless, there is virtual unanimity among patient safety experts that a focus on systemic change will be far more productive than an emphasis on finding and punishing poor performers. Second, looking at crosscutting diseases and procedures allowed us to distinguish patient safety activities from more targeted quality improvement practices. Admittedly, this dichotomization is imprecise. All would agree that a practice that makes it less likely that a
patient will receive the wrong medication or have the wrong limb amputated is a patient safety practice. Most would also agree that practices designed to increase the use of beta-blockers in patients admitted to the hospital after myocardial infarction or to improve the technical performance of hernia repair would be quality improvement strategies rather than patient safety practices. When there was a close call, we generally chose to be inclusive. For example, we included practices designed to increase the rate of appropriate prophylaxis against venous thromboembolism, the appropriateness of pain management, and the ascertainment of patient preferences regarding end-of-life care. We recognize that these practices blur the line somewhat between safety and quality, but we believe that they are reasonable examples of ways to address potential patient safety hazards. Third, we realized it would be impossible to review every potential safety practice and recognized that some gaps in the evidence were inevitable, so at times we reviewed illustrative examples that might be broadly generalizable. For example:

• Methods to avoid misread radiographs (Chapter 35), where the content could be relevant to analogous efforts to avoid misread electrocardiograms or laboratory studies;
• Decreasing the risk of dangerous drugs (Chapter 9), where the focus was on anticoagulants, but similar considerations might be relevant for chemotherapy and other high-risk drugs;
• Localizing care to specialized providers, which reviews geriatric units and intensivists (Chapters 30 and 38), but similar evidence may be relevant for the rapidly growing field of hospitalists;3-5 and
• The use of ultrasound guidance for central line placement (Chapter 21); the premise (decreasing the risk of an invasive procedure through radiologic localization) may also be relevant for ultrasound guidance while performing other challenging procedures, such as thoracentesis.
A Focus on Hospital Care

Most of the literature regarding medical errors has been drawn from hospital care.6-22 For example, the two seminal studies on medical error,22,23 from which the oft-cited extrapolations of yearly deaths from medical error were derived, have highlighted the risks of inpatient care. We applaud recent studies examining the risks of errors in the ambulatory setting24 but believe that the hospital is an appropriate initial focus for an evidence-based review because the risks associated with hospitalization are high, strategies for improvement are better documented, and the importance of patient trust is paramount. That said, the reader will see that we allowed the evidence to take us to other sites of care. For example, although much of the literature regarding the occurrence and prevention of adverse drug events is hospital-based, more recent literature highlights outpatient issues and is included in this review. An example is the chapter on decreasing the risk of anticoagulant treatment (Chapter 9), in which two of the most promising practices involve outpatient anticoagulation clinics and patient self-monitoring at home. Similarly, strategies to prevent falls or pressure ulcers are relevant to nursing home patients as well as those in hospitals, and many studies that shed light on these issues come from the former setting.
The Evidence

Chapter 3 describes our strategy for evidence review. As in other evidence-based reviews, we set the bar high. One would not want to endorse a practice unsupported by evidence, nor withhold one substantiated by a high level of proof. In the end, we aimed to identify practices whose supporting evidence was so robust that immediate widespread implementation would lead to major improvements in patient safety. Additionally, we hoped to identify several practices whose promise merited a considerable investment in additional research, but whose evidentiary base was insufficient for immediate endorsement. The results of this effort are summarized in Part V of the Report. Readers familiar with the state of the evidence regarding quality improvement in areas where this has been a research priority (eg, cardiovascular care) may be surprised and even disappointed by the paucity of high quality evidence for many patient safety practices. The field is young. Just as there had been little public recognition of the risks of health care prior to the first IOM report, there has been relatively little attention paid to such risks—and strategies to mitigate them—among health professionals and researchers. Nevertheless, we found a number of practices supported by high quality evidence for which widespread implementation would save many thousands of lives. Moreover, there are important methodologic reasons why research in patient safety is particularly challenging. First is the problem of blinding. The physician who has begun to use a new computerized order entry system cannot be blinded to the intervention or its purpose. Second, it is sometimes difficult to measure important outcomes. As in aviation, enormous benefits can be reaped from analyzing “near misses” (with no ultimate harm to patients),25,26 and yet these outcomes cannot be reliably counted in the absence of potentially obtrusive, and often very expensive, observation.
Third, many effective practices are multidimensional, and sorting out precisely which part of the intervention works is often quite challenging. Fourth, many of the patient safety problems that generate the most concern (wrong-site surgery, for example) are probably uncommon. This makes demonstrating the success of a “safety practice” in a statistically meaningful manner with respect to outcomes all but impossible. Finally, establishing firm epidemiologic links between presumed (and accepted) causes and adverse events is critical, and frequently difficult. For instance, verbal orders from doctors to nurses are regarded as a cause of medication errors almost as a matter of dogma, with many hospitals prohibiting or strongly discouraging this practice except in emergency situations.27 Yet, the one study that we could identify that specifically and comprehensively addressed this issue28 actually reported fewer errors among verbal medication orders compared with written medication orders. A similar relationship might be found studying other intuitively plausible “risk factors” for errors, such as “fatigue.” Because many health care providers work long hours and/or late at night, analyses of errors will commonly reveal fatigued providers. The question is whether or not fatigue is over-represented among situations that lead to errors. As discussed in Chapter 46, the evidence supporting fatigue as a contributor to adverse events is surprisingly mixed. The point is not that the problem of long work-hours should be ignored, but rather that strong epidemiologic methods need to be applied before concluding that an intuitive cause of errors is in fact causal. These methodologic issues are further explored in Chapters 3 (methods for analyzing the individual practices) and 56 (methods for summarizing the overall evidence). Improving patient safety is a team effort, and the playbook is often drawn from fields outside of health care.
Most medical errors cannot be prevented by perfecting the technical work of individual doctors, nurses or pharmacists. Improving patient safety often involves the
coordinated efforts of multiple members of the health care team, who may adopt strategies from outside health care. Thus, our teams of authors and advisors included physicians, pharmacists, nurses, and experts from non-medical fields. The literature we reviewed was often drawn from journals, books, or Web sites that will not be on most doctors’ reading lists. We reviewed several promising practices whose evidence came from the domains of commercial aviation, nuclear safety, and aerospace, and the disciplines of human factors engineering and organizational theory. In reviewing these practices, we tried to be flexible regarding standards of evidence. For example, the randomized trial that is appropriately hailed as the “gold standard” in health care is rarely used in aviation, which instead relies on analyses of detailed case studies and industrial engineering research approaches. (Examples and additional discussion of this issue can be found in Chapter 2.) We also limited our discussion to the existing practices, recognizing that future technology may make the ones we reviewed obsolete. For example, much of the struggle to find safe ways to administer warfarin (Chapter 9) would be rendered moot by the development of a much safer, but equally effective oral anticoagulant that did not require monitoring. Similarly, the evidence regarding changing the flooring of rooms to decrease falls (Subchapter 26.4) indicated that present options may decrease the harm from falls but actually increase their rate. Clearly, a better surface would make falls both less likely and less harmful. Such a surface has not yet been tested. Finally, we have tried to define (to the extent possible from the literature) the costs— financial, operational, and political—associated with the patient safety practices we considered. However, we have not made judgments regarding the appropriate expenditures to improve safety. 
These judgments, which involve complex tradeoffs between public dollars and private ones, and between saving lives by improving patient safety versus doing so by investing in other health care or non-health care practices, will obviously be critical. However, the public reaction to the IOM report, and the media and legislative responses that followed it, seem to indicate that Americans are highly concerned about the risks of medical errors and would welcome public and private investment to decrease them. It seems logical to infer that Americans value safety during a hospitalization just as highly as safety during a transcontinental flight.

The Decision to Include and Exclude Practices

The patient safety/quality interface was only one of several areas that called for judgments regarding which practices to include or exclude from the Report. In general (and quite naturally for an evidence-based review), we excluded those practices for which we found little or no supporting evidence. However, we recognize that patient safety is of great public and professional interest, and that the informed reader might expect to find certain topics in such a review. Therefore, we included several areas notwithstanding their relatively meager evidentiary base. For such high profile topics (such as bar coding to prevent misidentifications, Subchapter 43.1), we tried to fairly present the practice’s background, the experience with the practice thus far, and the evidence (and gaps in the evidence) regarding its value. In many of these cases, we end by encouraging additional study or demonstration projects designed to prove whether the practices live up to their promise. Conversely, another very different group of practices lacked evidence and were excluded from the review. These practices were characterized by their largely self-evident value (in epidemiologic terms, their “face validity”).
For example, large randomized studies of the removal of concentrated potassium chloride from patient care floors surely are not necessary in order to recommend this practice as a sensible way of preventing egregious errors that should
never occur. Although some of these types of practices were not included in this “evidence-based” Report, the reader should not infer their exclusion as a lack of endorsement. A cautionary note is in order when considering such “obviously beneficial” practices. Even an apparently straightforward practice like “signing the site” to prevent surgery or amputation of the wrong body part may lead to unexpected opportunities for error. As mentioned in Subchapter 43.2, some surgeons adopt the practice of marking the intended site, while others mark the site to avoid. The clinical research literature furnishes enough examples of practices that everyone “knew” to be beneficial but proved not to be (or even proved to be harmful) once good studies were conducted (antiarrhythmic therapy for ventricular ectopy29 or hormone replacement therapy to prevent cardiac deaths,30 for example) that it is reasonable to ask for high-quality evidence for most practices. This is particularly true when practices are expensive, complex to implement, or carry their own risks.

Other Content Issues

There may appear to some readers to be an inordinate focus on clinical issues versus more general patient safety practices. In this and other matters, we went where the evidence took us. Although more than a dozen chapters of the Report consider general safety practices that have been the focus of many patient safety experts for decades (ie, computerized order entry, simulators, crew resource management), most research on patient safety, in fact, has focused on more clinical matters. It is likely that some of this is explained by the previous “disconnect” between research in patient safety and its application. We are hopeful that the Report helps to bridge this gap. We also think it likely that clinical research has been promoted by the significant resources applied to it through Federal, foundation, and industry support.
Until recently, patient safety research has had few champions, and even fewer champions with resources. The recent initiatives from AHRQ and other funders are a promising shift in this historical situation, and should yield important benefits. The reader will notice that there is relatively little specific coverage of issues in pediatrics, obstetrics, and psychiatry. Most of the patient safety practices we reviewed have broad applicability to those fields as well as larger fields such as surgery and medicine. Much of the research in the former fields was too disease-specific to include in this volume. For example, practices to improve the safety of childbirth, although exceptionally important, were excluded because they focused on the care of patients with a single “condition,” just as we excluded research focused specifically on the care of patients with pneumonia or stroke. Readers may also be surprised by the relatively small portion of the Report devoted to the prevention of high-profile and “newsworthy” errors. Even if much of the national attention to patient safety stemmed from concerns about wrong-site surgery or transfusion mix-ups, in fact these are not the dominant patient safety problems today. If widespread use of hip protectors (Subchapter 26.5) leads to a marked decrease in injuries from patient falls, implementing this safety practice would be more important than preventing the few wrong-site surgeries each year, although the former seem far less likely to garner attention in a tabloid.

Conclusions

Making Health Care Safer represents a first effort to approach the field of patient safety through the lens of evidence-based medicine. Just as To Err is Human sounded a national alarm regarding patient safety and catalyzed other important commentaries regarding this vital problem, this review is a germinal effort to mine the relevant literature.
Although we and the authors tried hard to include all relevant practices and to review all pertinent evidence, we
inevitably missed some of both. Moreover, our effort to rank practices (Part V), many of which have only the beginnings of an evidentiary base, was admittedly ambitious and challenging. We hope that the Report provides a template for future clinicians, researchers, and policy makers as they extend, and inevitably improve upon, our work.

1.2. How to Use this Report

Organizational Framework

This document is divided into five parts:

Part I – The overview introduces many of the methodologic, content, and policy issues.

Part II – We describe, and present the evidence regarding, two practices that are used to report and respond to patient safety problems: incident reporting and root cause analysis. Since both of these “practices” have relevance for all of the patient safety targets and practices covered in Part III, we neither grade them nor rank them.

Part III – In 45 chapters, we review the evidence regarding the utility of 79 patient safety practices. Each chapter is structured in a standard fashion, as follows:

• Background – of the patient safety problem and the practice;
• Practice Description – in which we try to present the practice at a level of detail that would allow a reader to determine the practice’s applicability to their setting;
• Prevalence and Severity of the Target Safety Problem – here we try to answer the following questions: How common is the safety problem the practice is meant to address? How often does the problem lead to harm? How bad is the harm when it occurs?;
• Opportunities for Impact – in this section, we consider the present-day use of the patient safety practice. For example, we found that the use of “unit-dose” drug dispensing was quite common in US hospitals, and thus the opportunity to make an impact with wider dissemination of this practice was relatively low. Conversely, computerized physician order entry is still relatively uncommon, and therefore (assuming it is effective) its widespread implementation could have a far larger impact;
• Study Designs – we review the designs of the major studies evaluating the practice. Similarly, Study Outcomes looks at the kinds of outcomes (eg, adverse drug events, surgical complications, mortality) that were considered. Our criteria for grading the evidence relate to both design and outcomes (more information on other methodologic issues appears in Chapter 3);
• Evidence for Effectiveness of the Practice – here the authors summarize the findings of the studies and comment on any methodologic concerns that might affect the strength of these findings. This section is often accompanied by tables summarizing the studies and their findings;
• Potential for Harm – many practices that are effective in improving patient safety nonetheless carry the potential for harm. More widespread use of antibiotic prophylaxis or antibiotic-impregnated urinary or vascular catheters could prevent individual hospital-acquired infections yet breed antibiotic resistance. Increasing the use of barrier precautions could also prevent infections, but might lead caregivers to visit patients less often. These sections do not imply that harm is inevitable; rather, they highlight the issues that require vigilance during the implementation of effective practices;
• Costs and Implementation – here we consider the costs and other challenges of implementing the practice. We tried to uncover data related to the true costs of implementation (how much does an automatic drug dispensing machine cost a pharmacy?), but also considered some of the potential offsets when data were available. We also considered issues of feasibility: How much behavior change would be necessary to implement the practice? Would there be major political concerns or important shifts in who pays for care or is compensated for providing it? We tried not to assign values to such issues, but rather to present them so that policy makers could consider them; and
• Comment – here the authors highlight the state of the evidence, elucidate key implementation issues, and define a potential research agenda.
Part IV – In many ways a mirror of Part II, Part IV considers the ways in which patient safety practices can be implemented. The evidence is reviewed, and some of the benefits and limitations of various strategies are analyzed. As with Part II, we neither grade nor rank these “practices” in Part V, since each of these strategies can be applied to most of the patient safety targets and practices covered in Part III.

Part V – Here we analyze the practices. Using methods described in Chapter 56, we synthesize the evidence in Part III to grade and rank the patient safety practices across two major dimensions:

• Does the evidence support implementation of the practice to improve patient safety?
• Does the evidence support additional research into the practice?
Tips for Users of the Report

We envision that this evidence-based report of patient safety practices will be useful to a wide audience. Policy makers may use its contents and recommendations to promote or fund the implementation of certain practices. Similarly, local health care organization leaders (including leaders of hospitals, medical groups, or integrated delivery systems) may use the data and analysis to choose which practices to consider implementing or further promoting at their institutions.

Researchers will identify a wealth of potential research opportunities. This document is, in many ways, a road map for future research into patient safety. Those who fund research, including (but not limited to) AHRQ, which sponsored this report, will find literally dozens of areas ripe for future studies. In some cases, such studies may be expensive randomized controlled trials, while for other practices a simple meta-analysis or cost-effectiveness analysis may be enough to tip the scales toward or away from recommending a practice.
Clinicians and trainees will, we hope, find the material both interesting and relevant to their practices. One of the salutary consequences of the IOM’s reports has been their impact on the attitudes of our future health care providers. We have noticed at our institutions that students and post-graduate trainees in medicine, nursing, and pharmacy are increasingly taking a systems approach to health care. Over the past two years, several of us have heard medical residents refer to issues as “patient safety problems” that beg for a “systems solution,” terms that were absent from the medical ward a few years earlier. Clinicians must be part of the solutions to patient safety problems, and their increasing interest in the field is an exceedingly hopeful sign.

Finally, although this Report was not primarily written for patients and their families, we recognize the broad public interest in, and concern about, patient safety, and believe that much of the material will be compelling and potentially useful to the public. For years, quality advocates have lamented the relatively small impact that “quality report cards” appear to have on patients’ choices of health care providers and institutions. One study demonstrated that patients were more likely to respond to a newspaper report of an egregious error than to such quality report cards.31 These data indicate that patients may be interested in knowing whether their institutions, providers, and health plans are proactive in implementing practices that demonstrably decrease the risk of adverse events. Also, any general reader is likely to come away from this Report with heightened sensitivity to the unique challenges that the health care industry—which aims to provide compassionate, individualized care in a dynamic, organizationally and politically complex, and technologically fluid environment—faces in improving safety, and the significant strides that have already been made.
Continued improvement will require the infusion of substantial resources, and the public debate about their source, quantity, and target is likely to be lively and very important.

1.3. Acknowledgments

We are grateful to our authors, whose passion for the evidence overcame the tremendous time pressure driven by an unforgiving timeline. They succumbed to our endless queries, as together we attempted to link the literature from a variety of areas into an evidence-based repository whose presentation respected an overarching framework, notwithstanding the variety of practices reviewed.

Our Managing Editor, Amy J. Markowitz, JD, edited the manuscript with incredible skill and creativity. Each part was reviewed and improved multiple times because of her unending dedication to the project. Susan B. Nguyen devoted countless hours assisting the team in every imaginable task, and was critical in keeping the project organized and on track. Invaluable editorial and copyediting assistance was provided by Kathleen Kerr, Talia Baruth, and Mary Whitney. We thank Phil Tiso for helping produce the electronic version of the Report. We also thank A. Eugene Washington, MD, MSc, Director of the UCSF-Stanford Evidence-based Practice Center, for his guidance and support.

We were aided by a blue-ribbon Advisory Panel whose suggestions of topics, resources, and ideas were immensely helpful to us throughout the life of the project. Given the time constraints, there were doubtless many constructive suggestions that we were unable to incorporate into the Report. The responsibility for such omissions is ours alone. We want to recognize the important contribution of our advisors to our thinking and work. The Advisory Panel’s members are:
• David M. Gaba, MD, Director, Patient Safety Center of Inquiry at Veterans Affairs Palo Alto Health Care System; Professor of Anesthesia, Stanford University School of Medicine
• John W. Gosbee, MD, MS, Director of Patient Safety Information Systems, Department of Veterans Affairs National Center for Patient Safety
• Peter V. Lee, JD, President and Chief Executive Officer, Pacific Business Group on Health
• Arnold Milstein, MD, MPH, Medical Director, Pacific Business Group on Health; National Health Care Thought Leader, William M. Mercer, Inc.
• Karlene H. Roberts, PhD, Professor, Walter A. Haas School of Business, University of California, Berkeley
• Stephen M. Shortell, PhD, Blue Cross of California Distinguished Professor of Health Policy and Management and Professor of Organization Behavior, University of California, Berkeley
We thank the members of the National Quality Forum’s (NQF) Safe Practices Committee, whose co-chairs, Maureen Bisognano and Henri Manasse, Jr., PhD, ScD, and members helped set our course at the outset of the project. NQF President and Chief Executive Officer Kenneth W. Kizer, MD, MPH, Vice President Elaine J. Power, and Program Director Laura N. Blum supported the project and coordinated their committee members’ helpful suggestions as the project proceeded. We appreciate their patience during the intensive research period as they waited for the final product.

Finally, we are indebted to the Agency for Healthcare Research and Quality (AHRQ), which funded the project and provided us the intellectual support and resources needed to see it to successful completion. Jacqueline Besteman, JD, of AHRQ’s Center for Practice and Technology Assessment, and Gregg Meyer, MD, and Nancy Foster, both of AHRQ’s Center for Quality Improvement and Patient Safety, were especially important to this project. John Eisenberg, MD, MBA, AHRQ’s Administrator, was both a tremendous supporter of our work and an inspiration for it.

The Editors
San Francisco and Palo Alto, California
June 2001

References

1. Kohn L, Corrigan J, Donaldson M, editors. To Err Is Human: Building a Safer Health System. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine. National Academy Press; 2000.
2. Committee on Quality of Health Care in America, Institute of Medicine, editor. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
3. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335:514-517.
4. Wachter RM, Katz P, Showstack J, Bindman AB, Goldman L. Reorganizing an academic medical service: impact on cost, quality, patient satisfaction, and education. JAMA. 1998;279:1560-1565.
5. Meltzer DO, Manning WG, Shah M, Morrison J, Jin L, Levinson W. Decreased length of stay, costs and mortality in a randomized trial of academic hospitalists. Paper presented at: Society of General Internal Medicine 24th Annual Meeting; May 2-5, 2001; San Diego, CA.
6. Safren MA, Chapanis A. A critical incident study of hospital medication errors. Hospitals. 1960;34:32-34.
7. Mills DH, editor. Report on the Medical Insurance Feasibility Study, sponsored jointly by the California Medical Association and the California Hospital Association. San Francisco: Sutter Publications, Inc.; 1977.
8. Bosk CL. Forgive and Remember: Managing Medical Failure. Chicago: University of Chicago Press; 1979.
9. Abramson NS, Wald KS, Grenvik AN, Robinson D, Snyder JV. Adverse occurrences in intensive care units. JAMA. 1980;244:1582-1584.
10. Barker KN, Mikeal RL, Pearson RE, Illig NA, Morse ML. Medication errors in nursing homes and small hospitals. Am J Hosp Pharm. 1982;39:987-991.
11. Cooper JB, Newbower RS, Kitz RJ. An analysis of major errors and equipment failures in anesthesia management: considerations for prevention and detection. Anesthesiology. 1984;60:34-42.
12. Rubins HB, Moskowitz MA. Complications of care in a medical intensive care unit. J Gen Intern Med. 1990;5:104-109.
13. Silber JH, Williams SV, Krakauer H, Schwartz JS. Hospital and patient characteristics associated with death after surgery. A study of adverse occurrence and failure to rescue. Med Care. 1992;30:615-629.
14. Runciman WB, Sellen A, Webb RK, Williamson JA, Currie M, Morgan C, et al. The Australian Incident Monitoring Study. Errors, incidents and accidents in anaesthetic practice. Anaesth Intensive Care. 1993;21:506-519.
15. Donchin Y, Gopher D, Olin M, Badihi Y, Biesky M, Sprung CL, et al. A look into the nature and causes of human errors in the intensive care unit. Crit Care Med. 1995;23:294-300.
16. Bates DW, Cullen DJ, Laird N, Petersen LA, Small SD, Servi D, et al. Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group. JAMA. 1995;274:29-34.
17. Dean BS, Allan EL, Barber ND, Barker KN. Comparison of medication errors in an American and a British hospital. Am J Health Syst Pharm. 1995;52:2543-2549.
18. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust. 1995;163:458-471.
19. Andrews LB, Stocking C, Krizek T, Gottlieb L, Krizek C, Vargish T, et al. An alternative strategy for studying adverse events in medical care. Lancet. 1997;349:309-313.
20. Classen DC, Pestotnik SL, Evans RS, Lloyd JF, Burke JP. Adverse drug events in hospitalized patients. Excess length of stay, extra costs, and attributable mortality. JAMA. 1997;277:301-306.
21. Lesar TS, Lomaestro BM, Pohl H. Medication-prescribing errors in a teaching hospital. A 9-year experience. Arch Intern Med. 1997;157:1569-1576.
22. Thomas EJ, Studdert DM, Burstin HR, Orav EJ, Zeena T, Williams EJ, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38:261-271.
23. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-376.
24. Gandhi TK, Burstin HR, Cook EF, Puopolo AL, Haas JS, Brennan TA, et al. Drug complications in outpatients. J Gen Intern Med. 2000;15:149-154.
25. Barach P, Small SD. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ. 2000;320:759-763.
26. Leape LL. Reporting of medical errors: time for a reality check. Qual Health Care. 2000;9:144-145.
27. Dahl FC, Davis NM. A survey of hospital policies on verbal orders. Hosp Pharm. 1990;25:443-447.
28. West DW, Levine S, Magram G, MacCorkle AH, Thomas P, Upp K. Pediatric medication order error rates related to the mode of order transmission. Arch Pediatr Adolesc Med. 1994;148:1322-1326.
29. Preliminary report: effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators. N Engl J Med. 1989;321:406-412.
30. Hulley S, Grady D, Bush T, Furberg C, Herrington D, Riggs B, et al. Randomized trial of estrogen plus progestin for secondary prevention of coronary heart disease in postmenopausal women. Heart and Estrogen/progestin Replacement Study (HERS) Research Group. JAMA. 1998;280:605-613.
31. Mennemeyer ST, Morrisey MA, Howard LZ. Death and reputation: how consumers acted upon HCFA mortality information. Inquiry. 1997;34:117-128.
Chapter 2. Drawing on Safety Practices from Outside Health Care

The medical profession’s previous inattention to medical error, along with other publicized deficiencies (such as a notable lag in adopting sophisticated information technologies), has invited unfavorable comparisons between health care and other complex industries.1-5 The first of the two recent Institute of Medicine (IOM) reports on the quality of health care in America, To Err is Human: Building a Safer Health System,3 states that “health care is a decade or more behind other high-risk industries in its attention to ensuring basic safety.” Consequently, one of the goals of this project was to search these other industries for evidence-based safety strategies that might be applied to health care.

The relatively short timeline of this project necessitated a focused approach to the search for potentially applicable patient safety practices from non-health care writings. Fortunately, many relevant practices have received at least some analysis or empirical study in the health care literature. As a practical solution, we present original articles from outside health care as foundational and background material, rather than as a primary source of evidence. Specific topics and practices reviewed in Making Health Care Safer that clearly derive from fields outside health care include:
• Incident reporting (Chapter 4)
• Root cause analysis (Chapter 5)
• Computerized physician order entry and decision support as a means of reducing medication errors (Chapter 6)
• Automated medication dispensing systems (Chapter 11)
• Bar coding technology to avoid misidentification errors (Subchapter 43.1)
• Aviation-style preoperative checklists for anesthesia equipment (Chapter 23 and Subchapter 41.3)
• Promoting a “culture of safety” (Chapter 40)
• Crew resource management, a model for teamwork training and crisis response modeled after training approaches in aviation (Chapter 44)
• Simulators (of patients or clinical scenarios) as a training tool (Chapter 45)
• Human factors theory in the design of medical devices and alarms (Chapter 41)
Many readers may still wonder at the relative paucity of safety practices drawn from non-health care sources. While the headline-grabbing assessments of medicine’s safety have been criticized by researchers and likely overstate the hazard to patients,6-8 it is undeniable that some industries, most notably commercial aviation, have safety records far superior to health care’s. One issue we faced in compiling this evidence-based review was the extent to which specific practices could be identified as playing a direct and measurable role in this achievement. Interestingly, the same issue—ascertaining a causative variable—arose in reviewing the literature on anesthesia, likely the one field of medicine with a safety record that rivals aviation’s (see also Chapter 56).
As outlined in Chapter 24, significant complications attributable to anesthesia have decreased9-12 to the point that major morbidity and mortality are now too rare to serve as practical endpoints for measuring the quality of anesthesia care, even in large multicenter studies.13,14 In attempting to account for this decrease, however, it is very difficult to find evidence supporting a causative role for even the most plausible candidates, such as widely utilized intraoperative monitoring standards.15 In other words, while the field of anesthesia has clearly made tremendous strides in improving patient safety over the past 50 years, it is hard to discern a particular, isolated practice that accounts for the clear and dramatic secular change in its safety. While at one level a pragmatist might argue, “who cares, as long as it’s safe,” trying to adapt the lessons of anesthesia (or, for that matter, aviation) to the rest of health care is made more challenging by this tenuous causality.

Some might argue that, rather than pinpointing specific practices to embrace from other industries, health care institutions should emulate organizational models that promote safety in complex, high-risk industries that manage to operate with high reliability.16 Analyses of detailed and interesting case studies17-22 have fueled a school of thought known as high reliability theory, whose proponents suggest a number of organizational features that likely reduce the risk of “organizational accidents” and other hazards.
A cogently argued alternative position, often called normal accident theory, questions not only these prescriptions for organizational change, but fundamentally challenges the idea of high reliability in certain kinds of complex, “tightly coupled” organizations.23,24 These competing schools of thought offer interesting and valuable insights into the ways that organizational strategies foster safety, while cautioning about the ever-present threat of new sources of error that come with increasingly complex human and technical organizations. Unfortunately, this rich literature does not permit ready synthesis within the framework of evidence-based medicine, even using the less stringent standards we adopted in evaluating non-medical literature (see Chapters 1 and 3).

Even the more engineering-oriented of the disciplines with potential relevance to patient safety yielded a surprising lack of empirical evaluation of safety practices. For instance, numerous techniques for “human error identification” and “error mode prediction” purport to anticipate important errors and develop preventive measures prospectively.25-27 Their basic approach consists of breaking down the task of interest into component processes, and then assigning a measure of the likelihood of failure to each process. Many of the techniques mentioned in the literature have received little detailed description,25,26 and few have received any formal validation (eg, by comparing predicted failure modes with observed errors).28,29 Even setting aside demands for validation, the impact of applying these techniques has not been assessed.

Total quality management and continuous quality improvement techniques were championed as important tools for change in health care based on their presumed success in other industries, but evaluations of their impact on health care have revealed little evidence of success.30-33

In the end, we are left with our feet firmly planted in the middle of competing paradigms.
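The error-decomposition arithmetic underlying the “error mode prediction” techniques described above can be illustrated with a toy sketch. This is a hypothetical example only, not any of the cited THERP/HEART/JHEDI methods; the step names and probabilities are invented for illustration.

```python
# Toy "error mode prediction" arithmetic: decompose a task into component
# processes, assign each an assumed probability of failure, and combine
# them under the (strong) assumption that steps fail independently.

def task_failure_probability(step_probs):
    """P(at least one step fails) = 1 - product of (1 - p_i)."""
    p_all_succeed = 1.0
    for p in step_probs.values():
        p_all_succeed *= 1.0 - p
    return 1.0 - p_all_succeed

# Hypothetical medication-ordering task broken into component processes;
# these probabilities are illustrative assumptions, not measured rates.
steps = {
    "transcribe order": 0.010,
    "select drug": 0.002,
    "calculate dose": 0.005,
    "administer dose": 0.003,
}

p_fail = task_failure_probability(steps)
dominant_step = max(steps, key=steps.get)  # largest single contributor to risk
```

Even this toy version makes the validation problem concrete: the usefulness of the output depends entirely on whether the assigned step probabilities, and the independence assumption, match observed errors.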
One argues that an evidence-based, scientific approach has served health care well and should not be relaxed simply because a popular practice from a “safer” industry sounds attractive. The other counters that medicine’s slavish devotion to the scientific and epidemiologic method has placed us in a patient safety straitjacket, unable to consider the value of practices developed in other fields because of our myopic traditions and “reality.”

We see the merits in both arguments. Health care clearly has much to learn from other industries. Just as physicians must learn the “basic sciences” of immunology and molecular biology, providers and leaders interested in making health care safer must learn the “basic sciences” of organizational theory and human factors engineering. Moreover, the “cases” presented on rounds should, in addition to classical clinical descriptions, also include the tragedy of the Challenger and the successes of Motorola. On the other hand, an unquestioning embrace of dozens of promising practices from other fields is likely to be wasteful, distracting, and potentially dangerous. We are drawn to a dictum from the Cold War era—“Trust, but verify.”

References

1. Leape LL. Error in medicine. JAMA. 1994;272:1851-1857.
2. Chassin MR. Is health care ready for Six Sigma quality? Milbank Q. 1998;76:565-591,510.
3. Kohn L, Corrigan J, Donaldson M, editors. To Err Is Human: Building a Safer Health System. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine. National Academy Press; 2000.
4. Barach P, Small SD. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ. 2000;320:759-763.
5. Helmreich RL. On error management: lessons from aviation. BMJ. 2000;320:781-785.
6. Brennan TA. The Institute of Medicine report on medical errors–could it do harm? N Engl J Med. 2000;342:1123-1125.
7. McDonald CJ, Weiner M, Hui SL. Deaths due to medical errors are exaggerated in Institute of Medicine report. JAMA. 2000;284:93-95.
8. Sox HCJ, Woloshin S. How many deaths are due to medical error? Getting the number right. Eff Clin Pract. 2000;3:277-283.
9. Phillips OC, Capizzi LS. Anesthesia mortality. Clin Anesth. 1974;10:220-244.
10. Deaths during general anesthesia: technology-related, due to human error, or unavoidable? An ECRI technology assessment. J Health Care Technol. 1985;1:155-175.
11. Keenan RL, Boyan CP. Cardiac arrest due to anesthesia. A study of incidence and causes. JAMA. 1985;253:2373-2377.
12. Eichhorn JH. Prevention of intraoperative anesthesia accidents and related severe injury through safety monitoring. Anesthesiology. 1989;70:572-577.
13. Cohen MM, Duncan PG, Pope WD, Biehl D, Tweed WA, MacWilliam L, et al. The Canadian four-centre study of anaesthetic outcomes: II. Can outcomes be used to assess the quality of anaesthesia care? Can J Anaesth. 1992;39:430-439.
14. Moller JT, Johannessen NW, Espersen K, Ravlo O, Pedersen BD, Jensen PF, et al. Randomized evaluation of pulse oximetry in 20,802 patients: II. Perioperative events and postoperative complications. Anesthesiology. 1993;78:445-453.
15. Orkin FK. Practice standards: the Midas touch or the emperor's new clothes? Anesthesiology. 1989;70:567-571.
16. Reason J. Human error: models and management. BMJ. 2000;320:768-770.
17. Weick KE. Organizational culture as a source of high reliability. California Management Review. 1987;29:112-127.
18. Roberts KH. Some characteristics of high reliability organizations. Berkeley, CA: Center for Research in Management, University of California, Berkeley Business School; 1988.
19. LaPorte TR. The United States air traffic control system: increasing reliability in the midst of rapid growth. In: Mayntz R, Hughes TP, editors. The Development of Large Technical Systems. Boulder, CO: Westview Press; 1988.
20. Roberts KH. Managing high reliability organizations. California Management Review. 1990;32:101-113.
21. Roberts KH, Libuser C. From Bhopal to banking: organizational design can mitigate risk. Organizational Dynamics. 1993;21:15-26.
22. LaPorte TR, Consolini P. Theoretical and operational challenges of "high-reliability organizations": air-traffic control and aircraft carriers. International Journal of Public Administration. 1998;21:847-852.
23. Perrow C. Normal Accidents: Living with High-Risk Technologies. With a New Afterword and a Postscript on the Y2K Problem. Princeton, NJ: Princeton University Press; 1999.
24. Sagan SD. The Limits of Safety: Organizations, Accidents and Nuclear Weapons. Princeton, NJ: Princeton University Press; 1993.
25. Kirwan B. Human error identification techniques for risk assessment of high risk systems–Part 1: review and evaluation of techniques. Appl Ergon. 1998;29:157-177.
26. Kirwan B. Human error identification techniques for risk assessment of high risk systems–Part 2: towards a framework approach. Appl Ergon. 1998;29:299-318.
27. Hollnagel E, Kaarstad M, Lee HC. Error mode prediction. Ergonomics. 1999;42:1457-1471.
28. Kirwan B, Kennedy R, Taylor-Adams S, Lambert B. The validation of three human reliability quantification techniques–THERP, HEART and JHEDI: Part II–results of validation exercise. Appl Ergon. 1997;28:17-25.
29. Stanton NA, Stevenage SV. Learning to predict human error: issues of acceptability, reliability and validity. Ergonomics. 1998;41:1737-1756.
30. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76:625-648,511.
31. Gerowitz MB. Do TQM interventions change management culture? Findings and implications. Qual Manag Health Care. 1998;6:1-11.
32. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76:593-624,510.
33. Shortell SM, Jones RH, Rademaker AW, Gillies RR, Dranove DS, Hughes EF, et al. Assessing the impact of total quality management and organizational culture on multiple outcomes of care for coronary artery bypass graft surgery patients. Med Care. 2000;38:207-217.
Chapter 3. Evidence-based Review Methodology

Definition and Scope

For this project the UCSF-Stanford Evidence-based Practice Center (EPC) defined a patient safety practice as “a type of process or structure whose application reduces the probability of adverse events resulting from exposure to the health care system across a range of conditions or procedures.” Examples of practices that meet this definition include computerized physician order entry (Chapter 6), thromboembolism prophylaxis in hospitalized patients (Chapter 31), strategies to reduce falls among hospitalized elders (Chapter 26), and novel education strategies such as the application of “crew resource management” to train operating room staff (Chapter 44). By contrast, practices that are disease-specific and/or directed at the underlying disease and its complications (eg, use of aspirin or beta-blockers to treat acute myocardial infarction), rather than at complications of medical care, are not included as “patient safety practices.” Further discussion of these issues can be found in Chapter 1.

In some cases, the distinction between patient safety and more general quality improvement strategies is difficult to discern. Quality improvement practices may also qualify as patient safety practices when the current level of quality is considered “unsafe,” but standards to measure safety are often variable, difficult to quantify, and change over time. For example, what constitutes an “adequate” or “safe” level of accuracy in electrocardiogram or radiograph interpretation? Practices to improve performance to at least the “adequate” threshold may reasonably be considered safety practices because they decrease the number of diagnostic errors of omission. On the other hand, we considered practices whose main intent is to improve performance above this threshold to be quality improvement practices.
An example of the latter might be the use of computer algorithms to improve the sensitivity of screening mammography.1

We generally included practices that involved acute hospital care or care at the interface between inpatient and outpatient settings. This focus reflects the fact that the majority of the safety literature relates to acute care and the belief that systems changes may be more effectively achieved in the more controlled environment of the hospital. However, practices that might be applicable in settings in addition to the hospital were not excluded from consideration. For example, the Report includes practices for preventing decubitus ulcers (Chapter 27) that could be applied in nursing homes as well as hospitals.

The EPC team received input regarding the scope of the Report from the Agency for Healthcare Research and Quality (AHRQ), which commissioned the report. In addition, the EPC team participated in the public meeting of the National Quality Forum (NQF) Safe Practices Committee on January 26, 2001. The NQF was formed in 1999 by consumer, purchaser, provider, health plan, and health service research organizations to create a national strategy for quality improvement. Members of the Safe Practices Committee collaborated with the EPC team to develop the scope of work that would eventually become this Report.

Organization by Domains

To facilitate identification and evaluation of potential patient safety practices, we divided the content for the project into different domains. Some cover “content areas” (eg, adverse drug events, nosocomial infections, and complications of surgery). Other domains involve identification of practices within broad (primarily “non-medical”) disciplines likely to contain promising approaches to improving patient safety (eg, information technology, human factors research, organizational theory). The domains were derived from a general reading of the
literature and were meant to be as inclusive as possible. The list underwent review for completeness by patient safety experts, clinician-researchers, AHRQ, and the NQF Safe Practices Committee. For each domain we selected a team of author/collaborators with expertise in the relevant subject matter and/or familiarity with the techniques of evidence-based review and technology appraisal. The authors, all of whom are affiliated with major academic centers around the United States, are listed on pages 9-13.

Identification of Safety Practices for Evaluation

Search Strategy

The UCSF-Stanford Evidence-based Practice Center Coordinating Team (“The Editors”) provided general instructions (Table 3.1) to the teams of authors regarding search strategies for identifying safety practices for evaluation. As necessary, the Editors provided additional guidance and supplementary searches of the literature.

Table 3.1. Search strategy recommended by coordinating team to participating reviewers
a) Electronic bibliographic databases. All searches must include systematic searches of MEDLINE and the Cochrane Library. For many topics it will be necessary to include other databases such as the Cumulative Index to Nursing & Allied Health (CINAHL), PsycLit (PsycINFO), the Institute for Scientific Information’s Science Citation Index Expanded, Social Sciences Citation Index, Arts & Humanities Citation Index, INSPEC (physics, electronics and computing), and ABI/INFORM (business, management, finance, and economics).
b) Hand-searches of bibliographies of retrieved articles and tables of contents of key journals.
c) Grey literature. For many topics it will be necessary to review the “grey literature,” such as conference proceedings, institutional reports, doctoral theses, and manufacturers’ reports.
d) Consultation with experts or workers in the field. 
To meet the early dissemination date mandated by AHRQ, the Editors did not require authors to search for non-English language articles or to use EMBASE. These were not specifically excluded, however, and authoring teams could include non-English language articles that addressed important aspects of a topic if they had translation services at their disposal. The Editors did not make recommendations on limiting database searches based on publication date. For this project it was particularly important to identify systematic reviews related to patient safety topics. Published strategies for retrieving systematic reviews have used proprietary MEDLINE interfaces (eg, OVID, SilverPlatter) that are not uniformly available. Moreover, the performance characteristics of these search strategies are unknown.2,3 Therefore, these strategies were not explicitly recommended. The Editors provided authors with a search algorithm (available upon request) that uses PubMed, the freely available search interface from the National Library of Medicine, designed to retrieve systematic reviews with high sensitivity without overwhelming users with “false positive” hits.4
The Editors also performed independent searches of bibliographic databases and grey literature for selected topics.5 Concurrently, the EPC collaborated with NQF to solicit information about evidence-based practices from NQF members, and consulted with the project’s Advisory Panel (page 31), whose members provided additional literature to review.

Inclusion and Exclusion Criteria

The EPC established criteria for selecting which of the identified safety practices warranted evaluation. The criteria address the applicability of the practice across a range of conditions or procedures and the available evidence of the practices’ efficacy or effectiveness.

Table 3.2. Inclusion/Exclusion criteria for practices
Inclusion Criteria
1. The practice can be applied in the hospital setting or at the interface between inpatient and outpatient settings AND can be applied to a broad range of health care conditions or procedures.
2. Evidence for the safety practice includes at least one study with a Level 3 or higher study design AND a Level 2 outcome measure. For practices not specifically related to diagnostic or therapeutic interventions, a Level 3 outcome measure is adequate. (See Table 3.3 for definition of “Levels”.)
Exclusion Criterion
1. No study of the practice meets the methodologic criteria above.

Practices that have only been studied outside the hospital setting or in patients with specific conditions or undergoing specific procedures were included if the authors and Editors agreed that the practices could reasonably be applied in the hospital setting and across a range of conditions or procedures. To increase the number of potentially promising safety practices adapted from outside the field of medicine, we included evidence from studies that used less rigorous measures of patient safety as long as the practices did not specifically relate to diagnostic or therapeutic interventions. 
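The screening logic of Table 3.2 can be sketched as a small decision function. This is purely an illustration of the stated criteria, not code from the Report; the function name and the `diagnostic_or_therapeutic` flag are hypothetical, and levels follow Tables 3.3 and 3.4 (1 = strongest, 4 = weakest).

```python
def meets_inclusion_criteria(hospital_applicable: bool,
                             broadly_applicable: bool,
                             best_design_level: int,
                             best_outcome_level: int,
                             diagnostic_or_therapeutic: bool) -> bool:
    """Illustrative sketch of the Table 3.2 screen (hypothetical helper)."""
    # Criterion 1: applicable in the hospital setting (or at the
    # inpatient/outpatient interface) AND across a broad range of
    # conditions or procedures.
    if not (hospital_applicable and broadly_applicable):
        return False
    # Criterion 2, first half: at least one study with a Level 3 or
    # higher (ie, numerically lower) study design.
    if best_design_level > 3:
        return False
    # Criterion 2, second half: a Level 2 outcome measure; a Level 3
    # outcome suffices for practices not specifically related to
    # diagnostic or therapeutic interventions.
    required_outcome = 2 if diagnostic_or_therapeutic else 3
    return best_outcome_level <= required_outcome
```

The relaxed outcome threshold in the final branch is what admitted practices adapted from outside medicine, such as teamwork training.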
These criteria facilitated the inclusion of areas such as teamwork training (Chapter 44) and methods to improve information transfer (Chapter 42). Each practice’s level of evidence for efficacy or effectiveness was assessed in terms of study design (Table 3.3) and study outcomes (Table 3.4). The Editors created the following hierarchies by modifying existing frameworks for evaluating evidence6-8 and incorporating recommendations from numerous other sources relevant to evidence synthesis.9-25
Table 3.3. Hierarchy of study designs*
Level 1. Randomized controlled trials – includes quasi-randomized processes such as alternate allocation
Level 2. Non-randomized controlled trial – a prospective (pre-planned) study, with predetermined eligibility criteria and outcome measures
Level 3. Observational studies with controls – includes retrospective studies, interrupted time series (a change in trend attributable to the intervention), case-control studies, cohort studies with controls, and health services research that includes adjustment for likely confounding variables
Level 4. Observational studies without controls (eg, cohort studies without controls and case series)
* Systematic reviews and meta-analyses were assigned to the highest level study design included in the review, followed by an “A” (eg, a systematic review that included at least one randomized controlled trial was designated “Level 1A”)

Table 3.4. Hierarchy of outcome measures
Level 1. Clinical outcomes – morbidity, mortality, adverse events
Level 2. Surrogate outcomes – observed errors, intermediate outcomes (eg, laboratory results) with well-established connections to the clinical outcomes of interest (usually adverse events)
Level 3. Other measurable variables with an indirect or unestablished connection to the target safety outcome (eg, pre-test/post-test after an educational intervention, operator self-reports in different experimental situations)
Level 4. No outcomes relevant to decreasing medical errors and/or adverse events (eg, study with patient satisfaction as only measured outcome; article describes an approach to detecting errors but reports no measured outcomes)

Implicit in this hierarchy of outcome measures is that surrogate or intermediate outcomes (Level 2) have an established relationship to the clinical outcomes (Level 1) of interest.26 Outcomes that are relevant to patient safety but have not been associated with morbidity or mortality were classified as Level 3. 
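The labeling convention in the footnote to Table 3.3 (a systematic review takes the highest design level it includes, with an “A” suffix) can be illustrated with two hypothetical helper functions; these are not part of the Report’s methodology, only a restatement of the rule in code.

```python
def design_label(level: int, is_systematic_review: bool = False) -> str:
    """Return a study-design label per Table 3.3: 'Level 1'..'Level 4',
    with an 'A' suffix for systematic reviews and meta-analyses."""
    if level not in (1, 2, 3, 4):
        raise ValueError("design levels run from 1 (strongest) to 4")
    suffix = "A" if is_systematic_review else ""
    return f"Level {level}{suffix}"

def review_label(included_design_levels: list[int]) -> str:
    """A systematic review is assigned the highest (numerically lowest)
    design level among the studies it includes."""
    return design_label(min(included_design_levels), is_systematic_review=True)
```

So a review containing at least one randomized trial alongside weaker designs is labeled “Level 1A”, matching the footnote’s example.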
Exceptions to EPC Criteria

Some safety practices did not meet the EPC inclusion criteria because of the paucity of evidence regarding efficacy or effectiveness, but were included in the Report because of their face validity (ie, an informed reader might reasonably expect them to be evaluated; see also Chapter 1). The reviews of these practices clearly identify the quality of evidence culled from medical and non-medical fields.
Evaluation of Safety Practices

For each practice, authors were instructed to research the literature for information on:
• prevalence of the problem targeted by the practice
• severity of the problem targeted by the practice
• the current utilization of the practice
• evidence on efficacy and/or effectiveness of the practice
• the practice’s potential for harm
• data on cost, if available
• implementation issues

These elements were incorporated into a template in an effort to create as much uniformity across chapters as possible, especially given the widely disparate subject matter and quality of evidence. Since the amount of material for each practice was expected to, and did, vary substantially, the Editors provided general guidance on what was expected for each element, with particular detail devoted to the protocol for searching and reporting evidence related to efficacy and/or effectiveness of the practice. The protocol outlined the search, the threshold for study inclusion, the elements to abstract from studies, and guidance on reporting information from each study. Authors were asked to review articles from their search to identify practices, and to retain those with the stronger study designs. More focused searches were performed depending on the topic. The threshold for study inclusion related directly to study design. Authors were asked to use their judgment in deciding whether the evidence was sufficient at a given level of study design or whether the evidence from the next level needed to be reviewed. At a minimum, the Editors suggested that there be at least 2 studies of adequate quality to justify excluding discussion of studies of lower level designs. Thus, inclusion of 2 adequate clinical trials (Level 1 design) was necessary in order to exclude available evidence from prospective, non-randomized trials (Level 2) on the same topic. The Editors provided instructions for abstracting each article that met the inclusion criteria based on study design. 
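The Editors’ stopping rule for descending the design hierarchy can be sketched as follows. This is one illustrative reading of the rule stated above (the judgment of what counts as “adequate quality” remained with the authors); the function name is hypothetical.

```python
def levels_to_review(adequate_study_counts: dict[int, int]) -> list[int]:
    """Walk the design hierarchy from Level 1 down (Table 3.3).
    Stop descending once a level contributes at least 2 studies of
    adequate quality, the minimum the Editors suggested to justify
    excluding discussion of lower-level designs."""
    included = []
    for level in (1, 2, 3, 4):
        included.append(level)
        if adequate_study_counts.get(level, 0) >= 2:
            break
    return included
```

For example, with 2 adequate randomized trials the review could stop at Level 1, whereas a single Level 1 trial would oblige the authors to also examine Level 2 evidence.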
For each study, a required set of 10 abstraction elements (Table 3.5) was specified. Authors received a detailed explanation of each required abstraction element, as well as complete abstraction examples for the 3 types of study design (Levels 1A, 1, and 3; Level 2 was not included since the information collected was the same as for Level 1). Research teams were encouraged to abstract any additional elements relevant to the specific subject area.
Table 3.5. Ten Required Abstraction Elements
1. Bibliographic information according to AMA Manual of Style: title, authors, date of publication, source
2. Level of study design (eg, Level 1-3 for studies providing information for effectiveness; Level 4 if needed for relevant additional information), with descriptive material as follows:
   Descriptors For Level 1 Systematic Reviews Only
   (a) Identifiable description of methods indicating sources and methods of searching for articles.
   (b) Stated inclusion and exclusion criteria for articles: yes/no.
   (c) Scope of literature included in study.
   For Level 1 or 2 Study Designs (Not Systematic Reviews)
   (a) Blinding: blinded, blinded (unclear), or unblinded.
   (b) Comparability of groups at baseline – ie, was the distribution of potential confounders at baseline equal? If no, which confounders were not equal?
   (c) Loss to follow-up overall: percent of total study population lost to follow-up.
   For Level 3 Study Designs (Not Systematic Reviews)
   (a) Description of study design (eg, case-control, interrupted time series).
   (b) Comparability of groups at baseline – ie, was the distribution of potential confounders at baseline equal? If no, which confounders were not equal?
   (c) Analysis includes adjustment for potential confounders: yes/no. If yes, adjusted for what confounders?
3. Description of intervention (as specific as possible)
4. Description of study population(s) and setting(s)
5. Level of relevant outcome measure(s) (eg, Levels 1-4)
6. Description of relevant outcome measure(s)
7. Main results: effect size with confidence intervals
8. Information on unintended adverse (or beneficial) effects of practice
9. Information on cost of practice
10. Information on implementation of practice (information that might be of use in deciding whether and/or how to implement the practice – eg, known barriers to implementation)

We present the salient elements of each included study (eg, study design, population/setting, intervention details, results) in text or tabular form. In addition, we asked authors to highlight weaknesses and biases of studies where the interpretation of the results might be substantially affected. Authors were not asked to formally synthesize or combine the evidence across studies (eg, perform a meta-analysis) for the Report.
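A chapter team’s completed abstraction form could be represented as a simple record mirroring the 10 required elements. This is a hypothetical sketch, not the actual form the authors received; the field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class StudyAbstraction:
    """One abstracted study, mirroring the 10 required elements of Table 3.5
    (hypothetical structure; field names are illustrative)."""
    bibliographic_info: str           # 1. AMA-style citation
    design_level: str                 # 2. eg, "Level 1A", "Level 3"
    design_descriptors: dict = field(default_factory=dict)  # 2a-c, varies by design
    intervention: str = ""            # 3. description of the practice studied
    population_and_setting: str = ""  # 4. study population(s) and setting(s)
    outcome_level: str = ""           # 5. eg, "Level 2"
    outcome_measures: str = ""        # 6. description of outcome measure(s)
    main_results: str = ""            # 7. effect size with confidence intervals
    adverse_effects: str = ""         # 8. unintended adverse (or beneficial) effects
    cost_information: str = ""        # 9. cost of the practice
    implementation_notes: str = ""    # 10. eg, known barriers to implementation
```

Additional subject-specific elements, which teams were encouraged to abstract, would simply extend the record.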
Review Process

Authors submitted work to the Editors in 2 phases. In the first phase (“Identification of Safety Practices for Evaluation”), which was submitted approximately 6 weeks after authors were commissioned, authoring teams provided their search strategies, citations, and a preliminary list of patient safety practices to be reviewed. In the subsequent phase (“Evaluation of Safety Practices”), due approximately 12 weeks after commissioning, authors first submitted a draft chapter for each topic and completed abstraction forms, and, after iterative reviews and revisions, a final chapter.

Identification of Safety Practices for Evaluation

The Editors and the Advisory Panel reviewed the list of domains and practices to identify gaps in coverage. In addition, the Editors reviewed final author-submitted lists of excluded practices along with justifications for exclusion (eg, insufficient research design, insufficient outcomes, practice is unique to a single disease process). When there were differences in opinion as to whether a practice actually met the inclusion or exclusion criteria, the Editors made a final disposition after consulting with the author(s). The final practice list, in the form of a Table of Contents for the Report, was circulated to AHRQ and the NQF Safe Practices Committee for comment.

Evaluation of Safety Practices

Chapters were reviewed by the editorial team (the EPC Coordinating Team Editors and our Managing Editor) and queries were relayed to the authors, often requesting further refinement of the analysis or expansion of the results and conclusions. After all chapters were completed, the entire Report was edited to eliminate redundant material and ensure that the focus remained on the evidence regarding safety practices. Near the end of the review process, chapters were distributed to the Advisory Panel for comments, many of which were incorporated. 
Once the content was finalized, the Editors analyzed and ranked the practices using a methodology described in Chapter 56. The results of these summaries and rankings are presented in Part V of the Report.
References

1. Thurfjell E, Thurfjell MG, Egge E, Bjurstam N. Sensitivity and specificity of computer-assisted breast cancer detection in mammography screening. Acta Radiol. 1998;39:384-388.
2. Hunt DL, McKibbon KA. Locating and appraising systematic reviews. Ann Intern Med. 1997;126:532-538.
3. Glanville J, Lefebvre C. Identifying systematic reviews: key resources. ACP J Club. 2000;132:A11-A12.
4. Shojania KG, Bero L. Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Eff Clin Pract. 2001;4: in press.
5. McAuley L, Pham B, Tugwell P, Moher D. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet. 2000;356:1228-1231.
6. Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, et al. Current methods of the U.S. Preventive Services Task Force. A review of the process. Am J Prev Med. 2001;20:21-35.
7. Guyatt G, Schunemann H, Cook D, Jaeschke R, Pauker S, Bucher H. Grades of recommendation for antithrombotic agents. Chest. 2001;119:3S-7S.
8. Muir Gray JA, Haynes RB, Sackett DL, Cook DJ, Guyatt GH. Transferring evidence from research into practice: 3. Developing evidence-based clinical policy. ACP J Club. 1997;126:A14-A16.
9. Guyatt GH, Tugwell PX, Feeny DH, Drummond MF, Haynes RB. The role of before-after studies of therapeutic impact in the evaluation of diagnostic technologies. J Chronic Dis. 1986;39:295-304.
10. Davis CE. Generalizing from clinical trials. Control Clin Trials. 1994;15:11-14.
11. Bailey KR. Generalizing the results of randomized clinical trials. Control Clin Trials. 1994;15:15-23.
12. Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature. VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA. 1994;272:1367-1371.
13. Levine M, Walter S, Lee H, Haines T, Holbrook A, Moyer V. Users' guides to the medical literature. IV. How to use an article about harm. Evidence-Based Medicine Working Group. JAMA. 1994;271:1615-1619.
14. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ. Users' guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group. JAMA. 1995;274:1800-1804.
15. Cook DJ, Guyatt GH, Laupacis A, Sackett DL, Goldberg RJ. Clinical recommendations using levels of evidence for antithrombotic agents. Chest. 1995;108:227S-230S.
16. Naylor CD, Guyatt GH. Users' guides to the medical literature. X. How to use an article reporting variations in the outcomes of health services. The Evidence-Based Medicine Working Group. JAMA. 1996;275:554-558.
17. Moher D, Jadad AR, Tugwell P. Assessing the quality of randomized controlled trials. Current issues and future directions. Int J Technol Assess Health Care. 1996;12:195-208.
18. Mulrow C, Langhorne P, Grimshaw J. Integrating heterogeneous pieces of evidence in systematic reviews. Ann Intern Med. 1997;127:989-995.
19. Jadad AR, Cook DJ, Browman GP. A guide to interpreting discordant systematic reviews. CMAJ. 1997;156:1411-1416.
20. Juni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999;282:1054-1060.
21. McKee M, Britton A, Black N, McPherson K, Sanderson C, Bain C. Methods in health services research. Interpreting the evidence: choosing between randomised and non-randomised studies. BMJ. 1999;319:312-315.
22. Bucher HC, Guyatt GH, Cook DJ, Holbrook A, McAlister FA. Users' guides to the medical literature: XIX. Applying clinical trial results. A. How to use an article measuring the effect of an intervention on surrogate end points. Evidence-Based Medicine Working Group. JAMA. 1999;282:771-778.
23. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342:1887-1892.
24. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342:1878-1886.
25. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000;283:2008-2012.
26. Gotzsche PC, Liberati A, Torri V, Rossetti L. Beware of surrogate outcome measures. Int J Technol Assess Health Care. 1996;12:238-246.
PART II. REPORTING AND RESPONDING TO PATIENT SAFETY PROBLEMS

Chapter 4. Incident Reporting
Chapter 5. Root Cause Analysis
Chapter 4. Incident Reporting

Heidi Wald, MD, University of Pennsylvania School of Medicine
Kaveh G. Shojania, MD, University of California, San Francisco School of Medicine
Background

Errors in medical care are discovered through a variety of mechanisms. Historically, medical errors were revealed retrospectively through morbidity and mortality committees and malpractice claims data. Prominent studies of medical error have used retrospective chart review to quantify adverse event rates.1,2 While collection of data in this manner yields important epidemiologic information, it is costly and provides little insight into potential error reduction strategies. Moreover, chart review only detects documented adverse events and often does not capture information regarding their causes. Important errors that produce no injury may go completely undetected by this method.3-6 Computerized surveillance may also play a role in uncovering certain types of errors. For instance, medication errors may be discovered through a search for naloxone orders for hospitalized patients, as they presumably reflect the need to reverse overdose of prescribed narcotics.7,8 Several studies have demonstrated success with computerized identification of adverse drug events.9-11 Complex, high-risk industries outside of health care, including aviation, nuclear power, petrochemical processing, steel production, and military operations, have successfully developed incident reporting systems for serious accidents and important “near misses.”6 Incident reporting systems cannot provide accurate epidemiologic data, as the reported incidents likely underestimate the numerator, and the denominator (all opportunities for incidents) remains unknown. Given the limited availability of sophisticated clinical computer systems and the tremendous resources required to conduct comprehensive chart reviews, incident reporting systems remain an important and relatively inexpensive means of capturing data on errors and adverse events in medicine. Few rigorous studies have analyzed the benefits of incident reporting. 
This chapter reviews only the literature evaluating the various systems and techniques for collecting error data in this manner, rather than the benefit of the practice itself. This decision reflects our acknowledgment that incident reporting has clearly played a beneficial role in other high-risk industries.6 The decision also stems from our recognition that a measurable impact of incident reporting on clinical outcomes is unlikely because there is no standard practice by which institutions handle these reports.

Practice Description

Flanagan first described the critical incident technique in 1954 to examine military aircraft training accidents.12 Critical incident reporting involves the identification of preventable incidents (ie, occurrences that could have led, or did lead, to an undesirable outcome13) reported by personnel directly involved in the process in question at the time of discovery of the event. The goal of critical incident monitoring is not to gather epidemiologic data per se, but rather to gather qualitative data. Nonetheless, if a pattern of errors seems to emerge, prospective studies can be undertaken to test epidemiologic hypotheses.14
Incident reports may target events in any or all of 3 basic categories: adverse events, “no harm events,” and “near misses.” For example, anaphylaxis to penicillin clearly represents an adverse event. Intercepting the medication order prior to administration would constitute a near miss. By contrast, if a patient with a documented history of anaphylaxis to penicillin received a penicillin-like antibiotic (eg, a cephalosporin) but happened not to experience an allergic reaction, it would constitute a no harm event, not a near miss. In other words, when an error does not result in an adverse event for a patient, because the error was “caught,” it is a near miss; if the absence of injury is owed to chance it is a no harm event. Broadening the targets of incident reporting to include no harm events and near misses offers several advantages. These events occur 3 to 300 times more often than adverse events,5,6 they are less likely to provoke guilt or other psychological barriers to reporting,6 and they involve little medico-legal risk.14 In addition, hindsight bias15 is less likely to affect investigations of no harm events and near misses.6,14 Barach and Small describe the characteristics of incident reporting systems in nonmedical industries.6 Established systems share the following characteristics:
• they focus on near misses;
• they provide incentives for voluntary reporting;
• they ensure confidentiality; and
• they emphasize systems approaches to error analysis.
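The taxonomy drawn above turns on two questions: was the patient injured, and, if not, was the error intercepted before reaching the patient or did injury simply fail to occur by chance? A minimal classifier makes the distinction explicit (an illustration only; real incident taxonomies are more nuanced, and the function is hypothetical).

```python
def classify_incident(error_occurred: bool,
                      patient_injured: bool,
                      error_intercepted: bool) -> str:
    """Classify a reported incident into the 3 basic categories in the text
    (illustrative sketch). 'Intercepted' means the error was caught before
    reaching the patient, eg, a penicillin order stopped before
    administration; if the error reached the patient and injury was
    avoided only by chance, it is a no harm event."""
    if not error_occurred:
        return "not reportable"
    if patient_injured:
        return "adverse event"
    return "near miss" if error_intercepted else "no harm event"
```

Under this scheme the penicillin examples above classify as expected: anaphylaxis after administration is an adverse event, the intercepted order is a near miss, and the uneventful cephalosporin exposure is a no harm event.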
The majority of these systems were mandated by Federal regulation, and provide for voluntary reporting. All of the systems encourage narrative description of the event. Reporting is promoted by providing incentives, including:
• immunity;
• confidentiality;
• outsourcing of report collation;
• rapid feedback to all involved and interested parties; and
• sustained leadership support.6
Incident reporting in medicine takes many forms. Since 1975, the US Food and Drug Administration (FDA) has mandated reporting of major blood transfusion reactions, focusing on preventable deaths and serious injuries.16 Although the critical incident technique found some early applications in medicine,17,18 its current use is largely attributable to Cooper’s introduction of incident reporting to anesthesia in 1978,19 in which he conducted retrospective interviews with anesthesiologists about preventable incidents or errors that occurred while patients were under their care. Recently, near miss and adverse event reporting systems have proliferated in single institution settings (such as in intensive care units (ICUs)20,21), regional settings (such as the New York State transfusion system22), and for national surveillance (eg, the National Nosocomial Infections Surveillance System administered by the Federal Centers for Disease Control and Prevention).23 All of the above examples focus on types of events (transfusion events or nosocomial infections) or areas of practice (ICUs). Incident reporting in hospitals cuts a wider swath, capturing errors and departures from expected procedures or outcomes (Table 4.1). However, because risk management departments tend to oversee incident reporting systems in some capacity, these systems more often focus on incident outcomes, not categories. Few data describe the operation of these institution-specific systems, but underreporting appears endemic.24
In 1995, hospital-based surveillance was mandated by the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO)26 because of a perception that incidents resulting in harm were occurring frequently.28 JCAHO employs the term sentinel event in lieu of critical incident, and defines it as follows: An unexpected occurrence involving death or serious physical or psychological injury, or the risk thereof. Serious injury specifically includes loss of limb or function. The phrase “or the risk thereof” includes any process variation for which a recurrence would carry a significant chance of a serious adverse outcome.26 As one component of its Sentinel Event Policy, JCAHO created a Sentinel Event Database. The JCAHO database accepts voluntary reports of sentinel events from member institutions, patients and families, and the press.26 The particulars of the reporting process are left to the member health care organizations. JCAHO also mandates that accredited hospitals perform root cause analysis (see Chapter 5) of important sentinel events. Data on sentinel events are collated, analyzed, and shared through a website,29 an online publication,30 and its newsletter Sentinel Event Perspectives.31 Another example of a national incident reporting system is the Australian Incident Monitoring Study (AIMS), under the auspices of the Australian Patient Safety Foundation.13 Investigators created an anonymous and voluntary near miss and adverse event reporting system for anesthetists in Australia. Ninety participating hospitals and practices named on-site coordinators. The AIMS group developed a form that was distributed to participants. The form contained instructions, definitions, space for narrative of the event, and structured sections to record the anesthesia and procedure, demographics about the patient and anesthetist, and what, when, why, where, and how the event occurred. 
The results of the first 2000 reports were published together, following a special symposium.32 The experiences of the JCAHO Sentinel Event Database and the Australian Incident Monitoring Study are explored further below.

Prevalence and Severity of the Target Safety Problem

The true prevalence of events appropriate for incident reporting is impossible to estimate with any accuracy, as it includes actual adverse events as well as near misses and no harm events. The Aviation Safety Reporting System (ASRS), a national reporting system for near misses in the airline industry,33,34 currently processes approximately 30,000 reports annually,35 exceeding by many orders of magnitude the total number of airline accidents each year.34 The number of reports submitted to a comparable system in health care would presumably number in the millions if all adverse events, no harm events, and near misses were captured. By contrast, over 6 years of operation, the JCAHO Sentinel Event Database has captured only 1152 events, 62% of which occurred in general hospitals. Two-thirds of the events were self-reported by institutions, with the remainder coming from patient complaints, media stories and other sources.29 These statistics are clearly affected by underreporting and consist primarily of serious adverse events (76% of events reported resulted in patient deaths), not near misses. As discussed in the chapter on wrong-site surgeries (Subchapter 43.2), comparing JCAHO reports with data from the mandatory incident reporting system maintained by the New York State Department of Health36 suggests that the JCAHO statistics underestimate the true incidence of target events by at least a factor of 20.
Opportunities for Impact

Most hospitals’ incident reporting systems fail to capture the majority of errors and near misses.24 Studies of medical services suggest that only 1.5% of all adverse events result in an incident report37 and only 6% of adverse drug events are identified through traditional incident reporting or a telephone hotline.24 The American College of Surgeons estimates that incident reports generally capture only 5-30% of adverse events.38 A study of a general surgery service showed that only 20% of complications on a surgical service ever resulted in discussion at Morbidity and Mortality rounds.39 Given the endemic underreporting revealed in the literature, modifications to the configuration and operation of the typical hospital reporting system could yield higher capture rates of relevant clinical data.

Study Designs

We analyzed 5 studies that evaluated different methods of critical incident reporting. Two studies prospectively investigated incident reporting compared with observational data collection24,39 and one utilized retrospective chart review.37 Two additional studies looked at enhanced incident reporting by active solicitation of physician input compared with background hospital quality assurance (QA) measures.40,41 In addition, we reviewed JCAHO’s report of its Sentinel Event Database, and the Australian Incident Monitoring Study, both because of the large sizes and the high profiles of the studies.13,26 Additional reports of critical incident reporting systems in the medical literature consist primarily of uncontrolled observational studies42-44 that are not reviewed in this chapter.

Study Outcomes

In general, published studies of incident reporting do not seek to establish the benefit of incident reporting as a patient safety practice. 
Their principal goal is to determine whether incident reporting, as it is practiced, captures the relevant events.40 In fact, no studies have established the value of incident reporting on patient safety outcomes. The large JCAHO and Australian databases provide data about reporting rates, and an array of quantitative and qualitative information about the reported incidents, including the identity of the reporter, time of report, severity and type of error.13,26 Clearly these do not represent clinical outcomes, but they may be reasonable surrogates for the organizational focus on patient safety. For instance, increased incident reporting rates may not be indicative of an unsafe organization,45 but may reflect a shift in organizational culture to increased acceptance of quality improvement and other organizational changes.3,5 None of the studies reviewed captured outcomes such as morbidity or error rates. The AIMS group published an entire symposium which reported the quantitative and qualitative data regarding 2000 critical incidents in anesthesia.13 However, only a small portion of these incidents were prospectively evaluated.14 None of the studies reviewed for this chapter performed formal root cause analyses on reported errors (Chapter 5).

Evidence for Effectiveness of the Practice

As described above, 6 years of JCAHO sentinel event data have captured merely 1152 events, none of which include near misses.29 Despite collecting what is likely to represent only a fraction of the target events, JCAHO has compiled the events, reviewed the root cause analyses, and provided recommendations for procedures to improve patient safety for events ranging from wrong-site surgeries to infusion pump-related adverse events (Table 4.2). This information may
prove to be particularly useful in the case of rare events such as wrong-site surgery, where national collection of incidents can yield a more statistically useful sample size.

The first 2000 incident reports to AIMS from 90 member institutions were published in 1993.13 In contrast to the JCAHO data, all events were self-reported by anesthetists, and only 2% of reported events resulted in patient deaths. A full 44% of events had negligible effect on patient outcome. Ninety percent of reports identified systems failures, and 79% identified human failures. The AIMS data were similar to those of Cooper19 in terms of the percentage of incidents with reported human failures, the timing of events with regard to phase of anesthesia, and the type of events (breathing circuit misconnections were between 2% and 3% in both studies).13,19 The AIMS data are also similar to American "closed-claims" data in terms of the pattern, nature, and proportion of the total number of reports for several types of adverse events,13 which lends further credibility to the reports. The AIMS data, although also likely affected by underreporting because of the system's voluntary nature, clearly capture a higher proportion of critical incidents than the JCAHO Sentinel Event Database. Despite coming from only 90 participating sites, AIMS received more reports over a similar time frame than JCAHO did from the several thousand accredited United States hospitals. This disparity may be explained by the fact that AIMS institutions were self-selected, and that the culture of anesthesia is more attuned to patient safety concerns.47

The poor capture rate of incident reporting systems in American hospitals has not gone unnoticed. Cullen et al24 prospectively compared usual hospital incident reporting with observational data collection for adverse drug events (ADE logs, daily solicitation from hospital personnel, and chart review) in 5 patient care units of a tertiary care hospital.
Only 6% of ADEs were identified and only 8% of serious ADEs were reported. These findings are similar to those in the pharmacy literature,48,49 and are attributed to cultural and environmental factors. A similar study on a general surgical service found that 40% of patients suffered complications.39 While chart documentation was excellent (94%), only 20% of complications were discussed at Morbidity and Mortality rounds.

Active solicitation of physician reporting has been suggested as a way to improve adverse event and near miss detection rates. Weingart et al41 employed direct physician interviews supplemented by e-mail reminders to increase detection of adverse events in a tertiary care hospital. The physicians reported an almost entirely distinct set of adverse events compared with those captured by the hospital incident reporting system: of 168 events, only one was reported by both methods. O'Neil et al37 used e-mail to elicit adverse events from housestaff and compared these with events found on retrospective chart review. Of 174 events identified, 41 were detected by both methods. The house officers appeared to capture preventable adverse events at a higher rate (62.5% v. 32%, p=0.003). In addition, the hospital's risk management system detected only 4 of the 174 adverse events. Welsh et al40 employed prompting of house officers at morning report to augment hospital incident reporting systems. There was overlap in reporting in only 2.6% of 341 adverse events that occurred during the study. And although the number of events house officers reported increased with daily prompting, the quantity rapidly decreased when prompting ceased.
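The overlap arithmetic in these method-comparison studies can be sketched with simple set operations. This is an illustrative sketch only, not code from any of the studies; the event identifiers are hypothetical, arranged so that the counts echo the 174-event, 41-overlap figures reported by O'Neil et al.

```python
# Illustrative sketch: treat each detection method's findings as a set of
# event IDs and summarize how much the two methods overlap.
def compare_detection(method_a, method_b):
    """Summarize agreement between two sets of detected-event IDs."""
    both = method_a & method_b
    total = method_a | method_b
    return {
        "a_only": len(method_a - method_b),
        "b_only": len(method_b - method_a),
        "both": len(both),
        "total": len(total),
        "overlap_rate": len(both) / len(total) if total else 0.0,
    }

# Hypothetical IDs chosen so the counts mirror O'Neil et al: 174 events
# overall, 41 detected by both physician reports and chart review.
physician_reports = {f"evt{i}" for i in range(0, 100)}   # 100 events
chart_review = {f"evt{i}" for i in range(59, 174)}       # 115 events
summary = compare_detection(physician_reports, chart_review)
# summary["both"] == 41, summary["total"] == 174
```

A low overlap rate (here 41/174, roughly 24%) is the recurring signature of these studies: each detection method surfaces a largely distinct slice of adverse events.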
In summary, there is evidence that active solicitation of critical incident reports from physicians can augment existing databases, identifying incidents not detected through other means, although the response may not be durable.37,40,41

Potential for Harm

Users may view reporting systems with skepticism, particularly regarding a system's ability to maintain confidentiality and shield participants from legal exposure.28 In many states, critical
incident reporting and analysis count as peer review activities and are protected from legal discovery.28,50 However, other states offer little or no protection, and reporting events to external agencies (eg, to JCAHO) may obliterate the few protections that do exist. In recognition of this problem, JCAHO's Terms of Agreement with hospitals now includes a provision identifying JCAHO as a participant in each hospital's quality improvement process.28

Costs and Implementation

Few estimates of costs have been reported in the literature. In general, authors remarked that incident reporting was far less expensive than retrospective review. One single-center study estimated that physician reporting was less costly ($15,000) than retrospective record review ($54,000) over a 4-month period.37 A survey of administrators of reporting systems from nonmedical industries reported a consensus that costs were far offset by the potential benefits.6

Comment

The wide variation in reporting of incidents may have more to do with reporting incentives and local culture than with the quality of medicine practiced at an institution.24 When institutions prioritize incident reporting among medical staff and trainees, however, the incident reporting systems seem to capture a set of events distinct from those captured by chart review and traditional risk management,40,41 and events captured in this manner may be more preventable.37 The use of anonymous or non-punitive systems is likely to increase the rates of incident reporting and detection.51 Other investigators have also noted increases in reporting when new systems are implemented and a culture conducive to reporting is maintained.40,52 Several studies suggest that direct solicitation of physicians results in reports that are more likely to be distinct, preventable, and severe than those obtained by other means.8,37,41 The nature of incident reporting, replete with hindsight bias, lost information, and lost contextual clues, makes it unlikely
that robust data will ever link it directly with improved outcomes. Nonetheless, incident reporting appears to be growing in importance in medicine. The Institute of Medicine report, To Err is Human,53 has prompted continuing calls in the United States for mandatory reporting of medical errors.54-57 England's National Health Service plans to launch a national incident reporting system as well, a prospect that has raised concerns similar to those voiced in the American medical community.58 While the literature to date does not permit an evidence-based resolution of the debate over mandatory versus voluntary incident reporting, it is clear that incident reporting represents just one of several potential sources of information about patient safety, and that these sources should be regarded as complementary. In other industries, incident reporting has succeeded when it is mandated by regulatory agencies or is anonymous and voluntary on the part of reporters, and when it provides incentives and feedback to reporters.6 The ability of health care organizations to replicate the successes of other industries' incident reporting systems6 will undoubtedly depend in large part on the uses to which they put these data. Specifically, success or failure may depend on whether health care organizations use the data to fuel institutional quality improvement rather than to generate individual performance evaluations.
Table 4.1. Examples of events reported to hospital incident reporting systems25-27

Adverse Outcomes
• Unexpected death or disability
• Errors or unexpected complications related to the administration of drugs or transfusion
• Inpatient falls or "mishaps"
• Institutionally-acquired burns
• Institutionally-acquired pressure sores

Procedural Breakdowns
• Discharges against medical advice ("AMA")
• Significant delays in diagnosis or diagnostic testing
• Breach of confidentiality

Catastrophic Events
• Performance of a procedure on the wrong body part ("wrong-site surgery")
• Performance of a procedure on the wrong patient
• Infant abduction or discharge to wrong family
• Rape of a hospitalized patient
• Suicide of a hospitalized patient
Table 4.2. Sentinel event alerts published by JCAHO following analysis of incident reports46

• Medication Error Prevention – Potassium Chloride
• Lessons Learned: Wrong Site Surgery
• Inpatient Suicides: Recommendations for Prevention
• Preventing Restraint Deaths
• Infant Abductions: Preventing Future Occurrences
• High-Alert Medications and Patient Safety
• Operative and Postoperative Complications: Lessons for the Future
• Fatal Falls: Lessons for the Future
• Infusion Pumps: Preventing Future Adverse Events
• Lessons Learned: Fires in the Home Care Setting
• Kernicterus Threatens Healthy Newborns
References

1. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-376.
2. Studdert D, Thomas E, Burstin H, Zbar B, Orav J, Brennan T. Negligent care and malpractice claiming behavior in Utah and Colorado. Medical Care. 2000;38:250-260.
3. Van der Schaaf T. Near miss reporting in the Chemical Process Industry [Doctoral Thesis]. Eindhoven, The Netherlands: Eindhoven University of Technology; 1992.
4. Ibojie J, Urbaniak S. Comparing near misses with actual mistransfusion events: a more accurate reflection of transfusion errors. Br J Haematol. 2000;108:458-460.
5. Battles J, Kaplan H, Van der Schaaf T, Shea C. The attributes of medical event-reporting systems. Arch Pathol Lab Med. 1998;122:231-238.
6. Barach P, Small S. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ. 2000;320:759-763.
7. Whipple J, Quebbeman E, Lewis K, Gaughan L, Gallup E, Ausman R. Identification of patient controlled analgesia overdoses in hospitalized patients: a computerized method of monitoring adverse events. Ann Pharmacother. 1994;28:655-658.
8. Bates D, Makary M, Teich J, Pedraza L, Ma'luf N, Burstin H, et al. Asking residents about adverse events in a computer dialogue: how accurate are they? Jt Comm J Qual Improv. 1998;24:197-202.
9. Jha A, Kuperman GJ, Teich J. Identifying adverse drug events: development of a computer-based monitor and comparison with chart review and stimulated voluntary report. J Am Med Inform Assoc. 1998;5:305-314.
10. Naranjo C, Lanctot K. Recent developments in computer-assisted diagnosis of putative adverse drug reactions. Drug Saf. 1991;6:315-322.
11. Classen D, Pestotnik S, Evans R, Burke J. Computerized surveillance of adverse drug events in hospital patients. JAMA. 1991;266:2847-2851.
12. Flanagan J. The critical incident technique. Psychol Bull. 1954;51:327-358.
13. Webb R, Currie M, Morgan C, Williamson J, Mackay P, Russell W, et al. The Australian Incident Monitoring Study: an analysis of 2000 incident reports. Anaesth Intens Care. 1993;21:520-528.
14. Runciman W, Sellen A, Webb R, Williamson J, Currie M, Morgan C, et al. Errors, incidents and accidents in anaesthetic practice. Anaesth Intens Care. 1993;21:506-519.
15. Fischhoff B. Hindsight does not equal foresight: the effect of outcome knowledge on judgment under uncertainty. J Exp Psych: Human Perform and Percept. 1975;1:288-299.
16. Food and Drug Administration. Biological products; reporting of errors and accidents in manufacturing. Federal Register. 1997;62:49642-49648.
17. Safren MA, Chapanis A. A critical incident study of hospital medication errors. Hospitals. 1960;34:32-34.
18. Graham JR, Winslow WW, Hickey MA. The critical incident technique applied to postgraduate training in psychiatry. Can Psychiatr Assoc J. 1972;17:177-181.
19. Cooper JB, Newbower RS, Long CD, McPeek B. Preventable anesthesia mishaps: a study of human factors. Anesthesiology. 1978;49:399-406.
20. Buckley T, Short T, Rowbottom Y, Oh T. Critical incident reporting in the intensive care unit. Anaesthesia. 1997;52:403-409.
21. Hart G, Baldwin I, Gutteridge G, Ford J. Adverse incident reporting in intensive care. Anaesth Intens Care. 1994;22:556-561.
22. Linden JV, Paul B, Dressler KP. A report of 104 transfusion errors in New York State. Transfusion. 1992;32:601-606.
23. Emori T, Edwards J, Culver D, Sartor C, Stroud L, Gaunt E, et al. Accuracy of reporting nosocomial infections in intensive-care-unit patients to the National Nosocomial Infections Surveillance System: a pilot study. Infect Control Hosp Epidemiol. 1998;19:308-316.
24. Cullen D, Bates D, Small S, Cooper J, Nemeskal A, Leape L. The incident reporting system does not detect adverse events: a problem for quality improvement. Jt Comm J Qual Improv. 1995;21:541-548.
25. Incident reports. In: A guide to legal issues in health care. Philadelphia, PA: Trustees of the University of Pennsylvania; 1998:127-129.
26. Joint Commission on the Accreditation of Healthcare Organizations. Sentinel event policy and procedures. Available at: http://JCAHO.org/sentinel/se_pp.html. Accessed March 15, 2001.
27. Fischer G, Fetters M, Munro A, Goldman E. Adverse events in primary care identified from a risk-management database. J Fam Pract. 1997;45:40-46.
28. Berman S. Identifying and addressing sentinel events: an interview with Richard Croteau. Jt Comm J Qual Improv. 1998;24:426-434.
29. Joint Commission on Accreditation of Healthcare Organizations. Sentinel event statistics. Available at: http://www.JCAHO.org/sentinel/se_stats.html. Accessed April 16, 2001.
30. Joint Commission on Accreditation of Healthcare Organizations. Sentinel Event Alert. Available at: http://JCAHO.org/edu_pub/sealert/se_alert.html. Accessed May 14, 2001.
31. Lessons learned: sentinel event trends in wrong-site surgery. Joint Commission Perspectives. 2000;20:14.
32. Symposium: The Australian Incident Monitoring Study. Anaesth Intens Care. 1993;21:506-695.
33. Billings C, Reynard W. Human factors in aircraft incidents: results of a 7-year study. Aviat Space Environ Med. 1984;55:960-965.
34. National Aeronautics and Space Administration (NASA). Aviation Safety Reporting System. Available at: http://www-afo.arc.nasa.gov/ASRS/ASRS.html. Accessed March 14, 2000.
35. Leape L. Reporting of medical errors: time for a reality check. Qual Health Care. 2000;9:144-145.
36. New York State Department of Health. NYPORTS - The New York Patient Occurrence Reporting and Tracking System. Available at: http://www.health.state.ny.us/nysdoh/commish/2001/nyports/nyports.htm. Accessed May 31, 2001.
37. O'Neil A, Petersen L, Cook E, Bates D, Lee T, Brennan T. Physician reporting compared with medical-record review to identify adverse medical events. Ann Intern Med. 1993;119:370-376.
38. Data sources and coordination. In: American College of Surgeons, editor. Patient safety manual. Rockville, MD: Bader & Associates, Inc; 1985.
39. Wanzel K, Jamieson C, Bohnen J. Complications on a general surgery service: incidence and reporting. CJS. 2000;43:113-117.
40. Welsh C, Pedot R, Anderson R. Use of morning report to enhance adverse event detection. J Gen Intern Med. 1996;11:454-460.
41. Weingart SN, Ship AN, Aronson MD. Confidential clinician-reported surveillance of adverse events among medical inpatients. J Gen Intern Med. 2000;15:470-477.
42. Frey B, Kehrer B, Losa M, Braun H, Berweger L, Micallef J, et al. Comprehensive critical incident monitoring in a neonatal-pediatric intensive care unit: experience with the system approach. Intensive Care Med. 2000;26:69-74.
43. Findlay G, Spittal M, Radcliffe J. The recognition of critical incidents: quantification of monitor effectiveness. Anaesthesia. 1998;53:589-603.
44. Flaatten H, Hevroy O. Errors in the intensive care unit (ICU). Acta Anaesthesiol Scand. 1999;43:614-617.
45. Connolly C. Reducing error, improving safety: relation between reported mishaps and safety is unclear. BMJ. 2000;321:505.
46. Joint Commission on Accreditation of Healthcare Organizations. Sentinel Event Alert. Available at: http://www.JCAHO.org/edu_pub/sealert/se_alert.html. Accessed May 31, 2001.
47. Gaba D. Anaesthesiology as a model for patient safety in health care. BMJ. 2000;320.
48. Shannon R, De Muth J. Comparison of medication error detection methods in the long term care facility. Consulting Pharmacy. 1987;2:148-151.
49. Allan E, Barker K. Fundamentals of medication error research. Am J Hosp Pharm. 1990;47:555-571.
50. Rex J, Turnbull J, Allen S, Vande Voorde K, Luther K. Systematic root cause analysis of adverse drug events in a tertiary referral hospital. Jt Comm J Qual Improv. 2000;26:563-575.
51. Kaplan H, Battles J, Van Der Schaaf T, Shea C, Mercer S. Identification and classification of the causes of events in transfusion medicine. Transfusion. 1998;38:1071-1081.
52. Hartwig S, Denger S, Schneider P. Severity-indexed, incident report-based medication error-reporting program. Am J Hosp Pharm. 1991;48:2611-2616.
53. Kohn L, Corrigan J, Donaldson M, editors. To Err Is Human: Building a Safer Health System. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine, National Academy Press; 2000.
54. Pear R. U.S. health officials reject plan to report medical mistakes. New York Times. Jan 24, 2000:A14(N), A14(L).
55. Prager LO. Mandatory reports cloud error plan: supporters are concerned that the merits of a plan to reduce medical errors by 50% are being obscured by debate over its most controversial component. American Medical News. March 13, 2000:1,66-67.
56. Murray S. Clinton to call on all states to adopt systems for reporting medical errors. Wall Street Journal. Feb 22, 2000:A28(W), A6(E).
57. Kaufman M. Clinton seeks medical error reports; proposal to reduce mistakes includes mandatory disclosure, lawsuit shield. Washington Post. Feb 22, 2000:A02.
58. Mayor S. English NHS to set up new reporting system for errors. BMJ. 2000;320:1689.
Chapter 5. Root Cause Analysis Heidi Wald, MD University of Pennsylvania School of Medicine
Kaveh G. Shojania, MD University of California, San Francisco School of Medicine
Background

Historically, medicine has relied heavily on quantitative approaches for quality improvement and error reduction. For instance, the US Food and Drug Administration (FDA) has collected data on major transfusion errors since the mid-1970s.1,2 Using the statistical power of these nationwide data, the most common types of errors have been periodically reviewed and systems improvements recommended.3 These epidemiologic techniques are suited to complications that occur with reasonable frequency, but not to rare (yet nonetheless important) errors.

Outside of medicine, high-risk industries have developed techniques to address major accidents. Clearly the nuclear power industry cannot wait for several Three Mile Island-type events to occur in order to conduct valid analyses of their likely causes. A retrospective approach to error analysis, called root cause analysis (RCA), is widely applied to investigate major industrial accidents.4 RCA has its foundations in industrial psychology and human factors engineering, and many experts have championed it for the investigation of sentinel events in medicine.5-7 In 1997, the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) mandated the use of RCA in the investigation of sentinel events in accredited hospitals.8

The most commonly cited taxonomy of human error in the medical literature is based on the work of James Reason.4,9,10 Reason describes 2 major categories of error: active error, which generally occurs at the point of human interface with a complex system, and latent error, which represents failures of system design. RCA is generally employed to uncover latent errors underlying a sentinel event.6,7

RCA provides a structured, process-focused framework with which to approach sentinel event analysis. Its cardinal tenet is to avoid the pervasive and counterproductive culture of individual blame.11,12 Systems and organizational issues can be identified and addressed, and active errors acknowledged.6 Systematic application of RCA may uncover common root causes that link a disparate collection of accidents (eg, a variety of serious adverse events occurring at shift change). Careful analysis may suggest system changes designed to prevent future incidents.13

Despite these intriguing qualities, RCA has significant methodologic limitations. RCAs are in essence uncontrolled case studies. Because the occurrence of accidents is highly unpredictable, it is impossible to know whether the root cause established by the analysis is in fact the cause of the accident.14 In addition, RCAs may be tainted by hindsight bias.4,15,16 Other biases stem from how deeply the causes are probed and from the prevailing concerns of the day.16,17 The fact that technological failures (eg, device malfunction), which previously represented the focus of most accident analyses, have been supplanted by staffing issues, management failures, and information systems problems may be an example of the latter bias.17 Finally, RCAs are time-consuming and labor intensive.
Despite legitimate concerns about the place of RCA in medical error reduction, the JCAHO mandate ensures that RCA will be widely used to analyze sentinel events.8 Qualitative methods such as RCA should be used to supplement quantitative methods, to generate new hypotheses, and to examine events not amenable to quantitative methods (for example, those that occur rarely).18 As such, the credibility of RCA as a research tool should be judged by the standards appropriate for qualitative, not quantitative, research.19,20 Yet the outcomes and costs associated with RCA are largely unreported. This chapter reviews the small body of published literature regarding the use of RCA in the investigation of medical errors.

Practice Description

To be credible, RCA requires rigorous application of established qualitative techniques. Once a sentinel event has been identified for analysis (eg, a major chemotherapy dosing error, a case of wrong-site surgery, or a major ABO-incompatible transfusion reaction), a multidisciplinary team is assembled to direct the investigation. The members of this team should be trained in the techniques and goals of RCA, as the tendency to revert to personal biases is strong.13,14 Multiple investigators allow triangulation, or corroboration, of major findings and increase the validity of the final results.19 Based on the concepts of active and latent error described above, accident analysis is generally broken down into the following steps6,7:

1. Data collection: establishing what happened through structured interviews, document review, and/or field observation. These data are used to generate a sequence or timeline of events preceding and following the event.

2. Data analysis: an iterative process of examining the sequence of events generated above with the goal of determining the common underlying factors: (i) establishing how the event happened by identifying active failures in the sequence; (ii) establishing why the event happened by identifying latent failures in the sequence that are generalizable.

To ensure consideration of all potential root causes of error, one popular conceptual framework for contributing factors has been proposed based on work by Reason.7 Several other frameworks also exist.21,22 The categories of factors influencing clinical practice include institutional/regulatory, organizational/management, work environment, team factors, staff factors, task factors, and patient characteristics. Each category can be expanded to provide more detail. A credible RCA considers root causes in all categories before rejecting a factor or category of factors as non-contributory. A standardized template in the form of a tree (an "Ishikawa" diagram) may help direct the process of identifying contributing factors, with factors leading to the event grouped (on the tree's "roots") by category. Category labels may vary depending on the setting.23 At the conclusion of the RCA, the team summarizes the underlying causes and their relative contributions, and begins to identify administrative and systems problems that might be candidates for redesign.6

Prevalence and Severity of the Target Safety Problem

JCAHO's 6-year-old sentinel event database of voluntarily reported incidents (see Chapter 4) has captured a mere 1152 events, of which 62% occurred in general hospitals. Two-thirds of the events were self-reported by institutions, with the remainder coming from patient
complaints, media stories, and other sources.24 These statistics are clearly affected by underreporting and consist primarily of serious adverse events (76% of events reported resulted in patient deaths), not near misses. The number of sentinel events appropriate for RCA is likely orders of magnitude greater.

The selection of events for RCA may be crucial to its successful implementation on a regular basis. Clearly, it cannot be performed for every medical error. JCAHO provides guidance for hospitals about which events are considered "sentinel,"8 but the decision to conduct RCA is at the discretion of the leadership of the organization.12 If the number of events is large and homogeneous, many events can be excluded from analysis. In one transfusion medicine reporting system, all events were screened after initial report and entered in the database, but those not considered sufficiently unique did not undergo RCA.25

Opportunities for Impact

While routine RCA of sentinel events is mandated, the degree to which hospitals carry out credible RCAs is unknown. Given the numerous demands on hospital administrators and clinical staff, it is likely that many hospitals fail to give this process a high profile, assigning the task to a few personnel with minimal training in RCA rather than involving trained leaders from all relevant departments. The degree of underreporting to JCAHO suggests that many hospitals are wary of probationary status and the legal implications of disclosing sentinel events and the results of RCAs.12,26

Study Designs

As RCA is a qualitative technique, most reports in the literature are case studies or case series of its application in medicine.6,27-30 There is little published literature that systematically evaluates the impact of formal RCA on error rates. The most rigorous study comes from a tertiary referral hospital in Texas that systematically applied RCA to all serious adverse drug events (ADEs) considered preventable. The time series contained background data during the initial 12-month implementation period and a 17-month follow-up phase.13

Study Outcomes

Published reports of the application of RCA in medicine generally present incident reporting rates, categories of active errors determined by the RCA, categories of root causes (latent errors) of the events, and suggested systems improvements. While these do not represent clinical outcomes, they are reasonable surrogates for evaluation. For instance, increased incident reporting rates may reflect an institution's shift toward increased acceptance of quality improvement and organizational change.5,21
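To make the contributing-factor framework from the Practice Description concrete, it can be represented as a simple Ishikawa-style grouping of findings by category. This is a hypothetical sketch, not JCAHO's actual template: the category names come from the text above, but the helper functions and the example findings are invented for illustration.

```python
# Sketch of an Ishikawa-style grouping of RCA contributing factors.
# Categories follow the Reason-based framework described in the text;
# the helpers and the example findings are hypothetical.
CATEGORIES = [
    "institutional/regulatory",
    "organizational/management",
    "work environment",
    "team factors",
    "staff factors",
    "task factors",
    "patient characteristics",
]

def build_ishikawa(findings):
    """Group (category, factor) pairs onto the 'roots' of the tree."""
    tree = {category: [] for category in CATEGORIES}
    for category, factor in findings:
        if category not in tree:
            raise ValueError(f"unknown category: {category!r}")
        tree[category].append(factor)
    return tree

def unconsidered(tree):
    """Categories with no recorded factors; a credible RCA reviews
    each of these before ruling it non-contributory."""
    return [category for category, factors in tree.items() if not factors]

# Hypothetical findings from an RCA of a chemotherapy dosing error
tree = build_ishikawa([
    ("task factors", "ambiguous preprinted order form"),
    ("staff factors", "covering physician unfamiliar with the protocol"),
    ("work environment", "frequent interruptions during order entry"),
])
remaining = unconsidered(tree)  # categories still to be reviewed
```

The `unconsidered` check operationalizes the text's requirement that every category be examined before being rejected as non-contributory.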
Evidence for Effectiveness of the Practice

The Texas study revealed a 45% decrease in the rate of voluntarily reported serious ADEs between the study and follow-up periods (7.2 per 100,000 to 4.0 per 100,000 patient-days, p
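As a quick consistency check on the figures above, the reported decrease follows directly from the two rates. This is a sketch of the arithmetic only, not code from the study.

```python
# Relative decrease in voluntarily reported serious ADEs (per 100,000
# patient-days) between the Texas study's implementation and follow-up periods.
before = 7.2   # implementation period rate
after = 4.0    # follow-up period rate
relative_decrease = (before - after) / before  # about 0.444
```

(7.2 - 4.0) / 7.2 is approximately 0.444, consistent with the roughly 45% decrease reported.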