The rapid adoption of AI-enabled technologies reflects both the severity of long-standing challenges in healthcare and the promise these tools hold in addressing them. From improving documentation efficiency to supporting clinical diagnosis, AI is no longer the technology of the future—it is the technology of the present. As described in the introduction, AI systems are now influencing every stage of the diagnostic process, from patient triage to image interpretation to care planning. These tools are already being commercialized, deployed, and integrated into hospitals and clinics nationwide.
The speed of this transformation has left many stakeholders struggling to keep pace. Hospital leaders may be unsure which AI tools are most appropriate for their institution, how to evaluate product claims, how to govern these systems over time, or how to prepare staff to use them responsibly. Clinicians may lack clarity on how safe or accurate these tools are, or how to incorporate them into practice in ways that maximize benefit while minimizing risk. Researchers may be unaware of the system-level effects these technologies have or what questions are most urgent to study. And patients, often unaware that AI is involved in their care at all, may be caught off guard when decisions affecting their health are partially automated. Each group is being asked to navigate this AI wave, yet few have been given the tools to do so.
While this moment may feel unprecedented, it is not the first time a major technological transformation has redefined professional roles and system design. Insights from other high-risk industries that have successfully adapted to automation offer valuable lessons on how healthcare can navigate the challenges ahead.
Lessons From Automation in High-Risk Industries
Other industries that have successfully integrated automation into high-risk environments—such as aviation, nuclear power, and manufacturing—offer important lessons that can guide healthcare’s approach to adopting AI safely and effectively. For example, many AI-enabled technologies fundamentally shift the clinician’s role from direct decision-maker to supervisor of automated decisions. This transformation may be new in healthcare, but it mirrors a similar transition that occurred in aviation during the 1970s and 1980s.54 The introduction of autopilot systems changed the pilot’s responsibilities from manual control to monitoring and intervening when needed. Initially, this shift led to new types of errors, including missed warnings, over-trust, and loss of manual skills. But through research, redesign, and training, aviation developed strategies to mitigate these challenges. These include system transparency, clear accountability structures, and training that emphasizes both human judgment and the strengths and limitations of automation.
Healthcare can take a similar path. As AI continues to evolve, the focus should not be on resisting or blindly adopting the wave of new technologies but on designing systems that foster effective human-AI collaboration. Doing so will require not only evidence-based implementation but also investment in education, policy guidance, and interdisciplinary collaboration across medicine, computer science, and human factors engineering. As healthcare moves to integrate AI more deeply into clinical care, applying these lessons requires deliberate organizational planning. While thorough guidance documents on how to integrate AI responsibly are available,55 this Issue Brief offers some guidance on how hospital systems, clinicians, and patients can navigate this rapidly evolving landscape.
Guidance for Hospital Leaders and Health System Decision-Makers
Hospitals and health systems vary in size, resources, and readiness to implement AI technologies. Some may already have dedicated informatics or clinical AI governance teams, while others are beginning to explore how these tools could support care delivery. Regardless of the starting point, all institutions will need to consider how to safely evaluate, integrate, and monitor these tools as they become more prevalent. To move forward responsibly, hospitals must build organizational capacity to manage AI implementation systematically. This includes developing internal processes to assess new technologies, ensuring ongoing oversight, and supporting clinical staff in adapting to evolving roles. The following considerations are grouped into three domains relevant to hospital decision-making: product selection and evaluation, safety monitoring and risk mitigation, and workforce readiness and governance.
Product Selection and Evaluation
The increasing availability of commercial AI products presents hospital leaders with complex decisions about which tools to adopt and how to assess their clinical value. Selecting a tool requires more than evaluating vendor promises or novel functionality: it means determining whether a product is safe, effective, and appropriate for the institution’s specific patient population and clinical workflows. Hospitals should request detailed performance information from vendors, including clinical validation data, known failure modes, and any population-level limitations in accuracy or generalizability. AI tools should ideally be evaluated not only for aggregate performance but also for subgroup effects and specific use conditions. To guide procurement, hospitals may consider developing a standardized framework for evaluating AI products based on criteria such as the following:
- Accuracy and validation across relevant patient populations.
- Known risks or hallucination rates.
- Transparency regarding data sources, model updates, and usage limitations.
- Integration requirements and alignment with existing workflows and technologies.
- Impact on equity, safety, and clinical decision-making.
These considerations support more informed procurement and reduce the risk of misaligned or ineffective implementations.
Safety Monitoring and Risk Mitigation
As AI tools are introduced into clinical workflows, hospitals will need to ensure that safety monitoring systems are equipped to detect and respond to errors associated with their use. These tools may fail in ways that differ from traditional technologies, including generating outputs that appear plausible but are clinically inappropriate or biased. Existing safety and quality systems may not be configured to capture these issues without adjustment.
Hospitals should consider incorporating AI-specific categories into their incident reporting infrastructure and training clinical risk managers to recognize how AI might contribute to harm—or obscure its source. Key elements of an effective monitoring approach include defining what constitutes an AI-related safety event, establishing clear internal pathways to report and investigate such events, and ensuring mechanisms are in place for frontline staff to flag concerning outputs. Institutions should also actively monitor real-world performance and compare it against vendor-reported metrics or initial validation data. These efforts will be critical for closing the feedback loop, identifying performance issues early, and supporting continuous improvement after deployment.
Workforce Readiness and Governance
Effective AI implementation in clinical settings depends not only on the quality of the technology but also on the readiness of the workforce to use it safely. Hospital leaders will need to invest in clinician education and interdisciplinary governance structures that support thoughtful integration. This includes defining roles and responsibilities for selecting, reviewing, and monitoring AI tools and ensuring that clinicians receive targeted training not only on how specific tools are intended to function but also on their known limitations and appropriate oversight practices. Education should include guidance on when to rely on a tool, when to question it, and how to intervene when outputs appear inconsistent with clinical judgment. These competencies may need to be incorporated into onboarding, continuing education, and quality improvement activities. In addition, hospitals should consider how and when to inform patients about the use of AI tools in their care, particularly when those tools generate documentation or influence decision-making in ways that affect patient experience. Given the complexity of the topic, this Issue Brief provides only a high-level overview; more comprehensive guidance is available for those seeking deeper insight.56–58
Guidance for Clinicians
Ideally, hospital systems would provide clinicians with effective education and training before implementing new AI technology. In practice, clinicians may be introduced to new technologies with minimal explanation or training, leaving them to determine how to integrate these tools into their practice responsibly. While AI-enabled systems may offer meaningful improvements—such as enhanced diagnostic accuracy, increased efficiency, and reduced cognitive burden—they also present new risks, including biased outputs, hallucinations, and clinically inappropriate recommendations. As the use of these tools expands, clinicians will be tasked with developing a calibrated level of trust: sufficient to make use of the tools’ strengths, but cautious enough to recognize and mitigate their limitations.
When presented with a new AI-enabled tool, clinicians should request any available information on the tool’s intended use, validated performance, and known limitations. This includes basic details on accuracy, reliability, applicable patient populations, and known sources of error. The Limitations of AI section earlier in this Issue Brief offers a framework for the questions clinicians might ask when presented with a new technology:
- What is the documented accuracy of this tool?
- Has the tool been evaluated for its impact on clinical outcomes?
- Has it been validated on diverse populations?
- Is the model less accurate for certain patient populations?
- Is it prone to hallucinations or biased predictions?
- Are there scenarios or edge cases where performance is less reliable?
The goal of this inquiry is not to acquire technical expertise but to establish informed expectations. Understanding when a tool performs reliably and when it may not can help clinicians avoid known pitfalls (e.g., automation complacency, automation bias) while still leveraging the tool’s intended benefits. In cases where such information is not available, clinicians should proceed cautiously, monitor the tool’s output closely, and share their observations with hospital leadership and colleagues to help build institutional understanding.
Another emerging consideration is how clinicians should communicate the use of AI to their patients. This can be challenging, particularly when patients are unfamiliar with AI and clinicians themselves may not have complete information about how a tool operates or its limitations. While it is not realistic to expect providers to serve as the sole educators for every AI system in use—institutions should provide patient-facing materials with clear disclosures—clinicians should still make an effort to inform patients when patient-facing AI tools are being used. Informing patients of their agency to review AI-assisted content provides an additional “human in the loop,” helping to identify potential errors and adding another layer of safety and accountability.
Guidance for Patients
Outside of clinical settings, patients may encounter consumer-facing AI tools such as symptom checkers, wearable devices, or generative AI platforms like ChatGPT. These tools can provide general health information, support basic symptom assessment, or assist with decision-making about when to seek care. While they are not designed to replace medical evaluation, they may serve as a useful starting point for patients seeking to better understand their health. However, the accuracy of these tools varies, and they may produce information that is incomplete or misleading. Patients should be aware of these limitations and consult with a healthcare provider before making decisions based solely on AI-generated information.
Within clinical care, AI tools are being integrated into many parts of the healthcare process. While this Issue Brief recommends that hospitals communicate when AI is involved in care delivery, this does not always occur in practice, and patients may interact with AI-generated content without being explicitly informed. Patients who wish to learn more can ask their providers whether AI tools were involved in summarizing their visit, generating a recommendation, or communicating clinical information. It may also be appropriate to ask how the clinician reviews and validates outputs generated by these systems and whether there are known risks or limitations associated with their use. Clinicians may not have detailed information about a tool’s performance or limitations, particularly if the technology was implemented with limited transparency or support; nonetheless, these questions can prompt important conversations and help both patients and providers remain alert to potential concerns.

Patients also play an important role in ensuring AI is used safely. If something in a visit summary, message, or treatment plan seems inaccurate or unclear, patients should feel encouraged to speak up. Patient feedback can help identify errors, clarify misunderstandings, and contribute to the safe and responsible integration of AI technologies in healthcare.
