Integrating artificial intelligence in clinical practice, hospital management, and health policy: literature review
Review Article
Daniel Nasef1, Demarcus Nasef1, Viola Sawiris1, Brett Weinstein2, Jodan Garcia3, Milan Toma1
1Department of Osteopathic Manipulative Medicine, College of Osteopathic Medicine, New York Institute of Technology, Old Westbury, NY, USA;
2Department of Clinical Sciences, College of Osteopathic Medicine, New York Institute of Technology, Old Westbury, NY, USA;
3Byrdine F. Lewis College of Nursing and Health Professions, Georgia State University, Urban Life Building, Atlanta, GA, USA
Contributions: (I) Conception and design: All authors; (II) Administrative support: M Toma; (III) Provision of study materials or patients: All authors; (IV) Collection and assembly of data: All authors; (V) Data analysis and interpretation: All authors; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.
Correspondence to: Milan Toma, PhD. Department of Osteopathic Manipulative Medicine, College of Osteopathic Medicine, New York Institute of Technology, Serota Academic Center, room 138, Northern Boulevard, P.O. Box 8000, Old Westbury, NY 11568, USA. Email: tomamil@tomamil.com.
Background and Objective: Artificial intelligence (AI) is a rapidly evolving field in healthcare, with new publications emerging daily, each contributing new perspectives and outputs. AI has become increasingly important in healthcare, offering the potential to enhance patient outcomes, streamline hospital operations, and inform health policy. Its integration into clinical practice, hospital management, and health policy presents both opportunities and challenges that necessitate comprehensive understanding and responsible implementation. This paper consists of a literature review conducted with the goal of assessing the implications of AI in healthcare.
Methods: The articles selected for this study were peer-reviewed papers, published in English from January 2023 to December 2024, with a focus on AI applications in clinical practice, hospital management, or health policy. The articles were selected from several databases including PubMed/Medline, Scopus, EMBASE, Web of Science, Google Scholar, and ResearchGate. After the filtration process, a total of 159 articles were selected for the review.
Key Content and Findings: Results revealed that AI significantly enhances clinical practice by improving diagnostic accuracy, offering personalized treatment recommendations, and aiding patient monitoring. In hospital management, AI has been shown to contribute to operational efficiency by automating administrative tasks, optimizing resource allocation, and supporting decision-making through predictive analytics. Regarding health policy, AI has been shown to facilitate evidence-based policymaking, data-driven insights for governance, and has the potential to improve health equity by identifying and addressing healthcare disparities. Despite these benefits, several limitations were identified, including risks of false positives and negatives in clinical applications, model overfitting due to inadequate validation, ethical concerns related to data privacy and fairness, and challenges in regulatory compliance.
Conclusions: AI holds substantial promise for advancing healthcare across multiple domains, but realizing its full potential requires addressing ethical considerations, ensuring robust model validation, and fostering collaboration between healthcare professionals and AI experts. Establishing appropriate regulatory frameworks and integrating AI education into medical training are important steps toward responsible and effective AI integration in healthcare.
Received: 17 February 2025; Accepted: 03 April 2025; Published online: 13 June 2025.
doi: 10.21037/jhmhp-24-138
Introduction
A report by the National Academy of Medicine highlighted three potential advantages of artificial intelligence (AI) in healthcare: enhancing outcomes for patients and clinical teams, reducing healthcare costs, and promoting better population health (1). Hence, in the complex world of healthcare, AI serves as a pivotal hub, extending its influence into three important areas (Figure 1): clinical practice, hospital management, and health policy. This review focuses on the literature discussing the application of AI in these settings. The clinical practice element elucidates the multifaceted role of AI in patient care (2). It commences with diagnosis, where AI, specifically machine learning (ML), is instrumental in imaging and pathology, providing accurate and timely results. It extends into treatment, where AI’s capabilities in personalized medicine become apparent in tailoring therapies to individual patient needs, and culminates with patient monitoring, underscoring the possibility of integrating AI into wearable technology for real-time tracking of patient health metrics. The hospital management component portrays AI’s significant contribution to efficient hospital operations. It starts with resource allocation, where AI’s predictive capabilities are leveraged to anticipate patient flow and ensure optimal utilization of resources. AI next addresses staff scheduling, optimizing shifts and thereby contributing to improved staff productivity and patient care. Finally, AI supports patient data management, managing electronic health records (EHRs) and ensuring secure and efficient handling of patient data. Lastly, in health policy, AI demonstrates its profound influence on policymaking and regulation. It begins with policymaking, where AI’s predictive modeling is utilized to anticipate policy outcomes, aiding in the formulation of effective healthcare policies.
Next is regulation, where AI’s monitoring capabilities help ensure compliance with health regulations. The component concludes with public health, where AI tracks disease outbreaks, contributing to timely and effective public health responses.
Figure 1 Encapsulation of the multifaceted role of AI in the healthcare sector. At the center of the map is the concept of ‘AI’, which branches out into three key areas: ‘Clinical Practice’, ‘Hospital Management’, and ‘Health Policy’. AI, artificial intelligence.
Despite the rapid advancements in healthcare, several challenges persist, including diagnostic inaccuracies, inefficiencies in hospital management, and the lack of data-driven policymaking. These obstacles hinder the delivery of optimal patient care and the efficient allocation of resources. AI has emerged as a potential solution, offering to enhance diagnostic accuracy, streamline administrative processes, and inform evidence-based health policies. However, the integration of AI into healthcare also faces significant challenges, such as the need for standardized reporting aligned with regulatory benchmarks, ensuring model generalizability, and addressing ethical concerns related to data privacy and fairness. This study aims to explore the multifaceted role of AI in addressing these critical challenges and to provide a comprehensive understanding of its implications in clinical practice, hospital management, and health policy, while highlighting the current issues with AI integration and the necessity for robust regulatory frameworks.
The integration of AI into healthcare is not just a technological shift but also a significant change in how clinical practices are conducted and how health policies are formulated. AI can enhance diagnostic accuracy, streamline hospital management, and improve patient outcomes. However, it also raises ethical concerns and challenges regarding data privacy, implementation barriers, and the need for regulatory frameworks to ensure safe and effective use. AI’s role in healthcare has been extensively discussed in recent literature. For instance, a study by Abubaker Bagabir et al. discusses the integration of AI in clinical settings, focusing on its applications in genome sequencing, drug development, and vaccine discovery, particularly in the context of coronavirus disease 2019 (COVID-19) (3). A review by Giordano et al. highlights various roles AI plays in healthcare, including its impact on patient management and clinical decision-making (4,5). A publication by Secinaro et al. examines how AI technologies are reshaping hospital operations and clinical practices, emphasizing efficiency and patient care improvements (6). An article by Amini et al. reviews the ethical considerations and challenges faced in implementing AI in nursing and healthcare settings (7). A qualitative interview study with healthcare leaders by Petersson et al. discusses the benefits and challenges of AI in healthcare, focusing on administrative and medical processes (8).
Considering these contributions, it is clear that AI, despite being a driving force for innovation in healthcare, also requires a balanced strategy for its ethical and responsible use. The ongoing discussions and research highlight the importance of continuous evaluation and adaptation in managing the complexities that AI introduces into the healthcare field. We present this article in accordance with the Narrative Review reporting checklist (available at https://jhmhp.amegroups.com/article/view/10.21037/jhmhp-24-138/rc).
Methods
The literature search for this review was designed to capture the most relevant and recent publications on the implications of AI in healthcare, with a specific focus on hospital management and health policy. The inclusion criteria required that each publication have a Digital Object Identifier (DOI) to ensure traceability and credibility. Only articles published in the English language between January 2023 and December 2024 were considered to ensure the relevance of the information. The types of publications included were peer-reviewed research papers, reviews, and clinical studies that directly addressed the use of AI in the specified domains of healthcare.
A flowchart shown in Figure 2 illustrates the search and selection process. The initial search yielded a total of 1,616 articles, from which 263 duplicates were removed, leaving 1,353 unique articles. Of the initial search results, 69.9% (1,129 articles) were excluded during title and abstract screening. These exclusions were primarily due to irrelevance to AI in hospital management/policy (n=750, 66.4% of excluded articles), followed by publication dates before 2023 (n=262, 23.2%), lack of DOI (n=86, 7.6%), and non-English language publications (n=31, 2.8%). The remaining 224 articles underwent full-text evaluation, which excluded 65 additional articles primarily due to inadequate validation methods (n=23), absence of AI applications in clinical/hospital/policy domains (n=19), insufficient traceability (n=12), and other reasons (n=11). A total of 159 articles met all inclusion criteria and were included in the final review. The detailed search strategy, including the databases searched, search terms, and inclusion criteria, is summarized in Table 1.
Figure 2 Flowchart illustrating the search and selection process. Numbers in parentheses indicate initial search results from each database. DOI, Digital Object Identifier.
Table 1 Summary of the search strategy employed in the literature review, outlining the key components of the search process, including the date of the search, the databases and sources consulted, the specific search terms and filters used, the timeframe for inclusion, and the selection process

Date of search: 15/01/2025
Databases and other sources searched: PubMed/Medline, Scopus, EMBASE, Web of Science, Google Scholar, ResearchGate
Search terms used: keywords and phrases such as “Artificial intelligence in healthcare”, “AI in hospital management”, and “AI implications for health policy”
Timeframe: 01/2023–12/2024
Inclusion criteria: peer-reviewed research papers, reviews, and clinical studies with a DOI, published in English
Selection process: the authors worked together to review and assess the relevance of each publication; consensus was reached through group discussions, ensuring that all selected studies met the inclusion criteria and were relevant to the specified domains of healthcare
AI, artificial intelligence; DOI, Digital Object Identifier.
Critical advancements in AI-driven healthcare
The recent literature underscores the transformative potential of AI in healthcare across clinical practice (9,10), hospital management (11), and health policy (6,12,13). AI technologies have demonstrated the capacity to enhance diagnostic accuracy, personalize treatment, improve operational efficiency, and inform policymaking. However, realizing these benefits requires careful attention to ethical considerations, robust validation of AI models, and collaborative efforts among healthcare professionals, AI experts, and policymakers to ensure responsible and effective implementation. As summarized in Table 2, the review identifies critical advancements in diagnostics, operational efficiency, and evidence-based policymaking across clinical practice, hospital management, and health policy domains.
Table 2 Summary of major outcomes from AI integration in healthcare

Domain | Key findings/results | References
Clinical practice | AI enhances diagnostic accuracy in medical imaging and pathology analysis |
AI, artificial intelligence; EHR, electronic health record; ML, machine learning.
AI-driven innovations in clinical practice
In clinical practice, AI applications, such as ML algorithms and neural networks, have improved diagnostic accuracy in areas like medical imaging (9,25), pathology, and chronic disease management (10,26,27). For instance, AI has been utilized in cancer screenings to produce faster and more accurate results (15,28,29), and in predicting declines in kidney function by analyzing total kidney volume (16). Additionally, AI supports personalized medicine by optimizing medication dosages and tailoring treatment plans to individual patient profiles (9,12), leading to improved patient outcomes and reduced healthcare costs.
AI has also contributed to advancements in drug discovery and development. Through AI-driven autonomous experimentation systems and generative chemistry platforms, researchers have accelerated the identification of new drug candidates and materials (17,30-45). Moreover, AI has enhanced patient safety by reducing the potential for human error and aiding in the management of chronic illnesses such as asthma, diabetes, and hypertension (46-49).
Hence, AI is making significant strides in clinical practice (10,50), offering enhanced diagnostic capabilities (51-54), personalized treatment recommendations (55,56), and improved patient safety (57-59). The articles reviewed demonstrate the diverse applications of AI (60), from drug discovery (31-34) and chronic disease management (61) to surgical assistance (62-66) and medical imaging (67-69). As AI technology continues to evolve, its integration into clinical practice promises to revolutionize patient care and health management (9,10,46).
AI is used in cancer screenings, such as mammograms (15) and lung cancer screenings (29), to produce faster results. In chronic kidney disease, AI helps predict the decline in kidney function by analyzing total kidney volume (16). AI also identifies individuals at risk of left ventricular dysfunction (70), even without noticeable symptoms (71). Additionally, AI assists in managing chronic illnesses like asthma (47), diabetes (48), and high blood pressure (49) by connecting patients with relevant screenings and therapies (46). AI’s role extends to predicting disease outbreaks (72) and aiding in communication (73) and decision-making (74) to prevent spread (75,76). It has also shown higher accuracy than traditional pathology methods in predicting survival rates for malignant mesothelioma (14) and improving colonoscopy accuracy (77) by identifying colon polyps (78).
AI, along with algorithm optimization and high-throughput experiments, has enabled rapid discovery of new chemicals (31,32,35-37) and materials (38). Autonomous experimentation systems, powered by AI, enhance research and development by running numerous chemical experiments autonomously (39-41). A notable breakthrough was the discovery of a protein kinase inhibitor, designed using AI, which entered clinical trials for its anti-tumor properties (17). The study utilized AlphaFold (42), an AI program for protein structure prediction (43), integrated with a biocomputational engine and a generative chemistry platform to identify new drugs for hepatocellular carcinoma. AI-driven autonomous experimentation systems are expected to significantly impact biomedical research, particularly in drug discovery and molecular systems engineering (44,45).
AI-driven transformations in hospital management
AI has been instrumental in transforming hospital management by enhancing operational efficiency and resource allocation. The reviewed literature indicates that AI-driven systems automate routine administrative tasks such as scheduling, billing, and record-keeping, thereby reducing administrative burdens and minimizing errors (11,18,20,60,79-82). AI tools analyze large datasets to provide evidence-based recommendations and predictive analytics, enabling hospital administrators to optimize resource allocation, forecast patient flow, and improve workforce management (6,9,12,26). For example, AI applications in staff scheduling and capacity planning have contributed to improved productivity and patient care (19,83).
Furthermore, AI has been utilized to enhance patient data management, ensuring secure handling of EHRs and compliance with regulations (20,21,80). Predictive analytics help in identifying compliance risks and preventing costly mistakes (84-88). Overall, AI integration in hospital management leads to streamlined processes, cost savings, and enhanced patient experiences (6,11,60).
The integration of AI-powered solutions in hospital management is significantly transforming operations (11,19,89,90). AI technologies automate repetitive tasks such as scheduling (79), billing (18), and record-keeping (20,80), which helps to reduce administrative burdens (60) and minimize errors (81,82). AI tools are capable of analyzing large datasets quickly (91), providing evidence-based recommendations (92) and identifying patterns that may not be obvious to healthcare professionals (93). Additionally, these tools assist in recognizing compliance risks (84) and anomalies in billing (94) and coding, ensuring that healthcare facilities remain compliant with regulations (21) and avoid costly mistakes (95). By streamlining processes (96) and lessening manual workloads (97-99), AI enhances efficiency (100,101) and can lead to substantial cost savings for hospitals (102). Predictive analytics tools can identify areas of potential compliance (85,86) and audit risk (87), enabling proactive measures to mitigate expensive errors (88,103). Overall, AI-driven systems significantly reduce the likelihood of human error (104), improve productivity, and free up staff to concentrate on more high-value responsibilities, including the detection and prevention of healthcare fraud (105).
AI systems have demonstrated their efficiency in reviewing EHRs (106) and updating treatment guidelines (107), significantly accelerating healthcare information processing (108). ML techniques are being leveraged to analyze unstructured data (109), enhancing decision-making (110) and operational efficiency in healthcare organizations (111). AI’s diverse applications extend to patient diagnosis (112,113), medical document transcription (114), and drug development (17,30-45), with various AI types like neural networks (115,116) and natural language processing (117,118) playing a role in healthcare management (11,20,26,57,79,80,119). Finally, AI-driven advancements promise to enhance healthcare with technologies like whole-body magnetic resonance imaging (MRI) scans (120-122) for preventative screenings (123-125) and the potential to significantly improve provider productivity while reducing costs (126-128).
AI has also shown potential to aid in personalized patient flow optimization (129,130), predictive capacity planning (131), and advanced workforce management (11). AI scheduling algorithms are being leveraged to customize individual patient pathways in real time (132), considering specific patient profiles and treatment timelines (133), which can decrease length of stay and improve patient throughput (19). Recent studies have highlighted the potential of AI to predict and manage peak demand by integrating hospital data with external factors such as social events (134), seasonal patterns (135,136), and epidemiological trends (137), allowing administrators to deploy resources more effectively (89). The integration of reinforcement learning models for inventory and pharmaceutical supply chain management is enabling hospitals to predict shortages and surpluses (138), maintaining optimal stock levels without overburdening their storage systems (90). Workforce allocation has also been improved, with AI identifying patterns of staff burnout (99,139,140) and suggesting optimal rotation schedules to minimize fatigue (26,83). With the ability to constantly learn and adapt from evolving data, AI’s role in hospital management is moving beyond basic task automation to decision-making and strategic planning (60,141), ultimately enhancing operational resilience and patient-centered care (142,143).
AI-driven insights for health policy
In the context of health policy, AI offers new opportunities for evidence-based policymaking, data-driven insights, and promotion of health equity. The literature highlights that AI facilitates accurate data collection and analysis, supporting the formulation of effective health policies (6,12,13,144). AI models assist policymakers in simulating potential policy outcomes and adjusting strategies in response to emerging health crises, such as pandemic preparedness and vaccine distribution (23,145). Additionally, AI helps identify healthcare disparities by analyzing patterns in health data, guiding interventions to address systemic inequities and improve access to healthcare services (7,22,24,146).
AI is changing healthcare policymaking by providing new opportunities to address complex health challenges more effectively. For instance, integrating AI into health policy has enabled more accurate data collection and analysis, which directly supports the formulation of evidence-based policies (144). By automating the analysis of health data, AI reduces human error and provides real-time insights that allow policymakers to adjust strategies swiftly, such as in responding to emerging health crises (145). This capability is particularly crucial in addressing public health concerns like pandemic preparedness and vaccine distribution (23), as AI models can optimize logistics and resource allocation (147). However, challenges such as data privacy and ethical issues, including ensuring compliance with regulations like the General Data Protection Regulation (GDPR) (148), remain central to policy discussions, requiring well-structured AI governance frameworks (7).
AI also contributes to more equitable health policies by uncovering healthcare disparities (22). ML algorithms can identify patterns in healthcare data, revealing gaps in service provision to underserved populations (24), which can then be addressed through targeted health interventions (146). For example, AI’s ability to analyze large datasets enables the identification of social determinants of health (149), guiding policymakers in formulating more inclusive policies that address systemic inequities (7). Furthermore, AI-driven insights help develop proactive policies aimed at reducing healthcare inequalities across different demographic groups (24,150), ensuring that resources are allocated where they are most needed (151) based on emerging trends and outcomes (152).
Moreover, AI plays a pivotal role in evaluating the effectiveness of health policies. Through predictive modeling, AI allows policymakers to simulate the impact of potential policy changes before implementation (153), thereby optimizing decisions and minimizing risks (154). This approach is particularly useful in assessing the long-term effects of policies related to chronic disease management (155) or public health initiatives (156,157). AI also provides a mechanism for continuous feedback, enabling policies to be adjusted in real-time based on actual health outcomes (7). However, the legal and ethical implications of AI in healthcare policy remain a concern, especially regarding accountability and transparency in decision-making processes, highlighting the need for comprehensive regulations to guide AI’s role in policymaking (145).
Despite these benefits, ethical considerations related to data privacy, fairness, and regulatory compliance remain critical challenges that need to be addressed to ensure responsible AI integration in health policy (6,7,145). The development of appropriate regulatory frameworks and global convergence on AI governance in healthcare are essential steps toward mitigating risks and leveraging AI’s full potential (7,158).
Challenges and limitations of AI in healthcare
In the field of healthcare, the integration of reliable AI systems is essential, particularly in supporting clinical decision-making processes. These AI systems leverage patient data, including various forms of medical imaging, to aid healthcare professionals in diagnosing and treating diseases more effectively. A comprehensive meta-analysis of AI tools in healthcare highlights their diverse applications, from disease diagnosis to medical education, while also emphasizing the importance of addressing ethical considerations in their implementation (159). However, the successful implementation of AI in clinical settings requires a collaborative approach between healthcare professionals and AI experts. Failing to establish this collaboration can lead to significant risks, including the potential for false positives and false negatives, which can have dire consequences for patient care.
False positives/negatives in AI diagnostic systems
When an AI system incorrectly identifies a disease that is not present, it can lead to unnecessary anxiety for patients, additional invasive testing, and potentially harmful treatments. For instance, if an AI model misclassifies a benign tumor as malignant, the patient may undergo unnecessary surgery or chemotherapy, exposing them to the risks and side effects of these interventions without any real benefit. This not only affects the patient’s physical health but can also have profound psychological impacts, leading to stress and diminished quality of life.
Conversely, a false negative occurs when an AI system fails to detect a disease that is present. This can be particularly dangerous in cases of serious conditions such as cancer, where early detection is crucial for effective treatment. If a model overlooks a malignant tumor, the patient may miss the opportunity for timely intervention, leading to disease progression and potentially fatal outcomes. The implications of false negatives can be devastating, resulting in a loss of trust in medical professionals and the healthcare system.
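The two error types described above correspond to specific cells of a confusion matrix, and the usual clinical summaries follow directly from them: sensitivity falls with every false negative, specificity with every false positive. The following is a minimal, purely illustrative sketch; all labels are hypothetical and not drawn from any study in this review:

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, FN, TN for binary labels (1 = disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Hypothetical screening results: 1 = malignant, 0 = benign
y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]

tp, fp, fn, tn = confusion_counts(y_true, y_pred)
sensitivity = tp / (tp + fn)  # missed disease lowers this (false negatives)
specificity = tn / (tn + fp)  # false alarms lower this (false positives)
print(tp, fp, fn, tn)         # 3 1 1 5
```

Reporting sensitivity and specificity separately, rather than a single accuracy figure, keeps the clinically distinct costs of false negatives and false positives visible when evaluating a diagnostic model.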
To mitigate these risks, healthcare professionals must work closely with AI experts who understand the intricacies of ML algorithms and their limitations. AI systems are not infallible; they require careful tuning, validation, and continuous monitoring to ensure their accuracy and reliability. By collaborating with AI specialists, healthcare providers can better interpret AI-generated insights, understand the context of the data, and make informed decisions that prioritize patient safety. Recent research underscores the potential of AI applications in clinical risk management, demonstrating their capacity to enhance patient safety outcomes through improved decision support and error prevention (160).
Moreover, the integration of AI into clinical workflows should be accompanied by robust training programs for healthcare professionals. This training should focus on understanding how AI systems operate, their potential pitfalls, and the importance of human oversight in the decision-making process. By fostering a culture of collaboration and continuous learning, healthcare organizations can enhance the effectiveness of AI tools while minimizing the risks associated with their use.
Assessing data quality and model validation
Many studies in AI healthcare report impressively high accuracy rates, often in the high 90s. However, without proper data splitting and validation, these results may be overoptimistic due to the risk of overfitting. Overfitting occurs when a model learns the training data too well, including the noise, and fails to perform adequately on new, unseen data.
Proper data splitting is essential to develop models that generalize well to new, unseen data. This involves partitioning a dataset into different subsets, such as training, validation, and test sets (161-163). A common practice is to split the dataset into 80% for training and 20% for testing, but many studies omit a separate validation set. Without a validation set, hyperparameters may be tuned based on the test set, which should remain untouched until the final evaluation. This practice risks data leakage and overfitting, as it precludes the ability to detect overfitting during training and raises concerns about the optimization and tuning of the model (164).
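The three-way partition described above can be sketched in a few lines. This is a generic illustration using only the Python standard library; the 80/10/10 proportions are one common choice, not a prescription taken from the cited studies:

```python
import random

def three_way_split(samples, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle once, then partition into train/validation/test subsets.
    The validation set informs hyperparameter tuning; the test set is
    held untouched until the final evaluation."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    shuffled = samples[:]       # copy so the caller's list is unchanged
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remainder, here ~10%
    return train, val, test

train, val, test = three_way_split(list(range(1000)))
print(len(train), len(val), len(test))  # 800 100 100
```

The essential point is that the test partition is created once and never consulted while tuning; only the validation partition is allowed to influence hyperparameter choices.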
The graph in Figure 3 illustrates the number of studies published each year using 2-way and 3-way data splitting strategies from 2007 to 2022. It highlights a significant shift in the research community’s approach to data splitting in ML studies. In the earlier years, particularly from 2007 to 2017, the majority of studies employed 2-way splitting, where the dataset is divided into a training set and a testing set. This method lacks a validation set, which is essential for tuning hyperparameters and preventing overfitting. Without a validation set, models may not generalize well to unseen data, leading to inefficient ML training.
Figure 3 Yearly trend in the number of studies employing 2-way (training and testing) and 3-way (training, validation, and testing) data splitting strategies in AI-assisted bone fracture detection research (2007–2022). The graph highlights the increasing adoption of 3-way splitting, reflecting improved validation practices and model generalizability in ML. Based on the systematic review and meta-analysis by Jung et al. (165). AI, artificial intelligence; ML, machine learning.
Starting around 2018, the graph shows a growing number of studies adopting 3-way splitting. This approach involves splitting the data into three sets: training, validation, and testing. The validation set is used during model development to fine-tune hyperparameters and select the best model before final evaluation on the test set. By 2022, the number of studies using 3-way splitting surpasses those using 2-way splitting, indicating a positive trend towards more robust ML practices.
The increasing adoption of 3-way splitting reflects a growing awareness of the pitfalls of overfitting and the importance of model validation. Without a validation set, there is a risk of inadvertently tuning the model to perform well on the test set, which can lead to overly optimistic performance estimates and poor generalization (162,163,166). When studies do not demonstrate that their models are properly converged and well validated, it becomes difficult to trust their reported metrics (Figure 4). This is because overfitting may occur, where the model fits too closely to the training data and fails to generalize to new, unseen data. Without robust validation practices, such as using a separate validation set to monitor model performance during training, overfitting can remain undetected. Consequently, the reported high accuracy may not reflect the model’s true performance in real-world applications, undermining the reliability of the study’s findings (167-169). Therefore, it is essential for studies to adopt proper validation strategies to ensure that their models generalize well and that their reported metrics are trustworthy.
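One common way to act on a validation set during training is early stopping: halt once validation loss stops improving, even while training loss is still falling. The sketch below uses synthetic loss curves, not data from any reviewed study, in which validation loss bottoms out and then rises, the classic signature of overfitting:

```python
def early_stopping_epoch(val_losses, patience=3):
    """Return the epoch index with the best validation loss, halting
    after `patience` consecutive epochs without improvement."""
    best_epoch, best_loss, waited = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, waited = epoch, loss, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch

# Synthetic curves: training loss keeps falling, but validation loss
# bottoms out at epoch 4 and then rises -- overfitting begins there.
train_loss = [0.90, 0.70, 0.55, 0.45, 0.38, 0.32, 0.27, 0.23, 0.20, 0.17]
val_loss   = [0.95, 0.78, 0.65, 0.58, 0.55, 0.57, 0.60, 0.64, 0.69, 0.75]

best = early_stopping_epoch(val_loss, patience=3)
print(best)  # 4
```

Without the validation curve, a study would see only the monotonically falling training loss and could report the final-epoch model, whose real-world performance the rising validation loss already contradicts.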
Figure 4 The impact of rigorous versus non-rigorous AI model training on the reliability of accuracy metrics. Rigorous validation processes yield meaningful accuracy results suitable for clinical decision-making, whereas non-rigorous approaches risk inflated and unreliable performance estimates, compromising clinical utility. Hence, comparing performance metrics across AI models is not meaningful unless they undergo similarly rigorous training and validation protocols. AI, artificial intelligence.
Additionally, without testing on external data, the model’s generalizability cannot be assessed. Patient data can vary significantly between individuals due to differences in physiology, device placements, geography, socioeconomics, noise characteristics, and many other factors. A model that generalizes well should perform consistently across diverse datasets. However, many studies do not include the use of an external testing set (or hold-out set) (170). An external testing set, ideally sourced from a different dataset or collected under different conditions, is vital for assessing the model’s generalizability. Without it, models may perform well on a specific dataset but fail to generalize to data from different patient populations or recording environments. Therefore, by not including an external testing set, it is difficult to ascertain how the proposed model would perform in real-world clinical settings.
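As a hedged illustration of the generalizability gap discussed above, the sketch below trains on one synthetic cohort and then evaluates on a second, "external" cohort simulated by a distribution shift; the shift magnitude and model choice are invented for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_int_test, y_train, y_int_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
internal_acc = model.score(X_int_test, y_int_test)

# Simulate an external cohort whose feature distribution is shifted,
# standing in for different devices, sites, or patient demographics.
rng = np.random.default_rng(1)
X_ext = X_int_test + rng.normal(0.0, 1.5, size=X_int_test.shape)
external_acc = model.score(X_ext, y_int_test)

# Reporting both values makes the generalizability gap visible
# instead of leaving it hidden behind a single internal metric.
gap = internal_acc - external_acc
```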
Furthermore, many AI models, especially deep learning models, are often considered “black boxes”, making it difficult to interpret their decision-making process. This lack of transparency can hinder clinical adoption (171-173). Integrating AI tools into existing clinical workflows can be challenging, requiring changes in how healthcare professionals operate. There is a need for standardized performance metrics to evaluate and compare different AI models effectively. Current studies often use varied metrics, making it difficult to assess their relative performance (174,175). Table 3 summarizes the key challenges in AI adoption in healthcare and provides recommendations for addressing these issues.
Table 3
Summary of key challenges in AI adoption in healthcare and recommendations for addressing them
Challenge
Recommendations for addressing the challenge
Limited availability of high-quality annotated data
Invest in the creation of comprehensive, diverse, and well-annotated datasets. Ensure data preprocessing and standardization to improve model performance
Overfitting and suboptimal model design
Develop robust AI architectures with techniques such as batch normalization, dropout layers, and regularization to prevent overfitting. Use cross-validation to ensure model reliability
Skewed dataset splits affecting reliability
Employ rigorous data splitting strategies, such as K-fold cross-validation, to ensure balanced and representative training, validation, and testing datasets
Lack of generalizability to unseen data
Perform external validation on independent datasets to confirm the model's applicability across diverse clinical settings and patient populations
Ensuring high performance and reliability
Establish standardized performance metrics (e.g., sensitivity, specificity, accuracy) and conduct rigorous testing on external datasets to validate model performance
Integration into clinical workflows
Focus on user-friendly AI tools that align with existing clinical workflows. Engage healthcare professionals in the design and implementation process to ensure seamless integration
Model interpretability and transparency
Develop explainable AI (XAI) models that provide clear insights into decision-making processes. This will help build trust among clinicians and improve adoption
Ethical and regulatory concerns
Address ethical issues such as data privacy, bias, and fairness. Establish clear regulatory frameworks to ensure responsible AI use in healthcare
The table highlights the limitations of AI in healthcare, including data quality, model interpretability, generalizability, and integration into clinical workflows. AI, artificial intelligence.
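The table's recommendations on data splitting and overfitting control can be illustrated with a brief scikit-learn sketch; the dataset, class imbalance, and regularization strength below are illustrative assumptions. It combines stratified K-fold cross-validation with an L2-regularized classifier inside a preprocessing pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced synthetic dataset standing in for a clinical cohort.
X, y = make_classification(n_samples=500, n_features=30, weights=[0.8, 0.2], random_state=42)

# Regularization (C) limits model complexity; scaling is refit per fold
# because it lives inside the pipeline.
clf = make_pipeline(StandardScaler(), LogisticRegression(C=0.5, max_iter=1000))

# Stratification preserves the class imbalance in every fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")

mean_acc, std_acc = scores.mean(), scores.std()
```

Reporting the fold-to-fold spread alongside the mean, rather than a single headline accuracy, is one concrete way to implement the table's call for reliability.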
By addressing critical aspects of AI in healthcare, this review highlights the importance of developing reliable AI systems that can be effectively integrated into clinical practice. It emphasizes the need for rigorous validation, transparency, and ethical considerations to ensure that AI tools are both effective and trustworthy. The subsequent sections provide a detailed account of the implications of these challenges and discuss the future of AI in healthcare. A recent review outlines the rapid evolution of AI technologies in healthcare, emphasizing their transformative potential while stressing the need for responsible implementation to ensure patient safety and ethical compliance (176).
The integration of AI in healthcare presents numerous challenges that must be addressed to ensure its effective and ethical adoption. One of the primary limitations is the limited availability of high-quality annotated data, which is essential for training reliable AI models. Without comprehensive and diverse datasets, AI systems may struggle to generalize across different patient populations and clinical settings. Additionally, overfitting remains a significant concern, where models perform well on training data but fail to generalize to new, unseen data. This can be mitigated through robust model design and cross-validation techniques. Another critical issue is the lack of generalizability of AI models. Many models are trained on specific datasets and may not perform well when applied to different healthcare environments or patient demographics. External validation on independent datasets is crucial to confirm the model’s applicability in diverse clinical settings. Furthermore, ensuring high performance requires the establishment of standardized performance metrics and rigorous testing protocols to validate the model’s reliability.
Integration into clinical workflows is another major challenge. AI tools must be designed to align with existing clinical practices and workflows to ensure seamless adoption by healthcare professionals. This requires collaboration between AI developers and clinicians to create user-friendly tools that enhance, rather than disrupt, clinical processes. Model interpretability is also a significant barrier to AI adoption in healthcare. Many AI models, particularly deep learning models, are often considered “black boxes”, making it difficult for clinicians to understand and trust their decision-making processes. Developing explainable AI (XAI) models that provide clear insights into how decisions are made can help build trust and improve adoption.
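As one hedged example of the post-hoc explainability techniques alluded to above, permutation feature importance scores each input by the performance drop observed when that feature is shuffled. The dataset and model below are synthetic stand-ins, not clinical tools, and the feature indices would map to named clinical variables in a real deployment.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=6, n_informative=3, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

model = RandomForestClassifier(n_estimators=100, random_state=7).fit(X_tr, y_tr)

# Shuffle each feature on held-out data and measure the accuracy drop;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=7)

# Rank features by mean importance for a clinician-facing summary.
ranking = result.importances_mean.argsort()[::-1]
```

Techniques like this do not open the black box itself, but they give clinicians an auditable account of which inputs drive a prediction.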
Finally, ethical and regulatory concerns must be addressed to ensure the responsible use of AI in healthcare. Issues such as data privacy, bias, and fairness need to be carefully managed, and clear regulatory frameworks must be established to guide the development and deployment of AI systems. A systematic review highlights that biased data sources and insufficient representation in AI training datasets can exacerbate inequalities in care delivery, underscoring the necessity for more inclusive approaches in healthcare AI development (177). In contrast, a scoping review suggests methods to address the ethical and societal challenges associated with AI in healthcare, providing insights into strategies for responsible integration (178).
Challenges in cross-study comparability
A persistent methodological limitation in evaluating AI’s clinical performance lies in the flawed practice of comparing accuracy metrics across studies. Such comparisons lack validity because most AI models are developed without adherence to standardized validation frameworks, such as those outlined in the Food and Drug Administration’s (FDA) recently issued guidance (179). This guidance emphasizes rigorous documentation of model development processes (including the rationale for algorithmic choices, provenance and preprocessing of training data, dataset splitting protocols, model evaluation processes, and external validation strategies), yet few studies even acknowledge this guidance, let alone comply with it. Without transparency in these areas, reported metrics risk conflating genuine generalizability with artifactual results from overfitting, data leakage, or biased validation. For instance, a model claiming 98% accuracy after improper cross-validation on non-representative data may appear superior to a rigorously validated counterpart tested on diverse cohorts, but its clinical utility is fundamentally compromised. This variability obscures distinctions between reproducible breakthroughs and inflated claims, particularly when studies omit failure mode analyses or calibration steps. Until standardized reporting aligned with regulatory benchmarks becomes widespread, cross-study performance comparisons remain speculative at best and clinically misleading at worst, hampering efforts to identify truly effective AI tools for healthcare implementation.
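One mechanism behind the inflated accuracy described above, data leakage from improper cross-validation, can be demonstrated with a small sketch on synthetic noise data (sample sizes and parameters are illustrative): selecting features on the full dataset before cross-validation yields above-chance accuracy even when no true signal exists.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))   # pure noise: no real signal
y = rng.integers(0, 2, size=100)   # random labels

# LEAKY: feature selection sees the labels of all samples, including
# those later used as cross-validation test folds.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky_acc = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5).mean()

# HONEST: selection is refit inside each training fold only.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
honest_acc = cross_val_score(pipe, X, y, cv=5).mean()

# leaky_acc is typically well above chance (0.5) despite the data being
# noise, while honest_acc hovers near chance, exposing the leakage.
```

This is exactly why predefined dataset splitting protocols, as the FDA guidance requires, must be documented: without them, a reader cannot tell which of these two numbers a study is reporting.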
Discussion
The integration of AI in healthcare goes beyond mere technology, encompassing significant moral and ethical considerations (180,181). To make AI more transparent, regulated, and usable, explainable AI serves as a pivotal advancement in ML models, enhancing their clarity and applicability in various domains (182,183). In the current global regulatory landscape, most regulations governing AI primarily focus on Software as a Medical Device, which falls under the category of digital health products (158). However, it is crucial to recognize that these existing regulations may be insufficient. AI technologies possess the ability to operate autonomously, adapt their algorithms, and enhance their performance over time based on new real-world data they encounter. To address these challenges, establishing a global regulatory convergence for AI in healthcare would be advantageous. This approach could mirror the voluntary AI code of conduct being developed by the US-EU Trade and Technology Council (184). The use of AI for decision-making presents ethical challenges due to its complex characteristics, which can result in errors, a loss of human control, and difficulties in assigning responsibility, leading to a need for a careful evaluation of the costs and benefits in high-stakes situations (185). The integration of AI in healthcare raises concerns about unfairly placing legal liability on clinicians for errors and adverse outcomes, as they may be held responsible for system malfunctions over which they have limited control (186). To address the complexities of liability in AI-integrated healthcare systems, several potential solutions have been proposed. These include recognizing that liability should not rest solely on clinicians, as many individuals are involved in the AI’s design, implementation, and operation. Risk pooling between clinicians and software development companies through insurance schemes has been suggested to cover AI-related damages (187).
Additionally, there is a call for a shift in how AI systems are treated legally, potentially viewing them as part of the clinical team rather than mere products (188), which could help clarify responsibility. However, current legal frameworks present significant challenges, making it difficult to establish clear liability for AI-related errors. Hence, a standardized validation framework for clinical AI models is important for establishing clear accountability, improving trust, and facilitating regulatory compliance in healthcare applications.
Standardized validation framework for clinical AI models
The diagram in Figure 5 illustrates a standardized validation framework for clinical AI models aligned with FDA guidance (179), addressing critical methodological flaws in cross-study performance comparisons. The workflow begins by defining the clinical question of interest and rigorously bounding the AI’s context of use (COU); specifying target populations, decision-making scope, and operational limitations. These initial steps inform a risk matrix assessment combining model influence (the AI’s relative impact on decisions) and decision consequence (potential harm from errors), determining whether risk stratification yields high, medium, or low model risk classifications. Following risk profiling, the framework mandates creation of a credibility plan with five interlinked components: justification of algorithmic architecture choices relative to COU requirements, documentation of training data provenance including bias mitigation strategies, predefined dataset splitting protocols to prevent leakage, evaluation metrics with uncertainty quantification, and prospective external validation strategies. Vertical dependencies connect these components; algorithm choices dictate required data preprocessing, which informs validation splitting rules, in turn shaping performance metrics and external testing parameters. Execution of this plan feeds into formal documentation of deviations and outcomes, culminating in go/no-go adequacy decisions. Closed-loop elements enforce traceability, requiring updates to COU definitions if model retraining occurs. By institutionalizing transparent reporting of data workflows, validation boundaries, and failure mode analyses, this framework directly counters risks of overfitting claims from improper cross-validation, batch effects in non-representative training data, and performance inflation via uncontrolled hyperparameter tuning. 
The horizontal progression from problem definition through risk stratification to lifecycle documentation creates auditable chains of evidence, enabling meaningful comparison of AI tools across studies while meeting emerging regulatory benchmarks for clinical AI validation.
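The risk-stratification step of this workflow can be sketched as a simple lookup that combines model influence and decision consequence into a model-risk tier; the specific tier assignments below are an illustrative assumption for demonstration, not the actual matrix from the FDA guidance.

```python
# Hypothetical risk matrix: (model influence, decision consequence) -> tier.
RISK_MATRIX = {
    ("high", "high"): "high",
    ("high", "medium"): "high",
    ("high", "low"): "medium",
    ("medium", "high"): "high",
    ("medium", "medium"): "medium",
    ("medium", "low"): "low",
    ("low", "high"): "medium",
    ("low", "medium"): "low",
    ("low", "low"): "low",
}

def classify_model_risk(model_influence: str, decision_consequence: str) -> str:
    """Map the AI's influence on a decision and the potential harm from
    errors to a model-risk tier, per the risk-matrix step in Figure 5."""
    return RISK_MATRIX[(model_influence, decision_consequence)]

# Example: an AI that drives treatment decisions where errors could cause
# serious harm would be classified as high risk, triggering the most
# demanding credibility-plan requirements.
tier = classify_model_risk("high", "high")
```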
Figure 5 This workflow outlines a systematic approach to validating AI models in healthcare, emphasizing regulatory compliance and methodological rigor. Starting with defining the clinical question and COU, the framework progresses through risk stratification (combining model influence and decision consequence) and the development of a credibility plan. Critical components include algorithmic justification, training data provenance, dataset splitting protocols, evaluation metrics, and external validation strategies. The framework mandates documentation of deviations and culminates in a go/no-go decision for clinical deployment. Designed to address challenges in cross-study comparability, it ensures transparency, mitigates risks of overfitting and data leakage, and strengthens reproducibility, enabling meaningful evaluation of AI tools’ clinical utility and regulatory alignment. AI, artificial intelligence; COU, context of use.
Preparing future healthcare professionals for an AI-driven era
Fortunately, medical colleges are recognizing the importance of preparing future healthcare professionals for this new era. Many institutions are proactively integrating AI education into their curricula, ensuring that students are equipped with the knowledge and skills necessary to navigate the complexities of AI in clinical practice. By incorporating AI into medical education, students learn how these systems work, including the algorithms and data inputs that drive their functionality. This foundational knowledge is essential for interpreting AI-generated insights accurately and making informed clinical decisions. Education on AI also includes discussions about its limitations and potential biases. Understanding these factors helps future healthcare professionals critically evaluate AI recommendations and avoid over-reliance on technology, which can lead to errors in diagnosis and treatment. Training programs must emphasize the importance of human oversight in the decision-making process. Students learn that while AI can enhance diagnostic accuracy and efficiency, it should complement, not replace, the expertise and judgment of healthcare providers. By fostering a collaborative mindset, medical colleges prepare students to work alongside AI experts and data scientists (189). This interdisciplinary approach is vital for optimizing the use of AI in clinical settings and ensuring that patient care remains the primary focus. A recent systematic review emphasized the critical role of regulatory bodies in developing AI competencies among healthcare professionals, highlighting the need for comprehensive training programs to ensure effective integration of AI in clinical practice (190). The field of AI is rapidly evolving, and ongoing education is necessary to keep pace with new developments. 
Medical colleges traditionally instill a culture of continuous learning, encouraging future healthcare professionals to stay informed about advancements in healthcare technology and its applications in medicine.
Strengths and limitations of this review
This review adopts a methodologically rigorous approach to evaluate AI’s role in contemporary healthcare systems. The search strategy, conducted across six academic databases (PubMed/Medline, Scopus, EMBASE, Web of Science, Google Scholar, ResearchGate), prioritized recent peer-reviewed studies published from 2023 onward, ensuring alignment with rapidly evolving AI advancements. Requiring DOI numbers for all included articles enhanced traceability and scholarly credibility while reducing reliance on non-reproducible sources. The analysis uniquely synthesizes AI’s impact across three interconnected domains: clinical practice, hospital management, and health policy. Clinically, the review documents AI’s transformative potential in accelerating drug discovery through autonomous experimentation systems (30-37) and improving chronic disease management via predictive analytics (46-49). Operationally, it highlights AI’s capacity to optimize hospital workflows, such as real-time patient flow management (129-131) and predictive staffing models to reduce burnout (139,140). Policy-wise, the review explores how AI-driven simulations enable evidence-based decision-making for pandemic preparedness and equitable resource allocation (23,145). By interlinking technical innovations with systemic reforms, this work bridges the gap between AI’s theoretical promise and its practical healthcare applications.
Despite its scope, this review identifies critical challenges inherent to AI adoption. Diagnostic reliability remains precarious due to risks of false positives (e.g., unnecessary interventions from misclassified benign tumors) and false negatives (missed malignancies), stemming from biased training data or insufficient clinician-AI collaboration. Validation shortcomings further undermine trust: many studies report inflated accuracy metrics (95–100%) but lack external testing cohorts, raising concerns about overfitting and real-world generalizability. Heterogeneous patient demographics and device interoperability issues compound these challenges, particularly in AI models analyzing socioeconomically diverse populations. While the review discusses ethical tensions, such as the risk of clinicians becoming liable for AI-induced errors (185) and challenges in establishing appropriate liability frameworks (186), unresolved questions persist about equitable access to AI-driven care across underserved communities. Finally, the focus on post-2023 literature, while ensuring timeliness, may underrepresent longitudinal insights into AI’s evolving regulatory landscape.
Conclusions
The integration of AI in healthcare is creating a new era of medical practice, hospital management, and health policy. This technological revolution is significantly enhancing diagnostic accuracy, operational efficiency, and evidence-based decision-making. By leveraging AI’s capabilities, healthcare providers can offer personalized treatment recommendations, leading to improved patient outcomes. Furthermore, the implementation of AI in hospital operations, through automation and predictive analytics, is allowing for more efficient processes. These advancements are not just incremental improvements but represent a fundamental shift in how healthcare is delivered and managed.
However, it is important to acknowledge that the path to fully integrating AI in healthcare is not without its challenges. Ethical concerns, particularly regarding data privacy and the potential for diagnostic errors, stress the need for robust model validation and careful implementation. To navigate these complex issues, close collaboration between healthcare professionals and AI experts is essential. Additionally, the establishment of comprehensive regulatory frameworks and the incorporation of AI education into medical training programs are vital steps to ensure responsible use of this technology. By addressing these challenges, the healthcare industry can harness the full potential of AI while maintaining the high standards of patient care and ethical practice.
Acknowledgments
We confirm that no chatbot or AI tools were used to generate text, figures, data, or analyses for this manuscript. However, we acknowledge that grammar checking and corrections were performed using tools that may utilize AI technology. We take full responsibility for the content of this manuscript and ensure compliance with publication ethics.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
Matheny M, Israni ST, Ahmed M, et al. editors. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. National Academies Press, 2019. Available online: https://doi.org/10.17226/27111
Toma M, Wei OC. Predictive modeling in medicine. Encyclopedia 2023;3:590-601. [Crossref]
Abubaker Bagabir S, Ibrahim NK, Abubaker Bagabir H, et al. Covid-19 and Artificial Intelligence: Genome sequencing, drug development and vaccine discovery. J Infect Public Health 2022;15:289-96. [Crossref] [PubMed]
Giordano C, Brennan M, Mohamed B, et al. Accessing Artificial Intelligence for Clinical Decision-Making. Front Digit Health 2021;3:645232. [Crossref] [PubMed]
Lamanna C, Byrne L. Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm. AMA J Ethics 2018;20:E902-910. [Crossref] [PubMed]
Secinaro S, Calandra D, Secinaro A, et al. The role of artificial intelligence in healthcare: a structured literature review. BMC Med Inform Decis Mak 2021;21:125. [Crossref] [PubMed]
Mohammad Amini M, Jesus M, Fanaei Sheikholeslami D, et al. Artificial intelligence ethics and challenges in healthcare applications: A comprehensive review in the context of the european GDPR mandate. Mach Learn Knowl Extr 2023;5:1023-35. [Crossref]
Petersson L, Larsson I, Nygren JM, et al. Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res 2022;22:850. [Crossref] [PubMed]
Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ 2023;23:689. [Crossref] [PubMed]
Karalis VD. The integration of artificial intelligence into clinical practice. Appl Biosci 2024;3:14-44. [Crossref]
Ponsiglione AM, Zaffino P, Ricciardi C, et al. Combining simulation models and machine learning in healthcare management: strategies and applications. Prog Biomed Eng (Bristol) 2024;6: [Crossref] [PubMed]
Murphy K, Di Ruggiero E, Upshur R, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics 2021;22:14. [Crossref] [PubMed]
Silcox C, Zimlichmann E, Huber K, et al. The potential for artificial intelligence to transform healthcare: perspectives from international health leaders. NPJ Digit Med 2024;7:88. [Crossref] [PubMed]
Lynch GA, Maskell NA, Bibby A. Recent advances in mesothelioma. Current Pulmonology Reports 2024;13:256-65. [Crossref]
Shamir SB, Sasson AL, Margolies LR, et al. New Frontiers in Breast Cancer Imaging: The Rise of AI. Bioengineering (Basel) 2024;11:451. [Crossref] [PubMed]
Zhao D, Wang W, Tang T, et al. Current progress in artificial intelligence-assisted medical image analysis for chronic kidney disease: A literature review. Comput Struct Biotechnol J 2023;21:3315-26. [Crossref] [PubMed]
Ren F, Aliper A, Chen J, et al. A small-molecule TNIK inhibitor targets fibrosis in preclinical and clinical models. Nat Biotechnol 2025;43:63-75. [Crossref] [PubMed]
Zhu C, Attaluri PK, Wirth PJ, et al. Current Applications of Artificial Intelligence in Billing Practices and Clinical Plastic Surgery. Plast Reconstr Surg Glob Open 2024;12:e5939. [Crossref] [PubMed]
Bhagat SV, Kanyal D. Navigating the Future: The Transformative Impact of Artificial Intelligence on Hospital Management- A Comprehensive Review. Cureus 2024;16:e54518. [Crossref] [PubMed]
Ahmadi A. Artificial intelligence revolution: A comprehensive review of its transformative impact on hospital data management in the future. International Journal of BioLife Sciences (IJBLS) 2024;3:115-33.
Mennella C, Maniscalco U, De Pietro G, et al. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon 2024;10:e26297. [Crossref] [PubMed]
Chen RJ, Wang JJ, Williamson DFK, et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng 2023;7:719-42. [Crossref] [PubMed]
Chirico F, Teixeira da Silva JA. Evidence-based policies in public health to address COVID-19 vaccine hesitancy. Future Virol 2023; [Crossref] [PubMed]
Goralski MA, Tan TK. Artificial Intelligence: Poverty Alleviation, Healthcare, Education, and Reduced Inequalities in a Post-COVID World. In: Mazzi F, Floridi L. (eds). The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer International Publishing, 2023:97-113.
Toma M, Chong L, Husain G, et al. Machine learning strategies for improved cardiovascular disease detection. Medical Research Archives 2024;13: [Crossref]
Mi D, Li Y, Zhang K, et al. Exploring intelligent hospital management mode based on artificial intelligence. Front Public Health 2023;11:1182329. [Crossref] [PubMed]
Dankwa-Mullan I. Health Equity and Ethical Considerations in Using Artificial Intelligence in Public Health and Medicine. Prev Chronic Dis 2024;21:E64. [Crossref] [PubMed]
Abraham A, Jose R, Farooqui N, et al. The Role of Artificial Intelligence in Brain Tumor Diagnosis: An Evaluation of a Machine Learning Model. Cureus 2024;16:e61483. [Crossref] [PubMed]
Kenaan N, Hanna G, Sardini M, et al. Advances in early detection of non-small cell lung cancer: A comprehensive review. Cancer Med 2024;13:e70156. [Crossref] [PubMed]
Mak KK, Wong YH, Pichika MR. Artificial Intelligence in Drug Discovery and Development. In: Hock FJ, Pugsley MK. (eds) Drug Discovery and Evaluation: Safety and Pharmacokinetic Assays 2023:1-38.
Abbas MKG, Rassam A, Karamshahi F, et al. The Role of AI in Drug Discovery. Chembiochem 2024;25:e202300816. [Crossref] [PubMed]
Gangwal A, Lavecchia A. Unleashing the power of generative AI in drug discovery. Drug Discov Today 2024;29:103992. [Crossref] [PubMed]
Hasselgren C, Oprea TI. Artificial Intelligence for Drug Discovery: Are We There Yet? Annu Rev Pharmacol Toxicol 2024;64:527-50. [Crossref] [PubMed]
Visan AI, Negut I. Integrating Artificial Intelligence for Drug Discovery in the Context of Revolutionizing Drug Delivery. Life (Basel) 2024;14:233. [Crossref] [PubMed]
Saifi I, Bhat BA, Hamdani SS, et al. Artificial intelligence and cheminformatics tools: a contribution to the drug development and chemical science. J Biomol Struct Dyn 2024;42:6523-41. [Crossref] [PubMed]
Back S, Aspuru-Guzik A, Ceriotti M, et al. Accelerated chemical science with AI. Digit Discov 2023;3:23-33. [Crossref] [PubMed]
Cichońska A, Ravikumar B, Rahman R. AI for targeted polypharmacology: The next frontier in drug discovery. Curr Opin Struct Biol 2024;84:102771. [Crossref] [PubMed]
Cheetham AK, Seshadri R. Artificial Intelligence Driving Materials Discovery? Perspective on the Article: Scaling Deep Learning for Materials Discovery. Chem Mater 2024;36:3490-5. [Crossref] [PubMed]
da Silva RGL. The advancement of artificial intelligence in biomedical research and health innovation: challenges and opportunities in emerging economies. Global Health 2024;20:44. [Crossref] [PubMed]
Tom G, Schmid SP, Baird SG, et al. Self-Driving Laboratories for Chemistry and Materials Science. Chem Rev 2024;124:9633-732. [Crossref] [PubMed]
Lu JM, Pan JZ, Mo YM, et al. Automated intelligent platforms for high-throughput chemical synthesis. Artificial Intelligence Chemistry 2024;2:100057. [Crossref]
Shi Y. Drug development in the AI era: AlphaFold 3 is coming! Innovation (Camb) 2024;5:100685. [Crossref] [PubMed]
Ohno S, Manabe N, Yamaguchi Y. Prediction of protein structure and AI. J Hum Genet 2024;69:477-80. [Crossref] [PubMed]
McGibbon M, Shave S, Dong J, et al. From intuition to AI: evolution of small molecule representations in drug discovery. Brief Bioinform 2023;25:bbad422. [Crossref] [PubMed]
Tang X, Dai H, Knight E, et al. A survey of generative AI for de novo drug design: new frontiers in molecule and protein generation. Brief Bioinform 2024;25:bbae338. [Crossref] [PubMed]
Khalifa M, Albadawy M. Artificial intelligence for clinical prediction: Exploring key domains and essential functions. Computer Methods and Programs in Biomedicine Update 2024;5:100148. [Crossref]
Ruchonnet-Métrailler I, Siebert JN, Hartley MA, et al. Automated Interpretation of Lung Sounds by Deep Learning in Children With Asthma: Scoping Review and Strengths, Weaknesses, Opportunities, and Threats Analysis. J Med Internet Res 2024;26:e53662. [Crossref] [PubMed]
Jose R, Syed F, Thomas A, et al. Cardiovascular health management in diabetic patients with machine-learning-driven predictions and interventions. Appl Sci 2024;14:2132. [Crossref]
Cho JS, Park JH. Application of artificial intelligence in hypertension. Clin Hypertens 2024;30:11. [Crossref] [PubMed]
Joseph P, Ali H, Matthew D, et al. Regressive machine learning for real-time monitoring of bed-based patients. Appl Sci 2024;14:9978. [Crossref]
Khalifa M, Albadawy M. AI in diagnostic imaging: Revolutionising accuracy and efficiency. Computer Methods and Programs in Biomedicine Update 2024;5:100146. [Crossref]
Lindroth H, Nalaie K, Raghu R, et al. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024;10:81. [Crossref] [PubMed]
Jose R, Thomas A, Guo J, et al. Evaluating machine learning models for prediction of coronary artery disease. Global Translational Medicine 2024;3:2669. [Crossref]
Thomas A, Jose R, Syed F, et al. Machine learning-driven predictions and interventions for cardiovascular occlusions. Technol Health Care 2024;32:3535-56. [Crossref] [PubMed]
Udegbe FC, Ebulue OR, Ebulue CC, et al. AI’s impact on personalized medicine: tailoring treatments for improved health outcomes. Engineering Science & Technology Journal 2024;5:1386-94. [Crossref]
Li YH, Li YL, Wei MY, et al. Innovation and challenges of artificial intelligence technology in personalized healthcare. Sci Rep 2024;14:18994. [Crossref] [PubMed]
Ferrara M, Bertozzi G, Di Fazio N, et al. Risk Management and Patient Safety in the Artificial Intelligence Era: A Systematic Review. Healthcare (Basel) 2024;12:549. [Crossref] [PubMed]
Arjmandnia F, Alimohammadi E. The value of machine learning technology and artificial intelligence to enhance patient safety in spine surgery: a review. Patient Saf Surg 2024;18:11. [Crossref] [PubMed]
Mayer J, Jose R, Bekbolatova M, et al. Enhancing patient safety through integrated sensor technology and machine learning for bed-based patient movement detection in inpatient care. Artificial Intelligence in Health 2024;1:132. [Crossref]
Maleki Varnosfaderani S, Forouzanfar M. The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century. Bioengineering (Basel) 2024;11:337. [Crossref] [PubMed]
Islam R, Sultana A, Islam MR. A comprehensive review for chronic disease prediction using machine learning algorithms. Journal of Electrical Systems and Information Technology 2024;11:27. [Crossref]
Liu Y, Wu X, Sang Y, et al. Evolution of surgical robot systems enhanced by artificial intelligence: A review. Advanced Intelligent Systems 2024;6:2300268. [Crossref]
Zhang C, Hallbeck MS, Salehinejad H, et al. The integration of artificial intelligence in robotic surgery: A narrative review. Surgery 2024;176:552-7. [Crossref] [PubMed]
Knudsen JE, Ghaffar U, Ma R, et al. Clinical applications of artificial intelligence in robotic surgery. J Robot Surg 2024;18:102. [Crossref] [PubMed]
Duong TV, Vy VPT, Hung TNK. Artificial intelligence in plastic surgery: Advancements, applications, and future. Cosmetics 2024;11:109. [Crossref]
Nasef D, Nasef D, Girgis P, et al. Deep learning for automated Kellgren-Lawrence grading in knee osteoarthritis severity assessment. Surgeries 2024;6:3. [Crossref]
Nasef D, Nasef D, Sawiris V, et al. Machine-learning-based biomechanical feature analysis for orthopedic patient classification with disc hernia and spondylolisthesis. BioMedInformatics 2025;5:3. [Crossref]
Husain G, Mayer J, Bekbolatova M, et al. Machine learning for medical image classification. Academia Medicine 2024;1.
Toma M, Husain G. Algorithm selection and data utilization in machine learning for medical imaging classification. 2024 IEEE Long Island Systems, Applications and Technology Conference (LISAT), Holtsville, NY, USA, 2024, pp. 1-6.
Barris B, Karp A, Jacobs M, et al. Harnessing the Power of AI: A Comprehensive Review of Left Ventricular Ejection Fraction Assessment With Echocardiography. Cardiol Rev 2024; Epub ahead of print. [Crossref] [PubMed]
Muzammil MA, Javid S, Afridi AK, et al. Artificial intelligence-enhanced electrocardiography for accurate diagnosis and management of cardiovascular diseases. J Electrocardiol 2024;83:30-40. [Crossref] [PubMed]
Zhao AP, Li S, Cao Z, et al. AI for science: Predicting infectious diseases. Journal of Safety Science and Resilience 2024;5:130-46. [Crossref]
Li C, Ye G, Jiang Y, et al. Artificial Intelligence in battling infectious diseases: A transformative role. J Med Virol 2024;96:e29355. [Crossref] [PubMed]
Langford BJ, Branch-Elliman W, Nori P, et al. Confronting the Disruption of the Infectious Diseases Workforce by Artificial Intelligence: What This Means for Us and What We Can Do About It. Open Forum Infect Dis 2024;11:ofae053. [Crossref] [PubMed]
Siddique MM, Seraj MMB, Adnan MN, et al. Artificial Intelligence for Infectious Disease Detection: Prospects and Challenges. In: Chowdhury MEH, Kiranyaz S. (eds) Surveillance, Prevention, and Control of Infectious Diseases. Springer, Cham 2024:1-22.
Zar A, Zar L, Mohsen S, et al. A Comprehensive Review of Algorithms Developed for Rapid Pathogen Detection and Surveillance. In: Chowdhury MEH, Kiranyaz S. (eds) Surveillance, Prevention, and Control of Infectious Diseases. Springer, Cham 2024:23-49.
Kim HJ, Parsa N, Byrne MF. The role of artificial intelligence in colonoscopy. Seminars in Colon and Rectal Surgery 2024;35:101007. [Crossref]
Abraham A, Jose R, Ahmad J, et al. Comparative Analysis of Machine Learning Models for Image Detection of Colonic Polyps vs. Resected Polyps. J Imaging 2023;9:215. [Crossref] [PubMed]
Bellini V, Russo M, Domenichetti T, et al. Artificial Intelligence in Operating Room Management. J Med Syst 2024;48:19. [Crossref] [PubMed]
Pape HC, Starr AJ, Gueorguiev B, et al. The role of big data management, data registries, and machine learning algorithms for optimizing safe definitive surgery in trauma: a review. Patient Saf Surg 2024;18:22. [Crossref] [PubMed]
Kalra N, Verma P, Verma S. Advancements in AI based healthcare techniques with focus on diagnostic techniques. Comput Biol Med 2024;179:108917. [Crossref] [PubMed]
Zavaleta-Monestel E, Quesada-Villaseñor R, Arguedas-Chacón S, et al. Revolutionizing Healthcare: Qure.AI's Innovations in Medical Diagnosis and Treatment. Cureus 2024;16:e61585. [Crossref] [PubMed]
Eskandarani R, Almuhainy A, Alzahrani A. Creating a master training rotation schedule for emergency medicine residents and challenges in using artificial intelligence. Int J Emerg Med 2024;17:84. [Crossref] [PubMed]
Geny M, Andres E, Talha S, et al. Liability of Health Professionals Using Sensors, Telemedicine and Artificial Intelligence for Remote Healthcare. Sensors (Basel) 2024;24:3491. [Crossref] [PubMed]
Padmanaban H. Revolutionizing regulatory reporting through ai/ml: Approaches for enhanced compliance and efficiency. Journal of Artificial Intelligence General science (JAIGS) 2024;2:71-90.
Snell R. Meeting present and future challenges - how to build a more effective compliance department. Journal of Health Care Compliance 2024;26:17-22.
Solaiman B, Cohen IG, Malik A, et al. AI in hospital administration and management: ethical and legal implications. 2024.
Maphosa V, Mpofu B. An artificial intelligence-based random forest model for reducing prescription errors and improving patient safety. Social Science Research Network 2024. doi: 10.2139/ssrn.4842105.
Mishra A, Aleem S. Integration of artificial intelligence in hospital management systems: An overview. Social Science Research Network 2024. doi: 10.2139/ssrn.4838066.
Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif Intell Med 2024;151:102861. [Crossref] [PubMed]
Kumar S, Datta S, Singh V, et al. Opportunities and challenges in data-centric AI. IEEE Access 2024;12:33173-89.
Evans RP, Bryant LD, Russell G, et al. Trust and acceptability of data-driven clinical recommendations in everyday practice: A scoping review. Int J Med Inform 2024;183:105342. [Crossref] [PubMed]
Perivolaris A, Adams-McGavin C, Madan Y, et al. Quality of interaction between clinicians and artificial intelligence systems. A systematic review. Future Healthc J 2024;11:100172. [Crossref] [PubMed]
Nankya M, Mugisa A, Usman Y, et al. Security and privacy in e-health systems: A review of AI and machine learning techniques. IEEE Access 2024;12:148796-816.
Camacho Clavijo S. AI assessment tools for decision-making on telemedicine: liability in case of mistakes. Discov Artif Intell 2024;4:24. [Crossref]
Olawade DB, David Olawade AC, Wada OZ, et al. Artificial intelligence in healthcare delivery: Prospects and pitfalls. Journal of Medicine, Surgery, and Public Health 2024;3:100108. [Crossref]
Elvas LB, Nunes M, Ferreira JC, et al. Hospital remote care assistance ai to reduce workload. International Journal of Computer Information Systems and Industrial Management Applications 2024;16:13.
Yoon SH, Park S, Jang S, et al. Use of artificial intelligence in triaging of chest radiographs to reduce radiologists' workload. Eur Radiol 2024;34:1094-103. [Crossref] [PubMed]
Hunstein D, Frischen L, Fiebig M. Development of a Data Model to Predict Nursing Workload Using Routine Clinical Data. Stud Health Technol Inform 2024;316:1038-42. [Crossref] [PubMed]
Abbasi N, Hussain HK. Integration of artificial intelligence and smart technology: AI-driven robotics in surgery: Precision and efficiency. Journal of Artificial Intelligence General science (JAIGS) 2024;5:381-90.
Pham P, Zhang H, Gao W, et al. Determinants and performance outcomes of artificial intelligence adoption: Evidence from U.S. hospitals. Journal of Business Research 2024;172:114402. [Crossref]
Pramanik S. AI-Powered Hospital Accounting: Towards Sound Financial Management. In: Exploring Global FinTech Advancement and Applications. IGI Global; 2024:121-42.
Chen CY, Chen YL, Scholl J, et al. Ability of machine-learning based clinical decision support system to reduce alert fatigue, wrong-drug errors, and alert users about look alike, sound alike medication. Comput Methods Programs Biomed 2024;243:107869. [Crossref] [PubMed]
Johnson EA, Dudding KM, Carrington JM. When to err is inhuman: An examination of the influence of artificial intelligence-driven nursing care on patient safety. Nurs Inq 2024;31:e12583. [Crossref] [PubMed]
Sayem MA, Taslima N, Singh Sidhu G, et al. A quantitative analysis of healthcare fraud and utilization of AI for mitigation. International Journal of Business and Management Sciences 2024;4:13-36. [Crossref]
Akhtar ZB. The design approach of an artificial intelligent (AI) medical system based on electronical health records (EHR) and priority segmentations. The Journal of Engineering 2024;2024:e12381. [Crossref]
Li XH, Liao JP, Chen MK, et al. The Application of Computer Technology to Clinical Practice Guideline Implementation: A Scoping Review. J Med Syst 2023;48:6. [Crossref] [PubMed]
Pandian NR, Krishna M. Real-time diagnostics with AI/ML: An assessment of its usefulness in smart health care. 2023 2nd International Conference on Futuristic Technologies (INCOFT), Belagavi, Karnataka, India, 2023, pp. 1-6.
Singh S, Hooda S. A study of challenges and limitations to applying machine learning to highly unstructured data. 2023 7th International Conference On Computing, Communication, Control And Automation (ICCUBEA), Pune, India, 2023, pp. 1-6.
Wang Y, Visweswaran S, Kapoor S. ChatGPT-CARE: a superior decision support tool enhancing ChatGPT with clinical practice guidelines. medRxiv 2023. doi: 10.1101/2023.08.09.23293890.
Mohan AA, Kumar SS, Annam V, et al. Role of AI (artificial intelligence) and machine learning in transforming operations in healthcare industry: An empirical study. International Journal of Membrane Science and Technology 2023;10:2069-76. [Crossref]
Jabbour S, Fouhey D, Shepard S, et al. Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study. JAMA 2023;330:2275-84. [Crossref] [PubMed]
Ghaffar Nia N, Kaplanoglu E, Nasab A. Evaluation of artificial intelligence techniques in disease diagnosis and prediction. Discov Artif Intell 2023;3:5. [Crossref] [PubMed]
Zhang J, Wu J, Qiu Y, et al. Intelligent speech technologies for transcription, disease diagnosis, and medical equipment interactive control in smart hospitals: A review. Comput Biol Med 2023;153:106517. [Crossref] [PubMed]
Alshanbari HM, Iftikhar H, Khan F, et al. On the Implementation of the Artificial Neural Network Approach for Forecasting Different Healthcare Events. Diagnostics (Basel) 2023;13:1310. [Crossref] [PubMed]
En-Naaoui A, Kaicer M, Aguezzoul A. A novel decision support system for proactive risk management in healthcare based on fuzzy inference, neural network and support vector machine. Int J Med Inform 2024;186:105442. [Crossref] [PubMed]
Yigit Y, Duran K, Moradpoor N, et al. Machine Learning for Smart Healthcare Management Using IoT. In: Namasudra S. (eds) IoT and ML for Information Management: A Smart Healthcare Perspective. Studies in Computational Intelligence 2024:135-66.
Topal Koç D, Mercan Y. Artificial Intelligence and Digital Transformation in Healthcare Management. In: Akkaya B, Tabak A. (eds) Two Faces of Digital Transformation. Emerald Publishing Limited, Leeds, 2023:87-100.
Zhukovska A, Zheliuk T, Shushpanov D, et al. Management of the development of artificial intelligence in healthcare. 2023 13th International Conference on Advanced Computer Information Technologies (ACIT), Wrocław, Poland, 2023, pp. 241-7.
Barnett M, Wang D, Beadnall H, et al. A real-world clinical validation for AI-based MRI monitoring in multiple sclerosis. NPJ Digit Med 2023;6:196. [Crossref] [PubMed]
Bojsen JA, Elhakim MT, Graumann O, et al. Artificial intelligence for MRI stroke detection: a systematic review and meta-analysis. Insights Imaging 2024;15:160. [Crossref] [PubMed]
Morales MA, Manning WJ, Nezafat R. Present and Future Innovations in AI and Cardiac MRI. Radiology 2024;310:e231269. [Crossref] [PubMed]
Kim K, Faruque SC, Lam S, et al. Implications of Diagnosis Through a Machine Learning Algorithm on Management of People With Familial Hypercholesterolemia. JACC Adv 2024;3:101184. [Crossref] [PubMed]
Afroz M, Nyakwende E, Goswami B. Predictive analytics in oncology: A comprehensive study on lung cancer risk factors and machine learning model performance. 2024 IEEE International Conference on Contemporary Computing and Communications (InC4), Bangalore, India, 2024, pp. 1-7.
Wongtangman K, Aasman B, Garg S, et al. Development and validation of a machine learning ASA-score to identify candidates for comprehensive preoperative screening and risk stratification. J Clin Anesth 2023;87:111103. [Crossref] [PubMed]
Abramoff MD, Whitestone N, Patnaik JL, et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. NPJ Digit Med 2023;6:184. [Crossref] [PubMed]
Al Naqbi H, Bahroun Z, Ahmed V. Enhancing work productivity through generative artificial intelligence: A comprehensive literature review. Sustainability 2024;16:1166. [Crossref]
Rajesh AE, Davidson OQ, Lee CS, et al. Artificial Intelligence and Diabetic Retinopathy: AI Framework, Prospective Studies, Head-to-head Validation, and Cost-effectiveness. Diabetes Care 2023;46:1728-39. [Crossref] [PubMed]
Zhai K, Yousef MS, Mohammed S, et al. Optimizing clinical workflow using precision medicine and advanced data analytics. Processes 2023;11:939. [Crossref]
Abuhay TM, Robinson S, Mamuye A, et al. Machine learning integrated patient flow simulation: why and how? Journal of Simulation 2023;17:580-93. [Crossref]
Ortiz-Barrios M, Arias-Fonseca S, Ishizaka A, et al. Artificial intelligence and discrete-event simulation for capacity management of intensive care units during the COVID-19 pandemic: A case study. J Bus Res 2023;160:113806. [Crossref] [PubMed]
Voola PK, Ayyagiri A, Musunuri A, et al. Leveraging GenAI for clinical data analysis: Applications and challenges in real-time patient monitoring. Modern Dynamics: Mathematical Progressions 2024;1:204-23.
Mendhe D, Dogra A, Nair PS, et al. AI-enabled data-driven approaches for personalized medicine and healthcare analytics. 2024 Ninth International Conference on Science Technology Engineering and Mathematics (ICONSTEM), Chennai, India, 2024, pp. 1-5.
Van Yperen J, Campillo-Funollet E, Inkpen R, et al. A hospital demand and capacity intervention approach for COVID-19. PLoS One 2023;18:e0283350. [Crossref] [PubMed]
Susnjak T, Maddigan P. Forecasting patient demand at urgent care clinics using explainable machine learning. CAAI Transactions on Intelligence Technology 2023;8:712-33. [Crossref]
Bekesy M. Forecasting patient arrival trends to the emergency department based on weather: A scoping review. 2023 IEEE 21st Jubilee International Symposium on Intelligent Systems and Informatics (SISY), Pula, Croatia, 2023, pp. 555-8.
Wang J, Xiong Y, Cai Q, et al. A Review of Epidemic Prediction and Control from a POM Perspective. In: Hu Z, Zhang Q, He M. (eds) Advances in Artificial Systems for Logistics Engineering III. ICAILE 2023. Lecture Notes on Data Engineering and Communications Technologies 2023:734-44.
Mariappan MB, Devi K, Venkataraman Y, et al. Using AI and ML to predict shipment times of therapeutics, diagnostics and vaccines in e-pharmacy supply chains during COVID-19 pandemic. The International Journal of Logistics Management 2022;34:390-416. [Crossref]
Nashwan AJ, Abujaber AA. Nursing in the Artificial Intelligence (AI) Era: Optimizing Staffing for Tomorrow. Cureus 2023;15:e47275. [Crossref] [PubMed]
Wilton AR, Sheffield K, Wilkes Q, et al. The Burnout PRedictiOn Using Wearable aNd ArtIficial IntelligEnce (BROWNIE) study: a decentralized digital health protocol to predict burnout in registered nurses. BMC Nurs 2024;23:114. [Crossref] [PubMed]
Bertl M, Ross P, Draheim D. Systematic AI support for decision-making in the healthcare sector: Obstacles and success factors. Health Policy and Technology 2023;12:100748. [Crossref]
Khera R, Butte AJ, Berkwits M, et al. AI in medicine–JAMA’s focus on clinical outcomes, patient-centered care, quality, and equity. JAMA 2023;330:818-20. [Crossref] [PubMed]
Sauerbrei A, Kerasidou A, Lucivero F, et al. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak 2023;23:73. [Crossref] [PubMed]
Kim ES. Can data science achieve the ideal of evidence-based decision-making in environmental regulation?. Technology in Society 2024;78:102615. [Crossref]
Lakkimsetti M, Devella SG, Patel KB, et al. Optimizing the Clinical Direction of Artificial Intelligence With Health Policy: A Narrative Review of the Literature. Cureus 2024;16:e58400. [Crossref] [PubMed]
Kluge EH. The ethics of artificial intelligence in healthcare: From hands-on care to policy-making. Healthc Manage Forum 2024;37:406-8. [Crossref] [PubMed]
Cauët E, Schittecatte G, Van Den Bulcke M, et al. Policy brief Belgian EBCP mirror group Artificial Intelligence in cancer care. Arch Public Health 2024;82:142. [Crossref] [PubMed]
Lore F, Basile P, Appice A, et al. An AI framework to support decisions on GDPR compliance. Journal of Intelligent Information Systems 2023;61:541-68. [Crossref]
Guevara M, Chen S, Thomas S, et al. Large language models to identify social determinants of health in electronic health records. NPJ Digit Med 2024;7:6. [Crossref] [PubMed]
Stypińska J, Franke A. AI revolution in healthcare and medicine and the (re-)emergence of inequalities and disadvantages for ageing population. Front Sociol 2023;7:1038854. [Crossref] [PubMed]
Rueda J, Rodriguez JD, Jounou IP, et al. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI & Society 2022;39:1-12. [PubMed]
Apell P, Eriksson H. Artificial intelligence (AI) healthcare technology innovations: the current state and challenges from a life science industry perspective. Technology Analysis & Strategic Management 2021;35:179-93. [Crossref]
Ramezani M, Takian A, Bakhtiari A, et al. The application of artificial intelligence in health policy: a scoping review. BMC Health Serv Res 2023;23:1416. [Crossref] [PubMed]
Shumway DO, Hartman HJ. Medical malpractice liability in large language model artificial intelligence: legal review and policy recommendations. J Osteopath Med 2024;124:287-90. [Crossref] [PubMed]
Badidi E. Edge AI for early detection of chronic diseases and the spread of infectious diseases: Opportunities, challenges, and future directions. Future Internet 2023;15:370. [Crossref]
Shah NH, Halamka JD, Saria S, et al. A Nationwide Network of Health AI Assurance Laboratories. JAMA 2024;331:245-9. [Crossref] [PubMed]
Addy A, Sukah Selorm JM, Ahotoh FM, et al. Analysis of Ghana’s Public Health Act 2012 and AI’s role in augmenting vaccine supply and distribution challenges in Ghana. Journal of Law Policy and Globalization 2024;139: [Crossref]
Palaniappan K, Lin EYT, Vogel S. Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector. Healthcare (Basel) 2024;12:562. [Crossref] [PubMed]
Younis HA, Eisa TAE, Nasser M, et al. A Systematic Review and Meta-Analysis of Artificial Intelligence Tools in Medicine and Healthcare: Applications, Considerations, Limitations, Motivation and Challenges. Diagnostics (Basel) 2024;14:109. [Crossref] [PubMed]
De Micco F, Di Palma G, Ferorelli D, et al. Artificial intelligence in healthcare: transforming patient safety with intelligent systems-A systematic review. Front Med (Lausanne) 2025;11:1522554. [Crossref] [PubMed]
Ektefaie Y, Shen A, Bykova D, et al. Evaluating generalizability of artificial intelligence models for molecular datasets. Nature Machine Intelligence 2024;6:1512-24. [Crossref] [PubMed]
Maleki F, Ovens K, Gupta R, et al. Generalizability of Machine Learning Models: Quantitative Evaluation of Three Methodological Pitfalls. Radiol Artif Intell 2022;5:e220028. [Crossref] [PubMed]
Ho SY, Phua K, Wong L, et al. Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability. Patterns (N Y) 2020;1:100129. [Crossref] [PubMed]
Kapoor S, Narayanan A. Leakage and the reproducibility crisis in machine-learning-based science. Patterns (N Y) 2023;4:100804. [Crossref] [PubMed]
Jung J, Dai J, Liu B, et al. Artificial intelligence in fracture detection with different image modalities and data types: A systematic review and meta-analysis. PLOS Digit Health 2024;3:e0000438. [Crossref] [PubMed]
Buddhiraju A, Chen TL, Subih MA, et al. Validation and Generalizability of Machine Learning Models for the Prediction of Discharge Disposition Following Revision Total Knee Arthroplasty. J Arthroplasty 2023;38:S253-8. [Crossref] [PubMed]
Sarker IH. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput Sci 2021;2:160. [Crossref] [PubMed]
Foody GM. Challenges in the real world use of classification accuracy metrics: From recall and precision to the Matthews correlation coefficient. PLoS One 2023;18:e0291908. [Crossref] [PubMed]
Salehinejad H, Kitamura J, Ditkofsky N, et al. A real-world demonstration of machine learning generalizability in the detection of intracranial hemorrhage on head computerized tomography. Sci Rep 2021;11:17051. [Crossref] [PubMed]
Riley RD, Archer L, Snell KIE, et al. Evaluation of clinical prediction models (part 2): how to undertake an external validation study. BMJ 2024;384:e074820. [Crossref] [PubMed]
Zihni E, Madai VI, Livne M, et al. Opening the black box of artificial intelligence for clinical decision support: A study predicting stroke outcome. PLoS One 2020;15:e0231166. [Crossref] [PubMed]
Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf Fusion 2022;77:29-52. [Crossref] [PubMed]
Felder RM. Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care. Hastings Cent Rep 2021;51:38-45. [Crossref] [PubMed]
Reyna MA, Nsoesie EO, Clifford GD. Rethinking Algorithm Performance Metrics for Artificial Intelligence in Diagnostic Medicine. JAMA 2022;328:329-30. [Crossref] [PubMed]
Thomas RL, Uminsky D. Reliance on metrics is a fundamental challenge for AI. Patterns (N Y) 2022;3:100476. [Crossref] [PubMed]
Matheny ME, Goldsack JC, Saria S, et al. Artificial Intelligence In Health And Health Care: Priorities For Action. Health Aff (Millwood) 2025;44:163-70. [Crossref] [PubMed]
Marko JGO, Neagu CD, Anand PB. Examining inclusivity: the use of AI and diverse populations in health and social care: a systematic review. BMC Med Inform Decis Mak 2025;25:57. [Crossref] [PubMed]
Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review. Int J Med Inform 2022;161:104738. [Crossref] [PubMed]
US Food and Drug Administration. Considerations for the use of artificial intelligence to support regulatory decision-making for drug and biological products. Draft Guidance for Industry and Other Interested Parties. U.S. Department of Health and Human Services; Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER), Center for Devices and Radiological Health (CDRH); 2025. Available online: https://www.fda.gov/media/184830/download
Bekbolatova M, Mayer J, Ong CW, et al. Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives. Healthcare (Basel) 2024;12:125. [Crossref] [PubMed]
Williamson SM, Prybutok V. Balancing privacy and progress: A review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Appl Sci 2024;14:675. [Crossref]
Avacharmal R. Explainable AI: Bridging the gap between machine learning models and human understanding. Journal of Informatics Education and Research 2024;4.
Ehsan U, Riedl MO. Explainability pitfalls: Beyond dark patterns in explainable AI. Patterns (N Y) 2024;5:100971. [Crossref] [PubMed]
Berber A, Sreckovic S. When something goes wrong: Who is responsible for errors in ML decision-making? AI & Society 2023;39:1891-903. [Crossref]
Lawton T, Morgan P, Porter Z, et al. Clinicians risk becoming 'liability sinks' for artificial intelligence. Future Healthc J 2024;11:100007. [Crossref] [PubMed]
Morgan P. Chapter 6: Tort law and artificial intelligence - vicarious liability. In: Lim E, Morgan P. (eds) The Cambridge Handbook of Private Law and Artificial Intelligence. Cambridge: Cambridge University Press, 2024.
Abbott R. The Reasonable Robot: Artificial Intelligence and the Law. Cambridge: Cambridge University Press, 2020.
Toma M, Syed F, McCoy L, et al. Engineering in medicine: Bridging the cognitive and emotional distance between medical and non-medical students. International Journal of Education in Mathematics Science and Technology 2023;12:99-113.
Gazquez-Garcia J, Sánchez-Bocanegra CL, Sevillano JL. AI in the Health Sector: Systematic Review of Key Skills for Future Health Professionals. JMIR Med Educ 2025;11:e58161. [Crossref] [PubMed]
Cite this article as: Nasef D, Nasef D, Sawiris V, Weinstein B, Garcia J, Toma M. Integrating artificial intelligence in clinical practice, hospital management, and health policy: literature review. J Hosp Manag Health Policy 2025;9:20. doi: 10.21037/jhmhp-24-138