
Review Article

“Virtual Specialists”: How AI Can Improve Risk Stratification, Clinical Support and Remote Monitoring in Palliative Care


Abstract
Palliative care faces significant challenges, including staff shortages, clinical uncertainty and delayed recognition of patient deterioration. We explore how artificial intelligence (AI) can act as a “virtual specialist” that assists healthcare providers in making better decisions, assessing patient risk and monitoring symptoms in hospital and home-based settings. We examine how AI tools such as machine learning, natural language processing and device-integrated monitoring can improve predictions about patient decline, guide treatment decisions and enhance symptom management. There is a special focus on how AI can detect transitions from stable to unstable health states, alert clinicians to them and assist mid-level clinicians in managing complex cases where specialist support may be limited.

We discuss key barriers to implementation, including AI’s reduced accuracy in patients with multiple illnesses, the need for transparency in complex ethical decisions and the importance of integrating AI tools into clinical workflows to support rather than replace human judgment. When designed responsibly, AI can help expand access to quality palliative care and improve patient outcomes, particularly in resource-limited environments.

Keywords:
Palliative Care; Hospice Care; Artificial Intelligence; Personalized Medicine; Remote Patient Monitoring; Prognostication; Medical Ethics; Explainable AI

Introduction
Palliative care is specialized medical care aimed at relieving the symptoms, pain and stress of serious illness, regardless of the patient’s prognosis or disease stage. Its primary goals include improving quality of life, providing relief from physical and emotional suffering and addressing the complex psychosocial needs of both patients and their families1. Unlike hospice care, which is generally provided to patients expected to live six months or less, palliative care can begin early in the disease process and is delivered alongside curative or life-prolonging treatments2. Despite these recognized benefits, not all patients have equal or timely access to palliative care, especially those with multiple chronic conditions or those living in resource-limited areas3,4.

Healthcare providers in the palliative care setting commonly manage patients with substantial clinical complexity, as many patients suffer from multiple simultaneous comorbidities and polypharmacy and experience unpredictable disease progression and difficult-to-control symptoms. Collectively, these issues increase the demand for specialist input and coordinated multidisciplinary care across different clinical teams5,6. This situation frequently strains already limited healthcare resources, resulting in fragmented care, delayed referrals and inadequate management of distressing symptoms like pain, anxiety and shortness of breath6,7. As a result, there is a growing need for innovative tools to help healthcare providers make quicker, more accurate and more effective clinical decisions in palliative care8.

Artificial Intelligence (AI) systems, including machine learning (ML) and deep learning (DL) algorithms, natural language processing (NLP) and predictive analytics, are quickly becoming powerful tools in healthcare9,10. AI has already demonstrated promise across various clinical applications, including diagnostic imaging, chronic disease management, predictive modelling and clinical decision support11,12. Given the complexities of palliative care, well-designed AI-driven systems have the potential to support clinical decision-making. By functioning as scalable "virtual specialists," AI can rapidly analyze vast and ever-expanding patient data to deliver personalized care recommendations at a fraction of the time, financial cost and workforce resources13.

Palliative care faces challenges such as workforce shortages, clinical uncertainty and delays in recognizing patient deterioration. We explore how AI can act as a virtual specialist to assist clinicians in decision-making, risk assessment and symptom monitoring across acute and home-based settings, and we examine the role of supervised ML, NLP and device-integrated monitoring in improving prognostication, triage and individualized treatment planning. Drawing on recent peer-reviewed research, the discussion highlights AI’s ability to detect subtle physiological and behavioural changes that may indicate early and rapid clinical decline. Additionally, it addresses key implementation challenges, including reduced model accuracy in patients with multiple illnesses, the need for transparency in ethically sensitive decisions and the importance of ensuring AI aligns with institutional protocols and individual clinician preferences. When developed thoughtfully and deployed responsibly, AI-driven systems can enhance access to high-quality palliative care while supporting clinician judgment in even the most complex and resource-constrained environments.

Clinical and Operational Challenges in Palliative Care
Patients receiving palliative care frequently present with significant clinical complexity, often characterized by multiple coexisting chronic illnesses such as cancer, chronic heart failure, chronic obstructive pulmonary disease (COPD) or advanced dementia. The simultaneous management of these comorbid conditions typically involves polypharmacy, creating increased risks of adverse drug reactions, drug-drug interactions and treatment complications, thereby elevating the clinical complexity and burden for healthcare providers5,6,14.

In addition to multimorbidity and polypharmacy, patients in palliative care commonly experience unpredictable trajectories of disease progression and symptom severity. Unlike acute care, where clinical progression and outcomes are generally more predictable, palliative care requires ongoing reassessment and adjustments. Symptoms such as pain, fatigue, breathlessness, shock, reduced urine output, incontinence, delirium and restlessness can change rapidly and unpredictably, making continuous evaluation vital. This variability adds layers of uncertainty, complicating timely intervention and symptom management2,15. Furthermore, symptom experiences are frequently subjective and individualized, necessitating careful and nuanced clinical judgment16.

This complex interplay of clinical factors necessitates comprehensive, multidisciplinary involvement spanning multiple specialties, including palliative medicine, internal medicine, oncology, nursing, pharmacy, social work and mental health services. Coordinating such interdisciplinary care is inherently resource-intensive and logistically challenging, particularly when patient conditions evolve swiftly or when differing or outright contradictory specialist opinions make it difficult to reach a unified decision8,17.

Resource constraints in healthcare systems further exacerbate these challenges. Limited availability of specialist expertise, fragmented communication among providers and workforce shortages contribute significantly to fragmented patient care, delayed referrals to appropriate palliative or hospice settings and inadequate symptom relief. These constraints disproportionately affect patients residing in rural or underserved communities, where specialist access and comprehensive palliative services are often limited or non-existent4,18. Consequently, many patients experience unnecessary hospitalizations, preventable emergency visits, increased healthcare expenditures and reduced overall quality of life7.

Given the cumulative impact of these clinical complexities and operational constraints, there is a pressing need for innovative technological tools capable of enhancing the precision, efficiency and responsiveness of clinical decision-making in palliative care. Advanced solutions that enable clinicians to interpret clinical data quickly, accurately anticipate symptom escalation and promptly implement personalized interventions could substantially mitigate current barriers, ultimately improving patient outcomes and alleviating healthcare resource strain13,19.

AI as a Virtual Specialist Offering Clinical Decision Support and Prognostic Guidance
The heightened complexity of cases managed in palliative care, coupled with workforce shortages and budgetary constraints, has created a critical need for scalable, decision-enhancing technologies at a reasonable financial cost. The full range of AI systems and predictive analytics is emerging as a powerful set of tools capable of synthesizing large volumes of structured and unstructured clinical data to deliver real-time, context-sensitive recommendations9. As mentioned, our view is that these technologies should be deployed as virtual specialists, augmenting human decision-making rather than outright replacing it and supporting clinical reasoning across diverse levels of provider expertise.

Prognostication and Real-Time Triage
AI applications in palliative care have demonstrated significant advances in prognostic accuracy, clinical triage and early intervention strategies. One of the most validated uses is predicting survival trajectories, particularly in non-cancer populations where physician estimates often overestimate life expectancy by 30–40%, leading to delayed referrals and underutilization of early palliative interventions20,21. ML models trained on longitudinal electronic health record (EHR) data have shown superior predictive accuracy for 6- and 12-month mortality, with some achieving area under the curve (AUC) scores between 0.82 and 0.8922,23. These models detect subtle physiologic and behavioural signals, such as declining functional status, changes in vital signs and increased healthcare utilization, that clinicians may overlook in routine practice24. AI-driven time-series analysis has further refined hospice eligibility predictions, particularly in patients with frailty, dementia or progressive multi-organ disease13.
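
To make the modelling approach concrete, the sketch below trains a gradient-boosted classifier on synthetic EHR-style features to produce a 6-month mortality risk score. It is a minimal Python illustration, not any of the cited models; the feature names, data and hyperparameters are assumptions chosen purely for demonstration.

```python
# Illustrative sketch only: a gradient-boosted 6-month mortality classifier of the
# kind described above, trained on synthetic EHR-style features. Feature names,
# data and thresholds are hypothetical, not those of any cited model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "age": rng.integers(55, 95, n),                           # years
    "palliative_performance_scale": rng.integers(10, 100, n), # functional status, %
    "admissions_last_6mo": rng.poisson(1.5, n),               # healthcare utilization
    "albumin_g_dl": rng.normal(3.3, 0.6, n),                  # nutrition / frailty marker
    "resting_heart_rate": rng.normal(88, 14, n),
    "weight_loss_pct_6mo": rng.normal(4, 3, n),
})
# Synthetic label loosely tied to the features, purely so the example runs end to end.
logit = (0.03 * (X["age"] - 70) - 0.04 * (X["palliative_performance_scale"] - 50)
         + 0.4 * X["admissions_last_6mo"] - 0.8 * (X["albumin_g_dl"] - 3.3))
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)    # 1 = died within 6 months

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)

risk = model.predict_proba(X_te)[:, 1]                        # 6-month mortality risk score
print(f"AUC on held-out set: {roc_auc_score(y_te, risk):.2f}")
# In a deployment, such a risk score would be surfaced as a triage flag prompting
# earlier palliative review rather than acted on automatically.
```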

Beyond prognosis, AI enables real-time triage by stratifying patients according to clinical risk and care needs. At Stanford University, Avati et al. developed the End-of-Life Predictive Model, a DL algorithm trained on over two million EHRs to identify hospitalized patients at high risk of death within 3 to 12 months. Integrated into hospital systems, it generates real-time risk scores, prompting earlier palliative care consults and improving operational efficiency in large medical centers23.

Additionally, AI-driven NLP models have been developed to extract early indicators of clinical deterioration from unstructured EHR text. A 2022 study by the MIT Clinical NLP Group applied transformer-based DL models to tens of thousands of free-text notes from palliative care patients in a tertiary academic hospital. The model successfully identified latent distress signals, escalating symptoms and functional decline (features often absent from structured data fields) and achieved high sensitivity and specificity in predicting adverse events. These findings highlight NLP’s potential to enhance palliative care referrals by improving specificity and timeliness25.
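
The following is a deliberately lightweight stand-in for this kind of free-text screening. The cited work used transformer-based models; the sketch below substitutes a TF-IDF representation with logistic regression so that it is self-contained, and the notes and labels are invented for illustration.

```python
# Minimal illustrative stand-in for NLP-based deterioration detection in free-text
# notes. The cited work used transformer models; this sketch swaps in TF-IDF +
# logistic regression so it runs anywhere. Notes and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "Patient more short of breath today, needed help transferring to chair.",
    "Comfortable overnight, pain controlled, ate full breakfast.",
    "Increasing confusion this evening, family reports reduced oral intake.",
    "Ambulating in hallway independently, mood bright, no new complaints.",
]
labels = [1, 0, 1, 0]   # 1 = note contains signals of clinical deterioration

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
clf.fit(notes, labels)

new_note = "Drowsy most of the day, not finishing meals, needed assistance to stand."
prob = clf.predict_proba([new_note])[0, 1]
print(f"Estimated probability of a deterioration signal: {prob:.2f}")
# In practice such a score would prompt earlier palliative review, not replace
# clinician reading of the note.
```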

By integrating structured and unstructured data sources, AI strengthens palliative care decision-making, offering improved prognostic insights, refining hospice eligibility, optimizing patient triage and uncovering hidden deterioration risks. These innovations hold particular promise for ensuring timely and effective palliative interventions.

Improved Clinician Behaviour and Communication with Colleagues
Similarly, at the University of Pennsylvania, researchers from the Penn Medicine Nudge Unit developed the Advanced Care Planning Prompt, an ML-based tool embedded into clinical workflows to predict six-month mortality among oncology patients. The team paired this model with behavioural “nudges” such as peer comparison emails and targeted messages to encourage oncologists to initiate serious illness conversations. In their 2020 stepped-wedge cluster randomized trial involving 14,607 patients with advanced cancer, the intervention increased documentation of goals-of-care discussions by 4.6 times compared to baseline22. This study was among the first to show that integrating AI-generated mortality estimates with subtle behavioural strategies could lead to meaningful changes in clinician behaviour and communication practices, outcomes directly aligned with the goals of palliative care.

Supporting Mid-Level Clinicians in Complex Decision-Making
A less discussed but increasingly critical application of AI lies in its potential to augment the decision-making capabilities of mid-level providers such as nurse practitioners (NPs) and physician associates (PAs). These providers are taking on an expanding role in palliative care delivery, often leading consults and managing care plans independently, especially in community-based and home-care models26. Between 2010 and 2020, the number of NPs practicing in palliative and hospice settings increased by over 80%, and by 2023 NPs constituted more than 50% of the provider workforce in many large hospice agencies in the United States27.

This shift is partly driven by cost-efficiency and workforce shortages, but mid-level providers often enter the field without the intensive residency or subspecialty fellowship training that physicians undergo. As a result, they may encounter challenges in managing high-acuity, multi-morbidity cases where diagnostic uncertainty and complex symptomatology are common28. AI-based clinical decision support systems can serve as real-time knowledge augmentation tools, offering guideline-based recommendations, risk stratification and tailored treatment options that support less-experienced providers in delivering high-quality care. In a recent survey of palliative care NPs, over 70% expressed interest in AI-assisted platforms to help with medication titration, prognostication and care prioritization, particularly in home-based or solo-practice environments29.

By offering accessible and easily understood expertise, AI may help bridge gaps in training and experience, reduce variability in care quality and empower mid-level clinicians to manage complex cases with greater confidence and precision, especially in settings where supervising physicians are overextended or in regions with limited access to specialty-trained providers30.

Continuous Learning and Adaptive Expertise
Unlike static clinical guidelines or rule-based algorithms, AI models can evolve continuously by incorporating new data and feedback. This capability allows them to adapt to changes in local patient populations, treatment practices and health system structures over time. Retraining models periodically can enhance performance, reduce biases introduced by outdated data and align outputs with the real-world context in which they are deployed31. In palliative care, where disease trajectories are frequently non-linear and unpredictable and where off-label medication use is sometimes required to manage complex symptoms, the adaptive nature of AI systems offers a distinct advantage.

At the same time, safeguarding clinical autonomy remains essential. While AI may offer evidence-based insights, the final responsibility for decision-making must reside with trained clinicians attuned to the patient’s goals, values and psychosocial context. Transparent, interpretable AI systems are essential to ensuring that clinicians can critically assess and apply algorithmic recommendations without undermining the therapeutic relationship32.

Monitoring and Deterioration Forecasting in Acute and Home-Based Palliative Care
Early recognition of clinical deterioration is essential in palliative care to minimize patient suffering, guide appropriate therapeutic decisions and prevent avoidable hospitalizations or aggressive interventions inconsistent with patient goals. AI offers a scalable solution for continuous risk stratification by synthesizing high-dimensional data from EHRs, bedside monitors, wearable sensors and life-sustaining devices such as ventilators, dialysis machines and cardiac telemetry. By identifying subtle physiologic trends and deviations often undetectable through conventional, non-automated monitoring, AI can alert clinicians to early signs of decline, prompting timely reassessment and supportive intervention.

This capability is especially critical in home-based palliative care, where continuous in-person evaluation is often infeasible. Symptom-based deterioration in chronic illnesses such as advanced heart failure, COPD and end-stage renal disease frequently goes unrecognized until the onset of acute decompensation. We foresee AI-enabled monitoring systems eventually integrating with home medical devices such as dialysis machines, continuous positive airway pressure (CPAP) units and mobile electrocardiogram (ECG) monitors, allowing the detection of subtle anomalies in biometric trends well before patients or caregivers become aware of pertinent clinical changes33. This early warning capacity supports anticipatory guidance, the timely adjustment of care goals and the rapid initiation of palliative interventions when needed, ultimately reducing avoidable emergency visits and improving alignment with patient preferences for care at home.
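
A minimal sketch of the kind of personalized early-warning rule such monitoring could apply is shown below: a rolling baseline of nightly resting heart rate with a z-score alert. The signal, window length and alert threshold are illustrative assumptions, not a validated protocol.

```python
# Sketch of a simple early-warning rule of the kind home-monitoring AI might apply:
# a rolling personal baseline with a z-score alert on nightly resting heart rate.
# The data stream, window length and threshold are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
days = pd.date_range("2024-01-01", periods=60, freq="D")
hr = 72 + rng.normal(0, 2.5, 60)
hr[50:] += np.linspace(2, 12, 10)            # simulated gradual decompensation

s = pd.Series(hr, index=days, name="resting_hr")
baseline = s.rolling(window=14, min_periods=14).mean().shift(1)  # prior 14-day mean
spread = s.rolling(window=14, min_periods=14).std().shift(1)
zscore = (s - baseline) / spread

alerts = s[zscore > 3]                       # days well above the personal baseline
print("Days flagged for review:")
print(alerts)
# Flagged days could trigger a caregiver notification or nurse outreach, ideally
# before the patient reports symptomatic decline.
```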

AI-Based Clinical Support for Ventilated and Critically Ill Palliative Patients
Patients in palliative care who require mechanical ventilation, dialysis or other life-sustaining interventions often present with rapidly evolving and multifactorial clinical deterioration. AI-driven systems are increasingly being investigated for their capacity to support high-stakes, time-sensitive decision-making in these scenarios.

A landmark study by Komorowski et al. introduced the “AI Clinician,” a reinforcement learning model trained on data from over 96,000 intensive care unit (ICU) admissions in the MIMIC-III database. The model incorporated 46 routinely collected clinical variables to recommend individualized treatment strategies for sepsis, a leading cause of mortality in ICU settings. Remarkably, patients whose care aligned closely with the AI’s recommendations had higher survival rates than those managed using conventional physician decision-making strategies. The study demonstrated the feasibility of using reinforcement learning to develop adaptive, data-driven clinical decision-support tools that outperform rule-based guidelines in complex, high-risk conditions34.
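
For readers unfamiliar with reinforcement learning, the toy sketch below illustrates the underlying idea of learning action values from retrospective (state, action, reward, next state) transitions. It is a drastically simplified stand-in with invented states, actions and rewards; beyond the general technique it bears no relation to the 46-variable sepsis model in the cited study.

```python
# Highly simplified illustration of the reinforcement-learning idea behind the
# "AI Clinician": learn action values from retrospective (state, action, reward,
# next state) transitions. States, actions and rewards are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 3          # e.g. discretized severity bins x dose bins
GAMMA, ALPHA = 0.95, 0.1

def sample_transition():
    """Fake logged ICU transition: (state, action, reward, next_state, terminal)."""
    s = rng.integers(N_STATES)
    a = rng.integers(N_ACTIONS)
    # Toy dynamics: a "well-matched" action tends to lower severity.
    s_next = max(0, min(N_STATES - 1, s + rng.integers(-1, 2) - (a == s % N_ACTIONS)))
    terminal = s_next == 0 or rng.random() < 0.05
    reward = 1.0 if (terminal and s_next == 0) else 0.0   # survival-style reward
    return s, a, reward, s_next, terminal

Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(50_000):                                   # offline (batch) Q-learning
    s, a, r, s_next, done = sample_transition()
    target = r if done else r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (target - Q[s, a])

# The learned policy is the highest-value action in each state; in the cited work,
# closer agreement between clinician actions and such a policy was associated with
# better observed outcomes.
print("Recommended action per state:", Q.argmax(axis=1))
```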

Expanding on this, studies using the publicly available MIMIC-IV database have explored the application of ML algorithms to anticipate complications in mechanically ventilated patients. For instance, Johnson et al. utilized gradient boosting and recurrent neural networks to forecast adverse respiratory events, including oxygenation failure and unplanned reintubation, by analysing real-time ventilator parameters, arterial blood gases and vital signs. Their models achieved AUC values ranging from 0.78 to 0.86, demonstrating strong predictive performance. These early warning systems, when embedded into clinical workflows, may allow providers to anticipate decompensation up to 48 hours in advance, offering a critical window to modify ventilator settings or initiate supportive therapies before irreversible deterioration occurs35.
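
The sketch below illustrates the windowing step on which such early-warning models rest: summarizing the most recent hours of a raw signal (here a simulated SpO2 stream) into trend and variability features that a downstream classifier would consume. The window length, signal and feature set are assumptions made for illustration.

```python
# Sketch of the windowing step behind ventilator early-warning models: the most
# recent hours of a raw signal (simulated SpO2 here) are summarized into trend and
# variability features that a downstream classifier would consume. The window
# length, signal and features are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
t = pd.date_range("2024-01-01 00:00", periods=24 * 12, freq="5min")  # 24 h, 5-min samples
spo2 = pd.Series(96 + rng.normal(0, 0.8, len(t)), index=t)
spo2.iloc[-36:] -= np.linspace(0, 5, 36)      # simulated slow desaturation over last 3 h

def window_features(signal: pd.Series, hours: int = 4) -> dict:
    """Summarize the most recent window into features for an early-warning model."""
    win = signal[signal.index >= signal.index[-1] - pd.Timedelta(hours=hours)]
    minutes = (win.index - win.index[0]).total_seconds() / 60.0
    slope, _ = np.polyfit(minutes, win.values, deg=1)     # trend, units per minute
    return {
        "mean": float(win.mean()),
        "min": float(win.min()),
        "std": float(win.std()),
        "slope_per_hour": float(slope * 60),
    }

print(window_features(spo2))
# A sustained negative slope with a falling minimum is the kind of pattern such
# models learn to associate with impending oxygenation failure.
```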

Together, these studies underscore the growing role of AI in managing critically ill patients with advanced disease of fluctuating severity, including those receiving palliative care in intensive care settings. By continuously analysing real-time physiologic data from devices such as ventilators, infusion pumps, dialysis machines and electrocardiogram monitors, AI models can detect early signs of clinical deterioration before they become apparent through standard monitoring. This advanced detection capability can reduce the cognitive load on ICU teams, particularly in resource-constrained environments or during surges in patient volume. It also supports timely clinical responses, whether through escalation of care or thoughtful de-escalation that aligns with the patient's values and goals, which are essential in high-acuity palliative care. As these technologies evolve, their integration into routine workflows may improve the accuracy and responsiveness of palliative interventions, especially for patients who may struggle to express their changing care preferences.

AI-Enabled Wearable Devices and Remote Monitoring in Palliative Care
Recent advances in wearable technologies have opened new frontiers for real-time patient monitoring in palliative and end-of-life care. In hospice settings, a 2022 prospective observational study conducted at Taipei Medical University Hospital investigated the utility of wearable actigraphy devices, which continuously track physical activity and sleep-wake cycles using motion sensors typically worn on the wrist. The study involved 68 terminally ill patients and found that movement metrics such as accumulated angle and spin values positively correlated with patient outcomes. Patients who survived to discharge exhibited significantly higher activity levels compared to those who died during their inpatient hospice stay, suggesting that continuous actigraphy-based monitoring may provide meaningful insights into prognosis and functional decline36,37.

Building on this premise, a 2023 prospective cohort study conducted at National Taiwan University Hospital evaluated the feasibility of using wearable devices coupled with ML to predict 7-day mortality in patients with terminal cancer receiving end-of-life care. Participants were equipped with smartwatches that continuously collected physiological data, including heart rate, sleep duration, step count and blood oxygen saturation. The study employed various ML models, with the extreme gradient boosting (XGBoost) classifier achieving the highest performance, demonstrating an area under the receiver operating characteristic curve (AUROC) of 0.96, an F1-score of 78.5% and an accuracy of 93%. These findings highlight the potential of integrating wearable technology with AI capabilities to provide timely prognostic information, thereby supporting clinical decision-making and personalized care in end-of-life settings38.
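
As an illustration of this type of pipeline, the sketch below trains an XGBoost classifier on the four signal types named in the study (heart rate, sleep duration, step count and oxygen saturation) and reports AUROC, F1 and accuracy. The data are synthetic and the hyperparameters arbitrary; the sketch shows the workflow, not the published model.

```python
# Sketch of the wearable-data pipeline described above: an XGBoost classifier on
# heart rate, sleep duration, step count and SpO2 features. Data, labels and
# hyperparameters are synthetic/illustrative, not those of the published model.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score, accuracy_score

rng = np.random.default_rng(5)
n = 2000
heart_rate = rng.normal(85, 12, n)      # daily mean heart rate, bpm
sleep_hours = rng.normal(6, 1.5, n)     # nightly sleep duration
steps = rng.gamma(2.0, 800, n)          # daily step count
spo2 = rng.normal(95, 2.5, n)           # mean oxygen saturation, %
X = np.column_stack([heart_rate, sleep_hours, steps, spo2])

# Synthetic 7-day mortality label, loosely tied to low activity and low SpO2.
logit = 0.05 * (heart_rate - 85) - 0.001 * (steps - 1600) - 0.6 * (spo2 - 95) - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss")
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)
print(f"AUROC {roc_auc_score(y_te, proba):.2f}  "
      f"F1 {f1_score(y_te, pred):.2f}  accuracy {accuracy_score(y_te, pred):.2f}")
```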

These efforts are particularly valuable in home-based palliative care, where continuous in-person clinical monitoring is often impractical. A 2024 systematic review published in Palliative Medicine analysed 20 studies on electronic symptom monitoring in home-based palliative care. The review found that most patients positively engaged with electronic symptom monitoring, which could improve their quality of life, physical and emotional well-being and symptom scores without significantly increasing costs. However, the review also noted variability in the reported data and inadequate statistical power in some studies, limiting firm conclusions about the effects on outcomes like survival, hospital admissions, length of stay, emergency visits and adverse events. Despite these limitations, the integration of electronic symptom monitoring holds potential for enhancing patient-reported outcomes and decreasing hospital visits and costs in home-based palliative care settings39.

Collectively, these technologies illustrate a shift toward more proactive, data-driven care models in palliative medicine. By enabling continuous, passive monitoring in low-resource settings, AI-augmented wearables and remote sensors represent an important avenue for reducing disparities in end-of-life care access and improving responsiveness to the needs of critically-ill patients outside of traditional inpatient settings.

Discussion: Oversimplification Erodes Reliability and Clinician Trust
As AI tools are increasingly adopted in palliative care, questions about their reliability, explainability and ethical-legal defensibility are receiving increased attention. A key concern is that many models are developed to solve narrow, task-specific problems using datasets mainly composed of patients with a single dominant illness. However, palliative care patients frequently present with multiple interacting comorbidities, which complicates both symptom interpretation and care prioritization. This issue was illustrated by Rajkomar et al., who evaluated a DL model trained to predict in-hospital mortality using EHR data across multiple hospitals. The model achieved high overall predictive accuracy, but its performance was significantly lower in patients with multiple chronic conditions, especially when the training data lacked similar multimorbidity profiles. The authors concluded that algorithmic reliability decreased as clinical complexity increased, emphasizing the need for training data that includes multimodal inputs from high-acuity populations such as those receiving palliative care40.

Explainability and Ethical-legal Defensibility
Another important consideration is explainability. Clinicians must understand and be able to verbalize the rationale behind AI-generated recommendations, especially when these outputs relate to sensitive decisions such as withholding or withdrawing life-sustaining therapy. A study by Tonekaboni et al. explored how clinicians engaged with explainable ML models designed for ICU decision support. The authors found that models offering interpretable outputs, such as visual explanations of variable importance or logical decision pathways, significantly increased clinician trust and willingness to act on AI recommendations41. This transparency is crucial in the context of palliative care, where decisions can carry ethical and legal weight, especially around end-of-life choices. If a clinician’s decision were to be scrutinized in court, the ability to articulate how an AI-supported recommendation aligned with clinical judgment and was based on understandable evidence could form an essential part of a legal defense in negligence claims.
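
One common route to such variable-importance explanations is permutation importance, sketched below on an illustrative risk model: each feature is shuffled in turn, and the drop in held-out performance indicates how heavily the model relies on it. The model, features and data here are hypothetical.

```python
# Sketch of one common route to the "variable importance" explanations discussed
# above: permutation importance on a fitted risk model, reported in plain feature
# names a clinician can interrogate. Model, features and data are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
X = pd.DataFrame({
    "respiratory_rate": rng.normal(20, 4, n),
    "lactate_mmol_l": rng.gamma(2.0, 1.0, n),
    "urine_output_ml_hr": rng.gamma(3.0, 15, n),
    "gcs_score": rng.integers(6, 16, n),
})
# Synthetic deterioration label, tied to the features so the example is non-trivial.
logit = (0.15 * (X["respiratory_rate"] - 20) + 0.6 * (X["lactate_mmol_l"] - 2)
         - 0.02 * X["urine_output_ml_hr"] - 0.3 * (X["gcs_score"] - 12))
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure how much held-out performance drops;
# bigger drops mean the model leans on that variable more heavily.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:>22}: {score:.3f}")
```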

As it stands now, many DL models operate as “black boxes,” generating predictions or treatment recommendations without sufficiently explaining the underlying rationale. This opacity can erode clinician trust and hinder shared decision-making, especially in palliative care, where treatment decisions require individualized scrutiny and ethically complex judgment and often carry legal implications. A scoping review by Panch et al. found that clinicians across specialties consistently identified limited interpretability as a major barrier to adopting AI tools in sensitive contexts such as end-of-life care. In these settings, recommendations must be medically, ethically and legally defensible. Panch and colleagues emphasized that the greatest risk posed by AI is not malicious intent but flawed design and implementation, particularly when models are used in clinical environments that differ significantly from those on which they were trained42. A poorly matched model may generate outputs incompatible with the patient’s clinical reality or overtly stated care preferences. To prevent such mismatches, future development should prioritize transparency in algorithmic reasoning and the ability to adapt to diverse clinical scenarios.

Aligning Protocols and Preferences for Seamless Workflow Integration
Human factors research emphasizes that for AI decision support tools to be practical in palliative care, they must be tailored to the specific protocols, guidelines and workflows of the hospital systems in which they are deployed. Carayon and colleagues have highlighted how a lack of contextual awareness can lead to two extremes: clinicians may disregard AI recommendations when they seem disconnected from local practices or they may over-rely on them when those recommendations appear easy to implement, even if they do not fully address the patient’s clinical complexity43. To avoid common pitfalls, AI systems must be designed to reflect each institution’s standards of care while remaining adaptable to the unique decision-making styles of individual clinicians. This means ensuring that outputs comply with established protocols and guidelines and presenting recommendations in formats that align with how different providers interpret and act on clinical information. When AI tools are integrated into clinical environments in a way that respects both institutional workflows and human cognitive habits, they are more likely to support context-sensitive decisions, uphold provider autonomy and reinforce patient-centered care.

Conclusion
Integrating artificial intelligence into palliative care offers a meaningful opportunity to improve decision-making, enhance prognostic accuracy and support timely, patient-centered interventions. Acting as virtual specialists, AI tools can help clinicians manage complex cases more effectively, especially in settings with limited specialist access. From real-time data analysis in intensive care units to remote monitoring via wearable devices, AI shows strong potential in detecting early signs of clinical decline and guiding appropriate responses. These innovations may reduce unnecessary hospitalizations, improve resource use and support consistent, compassionate care across diverse settings.

Still, adopting AI in palliative care demands thoughtful attention to issues of interpretability, bias and ethical accountability. Clinicians must understand and justify AI-generated recommendations, particularly when making sensitive decisions about life-sustaining treatments or end-of-life goals. Transparent design, context-aware adaptation and collaborative development with input from healthcare providers and ethicists help ensure that AI supports, rather than undermines, human judgment. When designed responsibly, AI can help deliver high-quality care without sacrificing the values of dignity, empathy and individualized attention that define palliative medicine.

References
1. World Health Organization. Palliative Care 2020.
2. Kelley AS, Morrison RS. Palliative care for the seriously ill. New Eng J Med 2021;385(13):1168-1177.
3. Hawley P. Barriers to access to palliative care. Palliative Care and Social Practice 2017;10:1178224216688887.
4. Alharby BSF, Khaled AB, Alzehairi FM, et al. Critical Analysis of Cross-Cutting Themes in Healthcare Systems and Practices. J Ecohumanism 2024;3(8):4162.
5. Portz JD, Kutner JS, Blatchford PJ, Ritchie CS. High symptom burden and low functional status in the setting of multimorbidity. J American Geriatrics Society 2017;65(10):2285-2289.
6. McPhail SM. Multimorbidity in chronic disease: impact on health care resources and costs. Risk management and healthcare policy 2016:143-156.
7. Kelley AS, Covinsky KE, Gorges RJ, et al. Identifying older adults with serious illness: A critical step toward improving the value of health care. Health Serv Res 2017;52(1):113-131.
8. Courtright KR, Chivers C, Becker M, et al. Electronic health record mortality prediction model for targeted palliative care among hospitalized medical patients: A pilot quasi-experimental study. J Gen Intern Med 2019;34(9):1841-1847.
9. Topol EJ. High-performance medicine: The convergence of human and artificial intelligence. Nat Med 2019;25(1):44-56.
10. Beam AL, Kohane IS. Big data and machine learning in health care. JAMA 2018;319(13):1317-1318.
11. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med 2022;28(1):31-38.
12. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019;6(2):94-98.
13. Vu E, Steinmann N, Schröder C, et al. Applications of machine learning in palliative care: a systematic review. Cancers 2023;15(5):1596.
14. Masnoon N, Shakib S, Kalisch-Ellett L, Caughey GE. What is polypharmacy? A systematic review of definitions. BMC Geriatr 2017;17(1):230.
15. Hui D, Bruera E. Models of palliative care delivery for patients with cancer. J Clin Oncol 2022;40(8):850-865.
16. Hui D, Hannon BL, Zimmermann C, Bruera E. Improving patient and caregiver outcomes in oncology: Team-based, timely and targeted palliative care. CA Cancer J Clin 2018;68(5):356-376.
17. Firn J, Preston N, Walshe C. What are the views of hospital-based generalist palliative care professionals on what facilitates or hinders collaboration with in-patient specialist palliative care teams? A systematically constructed narrative synthesis. Palliative medicine 2016;30(3):240-256.
18. Hawley P. Barriers to access to palliative care. Palliat Care 2017;10.
19. Miller DD, Brown EW. Artificial intelligence in medical practice: The question to the answer? Am J Med 2018;131(2):129-133.
20. Christakis NA, Lamont EB. Extent and determinants of error in doctors’ prognoses in terminally ill patients: Prospective cohort study. BMJ 2000;320:469-472.
21. Glare P, Virik K, Jones M, et al. A systematic review of physicians' survival predictions in terminally ill cancer patients. BMJ 2003;327:195-198.
22. Manz CR, Parikh RB, Small DS, et al. Effect of integrating machine learning mortality estimates with behavioral nudges to clinicians on serious illness conversations among patients with cancer: A stepped-wedge cluster randomized clinical trial. JAMA Oncol 2020;6:204759.
23. Avati A, Jung K, Harman S, Downing L, Ng A, Shah NH. Improving palliative care with deep learning. BMC Med Inform Decis Mak 2018;18(4):122.
24. Luonuansuu TA, Christensen AR, Hutchinson SZ. Artificial Intelligence/Machine Learning in Palliative Care #492. J Palliative Med 2025;28(1):123-125.
25. Sandham MH, Hedgecock EA, Siegert RJ, Narayanan A, Hocaoglu MB, Higginson IJ. Intelligent palliative care based on patient-reported outcome measures. J Pain Symptom Manag 2022;63(5):747-757.
26. Virdun C, Luckett T, Davidson PM, Phillips J. Strengthening palliative care in the hospital setting: a codesign study. BMJ supportive & palliative care 2020;14(1):798-806.
27. American Association of Nurse Practitioners. Nurse Practitioner Workforce Statistics 2023.
28. Auerbach DI, Buerhaus PI, Staiger DO. Implications of the rapid growth of the nurse practitioner workforce in the US: An examination of recent changes in demographic, employment and earnings characteristics of nurse practitioners and the implications of those changes. Health Affairs 2020;39(2):273-279.
29. Mills J, Fox J, Damarell R, Tieman J, Yates P. Palliative care providers’ use of digital health and perspectives on technological innovation: a national study. BMC palliative care 2021;20(1):124.
30. Grabenkort WR, Meissen HH, Gregg SR, Coopersmith CM. Acute care nurse practitioners and physician assistants in critical care: transforming education and practice. Critical Care Med 2017;45(7):1111-1114.
31. Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare: Review, opportunities and challenges. Brief Bioinform 2018;19:1236-1246.
32. London AJ. Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Cent Rep 2019;49:15-21.
33. Wong A, Young AT, Liang AS, Gonzales R, Douglas VC, Hadley D. Development and validation of an electronic health record–based machine learning model to estimate delirium risk in newly hospitalized patients without known cognitive impairment. JAMA network open 2018;1(4):181018.
34. Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA. The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care. Nat Med 2018;24(11):1716-1720.
35. Johnson AEW, Ghassemi MM, Nemati S, et al. Machine learning and decision support in critical care. Proc IEEE 2016;104(2):444-466.
36. Huang Y, Kabir MA, Upadhyay U, et al. Exploring the potential use of wearable devices as a prognostic tool among patients in hospice care. Medicina 2022;58(12):1824.
37. Ancoli-Israel S, Cole R, Alessi C, et al. The role of actigraphy in the study of sleep and circadian rhythms. Sleep 2003;26(3):342–392.
38. Liu JH, Shih CY, Huang HL, et al. Evaluating the potential of machine learning and wearable devices in end-of-life care in predicting 7-day death events among patients with terminal cancer: cohort study. J Med Internet Res 2023;25:47366.
39. Mao S, Liu L, Miao C, et al. Electronic symptom monitoring for home-based palliative care: A systematic review. Palliat Med 2024;38(8):790-805.
40. Rajkomar A, Oren E, Chen K, et al. Scalable and accurate deep learning with electronic health records. npj Digital Med 2018;1:18.
41. Tonekaboni S, Joshi S, McCradden MD, Goldenberg A. What clinicians want: Contextualizing explainable machine learning for clinical end use. Proceedings of Machine Learning Res 2019;106:359-380.
42. Panch T, Mattie H, Celi LA. The Inconvenient Truth About AI in Healthcare. NPJ Digital Med 2019;2(1):77.
43. Carayon P, Kleinschmidt P, Hose BZ, Salwei M. Human Factors and Ergonomics in Health Care and Patient Safety from the Perspective of Medical Residents. In: Donaldson L, Ricciardi W, Sheridan S, Tartaglia R. (eds) Textbook of Patient Safety and Clinical Risk Management 2021.