End-of-life decisions mark some of the hardest moments any family, clinician, or hospice team will ever face. Even when a patient has had candid conversations with loved ones, the reality of decline can feel different from anything imagined. When there is no advance directive or clear documentation of the patient’s wishes, those decisions become even more complex. Families may disagree, memories of past conversations may not align, and the clinical team is left trying to balance what is medically appropriate with what might honor the patient’s values. The result is often a mix of uncertainty, guilt, and emotional strain for everyone at the bedside.
This is the space where new data tools and artificial intelligence are starting to appear. Some models claim they can estimate what treatments a patient might choose at the end of life based on patterns in large data sets. Others aim to predict who is at higher risk of dying within a certain time frame, nudging clinicians to start goals-of-care conversations sooner or to consider hospice or palliative care earlier. For hospice and healthcare teams already stretched thin, it can be tempting to see these tools as a way to “solve” the hardest part of care: figuring out what to do when nothing is simple and time is short.
But there is a crucial distinction to hold onto: data and AI can support decision-making; they should not be the decision-makers. An algorithm might highlight that a patient shares characteristics with others who tended to decline aggressive interventions. It might flag that prognosis is shorter than it appears at first glance.
Yet it cannot sit with the family in their grief, it cannot understand a patient’s faith in the way a chaplain can, and it cannot weigh the quiet promises made at a kitchen table months or years before the illness progressed.
At best, AI can offer additional information, patterns, or prompts that help humans ask better questions. It cannot take away the responsibility – or the privilege – of truly listening to what matters most to the patient.
Ethical Challenges
This is where the ethical challenges begin to surface. If an AI model suggests that a patient “would not want” a particular treatment, how much weight should that suggestion carry, especially when there is no formal advance directive? If a clinician disagrees with the model’s output based on what they have heard from the patient or family, whose judgment should guide the plan of care? And if families hear that “the data says” their loved one would choose a certain path, will they feel free to disagree? Or will they feel pressured by the perceived neutrality and authority of the algorithm? The more powerful and precise these tools appear, the more they risk subtly shifting who feels entitled to make the final call.
For clinical staff, the questions become deeply personal and practical. How will you integrate AI-generated risk scores or preference predictions into your bedside conversations without letting them overshadow your clinical intuition and your understanding of the patient’s story? When a model’s suggestion conflicts with what a patient or family is clearly expressing now, what will guide your next step? How might your moral distress change if a decision later comes into question and someone asks, “Why didn’t you follow what the algorithm recommended?” or, conversely, “Why did you rely on it so heavily?”
For administrators, AI at the end of life raises strategic and cultural questions. If your organization adopts tools that predict mortality or likely treatment preferences, how will that change workflows, staffing, and expectations around hospice and palliative care referrals? Will there be pressure – subtle or explicit – to align care patterns with what the data suggests, especially if payers or partner organizations see AI as a way to manage cost and utilization? How will you communicate to your teams, and to your community, that these tools are meant to inform compassionate care rather than to standardize deeply human decisions?
And for compliance and ethics leaders, AI adds new layers of risk and responsibility. If an AI recommendation influences an end-of-life decision, how should that be documented? What happens if patterns emerge showing that the tool performs differently across racial, cultural, or language groups? Who owns the responsibility to investigate and respond? Is there a point at which the use of AI in end-of-life decision-making should trigger explicit disclosure or consent from patients and families? And if your organization chooses not to use these tools while others do, could that one day be seen as a gap in standard of care – or as a principled stance on preserving human judgment?
End-of-Life Decisions Live in a Crowded Space
None of these questions have easy answers, and perhaps they shouldn’t. End-of-life decisions have always lived in a space where medicine, ethics, family, and faith meet. AI does not change that; it just adds a new voice into an already crowded room. The challenge for hospice and healthcare teams may not be whether to use these tools at all, but how to use them in a way that keeps the center of gravity firmly with the patient and those who know them best.
As AI continues to move closer to the bedside, each organization – and each role within it – will have to keep asking:
- What do we want AI to do in end-of-life care, and what do we want to reserve for humans alone?
- How will we notice if the technology meant to support us is quietly shaping decisions more than we realize?
- And in the moments when nothing is clear and there is no advance directive to guide us, whose voice should carry the most weight: the algorithm’s, the family’s, the clinician’s, or the patient’s story as we have come to know it?
Hospice and palliative care have always been about making room for the hard questions. AI doesn’t take those questions away – it may simply give us new ones to live with.