Understanding HQRP: The History of Hospice Quality

Hospice, since its inception, has centered on compassionate care. However, the underlying mechanisms of how it is paid for and how hospice quality is evaluated have undergone a significant transformation. This shift is part of a broader healthcare movement towards value-based care, moving away from simple fee-for-service models. For hospice, this journey can be understood through key eras and events.

1. The Foundation: 1983–2010

The concept of hospice care solidified with the establishment of the Medicare Hospice Benefit in 1983 under the Tax Equity and Fiscal Responsibility Act (TEFRA). This landmark legislation provided a structured way for Medicare beneficiaries to access comprehensive end-of-life care. Originally designed with a primary focus on cancer patients and shorter lengths of stay, it reflected the understanding of terminal care at the time. Reimbursement was a flat per-diem (per day) payment for each of four levels of care. This model, while straightforward, did not adjust for how the intensity of care varied over the course of a patient’s stay.

For nearly three decades, that payment model remained essentially unchanged. During this “Static Era,” CMS had financial data on what it paid, but almost no clinical data on what was actually happening in the home.

2. The Turn Toward Accountability: 2010–2014

As hospice use grew and patient demographics evolved, questions arose about varying practices and quality across providers. This ushered in an era focused on accountability and data collection. The Affordable Care Act (ACA) of 2010 changed the legal landscape. It mandated the creation of the Hospice Quality Reporting Program (HQRP). This marked a fundamental shift, transforming quality reporting from a voluntary endeavor into a mandated requirement for all hospice agencies.

A pivotal moment in this data-driven approach was the 2014 introduction of the Hospice Item Set (HIS). For the first time, agencies were required to submit standardized data on specific quality processes. This also tied hospice quality to payment: failure to report HIS data resulted in a 2-percentage-point (now 4-percentage-point) reduction in the annual payment update.

However, HIS was a “process-based” tool. It measured whether a hospice performed an action (like asking about pain), not whether the patient actually improved. While a vital step forward, HIS was essentially retrospective: it confirmed that specific admission and discharge procedures were documented, not that the patient’s well-being improved as a result.

3. Rebalancing the Payment Model: 2016

Parallel to the quality reporting initiatives, CMS implemented structural changes to the hospice payment system itself. In 2016, CMS recognized that the flat per-diem rate for Routine Home Care (RHC) did not match the reality of care delivery. The resulting change was not about reducing payments, but about acknowledging the higher intensity of services typically required during the initial 60 days and the final days of a patient’s life.

CMS implemented RHC Payment Reform, creating two separate RHC rates: a higher rate for days 1–60 and a lower rate for days 61+. This was a structural signal that CMS was closely analyzing length-of-stay data and visit intensity. The different per-diem rates were intended to align payments more closely with actual resource utilization.

Another key milestone was the creation of claims-based metrics. Recognizing the treasure trove of data within existing claims submissions, CMS developed indicators such as those within the Hospice Care Index (HCI). This approach utilized claims data to look for patterns related to care quality, such as the frequency of visits in the last days of life. This represents a clever use of existing data to derive quality insights, moving beyond self-reported assessments.

4. The Era of “Invisible” Metrics: 2022–Present

While agencies were focused on their clinical notes, CMS began using the bills themselves to measure quality. In 2022, CMS formally introduced the Hospice Care Index (HCI), a claims-based measure consisting of 10 indicators.

Unlike HIS, which clinicians fill out, the HCI is calculated entirely from existing claims data. This allows Medicare to identify patterns – like “live discharges” or “visits in the last days of life” – without requiring new forms, moving the industry closer to a Value-Based Purchasing mindset.

5. The Failed “Carve-In” and the Path to HOPE: 2021–2025

CMS also engaged in direct testing of value-based models through various pilots and demonstrations. One prominent example was the hospice component of the Value-Based Insurance Design (VBID) Model, often called the “hospice carve-in.” Launched in 2021, it allowed participating Medicare Advantage plans to manage the hospice benefit.

The goals of the VBID hospice carve-in were to assess whether integrating hospice within a Medicare Advantage plan could improve care coordination, enhance quality, and reduce spending by preventing unnecessary hospitalizations. The experiment officially ended in December 2024, largely because CMS lacked a standardized, real-time clinical assessment tool to measure outcomes across different plans. The insights gained, particularly regarding the need for robust quality measures and care coordination, continue to influence the overall direction of hospice payment and quality strategy.

This brings us to the present. The Hospice Outcomes and Patient Evaluation (HOPE) tool, effective October 1, 2025, is the direct answer to this 40-year journey. CMS has finally reached a point where they are no longer satisfied with process checkboxes; they are building the infrastructure to pay for the actual impact of care.

The Path Forward and Why This Matters

The cumulative experiences from these various efforts – the process metrics of HIS, the structural changes in RHC payments, the deployment of claims-based metrics, and the practical learnings from models like VBID – all pointed to a persistent need. The industry required a standardized, patient-centric way to measure actual patient outcomes rather than just processes. This recognized need for more meaningful, outcome-focused data is the direct driver behind the development of the HOPE tool, which replaced HIS effective October 1, 2025. HOPE aims to capture data longitudinally during a patient’s stay, focusing on symptom impact and goal-setting – providing the rich data environment necessary to genuinely advance value-based care in hospice.

This historical overview illustrates that the shift towards value-based care in hospice is not a recent or sudden development. It has been a steady, deliberate evolution building upon the foundation laid in 1983, constantly striving for a more refined, data-driven system that ultimately ensures high-quality care is both provided and effectively measured. Every regulation, from the ACA to the HOPE tool, has been a stepping stone toward a system that rewards agencies for clinical outcomes rather than just census volume.

How to Master the Hospice CBR for Better Compliance

Many hospice administrators are familiar with the hospice PEPPER (Program for Evaluating Payment Patterns Electronic Report). Fewer hospice leaders, however, are familiar with its counterpart: the Comparative Billing Report (CBR) and the eCBR, its electronic version. While the PEPPER provides a general overview of a hospice agency’s billing data, the CBR is a specific tool used by Medicare to identify individual agencies whose billing patterns differ significantly from their peers. To manage hospice compliance effectively, it is important to understand the purpose and the mechanics of the CBR.

What is a CBR and Who Generates It?

A CBR is a formal educational resource produced by CMS (the Centers for Medicare & Medicaid Services). It provides data-driven insights into a hospice agency’s billing patterns compared to state and national averages.

The report is a collaborative effort:

  • CMS: defines the metrics (like length of stay) used to monitor for billing errors.
  • National Contractors: CMS contracts with private companies to perform the data analysis; separate contractors may handle distribution of the reports to providers.
  • MACs (Medicare Administrative Contractors): An agency’s local contractor (like Palmetto GBA, CGS, or NGS) may also provide their own version, called an eCBR, through their provider portal.

How the CBR Highlights Outliers

The CBR is a proactive tool designed to encourage providers to review their own data before a formal audit occurs. It identifies “outliers” – agencies that fall outside of normal billing ranges – by following a simple comparison process:

  1. “Finding your Neighbors”: Instead of comparing a small local hospice to a massive national chain, the CBR groups each hospice agency with similar agencies. An agency’s data is compared against other hospices in their specific state. This ensures the comparison is fair and based on the local market.
  2. The 1-to-100 Ranking: For each metric that is measured, the report provides the agency’s “percentile” score, ranking the agency on a scale from 1 to 100.
    • Imagine 100 hospices are standing in a line, ordered from the lowest billing to the highest.
    • If an agency’s CBR report indicates that the agency is in the 90th percentile, it means that the agency is billing more than 90 of the other hospices.
    • This is why the 90th percentile is the “red flag” area; it tells Medicare that the agency is at the very edge of the pack.
  3. Clear Results: There is no need to be a trained statistician to read or interpret the CBR report. It uses clear labels (illustrated in the sketch after this list):
    • Significantly Higher: This is the primary “outlier” signal. It indicates that the agency’s billing for a specific metric is in the top 10% (at or above the 90th percentile) of all providers. While not proof of wrongdoing, it is a high-priority “red flag” that warrants an immediate internal review of patient charts.
    • Higher: The agency is billing more than the average hospice. This is a signal to monitor the trend to ensure it does not move toward the 90th percentile.
    • Does Not Exceed: The agency’s billing is aligned with or lower than the average of its peers.
    • Not Applicable (N/A): The agency did not have enough claims in that category during the reporting period to create a statistically valid comparison.
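
To make the ranking mechanics concrete, here is a minimal Python sketch of percentile ranking and label assignment. It is an illustration only: the peer values, the claim-count floor, and the cutoff separating “Higher” from “Does Not Exceed” are assumptions, while the actual CBR methodology is defined by CMS and its contractors.

```python
from bisect import bisect_left

def percentile_rank(agency_value: float, peer_values: list[float]) -> int:
    """Rank an agency's metric on a 1-to-100 scale against its peer group."""
    ordered = sorted(peer_values)
    below = bisect_left(ordered, agency_value)        # peers billing less than the agency
    return max(1, round(100 * below / len(ordered)))

def cbr_label(rank: int, claim_count: int, min_claims: int = 11) -> str:
    """Translate a percentile rank into a CBR-style label (cutoffs are illustrative)."""
    if claim_count < min_claims:
        return "Not Applicable (N/A)"   # too few claims for a valid comparison
    if rank >= 90:
        return "Significantly Higher"   # top 10% of peers: the red-flag zone
    if rank > 50:
        return "Higher"                 # above the peer midpoint: monitor the trend
    return "Does Not Exceed"

# Hypothetical agency whose metric value is 42% against ten in-state peers
peers = [18.0, 22.5, 25.0, 28.0, 30.0, 33.0, 35.0, 38.0, 40.0, 41.0]
rank = percentile_rank(42.0, peers)
print(rank, cbr_label(rank, claim_count=120))  # 100 Significantly Higher
```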

Real-World Metric Examples

Each CBR focuses on a narrow “Target Area” of vulnerability. Two common examples for hospices include (a computation sketch follows the list):

  • Non-Cancer Length of Stay (NCLOS) > 210 Days: Comparing the agency’s percentage of long-stay non-cancer patients against jurisdiction benchmarks.
  • GIP Average Length of Stay (ALOS): Benchmarking the agency’s inpatient stays (Q5004–Q5009) against state averages to identify potential overutilization.
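
As a rough illustration of how an agency might pre-compute the first metric from its own records, the sketch below calculates the share of non-cancer stays exceeding 210 days. The Stay fields and the denominator choice are assumptions; the CBR’s published methodology governs the official calculation.

```python
from dataclasses import dataclass

@dataclass
class Stay:
    diagnosis_group: str        # assumed grouping, e.g. "cancer" or "non-cancer"
    length_of_stay_days: int

def nclos_over_210_pct(stays: list[Stay]) -> float:
    """Percent of non-cancer stays longer than 210 days (illustrative definition)."""
    non_cancer = [s for s in stays if s.diagnosis_group == "non-cancer"]
    if not non_cancer:
        return 0.0
    long_stays = sum(1 for s in non_cancer if s.length_of_stay_days > 210)
    return 100 * long_stays / len(non_cancer)

# Hypothetical mini-census: one long non-cancer stay out of two non-cancer stays
stays = [Stay("non-cancer", 250), Stay("non-cancer", 90), Stay("cancer", 300)]
print(f"{nclos_over_210_pct(stays):.1f}%")  # 50.0%
```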

How CBR Differs from PEPPER

It is helpful to view these two reports as different tools for the same goal:

| Feature | PEPPER | CBR / eCBR |
| --- | --- | --- |
| Primary Intent | Provides a broad risk profile across many areas at once. | Focuses on a specific billing trend with educational detail. |
| Result | A list of scores for 12+ different categories. | A specific label like “Significantly Higher” for one area. |
| Comparison | National, MAC, and State averages. | Usually State, Region/Jurisdiction, and National averages. |

Converting Comparative Data into Operational Action

The value of these reports lies in the establishment of a functional loop that transforms comparative data into focused action. Hospice leadership should treat PEPPER and CBR/eCBR results as signals tied to specific topics and timeframes. By comparing results against national or state benchmarks, leadership can determine whether a specific billing pattern requires continued monitoring or a formal investigation.

Determining the Response: Monitor vs. Investigate

A hospice agency that appears as a statistical outlier should view the data as a prompt for a focused internal review rather than evidence of an error. The Hospice PEPPER User’s Guide clarifies that these reports do not identify improper payments directly; instead, they serve as guides for auditing and monitoring billing changes over time. When a hospice agency looks materially different from its peers – particularly when that difference persists across multiple quarters – leadership should initiate a targeted investigation.

Maintaining a Narrow Scope

If an investigation is necessary, the review should remain strictly tied to the specific topic that triggered the signal. Effective hospice leadership avoids broad “audit everything” exercises in favor of targeted analysis:

  • Length of Stay (LOS) > 210 Days: The agency should review the long-stay patient population and the documentation patterns driving extended care.
  • GIP Average Length of Stay (ALOS): Leadership should focus on General Inpatient stays, discharge transitions, and the specific documentation supporting those inpatient days.

Closing the Loop

To finalize the process, the hospice agency must document the scope of the review, the specific findings, and any subsequent process changes. Re-checking the trend in the following cycle ensures these reports become a permanent part of the agency’s operating rhythm. CMS and MACs intend for these tools to function as educational resources that support a self-audit culture within the hospice organization.

Summary: Key Takeaways for Hospice Leadership

Effective compliance management requires utilizing both the PEPPER and the CBR as complementary tools for agency oversight. While the PEPPER serves as a broad indicator across many categories, the CBR functions as a targeted tool for identifying specific billing trends that may require immediate attention.

Hospice leadership should prioritize internal reviews for any metric labeled “Significantly Higher” or landing in the 90th percentile, as these results indicate the agency is a statistical outlier compared to its peers. By conducting narrow, focused chart audits based on these specific signals and documenting the findings within the QAPI process, a hospice agency can demonstrate a proactive approach to billing accuracy. Ultimately, treating the CBR as an educational resource rather than a threat allows leadership to refine workflows and improve documentation standards before external audits are initiated.

QAPI Documentation: How to Show Your Program is Active and Effective

Hospice leaders often understand that QAPI is required by CMS, but many do not know how to document the program in a way that proves it is genuinely active and effective. CMS surveyors want to see more than binders, charts, or paperwork. They are looking for documentation that demonstrates continuous, data-driven improvement that is tracked over time. In other words, during survey, they are not just evaluating documents; they are evaluating whether documentation reflects real action.

Why Documentation Matters

In the context of hospice QAPI, documentation is not about filling binders for the sake of compliance. It is about showing that the organization identifies problems, takes measurable action, analyzes results, and adjusts processes accordingly. CMS defines hospice QAPI as a data-based, objective approach to quality management that continuously monitors the outcomes of services, patient safety, and quality of care and requires that providers use this data to design and implement improvement projects when necessary.

To meet this standard, documentation must answer five questions clearly:

  • What was reviewed
  • What problem or risk was identified
  • What action was taken to address it
  • Whether that action made a difference
  • What the hospice will do next

If your documentation cannot answer these questions, CMS will not consider the QAPI program compliant, even if the hospice is working hard behind the scenes. The issue is often not that quality work isn’t happening; rather, the work is not being documented clearly enough to show its impact.

Common Documentation Pitfalls

Many hospices get caught in documentation traps that weaken QAPI. They may create binders filled with policies but no records of action, prepare meeting minutes that vaguely state “QAPI discussed” without meaningful content, collect data that is not reviewed or analyzed, or maintain checklists that are completed but not tied to improvement decisions.

These habits create the appearance of a QAPI program without actually demonstrating one. CMS surveyors are trained to recognize documentation that looks like performance but does not show performance.

Start With Defined Indicators

The first step in documenting an effective QAPI program is to begin with defined indicators that are measurable. These indicators form the basis of what the organization monitors and what is documented throughout the year. Examples include pain assessment and management outcomes, timeliness of visits, medication error rates, clinical documentation compliance, grievances or caregiver complaints, and family satisfaction trends. The mistake many hospices make is tracking too many indicators and losing the ability to review and act on them consistently. Monitoring a smaller number of indicators – five to ten well-selected metrics – is more manageable and provides a clearer picture of change over time.
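
As a concrete illustration, a focused indicator set can be kept as a simple registry like the sketch below. The indicator names, targets, and review cadences here are examples chosen for this sketch, not CMS requirements.

```python
# Illustrative indicator registry: a handful of metrics, each with a target and cadence.
# Names, targets, and cadences are assumptions for illustration only.
INDICATORS = [
    {"name": "Pain assessment completed within 48 hours", "target": ">= 95%", "cadence": "monthly"},
    {"name": "RN visits starting on time",                "target": ">= 90%", "cadence": "monthly"},
    {"name": "Medication error rate",                     "target": "<= 2%",  "cadence": "monthly"},
    {"name": "Documentation audit compliance",            "target": ">= 95%", "cadence": "quarterly"},
    {"name": "Family satisfaction (would recommend)",     "target": ">= 85%", "cadence": "quarterly"},
]

for ind in INDICATORS:
    print(f"{ind['name']:<45} target {ind['target']:<7} reviewed {ind['cadence']}")
```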

Show How Data Is Reviewed

Once indicators are established, documentation must show how the hospice reviewed data. This is where meeting minutes matter. They should include the date and time of review, the names or roles of participants present, the indicators that were reviewed, and the trends or variances noted. A clear example might read:

QAPI meeting held March 12, 2026. Reviewed late visit data for RN visits Jan–Feb 2026. Findings: 18% of scheduled visits started more than 15 minutes late. Geographic clustering identified in Zone 3. Attending: CEO, DON, QAPI Lead, RN Coordinator.

This simple statement shows activity, data, focus, and context, all elements that demonstrate that QAPI is functioning.

Document Root Cause Analysis, Not Blame

When a pattern or problem is identified, CMS expects hospices to document a root cause analysis. Root cause analysis is not about blame. Documentation should avoid language that points to individuals as “the problem.” Instead, it should focus on contributors such as workflow bottlenecks, documentation burden, staffing configurations, communication breakdowns, unclear policies, EMR inefficiencies, geographic routing challenges, or training needs.

Tools like “Five Whys” or Fishbone Diagrams can help identify these causes and show depth of analysis. For example, if nurses are repeatedly arriving late, documentation might state:

Primary contributing factor appears to be travel distance; route assignments have not been updated to reflect current census distribution. Documentation burden noted as secondary factor; RNs report medication review template adds charting time.

The goal is to show thoughtful analysis, not superficial assumptions.

Record Corrective Actions Taken

After the cause is understood, documentation must show what action was taken. This can be operational, educational, technological, or process-based, but it must be specific and measurable. Documentation should include the intervention chosen, the person responsible for implementing it, and the date it was initiated. For instance:

Action: Adjust RN territory assignments to reduce travel time and reallocate visits in Zone 3. Responsible: Director of Nursing and Operations Manager. Implementation date: March 15, 2026.

This tells the surveyor exactly who acted, what was done, and when. It also provides an anchor point for follow-up measurement.

Prove Results With Re-Measurement

Few steps are more important than re-measurement. This is where many hospices fail. QAPI work is not complete until the hospice checks whether the intervention worked — and documents the outcome. If an intervention does not lead to improvement, documentation should show that the hospice adapted or escalated the intervention rather than abandoning it. CMS does not expect hospices to fix everything on the first try; it expects them to document continuous improvement.

A strong re-measurement entry might read:

Re-measured late visit percentage on April 15, 2026. Post-intervention result: Late visits reduced to 9% in Zone 3; hospice-wide reduction to 12%. Action considered effective; monitoring quarterly going forward.

An Example of QAPI Documentation Done Well

When all these elements come together, they tell the story CMS is looking for. Consider a full improvement cycle: On January 20, a hospice identifies a 12% medication documentation error rate during chart audits. In February, EMR templates are revised and staff training is conducted. On March 5, re-measurement shows the error rate has dropped to 3%. This is the type of documentation that proves QAPI is not theoretical. It also shows the hospice is functioning with intention and accountability rather than reacting randomly.

Tools That Support Documentation

The tools used to track this information do not need to be complicated. QAPI meeting minutes, action logs, re-measurement logs, and simple trend charts can meet CMS expectations when used consistently. Many hospices find it helpful to maintain a single “QAPI Action Log” that lists each improvement project from start to finish. CMS offers examples, worksheets, and guidance documents on its website for providers who need structure.
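
A “QAPI Action Log” can be as simple as one record per improvement project, carried from finding through re-measurement. The following Python sketch shows one possible shape; the field names and the example entry are hypothetical, echoing the late-visit scenario above.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ActionLogEntry:
    """One improvement project, tracked start to finish (illustrative fields)."""
    indicator: str
    finding: str
    root_cause: str
    action: str
    owner: str
    started: date
    remeasure_due: date
    remeasured_result: Optional[str] = None   # filled in after re-measurement
    status: str = "open"                      # open -> monitoring -> closed

log = [
    ActionLogEntry(
        indicator="RN visit timeliness",
        finding="18% of visits started >15 min late (Jan-Feb 2026)",
        root_cause="Route assignments outdated for Zone 3 census",
        action="Adjust RN territory assignments",
        owner="Director of Nursing / Operations Manager",
        started=date(2026, 3, 15),
        remeasure_due=date(2026, 4, 15),
    )
]
print(log[0].status)  # open
```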

Final Takeaway

Ultimately, documentation should tell a story of how your hospice:

  • Found a risk or opportunity
  • Tested an intervention
  • Measured the result
  • Made further decisions based on what was learned

When this story can be followed easily and supported with evidence, a hospice has documentation that reflects an active and effective QAPI program. This is the level of clarity CMS expects — not perfection, but proof of progress.


How to Collect QAPI Data that Shows What Needs to Improve

In hospice, most organizations understand why Quality Assessment and Performance Improvement (QAPI) is required. What many do not understand is how to collect data in a way that reveals patterns, risks, and opportunities for improvement.

QAPI data collection does not mean saving every report, printing every dashboard, or drowning in spreadsheets. It means collecting the right information, in the right way, at the right time, so that it provides a story about what is happening inside the organization.

What Data Collection Should Accomplish

A hospice should collect data to answer three essential questions:

  • What is happening?
  • Is it getting better, worse, or staying the same?
  • Does it represent a risk to patients, operations, or regulatory compliance?

If the data that the agency is collecting does not help answer these questions, then either the data points are wrong or the method of collection needs to change.

The Most Common Mistake

Hospices often gather data only after a problem has already occurred – almost like autopsy work. That retrospective habit prevents improvement.

Data collection must happen before, during, and after issues appear. Only then can you identify trends and prevent problems instead of reacting to them.

QAPI data can be thought of like a heartbeat monitor: If a patient’s heartbeat is only monitored after the patient has coded, the clinical staff will not have the information that they need to successfully intervene.

What Data Collection Looks Like in Practice

A successful data collection process has three characteristics:

| Characteristic | What It Means |
| --- | --- |
| Consistent | Collected on a schedule (weekly/monthly/quarterly) |
| Accessible | Staff can enter information quickly without barriers |
| Actionable | Someone reviews it and can make decisions from it |

Data that is collected but never reviewed is not QAPI; it’s record-keeping.

A Realistic Example

Scenario: A hospice agency is receiving more calls from families stating that nurses are arriving late for scheduled visits.

This is a signal, and signals should trigger structured data collection.

Here is how the hospice agency should approach this in QAPI:

Step 1: Define the Data Point

What should be measured?

Scheduled visit time vs. actual arrival time

This must be collected the same way for every visit being reviewed.

Step 2: Create a Simple, Repeatable Tool

The agency does not need software to begin. A chart, form, or shared spreadsheet with the following columns is enough (a minimal code sketch follows):

| Patient ID | Date | Scheduled Time | Arrival Time | Late? (Yes/No) | Reason | Reported by | Notes |
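
Below is a minimal sketch of how one row of that sheet could be captured and the Late? column derived automatically. The 15-minute grace period and the field values are assumptions; each agency sets its own threshold.

```python
from datetime import datetime

GRACE_MINUTES = 15  # assumed threshold; each agency defines its own

def is_late(scheduled: str, arrival: str) -> bool:
    """Flag a visit as late when arrival exceeds the schedule by the grace period."""
    fmt = "%H:%M"
    delta = datetime.strptime(arrival, fmt) - datetime.strptime(scheduled, fmt)
    return delta.total_seconds() / 60 > GRACE_MINUTES

# One hypothetical row of the tracking sheet above
row = {
    "patient_id": "P-0042", "date": "2026-03-02",
    "scheduled": "09:00", "arrival": "09:25",
    "reason": "traffic", "reported_by": "RN Lopez", "notes": "",
}
row["late"] = is_late(row["scheduled"], row["arrival"])
print(row["late"])  # True (25 minutes past schedule)
```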

Step 3: Collect Data Over a Set Timeframe

The agency can decide on the timeframe over which data will be collected: two weeks? one month? one quarter?
The timeframe must be long enough to show a trend, but short enough to act quickly.

Step 4: Analyze

After the data is collected, review what happened; an aggregation sketch follows the questions below.

| Question | Why It Matters |
| --- | --- |
| How often are nurses late? | Shows severity |
| Are the same nurses repeatedly late? | Training or workload issue |
| Are late visits tied to geography or routing? | Scheduling issue |
| Are delays tied to documentation load? | Workflow burden issue |
| Does lateness correlate with patient complexity? | Staffing model issue |
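
A hedged sketch of how those questions can be answered from the collected rows follows; the visit tuples are hypothetical.

```python
from collections import Counter

# Hypothetical collected rows: (nurse, zone, late?) for the review period
visits = [
    ("RN-A", "Zone 3", True), ("RN-A", "Zone 3", True),
    ("RN-B", "Zone 1", False), ("RN-C", "Zone 3", True),
    ("RN-B", "Zone 2", False), ("RN-C", "Zone 1", False),
]

late_by_nurse = Counter(nurse for nurse, _, late in visits if late)
late_by_zone = Counter(zone for _, zone, late in visits if late)
overall_rate = 100 * sum(late for *_, late in visits) / len(visits)

print(f"Overall late rate: {overall_rate:.0f}%")  # severity
print("By nurse:", late_by_nurse)                 # repeated-nurse pattern?
print("By zone:", late_by_zone)                   # geographic clustering?
```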

Step 5: Intervene

Example findings → Example actions:

| Findings | Action |
| --- | --- |
| Late arrivals cluster in one region | Adjust territory planning |
| Late due to excessive documentation time | Modify EMR workflow or training |
| Late due to visit volume | Reevaluate caseload standards |
| Late due to travel time | Redraw service area or change routing |

Step 6: Re-Measure

Intervention is not improvement unless data proves it. After the intervention is implemented, the agency must measure again to confirm whether lateness improved. If it did — fantastic. If not — the agency needs to try a new intervention.
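
One way to make that re-measurement decision explicit is sketched below; the target threshold and the decision wording are assumptions for illustration, not a CMS rule.

```python
def remeasure(baseline_pct: float, post_pct: float, target_pct: float) -> str:
    """Compare pre- and post-intervention rates against the agency's own target."""
    if post_pct <= target_pct:
        return "effective - move to routine monitoring"
    if post_pct < baseline_pct:
        return "improving - continue intervention and re-measure next cycle"
    return "not effective - escalate or try a new intervention"

# Hypothetical numbers: lateness fell from 18% to 9% against a 10% target
print(remeasure(baseline_pct=18.0, post_pct=9.0, target_pct=10.0))
```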

This is how QAPI proves effectiveness.

Why This Method Works

This process does three crucial things:

  • Turns perception (“families say nurses are late”) into measurement
  • Turns measurement into insight (“where, when, why?”)
  • Turns insight into action (“fix the problem in the system, not the person”)

When to Start

Intervening on identified problems does not require software systems or large volumes of data. If a hospice is waiting for the “perfect data system,” then the hospice is waiting too long.

  • Start small.
  • Start with one data point.
  • Start even if the first round is messy.

QAPI success begins with a mindset change — not a software purchase.

Takeaway

A hospice agency does not require large volumes of data in order to address the issues it identifies. All that is needed is data that is collected consistently and reviewed with purpose. Data collection is not about volume. It is about visibility.

When data starts showing patterns, it offers the power to prevent problems instead of reacting to them.

What is the Hospice Quality Assessment and Performance Improvement Program?

A hospice Quality Assessment and Performance Improvement (QAPI) program is the formal system a hospice uses to understand how well it is functioning, where it is at risk, and how it will improve over time. Under 42 CFR § 418.58, CMS requires every hospice to maintain an ongoing, hospice-wide, data-driven program that evaluates the quality and safety of care and takes deliberate action when improvement is needed. In practical terms, a QAPI program is not a set of reports or a compliance binder; it is the structured way a hospice identifies problems, analyzes why they occur, implements changes, and checks whether those changes actually improve care for patients and families.

While the regulation under 42 CFR § 418.58 describes what CMS expects, it does not specify how to build a functioning QAPI program from scratch. The good news is that CMS is not looking for a perfect system. It is looking for a repeatable structure that allows the hospice to identify risk, improve care, and demonstrate learning over time.

The most successful hospice QAPI programs start by putting structure in place before worrying about metrics or dashboards.

What does QAPI mean?

At its core, QAPI combines two key components: Quality Assurance (QA) and Performance Improvement (PI). Quality Assurance focuses on setting and maintaining standards of care, while Performance Improvement is about fixing systemic or recurring problems in those care processes. Together, they form a comprehensive, data-driven approach that involves everyone in the organization – clinicians, administrators, and support staff – in practical problem-solving and care enhancement activities. This makes QAPI more than just a regulatory requirement; it is an organized way of doing business that builds quality into every level of hospice operations.

What is the scope of a QAPI program?

A hospice QAPI program must be hospice-wide, meaning it must cover all services that affect patient care including clinical services, psychosocial and spiritual care, interdisciplinary group functioning, documentation systems, safety processes, and services provided under contract. The scope of the hospice QAPI program must be defined in writing. The written scope becomes the anchor when questions arise later about whether an issue belongs in QAPI.

The CMS Conditions of Participation require that hospices “collect and analyze patient care and administrative quality data and use that data to identify, prioritize, implement, and evaluate performance improvement projects to improve the quality of services furnished to hospice patients.” This emphasizes the importance of using objective data to show improvement in outcomes, care processes, satisfaction, or other performance indicators.

How does the QAPI program work?

A QAPI program begins with data collection. The objective of the data collection is not to accumulate paperwork. Rather, the objective is to reveal patterns, risks, and opportunities for improvement. This can include clinical outcomes, documentation audits, incident reports, grievances, and patient or caregiver feedback. What matters most is that the data allows the hospice to answer key questions:

  • What is happening?
  • How often is it happening?
  • Why is it happening?
  • What can we do to improve?

QAPI does not require a hospice agency to design a complex data dashboard. It requires identifying reliable data sources that already exist and deciding how they will be used and reviewed.

The agency can start by identifying a small set of core data inputs: patient outcomes, complaints and grievances, adverse events, utilization trends, documentation audits, and patient or family experience data. The goal is not volume; the goal is visibility. When data is reviewed consistently and discussed meaningfully, it becomes usable for improvement.

Identifying concerns and monitoring improvements

If an area of concern is identified, the hospice must design and implement an improvement strategy, evaluate the effectiveness of that intervention, and continue monitoring the results over time.

CMS does not require a specific improvement model, but it does expect hospice agencies to demonstrate that improvement efforts follow a logical process. The key is choosing an improvement cycle that is easily understood and repeatable and that does not require specialized staff training.

Most hospice agencies succeed by using this straightforward and repeatable sequence (sketched in code after the list):

  • Identify an issue using data
  • Analyze why it is happening
  • Implement a targeted change
  • Re-measure performance
  • Monitor whether improvement is sustained
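
A minimal sketch of that sequence as a single repeatable pass is shown below. The function signature and the “lower is better” assumption are illustrative; the agency supplies its own measurement, analysis, and implementation steps.

```python
def qapi_cycle(indicator, measure, analyze, implement):
    """One pass of the sequence above, with agency-supplied callables."""
    baseline = measure(indicator)          # identify an issue using data
    cause = analyze(indicator, baseline)   # analyze why it is happening
    implement(indicator, cause)            # implement a targeted change
    result = measure(indicator)            # re-measure performance
    improved = result < baseline           # assumes lower is better for this indicator
    return {"baseline": baseline, "result": result,
            "next_step": "monitor sustainment" if improved else "adjust and repeat"}

# Hypothetical usage with canned measurements (18% before, 9% after)
readings = iter([18.0, 9.0])
print(qapi_cycle("late visits", lambda i: next(readings),
                 lambda i, b: "routing", lambda i, c: None))
```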

The exact labels are less important than consistency. When the same cycle is used repeatedly, QAPI becomes easier to manage and easier to explain during survey.

What differentiates a strong QAPI program from a weak one is the ability to demonstrate measurable change. Hospice staff and leaders should be able to point to specific improvements that resulted from their QAPI efforts, backed by data over time. This could be a reduction in documentation errors, better pain control outcomes, improved timeliness of visits, or more positive caregiver feedback. These are all examples of real impacts that show the program is not just active, but also effective.

Governance of the QAPI program

CMS places responsibility for QAPI effectiveness on hospice leadership and the governing body. This does not mean that leadership must manage every detail of the QAPI program. What it does mean is that leadership must ensure QAPI operates consistently and has authority.

Leaders are responsible for ensuring that QAPI is integrated into the hospice agency’s policies, procedures, and culture. This includes establishing clear objectives, designating qualified individuals to oversee day-to-day activities, and allocating the resources necessary to support ongoing performance measurement and improvement. The governing body must review QAPI findings regularly and ensure that identified issues are addressed at the organizational level.

Hospice leadership must establish a standing QAPI structure with a regular meeting rhythm and interdisciplinary participation. This can be a formal QAPI committee or a standing agenda item within an existing quality or leadership meeting. What matters is not the name of the meeting, but that QAPI activities are reviewed consistently, decisions are documented, and leadership is aware of priorities and outcomes.

Document how the program operates, not just that it exists

Regulatory compliance is inseparable from solid documentation. CMS surveyors expect to see evidence that a QAPI program is active and effective. Documentation should clearly reflect what was reviewed, what issues were identified, what actions were taken to address those issues, and what the results were. These records should show the agency’s ability to track performance and demonstrate improvement over time.

A QAPI program that exists only in manuals or binders but lacks real, documented improvement activities will be seen as ineffective during survey. Strong documentation tells the story of improvement over time. It shows that QAPI is active rather than simply theoretical. This becomes critical during survey, when the hospice must demonstrate not only intent, but execution.

Why QAPI Matters Beyond Compliance

While QAPI is a regulatory requirement, its impact extends far beyond mere compliance. When implemented thoughtfully, a QAPI program becomes a strategic advantage for a hospice agency. It enhances care quality, strengthens patient and family satisfaction, and supports organizational resilience in a rapidly evolving healthcare environment.

A hospice that can continuously monitor performance, learn from data, and act proactively is better positioned to deliver high-value, person-centered care every day. In an era where quality reporting and public transparency are increasing – including through programs like the Hospice Quality Reporting Program (HQRP), which publicly reports data on hospice performance measures – hospices that embrace continuous improvement are likely to stand out in quality metrics and community reputation.

AI At the End of Life: Help, Not a Decider

End-of-life decisions are some of the hardest moments any family, clinician, or hospice team will ever face. Even when a patient has had candid conversations with loved ones, the reality of decline can feel different than anything imagined. When there is no advance directive or clear documentation of the patient’s wishes, those decisions become even more complex. Families may disagree, memories of past conversations may not align, and the clinical team is left trying to balance what is medically appropriate with what might honor the patient’s values. The result is often a mix of uncertainty, guilt, and emotional strain for everyone at the bedside.

This is the space where new data tools and artificial intelligence are starting to appear. Some models claim they can estimate what treatments a patient might choose at the end of life based on patterns in large data sets. Others aim to predict who is at higher risk of dying within a certain time frame, nudging clinicians to start goals-of-care conversations sooner or to consider hospice or palliative care earlier. For hospice and healthcare teams already stretched thin, it can be tempting to see these tools as a way to “solve” the hardest part of care: figuring out what to do when nothing is simple and time is short.

But there is a crucial distinction to hold onto: data and AI can support decision-making; they should not be the decision-maker. An algorithm might highlight that a patient shares characteristics with others who tended to decline aggressive interventions. It might flag that prognosis is shorter than it appears at first glance.

Yet it cannot sit with the family in their grief, it cannot understand a patient’s faith in the way a chaplain can, and it cannot weigh the quiet promises made at a kitchen table months or years before the illness progressed.

At best, AI can offer additional information, patterns, or prompts that help humans ask better questions. It cannot take away the responsibility – or the privilege – of truly listening to what matters most to the patient.

Ethical Challenges

This is where the ethical challenges begin to surface. If an AI model suggests that a patient “would not want” a particular treatment, how much weight should that suggestion carry, especially when there is no formal advance directive? If a clinician disagrees with the model’s output based on what they have heard from the patient or family, whose judgment should guide the plan of care? And if families hear that “the data says” their loved one would choose a certain path, will they feel free to disagree? Or, will they feel pressured by the perceived neutrality and authority of the algorithm? The more powerful and precise these tools appear, the more they risk subtly shifting who feels entitled to make the final call.

For clinical staff, the questions become deeply personal and practical. How will you integrate AI-generated risk scores or preference predictions into your bedside conversations without letting them overshadow your clinical intuition and your understanding of the patient’s story? When a model’s suggestion conflicts with what a patient or family is clearly expressing now, what will guide your next step? How might your moral distress change if a decision later comes into question and someone asks, “Why didn’t you follow what the algorithm recommended?” or, conversely, “Why did you rely on it so heavily?”

For administrators, AI at the end of life raises strategic and cultural questions. If your organization adopts tools that predict mortality or likely treatment preferences, how will that change workflows, staffing, and expectations around hospice and palliative care referrals? Will there be pressure – subtle or explicit – to align care patterns with what the data suggests, especially if payers or partner organizations see AI as a way to manage cost and utilization? How will you communicate to your teams, and to your community, that these tools are meant to inform compassionate care rather than to standardize deeply human decisions?

And for compliance and ethics leaders, AI adds new layers of risk and responsibility. If an AI recommendation influences an end-of-life decision, how should that be documented? What happens if patterns emerge showing that the tool performs differently across racial, cultural, or language groups? Who owns the responsibility to investigate and respond? Is there a point at which the use of AI in end-of-life decision-making should trigger explicit disclosure or consent from patients and families? And if your organization chooses not to use these tools while others do, could that one day be seen as a gap in standard of care – or as a principled stance on preserving human judgment?

End-of-Life Decisions Live in a Crowded Space

None of these questions have easy answers, and perhaps they shouldn’t. End-of-life decisions have always lived in a space where medicine, ethics, family, and faith meet. AI does not change that; it just adds a new voice into an already crowded room. The challenge for hospice and healthcare teams may not be whether to use these tools at all, but how to use them in a way that keeps the center of gravity firmly with the patient and those who know them best.

As AI continues to move closer to the bedside, each organization – and each role within it – will have to keep asking:

  • What do we want AI to do in end-of-life care, and what do we want to reserve for humans alone?
  • How will we notice if the technology meant to support us is quietly shaping decisions more than we realize?
  • And in the moments when nothing is clear and there is no advance directive to guide us, whose voice should carry the most weight: the algorithm’s, the family’s, the clinician’s, or the patient’s story as we have come to know it?

Hospice and palliative care have always been about making room for the hard questions. AI doesn’t take those questions away – it may simply give us new ones to live with.
