Continuous Quality Improvement Plan


Step 1: Assign responsibility
Step 2: Delineate the scope of care and service
Step 3: Identify important aspects of care and service
Step 4: Identify indicators
Step 5: Establish thresholds for evaluation
Step 6: Collect and organize data
Step 7: Initiate evaluation
Step 8: Take actions to improve care and services
Step 9: Assess the effectiveness of actions to assure improvement is maintained
Step 10: Communicate results to affected individuals and groups, and continue monitoring


The quality improvement plan is a method to promote continuous improvement in patient care. Through the quality improvement plan, medical and allied health professionals, service personnel, vendors, patients and other dialysis customers work together to improve the process of care that is delivered.

Each dialysis center/facility should maintain its own quality improvement plan. To assist with implementation, the Medical Review Board of The Renal Network recommends the following, which is based largely on recommendations from the Joint Commission on Accreditation of Healthcare Organizations and the Forum of ESRD Networks.

Used to improve all the processes of dialysis care, a quality improvement plan encompasses the entire organization, from the governing body to support services; it includes direct care providers and consumers, all of whom must be committed to a cooperative effort to improve care. The opportunities to improve care will mostly be found by examining the systems and processes by which care is provided; these processes generally cross areas of specialty or departmental lines, necessitating interdepartmental communication and coordination. Dialysis care will be improved by looking at the series of activities that comprise any key function in the dialysis center/facility.

The following are the key components of the quality improvement plan:

  • Identifying the most important aspects of care;
  • Measuring well-defined indicators linked to the aspects of care or service;
  • Evaluating the current process of care and implementing changes;
  • Monitoring the changes in the process for improvement;
  • Evaluating the outcomes and recommending future improvements; and
  • Maintaining improvements over time.

Monitoring and evaluation is the process by which care is, on an ongoing basis, assessed and improved. The process is described in the following ten steps:

  • Step 1: Assign responsibility;
  • Step 2: Delineate the scope of care and service;
  • Step 3: Identify important aspects of care and service;
  • Step 4: Identify indicators;
  • Step 5: Establish thresholds for evaluation;
  • Step 6: Collect and organize data;
  • Step 7: Initiate evaluation;
  • Step 8: Take actions to improve care and services;
  • Step 9: Assess the effectiveness of actions to assure improvement is maintained; and
  • Step 10: Communicate results to affected individuals and groups, and continue monitoring.

At the end of this document are three tables that illustrate this process. Table 1 presents the process in outline form and provides readers with an overview of monitoring and evaluation. Table 2 is a checklist to assist organizations in designing their monitoring and evaluation process. Table 3 presents a comparative synopsis of quality assurance and quality improvement.

The quality improvement team is responsible for implementing, documenting, monitoring and evaluating the improvement plan.

The medical director of the dialysis unit is ultimately responsible for all the quality improvement plans in the dialysis facility. The organization’s quality improvement plan team is composed of the organization’s leadership. Team membership for improvement projects is composed of key personnel specific to the individual improvement project.

The teams should meet as often as needed to ensure that the plans and individual projects are implemented, monitored, evaluated and maintained. Progress reports should be kept and disseminated on the plans and projects as they continue.


The dialysis facility/center leadership is responsible for (a) overseeing the design of and fostering an approach to continuously improving quality; (b) establishing quality improvement responsibilities in the organization; and (c) setting strategic priorities for quality assessment and improvement.

Organization leaders. Dialysis facility/center leadership is key to assuring an organization-wide commitment to improving quality and to assuring that quality improvement is given a high priority among the organization’s activities and that it includes those important processes that cross department/service lines.

Design and foster the approach to continuously improving quality. The dialysis facility/center leaders are responsible for overseeing the design of the organization’s approach to improving quality and for assuring that this approach is carried out.

Designing the approach to continuously improving quality includes determining:

  • how the items for ongoing monitoring will be chosen;
  • how feedback information regarding quality-related issues can be used to discern opportunities for improvement;
  • how data will be collected and organized;
  • how priorities for assessment and improvement methods will be established;
  • how assessment and improvement methods will be applied; and
  • how quality related information will be communicated throughout the organization.

The leaders must also determine how they and the staff learn the methods of quality improvement and foster the staff’s commitment to and involvement in the process.

Establish responsibilities in the organization. Once the process for continuously improving quality has been designed, the leaders must establish who will be responsible for carrying out the process. No matter how and by whom the process is carried out, the leaders continue to oversee and, as appropriate, engage in the activities.

Department/unit/service directors are responsible for seeing that the activities in their units are encompassed by the monitoring and evaluation activities.

Set priorities for assessment and improvement. Among the dialysis organization’s leaders’ most important responsibilities is overseeing the setting of priorities for assessment and improvement. Priority setting is based on a review of the findings from ongoing monitoring, as well as other feedback that may indicate an opportunity to improve the quality of care and service for patients and the organization; on that basis, priorities are set for evaluation and improvement.

For example:

  • Our dialysis organization’s leaders are the Chief Officers, the officers of the Board of Directors, the Board of Directors and the medical director.
  • Our organization’s leaders define its vision in a mission statement. "The mission of our dialysis unit is to promote and provide high quality dialysis care and services to patients, and to promote and provide a high quality physical, social and emotional environment for our patients, patient families, employees and visitors".
  • Our dialysis organization has designed a team concept to continuously improve the quality of care and services it provides to its customers. Our organization will employ the methods of continuous quality improvement to achieve our mission.
  • Our organization’s leaders will be responsible for supporting and sustaining the improvement process in our dialysis facility.

Based on a review of data, patient and employee information, industry and professional information, and federal, state and local regulations and requirements, priorities for assessing and improving care and services will be set.

All care and service the dialysis facility/center provides patients should be considered when setting priorities for ongoing monitoring.

Identifying important functions. One method of delineating the scope of care and service is to identify the organization’s key governance, managerial, clinical, and support functions. All departments and services should contribute to this delineation of key functions.

Important functions are those having the greatest effect on the quality of care a patient ultimately receives. Important functions include, but are not limited to, the following examples.

  • medical
  • biotechnology services
  • infection control
  • environmental services
  • scheduling
  • water maintenance
  • medication use
  • patient services
  • nursing
  • dietary
  • social services
  • administration
  • information management
  • purchasing
  • accounting/billing services

Identifying the activities performed in the organization. Another method of delineating the scope of care and service is to compile an inventory of the activities performed in the dialysis facility/center. Such an inventory can be based on a review of the following list of examples.

  • the types of patients served (e.g., age, race, sex)
  • the range of conditions & diagnoses treated
  • the range of activities involved in serving patients (including those services other than direct patient care)
  • the types of staff providing these activities
  • the sites where care and services are provided
  • the times when care and services are provided
  • high volume activities
  • high cost activities
  • activities strongly perceived as linked to quality

By identifying the activities and/or the functions that are provided, a dialysis facility/center can delineate its scope of care and services. Quality improvement focuses on the understanding and improvement of the functions and processes involved in the organization’s activities.

In this step, the dialysis unit selects the items for ongoing monitoring and determines its priorities.

Select the aspects of care that will be monitored. Using the scope of care and service as a basis, the organization must select aspects of care and service that are important for ongoing monitoring. These aspects of care may be important functions, procedures, treatments, processes or other activities that affect patient care. The choice of important aspects of care or service should be made by those who are experts in the areas under consideration, but the organization’s leaders should concur in the choice. The aspects of care chosen should be those believed to be most important to the quality of patient care and service. Organization resources and the importance to patients are important factors in determining what can and should be monitored. The following are examples of some possible important aspects of care and service:

This example pertains to monthly blood collection.

  • blood test orders on patient chart
  • blood test paper requisitions
  • blood sampling techniques
  • blood sample preparation/storage
  • blood sample results
  • missing/clotted samples
  • counseling of blood test results to patients, family, other staff

Establish priorities among important aspects of care and service. Organization resources and importance to the patient will need to be considered when setting priorities for important aspects of care.

This step involves selecting the performance measures for the important aspects of care and service.

What are indicators? Indicators are measures that can be used to monitor care and service; the measures can be related to the process or the outcome of care. Processes are the activities that act upon an "input" from a "supplier" to produce an "output" for a "customer". For example, the activities carried out by dialysis facility/center staff in their care for patients (including assessment, treatment planning, test ordering and interpretation, medication administration, performance of invasive procedures, and discharge planning) are "processes." Outcomes are the products of one or more of the processes, and could include complications, adverse events, and short-term and longer-term results of care and service.

Indicators are measures of specific, objective events, occurrences, facets of treatment, and so forth that provide information about the quality of a particular aspect of care or service. These measures provide information useful in assessing the quality of important aspects of care and service and directing attention toward opportunities for improvement. It should be possible to collect objective data pertaining to each indicator.

Identify teams. To assure that the most accurate and productive indicators are used, staff who are knowledgeable about the particular aspects of care or service (usually through direct involvement in them) work together. These "teams" of knowledgeable individuals may be interdepartmental or intradepartmental; each member may address one or more aspects of care or service to determine what measures would be most useful in monitoring them. In doing so, the team should consult authoritative sources, including health care and quality assurance/improvement literature, professional standards, applicable regulations, and their own or other staff’s experience. Choosing the best teams for indicator development helps all staff trust that the indicators are useful and accurate, resulting in greater confidence in, and cooperation with, the monitoring and evaluation process.

Teams develop indicators. Each team should develop a set of indicators for the important aspects of care and service. The indicator sets from multiple teams may be combined, and a final group used in the ongoing monitoring. The final set of indicators should avoid duplication and contain a number of indicators appropriate for the resources available.

The following examples list dialysis teams and care indicators that can be identified for improvement.

  • Social worker, patient representative, administrator, nurse and family member develop a customer comment card to monitor patient and family suggestions, comments and concerns.
  • Physician and nurse develop a central venous line indicator. The indicator is defined as the number of central lines used specifically for hemodialysis per month divided by the total number of hemodialysis patients. The indicator will be monitored monthly. For each patient with a central venous line, the following information will be collected: type, insertion date, removal date, inserter, facility where the procedure was performed, date and number of documented infections at the insertion site, date and number of blood infections, number of times each catheter clotted prior to initiation of a dialysis treatment, and any verified report of stricture.
  • Administrator, nurse and human resource specialist monitor monthly staff turnover. The indicator is defined as the number of employees that terminate employment every month divided by the total number of employees in the unit.
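Both rate-based indicators above reduce to simple ratios. A minimal sketch in Python, using hypothetical monthly figures rather than real unit data, shows the arithmetic:

```python
# Hypothetical monthly figures used only to illustrate the indicator arithmetic;
# real values would come from the unit's own records.
central_lines_in_use = 9   # central lines used for hemodialysis this month
total_hd_patients = 60     # total hemodialysis patients this month

terminations = 2           # employees who terminated employment this month
total_employees = 25       # total employees in the unit

central_line_rate = central_lines_in_use / total_hd_patients
turnover_rate = terminations / total_employees

print(f"Central line indicator: {central_line_rate:.1%}")   # 15.0%
print(f"Staff turnover indicator: {turnover_rate:.1%}")     # 8.0%
```

Either ratio can then be tracked month over month and compared against the thresholds established in Step 5.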

The organization establishes, for each indicator, a mechanism to determine when further evaluation must be triggered.

What is a threshold for evaluation? A threshold for evaluation helps the staff answer the question, "Based on these data, must we launch an intensive evaluation of this aspect of care?" A threshold for evaluation can be a predetermined important clinical event. For example, if "failure to check the dialysate for reuse disinfectant prior to initiation of therapy" is an indicator, the threshold may be set at 0%, so that each event triggers an evaluation. A threshold may also be a predetermined level of performance. For example, if clotted permanent catheters are used as an indicator, staff may set a level (e.g., 10% per week of all patients with this type of access) beyond which they believe further evaluation to identify opportunities for improvement must be undertaken. A threshold may also be a pattern or trend in the indicator data, such as a specified change in the rate of clotted AV accesses, or a specified difference in time spent post dialysis for bleeding between one shift of patients and the next. Staff may analyze past experience to statistically set upper and/or lower "control" limits (for example, the upper limit of time for a puncture site to clot is 30 minutes, the lower limit is 5 minutes), or they may evaluate customer needs to set upper and/or lower "specification" limits. No matter what exact form a threshold takes, it is a means to decide whether an investment of resources in intensive evaluation must occur.
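The threshold forms described above (a sentinel event, a predetermined performance level, and statistical control limits) amount to simple checks against indicator data. The sketch below is a hypothetical illustration; the function names, indicators, and figures are examples, not prescribed values:

```python
# Sketch of three threshold forms; all names and figures are hypothetical.

def sentinel_triggered(event_count: int) -> bool:
    """0% threshold: any occurrence of the sentinel event triggers evaluation."""
    return event_count > 0

def level_triggered(weekly_rate: float, threshold: float = 0.10) -> bool:
    """Predetermined performance level, e.g. 10% of patients with clotted catheters."""
    return weekly_rate > threshold

def control_limit_triggered(minutes_to_clot: float,
                            upper: float = 30.0, lower: float = 5.0) -> bool:
    """Statistical control limits, e.g. puncture-site clotting time in minutes."""
    return not (lower <= minutes_to_clot <= upper)

print(sentinel_triggered(1))          # True: one missed disinfectant check
print(level_triggered(0.12))          # True: 12% exceeds the 10% level
print(control_limit_triggered(18.0))  # False: within the 5-30 minute limits
```

Whatever the form, the check yields a yes/no answer to the same question: must intensive evaluation begin?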

Thresholds determine when evaluation must be initiated, but they are not the only way evaluation is initiated. Even when a threshold is not reached, staff may want to evaluate an aspect of care or service to determine whether variation can or should be reduced or whether the mean performance can be improved. (In addition, steps 6 & 7 discuss other feedback sources that may trigger an evaluation.)

Most indicators should have a threshold that will trigger further evaluation. Each team charged with developing indicators should establish related thresholds for evaluation. Health care literature, staff experience, past performance (historical data), and patient needs can all contribute to the decision on thresholds for evaluation.

For each indicator, data must be collected and organized so that those responsible can apply thresholds and determine when further evaluation is required.

Data sources for each indicator. The individual teams or groups who develop the indicators or thresholds are in the best position to identify sources of data pertaining to each indicator. This information helps to guide decisions as to which indicators will be most productive and which data-collection methodology is appropriate. The source of data will vary depending on the indicator, but the following list of examples describes some common places where quality-related data can be found.

  • Patient records
  • Laboratory reports
  • Incident reports
  • Medication sheets
  • Department logs
  • Death Forms/Autopsy Reports
  • Infection control reports
  • Water quality reports
  • Direct observation & measurements
  • Billing records
  • Utilization review reports
  • Network demographic reports
  • Inventory reports
  • Network comparison reports
  • Purchase orders (usage)
  • Patient responses/comments

Data-collection methodology. A data-collection methodology must be chosen and established for each indicator. When data collection follows the flow of patient care, it often involves different areas of expertise (e.g., dialysis facilities and laboratories), and a cross-specialty team must be involved. In these cases, leaders of the organization often oversee the design of data-collection methods. To minimize the investment of resources, the most efficient data-collection process should be established; one that takes into account data collection already being carried out in the organization.

Designing and establishing this methodology entails answering several related questions:

  • Who will collect the data?
  • Will collection be concurrent or retrospective?
  • Will sampling be appropriate?
  • Is data collection amenable to computer support?
  • How often will data be organized and compared with thresholds?
  • Who will organize the data?
  • How will the data be displayed?
  • Who will apply the thresholds for evaluation?

Collection of indicator data. To decide on who will collect data and how the data will be collected, responsible individuals should look at existing activities that involve data collection, including utilization review, Network database reports, billing, etc. Ideally, data collection for monitoring and evaluation would be integrated into an existing function rather than conducted separately.

Another consideration in the data collection decision is the level of knowledge necessary to collect the data. Some indicator data may be self-evident, while other data may require specialized knowledge to reliably collect.

Data collection can be retrospective or concurrent. Some data must be collected retrospectively; concurrent data collection can be effective when observation is coupled with the review of documents, allowing timely communication of quality-related information and minimizing the time involved in data collection. If sampling is appropriate for high-volume aspects of care or service, the sample size and sample-selection method should be established.

Persons designing the data-collection methodology also need to study the organization’s computer capabilities. Computerized data collection, especially in larger facilities/centers, may speed the process.

Findings must be reported at specific times to the individual or group responsible for organizing the data and applying the thresholds for evaluation. This activity could be performed by those who generate the data, by those who collect the data, or by an appropriate individual or group familiar with the threshold-for-evaluation concept and the specific thresholds; thresholds involving trends and patterns may be more difficult to apply than those with upper and lower limits or specific performance rates. The timing of the reports will depend on the specific aspect of care and service. Typically, reports are monthly, bimonthly, quarterly, semi-annual or annual. Sentinel events are usually reported as they occur.

A decision must be made whether the data, both from ongoing monitoring and other quality-related feedback, warrant initiation of further evaluation of the aspect of care and service.

Responsible individuals apply thresholds. At regular, specified times, the individual(s) responsible for applying the thresholds for evaluation should do so.

Setting priorities for evaluation. The findings from ongoing monitoring that show thresholds have been reached should be assessed, as well as other feedback (for example, patient satisfaction surveys, staff comments) that suggests opportunities for improvement may be present. Then, taking into consideration the effect on patient care and services as well as organization resources, priorities for further evaluation are set.

Convene teams for intensive evaluation. Those individuals who can best evaluate all facets of the particular aspect(s) of care and service are then brought together. This "team" may be the team who developed the indicators and thresholds, or another group with appropriate representation. When necessary, these teams should be composed of members from different departments and services, to assure that interdepartmental processes are considered.

Teams evaluate the aspects of care and service. Evaluation, including peer review when appropriate, should be used to determine whether there is an opportunity to improve care or service. In general, staff evaluating care should be attentive to opportunities for improvement involving systems, knowledge, and behavior.

One issue that can create conflict in evaluating care and service is determining exactly how performance can be improved. It is important to make decisions as objectively as possible. Many tools can help assure objectivity and aid understanding of the causes of observed performance; these include measures of processes and outcomes, cause-and-effect diagrams, Pareto diagrams, flow charts, run charts, histograms, scatter diagrams, department standards, protocols, practice guidelines, team member expertise, professional standards and guidelines, and pertinent health care literature.
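As one illustration of these tools, a Pareto analysis ranks recorded causes of an event by frequency so the team can focus on the few causes that account for most occurrences. A sketch in Python, with hypothetical cause labels and counts:

```python
from collections import Counter

# Pareto analysis sketch: rank recorded causes of an event by frequency.
# The cause labels and counts below are hypothetical illustrations.
causes = (["inadequate flushing"] * 9 + ["late heparin dose"] * 5 +
          ["catheter position"] * 3 + ["patient factors"] * 2 + ["unknown"] * 1)

counts = Counter(causes).most_common()   # sorted, most frequent first
total = sum(n for _, n in counts)

cumulative = 0
for cause, n in counts:
    cumulative += n
    print(f"{cause:22s} {n:3d}  {cumulative / total:6.1%} cumulative")
```

In this hypothetical tally, the top two causes account for 70% of events, which is the kind of finding a Pareto diagram makes visible at a glance.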

Teams should record in worksheets or meeting minutes their assessments, conclusions, recommendations and rationales.

For example:

The dialysis access team members were selected; the team included the medical director, three charge nurses representing the unit’s three patient shifts and two hemodialysis technicians. The access team defined the aim of their improvement project: to assess, monitor, and improve the central lines of the patients in our dialysis unit. The team will monitor the number of central line accesses in the unit on a weekly basis and compile monthly reports to be presented to the unit quality improvement plan team. The team collected historical information on patients with central lines in their unit for the previous six months. That analysis provided a base rate of central line accesses per patient population; the number of clotted central lines per unit of time per number of patients with central lines; the types of catheters in use; the percent of catheters clotted; patient-specific information; insertion-specific information; and confirmed and suspected catheter infections.

If evaluation identifies an opportunity for improvement, actions should be recommended and taken. Pilot projects or trials in a limited area are recommended before a system wide change is undertaken.

Teams recommend and/or take actions. The team evaluating the aspect of care or service should determine appropriate actions; depending on the aspect of care or service being evaluated, under certain circumstances these teams may take actions themselves and forward the results to the leaders. The actions should be directed toward the root causes and should have an eye toward overall improvement in the quality of care and service.

Some possible actions include the following examples:

  • For system issues – changes in communication channels, changes in organizational structures and processes, adjustments in staffing, and changes in equipment or chart forms;
  • For knowledge issues – inservice education, continuing education, making accessible data or scientific reports, and circulating informational material; and
  • For behavior issues – formal and informal counseling, changes in assignment, and disciplinary action.

Actions are determined and implemented. Under some circumstances, quality improvement teams and other staff persons may be empowered to select and implement actions. Under other circumstances, the leaders are responsible for the final determination of which actions to implement and for selecting who will implement them. The leaders may decide that the teams themselves should take certain actions (e.g. designing systems changes). Other actions may fall within the purview of service/unit/department chairpersons or may require an ad hoc group to implement the action plan.

For example:

The team established that the baseline rate of clotted accesses was 7%, with a standard deviation between weekly rates of 1%. The team set a threshold of 10% (three standard deviations above the baseline) to trigger an evaluation.
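The threshold arithmetic in this example, baseline plus three standard deviations of the weekly rates, can be checked in a few lines of Python (figures taken from the example itself):

```python
# Verifying the example's threshold arithmetic: baseline rate plus three
# standard deviations of the weekly rates (figures from the example above).
baseline_rate = 0.07   # 7% baseline rate of clotted accesses
weekly_sd = 0.01       # 1% standard deviation between weekly rates

threshold = baseline_rate + 3 * weekly_sd
print(f"Evaluation threshold: {threshold:.0%}")   # 10%

weekly_rate = 0.12     # a subsequent week's observed rate
print(weekly_rate > threshold)                    # True: evaluation is triggered
```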

The medical director presented the team’s findings at the monthly meeting, and individual team members presented the findings to all shifts at the bi-weekly staff meetings as part of the in-service programs. The team continued to monitor and to collect information on the unit’s process. The next week’s data showed a rate of 12%, and the team began to implement a change.

Based on the historical information and checklists, the team prioritized which parts of the process could be changed in order to improve the quality of central lines in the unit. The education of patient and staff regarding the in-center and home care of the access was identified as a process for improvement. The plan was defined as increasing the applied knowledge of central line care to unit staff, patients, family members and nursing home staff. The team set up specific education modules to address each of these groups.

The monitoring continued and implementation of the education modules commenced. Feedback from the initial modules showed that the terminology for patients, family members and nursing home staff needed to be revised and this was done. The feedback information showed that several other questions from nursing home staff needed to be addressed and this was reported to the quality improvement plan team by the medical director.

Monitoring and evaluation continues after actions are taken. It must be determined whether the actions actually improved care or service and that the improvement is maintained.

Staff review subsequent findings and recommend further action, if necessary. The findings from continued monitoring (or from special follow-up monitoring, for areas not subject to ongoing monitoring) will provide evidence to determine whether the actions were effective. Data from one or two monitoring periods may be necessary to make the determination. If care and service do not improve within the expected time, responsible persons could initiate further evaluation and determine further action. Responsible persons should be attentive to findings as they continue to be compiled; ongoing and follow-up monitoring should ultimately show that meaningful improvement is maintained.

Monitoring is continued. Ongoing monitoring should continue for the selected important aspects of care and service. When feedback from outside the ongoing monitoring process triggers evaluation, the leaders should decide whether, for example, subsequent patient satisfaction surveys will provide sufficient information. They may also decide that additional ongoing monitoring needs to be initiated; the team, indicators, thresholds and data sources would then be added to the quality improvement review process.

For example:


The rates of central lines continue to be monitored on a weekly basis; the rates of clotted accesses and infections have decreased over the last two months, and a new baseline rate of 4% has been established with a threshold of 7%.

To "close the loop" of monitoring and evaluation, the conclusions, recommendations, actions, and follow-up must be reported to the appropriate individuals and groups.

Information is disseminated. The involved team, as well as the organization leaders, should disseminate necessary information throughout the organization. In addition, the leaders and others will receive formal and informal comments, reactions and information from involved persons and groups on the effectiveness of monitoring and evaluation. The information should also be communicated to affected persons and groups.

For example:

The access team continues the monitoring and assessment of central line care in the unit. It communicates the findings and improvement projects to the unit staff, patients, family members, nursing home staff and the organization’s quality improvement plan team on a monthly basis.

The monitoring and evaluation process in continuous quality improvement builds on other quality programs in place at most dialysis facilities/centers, especially if the facility/center is JCAHO accredited. This quality improvement plan includes the following modifications and additions, primarily:

  • emphasis on the leadership role in improving quality;
  • expanding the scope of assessment and improvement activities beyond the strictly clinical to the interrelated governance, managerial, support and clinical processes that affect patient outcomes;
  • utilizing other sources of feedback (in addition to ongoing monitoring) to trigger evaluation and improvement of care and services;
  • organizing the assessment and improvement activities around the flow of patient care and services, with special attention to how the "customer and supplier" relationships between departments (as well as within departments) can be improved, rather than compartmentalizing activities within departments and services;
  • focusing first on processes of care and services rather than on the performance of individuals;
  • emphasizing continuous improvement rather than only solving identified problems; and,
  • maintaining improvement over time.

The monitoring and evaluation process continues to provide for ongoing assessment and improvement of care and services; the modifications allow the process to be more comprehensive, more clearly defined, and more effective in improving care and service for the patients of a dialysis organization.

Table ONE: Outline of the Monitoring and Evaluation Process
Source: "Using Indicator Data to Improve Quality of Care," Joint Commission on Accreditation of Healthcare Organizations (JCAHO), One Renaissance Boulevard, Oakbrook Terrace, Illinois, 1992.

  1. Assign responsibility
    1. Identify Organization leaders
    2. Design and foster approach to continuous improvement of quality
    3. Set priorities for assessment and improvement
  2. Delineate scope of care and service
    1. Identify important functions and/or identify the procedures, treatments and other activities performed in the organization
  3. Identify important aspects of care and service
    1. Determine the important functions, treatments, processes, and other aspects of care and services that warrant ongoing monitoring
    2. Establish priorities among the important aspects of care and service chosen
  4. Identify indicators
    1. Identify teams to develop indicators for the important aspects of care and service
    2. Indicators are selected
  5. Establish thresholds for evaluation
    1. Each team identifies thresholds for each indicator
    2. Thresholds are selected
  6. Collect and organize data
    1. Each team identifies data sources and data-collection methods for the recommended indicators
    2. The data-collection methodology is designed, including those responsible for collection, organization, and applying thresholds
    3. Collect data
    4. Organize data so thresholds for evaluation can be applied
    5. Collect data from other sources, including patient and staff surveys, comments, suggestions, and complaints
  7. Initiate evaluation
    1. Apply thresholds for evaluation to indicator data
    2. Initiate evaluation of aspect of care or service if threshold is reached
    3. Assess other feedback (e.g., staff suggestions, patient-satisfaction survey results)
    4. Priorities for evaluation are set
    5. Teams undertake intensive evaluation
  8. Take actions to improve care and service
    1. Teams recommend and/or take actions
  9. Assess the effectiveness of actions and assure improvement is maintained
    1. Assess to determine whether care and service have improved
    2. If not, further action is determined
    3. Steps 1 and 2 are repeated until improvement is achieved and maintained
    4. Monitoring is maintained and priorities for monitoring and the indicators are periodically reassessed
  10. Communicate results to relevant individuals and groups
    1. Teams forward conclusions, actions, and results to leaders and to affected individuals, committees, departments, and services
    2. Information is disseminated as necessary
    3. Leaders and others receive and disseminate comments, reactions, and information from involved individuals and groups
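Steps 4 through 7 of this outline describe a data-driven loop: indicators are selected, thresholds are established, indicator data are collected and organized, and intensive evaluation is initiated when a threshold is reached. As an illustration only (the indicator names and threshold values below are hypothetical and not drawn from this plan), the threshold-comparison step might be sketched as:

```python
# Hypothetical sketch of applying thresholds for evaluation to indicator data.
# Indicator names and threshold values are illustrative, not from this plan.

# Thresholds: the level at or above which intensive evaluation is triggered (step 5).
thresholds = {
    "central_line_infection_rate": 0.05,   # per patient-month (hypothetical)
    "missed_treatment_rate": 0.10,         # hypothetical
}

# Indicator data collected and organized for the reporting period (step 6).
indicator_data = {
    "central_line_infection_rate": 0.07,
    "missed_treatment_rate": 0.04,
}

def apply_thresholds(data, thresholds):
    """Return the indicators whose data reach or exceed their thresholds (step 7)."""
    return [name for name, value in data.items()
            if value >= thresholds[name]]

for name in apply_thresholds(indicator_data, thresholds):
    print(f"Threshold reached for {name}: initiate intensive evaluation")
```

In practice, as the plan notes, thresholds are not the only trigger: other feedback (patient and staff surveys, comments, complaints) may also prompt evaluation, and statistical methods for detecting patterns or trends are emphasized over simple cutoffs.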

Table TWO: Checklist for the Quality Improvement Process
Source: "Using Indicator Data to Improve Quality of Care," Joint Commission on Accreditation of Healthcare Organizations (JCAHO), One Renaissance Boulevard, Oakbrook Terrace, Illinois, 1992.

  • Have leaders been identified?
  • Has a structure been established for leaders to oversee quality-improvement activities?

Does the quality-improvement process include:

  • a delineation of the scope of care and services?
  • the important aspects of care and service to be monitored on an ongoing basis?
  • a procedure to develop and approve indicators and thresholds for each important aspect of care?
  • a data-collection methodology?
  • a procedure by which data are compared with thresholds?
  • a procedure by which other feedback (e.g., patient and staff surveys) may trigger further evaluation?
  • a procedure by which actions are taken to improve care and service?
  • a procedure by which care and service are reassessed and improvement is maintained?
  • a procedure by which all affected individuals and groups receive results of monitoring and evaluation?

This checklist pertains to designing, rather than performing, monitoring and evaluation. Conscientiously fulfilling all items on this checklist means the organization should have the basis for an effective monitoring and evaluation process for continuous quality improvement.

Table THREE: Distinctions Between the Monitoring and Evaluation Process as Currently Practiced and the Monitoring and Evaluation Process in the Context of Continuous Quality Improvement
Source: "Using Indicator Data to Improve Quality of Care," Joint Commission on Accreditation of Healthcare Organizations (JCAHO), One Renaissance Boulevard, Oakbrook Terrace, Illinois, 1992.

  1. Assign responsibility

    Current: Each department/service assigned responsibility for overseeing and carrying out monitoring and evaluation within the department.

    QI: The organization leaders oversee the design of and foster an approach to continuously improving quality which includes both intradepartmental and interdepartmental activities.

  2. Delineate scope of care and service

    Current: Each department/service delineated its separate scope of care.

    QI: The organization, as a whole or by department/service, delineates its scope of care and service.

  3. Identify important aspects of care and service

    Current: Each department/service identified its high-volume, high-risk, and problem-prone aspects of care.

    QI: The organization, as a whole or by department/service, identifies high-priority important functions, processes, treatments, activities, and so forth to be monitored.

  4. Identify indicators

    Current: Each department/service identified indicators to correspond to the important aspects of care.

    QI: Teams of experts, inter- or intradepartmental, identify indicators for the important aspects of care and service. Indicators pertaining to structures of care are no longer emphasized.

  5. Establish thresholds for evaluation

    Current: Each department/service established the level, pattern, or trend in data for each indicator that would trigger intensive evaluation.

    QI: Teams of experts establish the level, pattern, or trend in data for each indicator that will trigger intensive evaluation. Statistical methods are emphasized, as is the fact that thresholds are not the only way evaluation is triggered.

  6. Collect and organize data

    Current: The department/service or organization established a data-collection methodology.

    QI: The data-collection methodology often includes a means by which feedback from sources other than ongoing monitoring is used to indicate areas for evaluation and improvement.

  7. Initiate evaluation

    Current: Care was intensively evaluated only when the threshold for a given indicator was reached.

    QI: When thresholds are reached, and also when other feedback (e.g., patient reports, staff reports) identifies other opportunities for improvement, leaders set priorities for evaluation and establish teams, which evaluate the patient care or service function in question.

  8. Take actions to improve care and service

    Current: Those with authority within or outside the department/service took action, based on recommendations of those who evaluated the care.

    QI: Greater emphasis is placed on focusing actions on processes, especially the "hand offs" between departments/services.

  9. Assess the effectiveness of actions and assure improvement is maintained.

    Current: Continued monitoring determined whether actions were effective.

    QI: A greater emphasis is placed on assuring that improvement is sustained over time.

  10. Communicate results to affected individuals and groups.

    Current: Departments/services and functions reported results to the quality assurance program, which disseminated the findings as necessary.

    QI: Findings of those performing monitoring and evaluation are forwarded to the leaders and to affected individuals and groups. Leaders also disseminate information as necessary.

The Renal Network, Inc.
911 E. 86th Street, Suite 202
Indianapolis, IN 46240
Phone: (317) 257-8265
Fax: (317) 257-8291
Patient Line:
1 (800) 456-6919
Email: [email protected]

Last updated on: February 22, 2008