The Joint Commission: Regulatory Mandates

Our Mission:  To continuously improve health care for the public, in collaboration with other stakeholders, by evaluating health care organizations and inspiring them to excel in providing safe and effective care of the highest quality and value.

Vision Statement:  All people always experience the safest, highest quality, best-value health care across all settings.

 

The Joint Commission has accredited hospitals for more than 60 years and today it accredits approximately 4,023 general, pediatric, long term acute care, psychiatric, rehabilitation, and specialty hospitals, plus 366 critical access hospitals, which are accredited through a separate program. Approximately 77 percent of the nation’s hospitals are currently accredited by The Joint Commission, and approximately 88 percent of hospitals that are accredited in the United States are accredited by The Joint Commission.

Annual fees for hospitals are based on the type of hospital and weighted values for volume based on the types of service provided by a hospital. Customers receive the annual fee invoice in January of each year; new customers receive their annual fee invoice after submitting their application for accreditation. The on-site survey fee is billed within seven days of the survey’s completion. Health systems have the option to receive a corporate orientation or corporate summation; these can be conducted by the team leader or a team member, by video conference, or provided as a summation report. The Joint Commission posts its pricing schedule on the Joint Commission Connect extranet.

The Joint Commission: Sentinel Event Reporting

The Joint Commission adopted a formal Sentinel Event Policy in 1996 to help hospitals that experience serious adverse events improve safety and learn from those sentinel events. Careful investigation and analysis of Patient Safety Events (events not primarily related to the natural course of the patient’s illness or underlying condition), as well as evaluation of corrective actions, is essential to reduce risk and prevent patient harm. The Sentinel Event Policy explains how The Joint Commission partners with health care organizations that have experienced a serious patient safety event to protect the patient, improve systems, and prevent further harm.

A sentinel event is a Patient Safety Event that reaches a patient and results in any of the following:

  • Death
  • Permanent harm
  • Severe temporary harm and intervention required to sustain life

An event can also be considered a sentinel event even if the outcome was not death, permanent harm, or severe temporary harm requiring intervention to sustain life. See the list of specific events below.

Such events are called “sentinel” because they signal the need for immediate investigation and response. Each accredited organization is strongly encouraged, but not required, to report sentinel events to The Joint Commission. Organizations benefit from self-reporting in the following ways:

  • The Joint Commission can provide support and expertise during the review of a sentinel event.
  • Organizations have the opportunity to collaborate with a patient safety expert in The Joint Commission’s Sentinel Event Unit of the Office of Quality and Patient Safety.
  • Reporting raises the level of transparency in the organization and promotes a culture of safety.
  • Reporting conveys the health care organization’s message to the public that it is doing everything possible, proactively, to prevent similar patient safety events in the future.

Further, reporting the event enables “lessons learned” from the event to be added to The Joint Commission’s Sentinel Event Database, thereby contributing to the general knowledge about sentinel events and to the reduction of risk for such events. For more information, call the Sentinel Event Hotline at 630-792-3700.

Goals of the Sentinel Event Policy

The policy has the following four goals:

  1. To have a positive impact in improving patient care, treatment, and services and in preventing unintended harm
  2. To focus the attention of a hospital that has experienced a sentinel event on understanding the factors that contributed to the event (such as underlying causes, latent conditions and active failures in defense systems, or hospital culture), and on changing the hospital’s culture, systems, and processes to reduce the probability of such an event in the future
  3. To increase the general knowledge about patient safety events, their contributing factors, and strategies for prevention
  4. To maintain the confidence of the public, clinicians, and hospitals that patient safety is a priority in accredited hospitals

An event is also considered sentinel if it is one of the following:

  • Suicide of any patient receiving care, treatment, and services in a staffed around-the-clock care setting or within 72 hours of discharge, including from the hospital’s emergency department (ED)
  • Unanticipated death of a full-term infant
  • Discharge of an infant to the wrong family
  • Abduction of any patient receiving care, treatment, and services
  • Sexual abuse/assault of any patient receiving care, treatment, and services, when one or more of the following is present:
      • Any staff-witnessed sexual contact as described above
      • Admission by the perpetrator that sexual contact, as described above, occurred on the premises
      • Sufficient clinical evidence obtained by the hospital to support allegations of unconsented sexual contact
  • Any elopement (that is, unauthorized departure) of a patient from a staffed around-the-clock care setting (including the ED), leading to death, permanent harm, or severe temporary harm to the patient
  • Hemolytic transfusion reaction involving administration of blood or blood products having major blood group incompatibilities (ABO, Rh, other blood groups)
  • Rape, assault (leading to death, permanent harm, or severe temporary harm), or homicide of any patient receiving care, treatment, and services while on site at the hospital
  • Rape, assault (leading to death, permanent harm, or severe temporary harm), or homicide of a staff member, licensed independent practitioner, visitor, or vendor while on site at the hospital
  • Invasive procedure, including surgery, on the wrong patient, at the wrong site, or that is the wrong (unintended) procedure
  • Unintended retention of a foreign object in a patient after an invasive procedure, including surgery
  • Severe neonatal hyperbilirubinemia (bilirubin >30 milligrams/deciliter)
  • Prolonged fluoroscopy with cumulative dose >1,500 rads to a single field or any delivery of radiotherapy to the wrong body region or >25% above the planned radiotherapy dose

https://www.jointcommission.org/assets/1/6/SE_2017_CAMH.pdf

Joint Commission: Patient Safety Standards

The Role of Hospital Leaders in Patient Safety

Hospital leaders provide the foundation for an effective patient safety system by doing the following:

  • Promoting learning
  • Motivating staff to uphold a fair and just safety culture
  • Providing a transparent environment in which quality measures and patient harms are freely shared with staff
  • Modeling professional behavior
  • Removing intimidating behavior that might prevent safe behaviors
  • Providing the resources and training necessary to take on improvement initiatives

Safety Culture

A safety culture includes the following characteristics:

  • Staff and leaders that value transparency, accountability, and mutual respect.[4]
  • Safety as everyone’s first priority.[4]
  • Behaviors that undermine a culture of safety are not acceptable, and thus should be reported to organizational leadership by staff, patients, and families for the purpose of fostering risk reduction.
  • Collective mindfulness is present, wherein staff realize that systems always have the potential to fail and staff are focused on finding hazardous conditions or close calls at early stages before a patient may be harmed.
  • Staff do not view close calls as evidence that the system prevented an error but rather as evidence that the system needs to be further improved to prevent any defects.
  • Staff who do not deny or cover up errors but rather want to report errors to learn from mistakes and improve the system flaws that contribute to or enable patient safety events.
  • Staff know that their leaders will focus not on blaming providers involved in errors but on the systems issues that contributed to or enabled the patient safety event.
  • By reporting and learning from patient safety events, staff create a learning organization.

A Fair and Just Safety Culture

A fair and just safety culture is needed for staff to trust that they can report patient safety events without being treated punitively. In order to accomplish this, hospitals should provide and encourage the use of a standardized reporting process for staff to report patient safety events. This expectation is also built into The Joint Commission’s standards, which require leaders to provide and encourage the use of systems for blame-free reporting of a system or process failure or the results of proactive risk assessments. Reporting enables both proactive and reactive risk reduction. Proactive risk reduction solves problems before patients are harmed, and reactive risk reduction attempts to prevent the recurrence of problems that have already caused patient harm.

A fair and just culture takes into account that individuals are human, fallible, and capable of mistakes, and that they work in systems that are often flawed. In the most basic terms, a fair and just culture holds individuals accountable for their actions but does not punish individuals for issues attributed to flawed systems or processes. Refer also to the standard requiring that staff be held accountable for their responsibilities.

Effective Use of Data

Collecting Data

When hospitals collect data or measure staff compliance with evidence-based care processes or patient outcomes, they can manage and improve those processes or outcomes and, ultimately, improve patient safety.[25] The effective use of data enables hospitals to identify problems, prioritize issues, develop solutions, and track progress to determine success.[9] Objective data can be used to support decisions, to influence people to change their behaviors, and to comply with evidence-based care guidelines.[9, 26] The Joint Commission and the Centers for Medicare & Medicaid Services (CMS) both require hospitals to collect and use data related to certain patient care outcomes and patient harms. Some key Joint Commission standards related to data collection and use require hospitals to do the following (a brief illustrative calculation follows the list):

  • Collect information to monitor conditions in the environment
  • Identify risks for acquiring and transmitting infections
  • Use data and information to guide decisions and to understand variation in the performance of processes supporting safety and quality
  • Have an organization-wide, integrated patient safety program within their performance improvement activities
  • Evaluate the effectiveness of their medication management system
  • Report (if using Joint Commission accreditation for deemed status purposes) deaths associated with the use of restraint and seclusion
  • Collect data to monitor their performance
  • Improve performance on an ongoing basis
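
As a simple illustration of the “collect data to monitor performance” idea above, the sketch below computes a monthly compliance rate for a hypothetical evidence-based care process measure. This is only a minimal, illustrative example: the measure name, record fields, and data are invented and do not come from Joint Commission or CMS measure specifications.

    # Minimal, illustrative sketch: computing a compliance rate for a
    # hypothetical evidence-based care process measure. The measure name,
    # record fields, and data below are invented for illustration only.

    def compliance_rate(records, process_key):
        """Return the fraction of eligible records where the care process was completed."""
        eligible = [r for r in records if r.get("eligible", False)]
        if not eligible:
            return None  # no eligible patients this period; nothing to report
        completed = sum(1 for r in eligible if r.get(process_key, False))
        return completed / len(eligible)

    # Hypothetical monthly records for a made-up measure
    records = [
        {"eligible": True, "vte_prophylaxis_given": True},
        {"eligible": True, "vte_prophylaxis_given": False},
        {"eligible": False, "vte_prophylaxis_given": False},
        {"eligible": True, "vte_prophylaxis_given": True},
    ]

    rate = compliance_rate(records, "vte_prophylaxis_given")
    print(f"Compliance this month: {rate:.0%}")  # -> Compliance this month: 67%

In practice, hospitals report such measures according to the detailed measure specifications published by The Joint Commission and CMS rather than ad hoc calculations like this one; the point here is simply that tracking a rate over time is what turns raw records into data that can guide improvement.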

A Proactive Approach to Preventing Harm

A proactive approach to reducing risk offers the following benefits:

  • Identification of actionable common causes
  • Avoidance of unintended consequences
  • Identification of commonalities across departments/services/units
  • Identification of system solutions

Encouraging Patient Activation

  • Patient safety guides all decision making.
  • Patients and families are partners at every level of care.
  • Patient- and family-centered care is verifiable, rewarded, and celebrated.
  • The licensed independent practitioner responsible for the patient’s care, or his or her designee, discloses to the patient and family any unanticipated outcomes of care, treatment, and services.
  • Though Joint Commission standards do not require apology, evidence suggests that patients benefit—and are less likely to pursue litigation—when physicians disclose harm, express sympathy, and apologize.
  • Staffing levels are sufficient, and staff have the necessary tools and skills.
  • The hospital has a focus on measurement, learning, and improvement.
  • Staff and licensed independent practitioners must be fully engaged in patient- and family-centered care as demonstrated by their skills, knowledge, and competence in compassionate communication.

 

Joint Commission

Founded in 1951, The Joint Commission evaluates and accredits nearly 21,000 health care organizations and programs in the United States. An independent, not-for-profit organization, The Joint Commission is the nation’s oldest and largest standards-setting and accrediting body in health care. To earn and maintain The Joint Commission’s Gold Seal of Approval®, an organization undergoes an on-site survey by a Joint Commission survey team at least every three years. (Laboratories are surveyed every two years.)

The Joint Commission is governed by a 32-member Board of Commissioners that includes physicians, administrators, nurses, employers, quality experts, a consumer advocate and educators. The Joint Commission employs approximately 1,000 people in its surveyor force, at its central office in Oakbrook Terrace, Illinois, and at an office in Washington, D.C.

Standards development process
Joint Commission standards are developed with input from health care professionals, providers, subject matter experts, consumers, government agencies (including the Centers for Medicare & Medicaid Services) and employers. They are informed by scientific literature and expert consensus and reviewed by the Board of Commissioners. New standards are added only if they relate to patient safety or quality of care, have a positive impact on health outcomes, meet or surpass law and regulation, and can be accurately and readily measured. The standards development process includes the following steps:

  • Emerging quality and safety issues suggesting the need for additional or modified requirements are identified through the scientific literature or discussions with The Joint Commission’s standing committees and advisory groups, accredited organizations, professional associations, consumer groups or others.
  • The Joint Commission prepares draft standards using input from technical advisory panels, focus groups, experts and other stakeholders.
  • Draft accreditation standards are reviewed by field-specific Professional and Technical Advisory Committees (PTACs), which are composed of outside experts. Both accreditation and certification standards are reviewed by the Standards & Survey Procedures (SSP) Committee, a committee of the Board of Commissioners.
  • The draft standards are distributed nationally for review and made available for comment on the Standards Field Review page of The Joint Commission website.
  • If indicated, the draft standards are revised and again reviewed by the appropriate experts and/or PTACs.
  • The draft standards are approved by the SSP Committee and provided to the Board for a comment period. Once that period of time has passed, the standards are final, unless the Board seeks further discussion.
  • The survey process is enhanced, as needed, to address the new standards requirements, and surveyors are educated about how to assess compliance with the new standards.
  • The approved standards are published for use by the field.
  • Once a standard is in effect, ongoing feedback is sought for the purpose of continuous improvement.

Systems Thinking

In our complex system, we use nonlinear thinking to understand how things work. We try to understand the relationships—human and operational—within a system. To improve our complex system, we need a new mindset to expand our understanding of work, patients, and co-workers. That new mindset is systems thinking.

A systems thinker sees how the parts of an organization interact and how effectively people are working together. This new way of thinking permits us to see things we didn’t see before. Expanded thinking allows us to recognize and imagine ways of solving problems by grasping entire processes and systems. Such thinking also reinforces the idea that the whole is greater than the sum of its parts.

Systems thinking is fundamental to quality improvement, which requires a unity of purpose. Having a unified or shared purpose allows individuals and departments to come together, so all energy is directed toward achieving a single goal. Systems thinking creates a drive for never-ending improvement. It instills a sense of doing good work and learning to do it better while we work.

https://www.americannursetoday.com/improving-health-care-with-systems-thinking/

Systems thinking utilizes habits, tools and concepts to develop an understanding of the interdependent structures of dynamic systems. When individuals have a better understanding of systems, they are better able to identify the leverage points that lead to desired outcomes.

  • Systems thinking is a management discipline that concerns an understanding of a system by examining the linkages and interactions between the components that comprise the entirety of that defined system.
  • The whole system is a systems thinking view of the complete organisation in relation to its environment. It provides a means of understanding, analysing and talking about the design and construction of the organisation as an integrated, complex composition of many interconnected systems (human and non-human) that need to work together for the whole to function successfully.
  • Whole systems are composed of systems, the basic unit, which comprise several entities (e.g. policies, processes, practices and people) and may be broken down into further sub-systems.
  • Systems may be thought about as having clear external boundaries (closed) or having links with their environment (open). An open systems perspective is the more common and realistic.
  • The boundaries of a whole system may be chosen and defined at a level suitable for the particular purpose under consideration; e.g. the education system or a complete school system.
  • Similarly, systems can be chosen and defined at different levels and can operate alongside each other as well as hierarchically; e.g. the finance system, the decision-making system, the accountability system.
  • An organisation as an entity can suffer systemic failure. This occurs in the whole system or high-level system where there is a failure between and within the system elements that need to work together for overall success.
  • Factors in systemic failure may include confused goals, weak system-wide understanding, flawed design, individual incentives that encourage loyalty to sub-ordinate (rather than super-ordinate) goals, inadequate feedback, poor cooperation, lack of accountability, etc.
  • Whole system success requires a performance management system that is pitched above the level of individual systems and their functional leadership. Features may include group or team-level goal-setting, development, incentives, communication, reviews, rewards, accountability. The aim is to focus on what binds individuals together and what binds systems together rather than functional silo performance.
  • Whole system failure may co-exist alongside functional success. The leadership of silos may individually be successful but not be sufficiently integrated into the whole system owing to a shortcoming of systems design, management or understanding.
  • A whole system can succeed only through managers collaborating in and across a number of functional systems. The whole system can fail only if leadership at the level of the whole system fails, and where several senior managers are involved. Hence, such failure may be labelled a systemic failure of leadership.
  • In cases of systemic failure, individual executives who operate at a lower sub-system level may be free of responsibility and blame. They may argue (correctly) that it was the wider system that failed. They may claim that particular systems that integrate with their own work let them down. However, responsibility and accountability for the successful design and running of the (integrated) ‘whole system’ should rest somewhere.
  • Understanding and anticipating how the whole system is intended to work, actually works, and how it may buckle under pressure, can practically elude and defeat most executives. To avoid censure for this tough challenge, they sometimes seek recourse to the often hollow mantra “lessons will be/have been learned”. They also try to divert attention and reassure investors by referring to a single bad apple (e.g. a ‘rogue trader’), behind which usually lurks a systemic failure.
  • The leadership challenge is accentuated by the realisation that for every legitimate, official or consciously designed system (which is intended to be and is supposedly rational) there is a shadow system. The shadow system is where all the non-rational issues reside; e.g. politics, trust, hopes, ambitions, greed, favours, power struggles, etc.
  • The system can confuse, overpower, block, and fail leadership. But leadership can fail the system. A major failure of leadership within, across or down an organisation is referred to as ‘systemic’.

(Extract from Chapter 12 ‘Leadership and Systems’ in The Search for Leadership: An Organisational Perspective, Triarchy Press)

 

In global health, we are concerned with both theory and practice, and are in need of models that match the complex conditions in which we work. A common thread of all these theories, methods, and tools is the idea that the behavior of systems is governed by common principles that can be discovered and expressed. They are all helpful in trying to conceptualize the systems in place. Some are more focused on ways to change the system to produce better outcomes. In using these theories, methods, and tools, we are reminded by the statistician George EP Box that “all models are wrong, but some are useful” [35]. It is to these uses that we now turn.

In much of public health and medicine, we use research evidence on the efficacy of interventions to inform decisions with an expectation about their future effect. Some systems thinking methods and tools, such as scenario planning, can also be used to explicitly forecast future events. However, even then, such methods are intended to be used for identifying possible outcomes to provide insights on how to prepare for them rather than fixing on any particular outcome.

In his landmark address on “Why Model?”, which provided inspiration for this essay, Joshua Epstein identified 16 reasons other than prediction for building models [36]. Most of these reasons are applicable to systems thinking more broadly. Many of these specific reasons relate to being able to explain how things work, and systems thinking is particularly useful for explaining how complex systems work. Many of these models can be used for testing the viability of policy interventions in a safe and inexpensive way – agent-based models, system dynamics models, and scenario planning are particularly useful for these purposes. In this journal supplement, for example, Bishai et al. present a very simple system dynamics model to illustrate the trade-offs and unintended consequences of policy choices related to the allocation of resources between preventive and curative services [29].
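
To make the idea concrete, here is a deliberately tiny stock-and-flow sketch in the spirit of a system dynamics model. It is not the Bishai et al. model; the population sizes, rates, and budget effects are invented solely to show how shifting a fixed budget between prevention and cure produces different long-run outcomes.

    # Toy stock-and-flow sketch in Python (not the Bishai et al. model).
    # A population moves between "healthy" and "sick" stocks; a fixed budget
    # split between prevention (lowers the rate of becoming sick) and cure
    # (raises the recovery rate) illustrates the trade-off. All numbers are invented.

    def simulate(prevention_share, years=20, budget=1.0):
        healthy, sick = 900.0, 100.0
        prevention = budget * prevention_share
        cure = budget * (1.0 - prevention_share)
        for _ in range(years):
            incidence_rate = 0.20 / (1.0 + 2.0 * prevention)  # prevention lowers new cases
            recovery_rate = 0.30 + 0.40 * cure                # cure speeds recovery
            new_sick = healthy * incidence_rate
            recovered = sick * recovery_rate
            healthy += recovered - new_sick
            sick += new_sick - recovered
        return sick

    for share in (0.0, 0.5, 1.0):
        print(f"prevention share {share:.0%}: about {simulate(share):.0f} people sick after 20 years")

Even in this invented example, a mixed allocation leaves fewer people sick in the long run than either extreme, which is exactly the kind of trade-off and unintended consequence such models are used to surface.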

Systems thinking approaches can also provide guidance on where to collect more data, or raise new questions and hypotheses. The methods and tools help us to make explicit our assumptions, identify and test hypotheses, and calibrate our models against real data. One of the frustrations of health planners and researchers has been the finding that interventions shown to be effective at small scale or in a research setting cannot simply be replicated at large scale or extended to reach the populations that are most vulnerable. Systems thinking methods and tools are increasingly being used to explain epidemics and to inform programmatic expansion efforts [5, 6].

One of the more compelling reasons to use systems thinking approaches is to inspire a scientific habit of mind. Beyond the contributions of any particular theory, method, or tool, the practice of systems thinking can reinforce what Epstein calls a “militant ignorance”, or a commitment to the principle of “I don’t know” as a basis for expanding scientific knowledge. Systems thinking adds to the theories, methods, and tools we otherwise use in global health, and provides new opportunities to understand and continuously test and revise our understanding of the nature of things, including how to intervene to improve people’s health. And for those who value thinking and doing in global health, that can only be a good thing.

https://health-policy-systems.biomedcentral.com/articles/10.1186/1478-4505-12-51

Situational Briefing Model (SBAR)

SBAR: Situation – Background – Assessment – Recommendation

The SBAR (Situation-Background-Assessment-Recommendation) technique provides a framework for communication between members of the health care team about a patient’s condition. SBAR is an easy-to-remember, concrete mechanism useful for framing any conversation, especially critical ones, requiring a clinician’s immediate attention and action. It allows for an easy and focused way to set expectations for what will be communicated and how between members of the team, which is essential for developing teamwork and fostering a culture of patient safety.

The purpose of the situational briefing model is to eliminate poor communication, a root cause of many adverse events.

S – Situation: What is going on with the patient? What is the situation you are calling about? This includes patient identification information, code status, vitals, and the nurse’s concerns.

  • Identify self, unit, patient, room number.
  • Briefly state the problem, what is it, when it happened or started, and how severe.

B – Background: What is the key clinical background or context?

Pertinent background information related to the situation could include the following:

  • The admitting diagnosis and date of admission
  • List of current medications, allergies, IV fluids, and labs
  • Most recent vital signs
  • Lab results: provide the date and time the test was done and the results of previous tests for comparison
  • Other clinical information
  • Code status

A – Assessment: What do I think the problem is? What is the nurse’s assessment of the situation? Here the nurse indicates what he or she believes to be the problem based on the patient’s history and current assessment.

R – Recommendation: What do I recommend or what do I want you to do? What is the nurse’s recommendation, or what does he or she want?

Physician follow-up actions are suggested, including possible tests.

Examples:

  • Notification that patient has been admitted
  • Patient needs to be seen now
  • Order change
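
As an illustrative aside, the four SBAR elements described above can be captured as a simple structured record so that a briefing is composed the same way every time. This is only a hypothetical sketch; the field names and example values are invented and this is not a Joint Commission or IHI tool.

    # Minimal sketch: capturing the four SBAR elements as a structured record
    # so a briefing is composed consistently. Field names and the example
    # values are hypothetical and for illustration only.
    from dataclasses import dataclass

    @dataclass
    class SBARReport:
        situation: str        # who, what, when it started, how severe
        background: str       # diagnosis, meds, allergies, recent vitals, code status
        assessment: str       # what the nurse believes the problem is
        recommendation: str   # what the nurse wants done, and how urgently

        def briefing(self) -> str:
            return ("SITUATION: " + self.situation + "\n"
                    "BACKGROUND: " + self.background + "\n"
                    "ASSESSMENT: " + self.assessment + "\n"
                    "RECOMMENDATION: " + self.recommendation)

    example = SBARReport(
        situation="RN Lee, 4 West, Mr. Doe in room 412: new shortness of breath starting 20 minutes ago.",
        background="Admitted two days ago for pneumonia; on IV antibiotics; last SpO2 88% on room air; full code.",
        assessment="I think he is deteriorating and may be hypoxic.",
        recommendation="Please see the patient now; consider a chest X-ray and an arterial blood gas.",
    )
    print(example.briefing())

Printing the record yields the briefing in the fixed Situation, Background, Assessment, Recommendation order; the value of SBAR lies in that consistent structure, not in any particular tooling.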

Hindsight Bias

Hindsight bias, also known as the knew-it-all-along effect or creeping determinism, is the inclination, after an event has occurred, to see the event as having been predictable, despite there having been little or no objective basis for predicting it. It is a multifaceted phenomenon that can affect different stages of designs, processes, contexts, and situations. Hindsight bias may cause memory distortion, where the recollection and reconstruction of content can lead to false theoretical outcomes. It has been suggested that the effect can cause extreme methodological problems while trying to analyze, understand, and interpret results in experimental studies. A basic example of the hindsight bias is when, after viewing the outcome of a potentially unforeseeable event, a person believes he or she “knew it all along”. Such examples are present in the writings of historians describing outcomes of battles, physicians recalling clinical trials, and in judicial systems trying to attribute responsibility and predictability of accidents.[4]

Hindsight bias has both positive and negative consequences. The bias also plays a role in decision-making within the medical field.

Positive

A positive consequence of hindsight bias is an increase in one’s confidence and performance, as long as the bias distortion is reasonable and does not create overconfidence. Another positive consequence is that one’s self-assurance in one’s knowledge and decision-making, even if it ends up being a poor decision, can be beneficial to others, allowing them to experience new things or to learn from those who made the poor decisions.[31]

Negative

Hindsight bias decreases rational thinking, particularly when a person experiences strong emotions, which further diminish rational decision-making. Another negative consequence of hindsight bias is interference with one’s ability to learn from experience, as a person is unable to look back on past decisions and learn from mistakes. A third consequence is a decrease in sensitivity toward a victim by the person who caused the wrongdoing; that person demoralizes the victim and does not allow for a correction of behaviors and actions.[31]

Medical decision-making

Hindsight bias may lead to overconfidence and malpractice among doctors. Hindsight bias and overconfidence are often attributed to the number of years of experience a doctor has. After a procedure, doctors may have a “knew it the whole time” attitude, when in reality they may not have actually known it. In an effort to avoid hindsight bias, doctors use computer-based decision support systems that help them diagnose and treat their patients correctly and accurately.[32]

Health care system

Accidents are prone to happen in any human undertaking, but accidents occurring within the healthcare system seem more salient and severe due to their profound effect on the lives of those involved, sometimes resulting in the death of a patient. In the healthcare system, there are a number of methods by which specific cases in which accidents happened are reviewed by others who already know the outcome of the case. These methods include morbidity and mortality conferences, autopsies, case analysis, medical malpractice claims analysis, staff interviews, and even patient observation. Hindsight bias has been shown to cause difficulties in measuring errors in these cases.[42] Many of these errors are considered preventable after the fact, clearly indicating the presence and importance of hindsight bias in this field. There are two sides of the debate on how these case reviews should be approached to best evaluate past cases: the error elimination strategy and the safety management strategy.[4] The error elimination strategy aims to find the cause of errors, relying heavily on hindsight (and is therefore more subject to hindsight bias).[4] The safety management strategy relies less on hindsight (and is less subject to hindsight bias) and identifies possible constraints during the decision-making process of that case. However, it is not immune to error.[4]