2 Discussion Postings

Opinion | Viewpoint: Vital Directions from the National Academy of Medicine

Fostering Transparency in Outcomes, Quality, Safety, and Costs

J. Matthew Austin, PhD (Johns Hopkins University School of Medicine; Johns Hopkins Medicine Armstrong Institute for Patient Safety and Quality, Department of Anesthesiology and Critical Care Medicine, Baltimore, Maryland); Elizabeth A. McGlynn, PhD (Kaiser Permanente, Pasadena, California); Peter J. Pronovost, MD, PhD (Johns Hopkins University School of Medicine; Johns Hopkins Medicine Armstrong Institute for Patient Safety and Quality, Department of Anesthesiology and Critical Care Medicine, Baltimore, Maryland).

Corresponding Author: J. Matthew Austin, PhD, Armstrong Institute for Patient Safety and Quality, Johns Hopkins Medicine, 750 E Pratt St, 15th Floor, Baltimore, MD 21202 (jmaustin@jhmi.edu).

Editorial pages 1679 and 1682
Public reporting of health care system performance is
promoted as a means for enhancing the value of health
care by improving quality and lowering costs.1 Transparency can improve value by engaging physicians and
health care organizations in quality improvement by appealing to their professionalism and stimulating competition among organizations, or by providing patients with
information that enables them to select physicians and
health centers that offer higher-quality services, lower-cost services, or both.2 There are several examples of success, such as the reporting of cardiac surgical outcomes
from the Society of Thoracic Surgeons registry, reporting of measures of health care–associated infections from
the Centers for Disease Control and Prevention, and measures of patient experience,3 but little evidence of broad
and sustained improvements.
The benefits of transparency may be offset by risks
if the reporting is not valid. Today, there is no common
standard for how reliable and valid a measure should
be before it is publicly reported or used in a pay-for-performance program. Using measures for which the reliability and validity are poor or unknown poses risks, including disengaging clinicians from improvement work
and guiding patients to riskier care. It also raises potential ethical concerns, such as imposing unjust financial
and reputational harm on physicians and health care organizations. For example, despite widespread use, a recent assessment found that none of the Agency for Healthcare Research and Quality's (AHRQ) 21 Patient Safety Indicators met a basic threshold for accuracy.4
Over the last 20 years, the United States has witnessed a shift from having little information about the
performance of the health care system to an abundance of measures reported in different ways by many
entities such as accreditation organizations, the Centers for Medicare & Medicaid Services (CMS), commercial health plans, consumer platforms (eg, Consumer
Reports and Yelp), and independent parties including
nonprofit and for-profit entities (eg, HealthGrades and
US News and World Report). The variety of measures and
methods, lack of standards, and failure to audit the underlying data can lead to conflicting results, which potentially create confusion or dismissal of the information.5
The current process of measuring and reporting the
health care system’s performance on health, quality,
safety, and costs is error prone and lacks standards. Data
used for performance measurement are often first developed for a different purpose such as billing or meeting regulatory requirements, so it is not surprising that
these data can be problematic when used for purposes
for which they were not developed.
Each step in measuring and reporting performance presents opportunities for error. Yet no entity is
entrusted with ensuring the validity of the whole process. First, measures must be developed and specified,
which includes deciding on the dimension of care to be
measured and identifying the target population, outcome, or process of interest, and, if appropriate, a risk-adjustment model. The National Quality Forum (NQF)
uses a multistakeholder consensus-development process to vet performance measures and endorses those
deemed important, scientifically acceptable, feasible,
and usable; use of these measures is voluntary.
Second, data must be identified and collected to populate the measures. With the exception of the National Committee for Quality Assurance, which evaluates health plan quality, some clinical registries, and a small number of state
health departments, few of the data used for performance
measurement are subjected to quality-assurance procedures specific to the intended use for measurement.
Third, the collected data are applied to the measure specifications. Entities that implement measures often indicate that they are using a standard endorsed
measure, but deviations from the measure, differing interpretations of the measure specifications, and adjustments for convenience are common. Such variation in
measure implementation means the validity and comparability of the results are unknown.
Fourth, a public report is created, which includes categorizing clinicians and health care organizations into performance groups, and results are communicated to stakeholders. Entities creating public reports also define performance categories, with no requirement that performance differences be tested for statistical significance. This can result in misclassifying some clinicians or health care centers as better or worse than others (see the sketch below).
Fifth, methods for communicating results to stakeholders have had little cognitive testing, leaving appropriate interpretability of the reports largely unknown.6
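The misclassification risk in the fourth step can be made concrete with a small, hedged example. The sketch below is an illustration of ours, not anything drawn from the article; the hospital names and counts are invented. It assigns a "better" or "worse" label only when a simple two-sided z-test of a hospital's event rate against the pooled benchmark is statistically significant; without such a test, ordinary sampling noise alone can push hospitals into the "better" or "worse" categories.

```python
# Illustrative sketch (not from the article): why testing for statistical
# significance matters before sorting hospitals into performance categories.
# Hospital names and counts below are hypothetical.
from math import sqrt

hospitals = {          # (adverse events, eligible cases) -- made-up data
    "Hospital A": (18, 400),
    "Hospital B": (30, 450),
    "Hospital C": (12, 380),
}

# Pooled benchmark rate across all reporting hospitals
total_events = sum(e for e, n in hospitals.values())
total_cases = sum(n for e, n in hospitals.values())
benchmark = total_events / total_cases

def classify(events: int, cases: int, p0: float, z_crit: float = 1.96) -> str:
    """Two-sided test of a hospital's rate against the benchmark p0.
    Label 'better'/'worse' only when the difference is statistically
    significant; otherwise 'no different than average'."""
    p_hat = events / cases
    se = sqrt(p0 * (1 - p0) / cases)   # SE of the rate under H0: p = p0
    z = (p_hat - p0) / se
    if z <= -z_crit:
        return "better than average"
    if z >= z_crit:
        return "worse than average"
    return "no different than average"

for name, (events, cases) in hospitals.items():
    rate = events / cases
    print(f"{name}: rate={rate:.3f} vs benchmark={benchmark:.3f} -> "
          f"{classify(events, cases, benchmark)}")
```

A real report card would also need risk adjustment and some handling of multiple comparisons, but the basic point stands: a categorical label should survive a significance test before it is published.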
Priority Considerations
Standard setting could stimulate improvements in the integrity of the underlying data and methods used to generate performance measures. One opportunity is to learn
from financial reporting and emulate the Financial
Accounting Standards Board (FASB), which is a not-for-profit organization recognized by the Securities and
Exchange Commission as the designated accounting
standard setter for public companies. The FASB also establishes financial accounting and reporting standards for private companies and not-for-profits that follow Generally Accepted Accounting Principles. The mission
of the FASB is to establish accounting and reporting standards whose faithful implementation results in financial reports
that provide useful information to investors, creditors, and other providers of capital. Although the idea of an FASB for health care has been
discussed for a number of years, now may be the time for its development. Over the last 2 decades, health care stakeholders have agreed
that transparency of performance is essential. The United States must
now focus on improving the robustness of these systems. This is particularly important because value-based payment requires valid and
reliable measures to function appropriately. The standards must set
thresholds for data quality, psychometric soundness, and interpretability of results.
Transparency will be enhanced if the United States improves
measures and reduces the burden and costs of measurement. Measures that are meaningful to patients and those that measure the
overall value of the care delivered are needed. Measures must be
understandable and actionable. Reports must be usable by those
with limited literacy, numeracy, and time. Public and private stakeholders have made little investment in the science and innovation
of performance measurement. The United States lacks a safe space
for measurement innovation and incentives for organizations and
individuals to learn and improve.7
Summary Recommendations for Vital Directions
Policy makers can help enhance the effectiveness of performance measurement and reporting in a number of ways.

• Create a health data standard-setting body. Policy makers could fund a project to engage stakeholders and develop an initial design of a standard-setting body that is consistent with the recommendations in the National Academy of Medicine's report Vital Signs. An independent body could then be launched to develop standards for all steps in the production of essential measures and results, so that the information available about the performance of the system is valid and accurate for health and health care. The success of payment reform will be limited if the quality of care provided cannot be accurately assessed. Depending on whether this work is done by existing entities or a new entity, these steps could take 2 to 5 years to accomplish.

• Build the science of performance measures. Policy makers could encourage AHRQ and the Patient-Centered Outcomes Research Institute to fund research on the science of performance measures, encourage CMS to continue its existing efforts, and encourage collaboration among the multiple federal agencies involved in performance measurement. These federal agencies could also support innovation by establishing multistakeholder learning laboratories, creating feedback loops, and identifying data sources and test beds to develop needed measures more quickly. These laboratories could perform initial tests and improve measures in the interval between when a measure is endorsed by the NQF and when it is publicly reported and used in pay-for-performance programs, preventing the need to revise measures after implementation.

• Improve the communication of data to patients. Policy makers could fund research on how to communicate effectively with patients to raise their awareness about variations in the quality and costs of care. For the health care market to work efficiently, more patients need to be knowledgeable about the quality of the services they receive.

Despite progress in public reporting, health care performance measurement has not yet achieved the desired goal of a system with higher quality and lower costs. The time is right to evolve to a better performance measurement and reporting system. This advance would require a commitment to the science of performance measurement, which requires imagination, investment, infrastructure, and implementation. Without such commitment, the opportunity to achieve the goal of higher-value care will be limited by the inability to understand performance.

ARTICLE INFORMATION

Published Online: September 26, 2016. doi:10.1001/jama.2016.14039

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Austin reports receipt of a grant from the Leapfrog Group outside the submitted work. Dr McGlynn reports being the chair of the board of trustees for the American Board of Internal Medicine Foundation, cochair for the National Quality Forum Measures Application Partnership Coordinating Committee, and chair for the Agency for Healthcare Research and Quality (AHRQ) National Advisory Council. Dr Pronovost reports receipt of grants from AHRQ and the Centers for Disease Control and Prevention on improving patient safety and quality, unpaid advisement to the Leapfrog Group and US News and World Report, and receipt of an honorarium from hospitals for speaking about patient safety and quality.

Funding/Support: The National Academy of Medicine's Vital Directions initiative is sponsored by the California Health Care Foundation, the John A. Hartford Foundation, the Robert Wood Johnson Foundation, and the National Academy of Medicine's Harvey V. Fineberg Impact Fund.
Disclaimer: This Viewpoint on fostering
transparency in outcomes, quality, safety, and costs
provides a summary of a discussion paper
developed as part of the National Academy of
Medicine’s initiative on Vital Directions for Health &
Health Care (http://nam.edu/vitaldirections).
Discussion papers presented in this initiative reflect
the views of leading authorities on the important
issues engaged, and do not represent formal
consensus positions of the National Academy of
Medicine or the organizations of the participating
authors.
Additional Contributions: Coauthors of the
National Academy of Medicine discussion paper
were Christine K. Cassel, MD, Kaiser Permanente
School of Medicine; Suzanne Delbanco, PhD,
Catalyst for Payment Reform; Ashish Jha, MD, MPH,
Harvard T. H. Chan School of Public Health; Bob
Kocher, MD, Venrock; Lew Sandy, MD,
UnitedHealth Group; and John Santa, MD, MPH,
formerly of Consumer Reports. Elizabeth Finkelman,
MPP, National Academy of Medicine, served as the
initiative director.
REFERENCES
1. Totten AM, Wagner J, Tiwari A, et al. Public
Reporting as a Quality Improvement Strategy.
Closing the Quality Gap: Revisiting the State of the
Science. Rockville, MD: Agency for Healthcare
Research and Quality; 2012.
2. Berwick DM, James B, Coye MJ. Connections
between quality measurement and improvement.
Med Care. 2003;41(1)(suppl):I30-I38.
3. Elliott MN, Lehrman WG, Goldstein EH, et al.
Hospital survey shows improvements in patient
experience. Health Aff (Millwood). 2010;29(11):
2061-2067.
4. Winters BD, Bharmal A, Wilson RF, et al. Validity
of the Agency for Health Care Research and Quality
Patient Safety Indicators and the Centers for
Medicare and Medicaid Hospital-acquired
Conditions [published online April 25, 2016]. Med
Care. doi:10.1097/MLR.0000000000000550
5. Austin JM, Jha AK, Romano PS, et al. National
hospital ratings systems share few common scores
and may generate confusion instead of clarity.
Health Aff (Millwood). 2015;34(3):423-430.
6. Kanouse DE, Schlesinger M, Shaller D, Martino
SC, Rybowski L. How patient comments affect
consumers’ use of physician performance
measures. Med Care. 2016;54(1):24-31.
7. McGlynn EA, Kerr EA. Creating safe harbors for
quality measurement innovation and improvement.
JAMA. 2016;315(2):129-130.
Lee et al. BMC Health Services Research (2016) 16:254
DOI 10.1186/s12913-016-1502-7
RESEARCH ARTICLE
Open Access
Handoffs, safety culture, and practices:
evidence from the hospital survey on
patient safety culture
Soo-Hoon Lee1, Phillip H. Phan2*, Todd Dorman3, Sallie J. Weaver3 and Peter J. Pronovost3
* Correspondence: pphan@jhu.edu. 2Carey Business School, Johns Hopkins University, 100 International Drive, Baltimore, MD 21202, USA. Full list of author information is available at the end of the article.
Abstract
Background: The context of the study is the Agency for Healthcare Research and Quality’s Hospital Survey on
Patient Safety Culture (HSOPSC). The purpose of the study is to analyze how different elements of patient safety
culture are associated with clinical handoffs and perceptions of patient safety.
Methods: The study was performed with hierarchical multiple linear regression on data from the 2010 Survey. We
examine the statistical relationships between perceptions of handoffs and transitions practices, patient safety culture,
and patient safety. We statistically controlled for the systematic effects of hospital size, type, ownership, and staffing
levels on perceptions of patient safety.
Results: The main findings were that effective handoffs of information, responsibility, and accountability were necessary for positive perceptions of patient safety. Feedback and communication about errors were positively
related to the transfer of patient information; teamwork within units and the frequency of events reported were
positively related to the transfer of personal responsibility during shift changes; and teamwork across units was
positively related to the unit transfers of accountability for patients.
Conclusions: In summary, staff views on the behavioral dimensions of handoffs influenced their perceptions of
the hospital’s level of patient safety. Given the known psychological links between perception, attitude, and behavior,
a potential implication is that better patient safety can be achieved by a tight focus on improving handoffs through
training and monitoring.
Keywords: Handoffs, Staff attitudes, Patient safety culture, Communication, Personal responsibility, Accountability
Background
Clinical handoffs, also known as sign-outs, shift reports,
or handovers, occur in many places along the healthcare
value chain. A handoff involves the 'transfer of professional responsibility and accountability for some or all aspects of
care for a patient, or groups of patients, to another person or professional group on a temporary or permanent
basis’ [1]. For example, nursing handovers occur very
frequently, not only between shifts and among part-time
nurses, but also because nurses serve as the communication partner and informal coordinator for all healthcare
professionals to ensure the continuity of care in a 24-hour, seven-days-a-week environment [2]. The transfer
of professional responsibility became salient for residents
due to increased work-hour restrictions in U.S. residency
programs, which shortened the continuity of care and
increased the number of shift changes [3]. Concern for the transfer of unit accountability heightened with the fragmentation of health care due to the proliferation of sub-specialties, which creates more transitions and handoffs as the number of providers for a single patient increases [4]. Consequently, handoffs are a target for quality
improvements because they represent high-risk events.
The Joint Commission’s 2006 evaluation of accredited
healthcare organizations attributed at least 35 % of sentinel events to handoff errors [5]. Recent estimates implicate handoff errors in nearly 80 % of serious events
between 2004 and 2014 [6].
Patient safety culture, which consists of the shared norms, values, behavioral patterns, rituals, and traditions [7] that guide the discretionary behaviors of healthcare professionals, matters in handoffs. According to the theory of planned behavior [8], staff observations of their institution's practices and coworkers' behavioral patterns in handoffs will influence their perceptions of the overall level of patient safety, and their behavioral responses to such issues. Therefore, employees who perceive that their institutions do not emphasize patient safety may not pay attention to such concerns [9]. To make improvements in handoffs, healthcare policymakers must first understand how employees perceive their organizations' patient safety culture [10].
The extant literature on handoffs largely focuses on
the relationship between inadequate communications
and perceptions of avoidable harm [11–13]. Poor handoff communication creates an opportunity for adverse
events because incomplete, inaccurate, and omitted data
create ambiguities between the sending and receiving
providers [14]. Yet, the literature has found little empirical evidence to suggest that effective information transfers are associated with positive perceptions of patient
safety [15]. We surmise that this is because a handoff is
multidimensional, involving the transfer of information,
responsibility and accountability, implying that previous
studies may have over-simplified handoff challenges [16].
This study contributes to the literature by empirically
investigating what past research has largely ignored: the
transfers of professional responsibility and unit accountability for patient safety between providers during handoffs [17]. In the transfer of responsibility, even with
effective information exchange, whether the receiving
provider feels the same sense of responsibility for the patient as the sending provider cannot be taken for granted.
In the case of physicians, this sense of responsibility is defined by Horwitz and colleagues [18] as a sense among
on-call physicians that they are not "just covering" for the admitting physician but rather are integral to the patient's care. A systematic review on the transfer of information during nurses' transitions of care found that
senders exhibited few supportive behaviors during the
shift change, resulting in a low degree of engagement by
receivers as they demonstrated indifference and nonattentive behaviors [19]. Hence, we believe that during
shift changes, the active role and the responsibility of
healthcare providers in shaping an effective information
exchange protocol go beyond the mere transmission of
structured data [13, 16]. Without the effective transfer and
acceptance of responsibility, there is no assurance that the
handoff process has created an appropriate mental model
of the patient’s plan of care for the receiving provider.
Our search of the literature did not yield any research
on how the transfer of unit accountability influences
staff perceptions of patient safety. Between-unit transitions of care can create uncertainty over who is ultimately accountable for a patient's wellbeing. Cross-disciplinary and multi-specialty transitions of care create
coordination difficulties, as handoffs can be irregular
and unpredictable [20, 21]. In addition, complications
related to inter-professional differences in expectations,
terminologies, and work practices make it challenging to
build a shared mental model, necessary for effective
transitions between providers [14]. Because conflicting
expectations and perspectives between units increase
barriers to effective handoffs, we expect that when
healthcare professionals perceive a supportive environment for cooperation and joint accountability between
units, they are more likely to have positive perceptions
of patient safety.
We further expect handoffs of information, responsibility, and accountability to influence each other, so that
improvement in one type will positively affect the other
types, and degradation in one will erode the others. Specifically, handing off comprehensive and accurate patient
information to a receiver is necessary for effectively
handing off responsibility and accountability [22]. In a
handoff, the failure of a sending unit to communicate
the rationale for a decision, anticipated problems, and expectations creates uncertainties and ambiguities for the
receiving unit [23]. Important information can be ignored or misinterpreted by the receiving unit when there
is unclear handoff of responsibility and accountability
resulting from ambiguous work procedures and a lack of
supportive infrastructure [12].
We explore the factors in an organization’s patient
safety culture that might be associated with effective
handoffs. Specifically, we posit that an organization’s
communication, teamwork, reporting, and management
cultures will have differential influences on effective
handoffs of information, responsibility, and accountability. The literature on information transfer has primarily
dealt with the mechanics of communication (i.e., ways in
which information is transmitted and received). We submit that this perspective is not complete without considering Marx’s theory of just culture [24]. Research has
shown that when providers feel supported and psychologically safe because their organizations are perceived
to be fair, they are more likely to communicate completely by voicing safety concerns [25, 26]. For example,
in studies on TeamSTEPPS, a teaming protocol often
used in surgical teams, any member (surgeon, nurse,
technician, or anesthesiologist) can speak up or call out observations of potential error because they view
each other as having equal responsibility and authority
for patient safety [27]. Feedback loops between the
sender and receiver are necessary for this process to
work. They allow both parties to properly manage
expectations and adjust their behaviors. Hence, a strong
communications culture, typified by the openness to and
willingness of clinicians to speak up, ask questions, and
provide feedback, would enhance effective handoff of
information.
In the case of shift changes, a culture of professionalism can mitigate errors and procedural violations that
arise primarily from aberrant mental processes such as
forgetfulness, inattention, low motivation, carelessness,
or negligence [28, 29]. Medical professionalism includes a
commitment to collaborating with others while engaging
in self-regulation to make the best clinical decisions [30].
Professionalism in nursing focuses on value-based cognitive and attitudinal attributes that are harnessed to deliver
patient centered care [31]. Nurses often utilize handoffs as
an avenue for socialization, education, and emotional support to facilitate integration and staff cohesion [19]. A
teamwork culture facilitates handoff of responsibility between the sending and receiving providers by seeking assistance or voicing concerns and clarifying issues through
bidirectional conversations. This process creates a shared
mental model of the patient's clinical condition and plan of care [32]. Professionalism also implies proactive surveillance, detection, and the voluntary reporting of adverse events [33]. Error recurrences are reduced if medical incidents and pitfalls are proactively reported to the incoming provider during shift changes [34]. Therefore, a strong
teamwork culture and a culture of reporting adverse
events enhance effective handoff of personal responsibility
in shift changes.
Patient transfers between units span three domains: provider, service, and location, which are accompanied by
differences in social norms, terminologies, and work practices [14, 18]. Such transitions multiply the difficulties providers encounter when building a shared mental model of
the patient’s clinical problems and needs. Add to these are
systemic workplace traps such as unclear authority structures, inconsistent management support, unclear work
procedures, and the lack of supporting infrastructure,
which make safe handoffs challenging [21]. Such conflicts
could be addressed by improving inter-unit teamwork and
coordination [25]. Moreover, the provision of expectations
and policies from top management that address the assignment of accountability in the delivery of care could reduce delays and improve the coordination of care across
unit boundaries. We posit that inter-unit teamwork and a
top management that expects and is supportive of patient
safety would facilitate effective handoff of unit accountability during patient transitions.
Methods
Data
In 2006, the United States Department of Health and Human Services’ (DHHS) Agency for Healthcare Research
and Quality (AHRQ) funded the development of the Hospital Survey on Patient Safety Culture (HSOPSC). This
survey was administered on a voluntary basis to all hospitals in the United States. The HSOPSC assesses hospital
staff opinions on 42 items that measure their institution’s
patient safety practices based on 5-point response scales
of agreement (“strongly disagree” to “strongly agree”) or
frequency (“never” to “always”). The de-identified data for
this study comes from the 2010 survey that was made
available for public use. It can be requested from the
AHRQ. It represents 885 U.S. hospitals that voluntarily
participated in the survey [7]. The views of healthcare professionals were aggregated for each institution, since past
studies have shown that aggregating these items from the
individual- and unit-level responses to the hospital level
led to more robust psychometric properties [35], which
are reported in Additional file 1.
In Table 1, we report the distribution of respondents
by job roles. About two thirds of respondents are from
the nursing and allied health professions while another
third are administrative staff. A small percentage of respondents were self-identified as physicians, although an
unknown percentage of the administrative staff could
also be physicians. The responses in this survey are
therefore representative of the views of nurses, allied
health professionals, management, and physicians.
Measures
Covariates
Four hospital characteristics pertaining to bedsize, hospital
type, ownership, and staffing were included as baseline covariates since we expect these factors to systematically
affect perceptions of patient safety. For example, large
government-owned teaching hospitals may experience
more incidents because they serve a more diverse population of patients that present with complex co-morbidities
than smaller private specialty hospitals. The frequency distribution for each covariate is reported in Additional file 2.
Handoff transfers
Four items related to handoffs and transitions of care in the survey were used for our analyses. Handoff of patient information comprises two items, 'important patient care information is often lost during shift changes' (reverse coded) and 'problems often occur in the exchange of information across hospital units' (reverse coded). Handoff of personal responsibility in shift changes is measured by the item, 'shift changes are problematic for patients in this hospital' (reverse coded). Handoff of unit accountability is measured by the item, 'things "fall between the cracks" when transferring patients from one unit to another' (reverse coded).

Table 1 Percentage of respondents by job role
Nurses (RN, PA/NP, LVN/LPN): 37.10 %
Physicians (Attending, Resident): 3.66 %
Allied Healthcare Professionals (Pharmacist, PT, RT, OT, Dietitian, Technicians, Patient Care Assistant): 24.12 %
Staff (Management, Administrative Assistant & other clerical positions): 35.10 %
Patient safety culture
Communication culture is measured by two composites,
communication openness and feedback and communication about error. Teamwork culture is measured by two
composite scales, teamwork within units and teamwork
across units. Reporting culture is measured by the composite, frequency of events reported. Supportive management action is measured by three composites, management
support for patient safety, supervisor/manager expectations
and actions promoting patient safety, and non-punitive
response to error. The items in the HSOPSC survey that
represent each of these composites are reported in
Additional file 3.
Patient safety perceptions
The patient safety perceptions composite comprises four items that measure respondents' agreement that 'patient safety is never sacrificed to get more work done', 'our procedures and systems are good at preventing errors from happening', 'it is just by chance that more serious mistakes don't happen around here' (reverse coded), and 'we have patient safety problems in this unit' (reverse coded).
Statistical analysis
We applied hierarchical multiple linear regression analysis using SPSS v21 to analyze the data. This technique
allows us to enter a fixed order of variables to control
for the influence of the covariates so that we can isolate
the effects of the predictors of patient safety perception.
We first entered the four hospital covariates into the regression model as baseline predictors of patient safety perception. We then entered each handoff transfer variable into the regression model. Similarly, to assess the effects of patient safety culture on each handoff transfer, we first entered the four hospital covariates as baseline predictors of each handoff transfer, followed by the respective patient safety culture composites.
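As an illustration of the blockwise strategy described above, the sketch below enters the four hospital covariates first and a handoff predictor second, then reports the change in R2. It is a hedged translation of the general approach into Python with statsmodels, using made-up file and variable names; the authors' actual analysis was run in SPSS v21.

```python
# Hierarchical (blockwise) regression sketch in Python/statsmodels.
# File and variable names are illustrative; the study itself used SPSS v21.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsopsc_hospital_level.csv")  # hypothetical hospital-level dataset

# Block 1: hospital covariates only
block1 = smf.ols(
    "safety_perceptions ~ bedsize + C(hospital_type) + C(ownership) + staffing",
    data=df,
).fit()

# Block 2: covariates plus one handoff transfer variable
block2 = smf.ols(
    "safety_perceptions ~ bedsize + C(hospital_type) + C(ownership) + staffing"
    " + handoff_information",
    data=df,
).fit()

delta_r2 = block2.rsquared - block1.rsquared  # contribution beyond the covariates
print(f"Block 1 R2 = {block1.rsquared:.3f}")
print(f"Block 2 R2 = {block2.rsquared:.3f} (change in R2 = {delta_r2:.3f})")
print(block2.params)  # coefficients, including the handoff predictor
```

The same pattern, with the outcome and predictor columns swapped, covers the models that regress each handoff transfer on the covariates plus a patient safety culture composite.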
Results
First, we checked for multicollinearity among the covariates and predictors. Multicollinearity, shown by the variance inflation factor (VIF), results in an inflated variance or R2 in the outcome variable in the regression model [36]. In our sample, the VIF was below 3.0, meaning that any significant relationships found are not inflated by correlations between the predictor variables [36].

Table 2 reports strong support for the hypothesis that effective handoffs of information, responsibility, and accountability are statistically significantly (p < .001) related to patient safety perceptions. Table 3 reports the inter-relationships among handoffs of information, responsibility, and accountability.

Table 2 Hierarchical regression analyses on the impact of handoffs on patient safety perceptions (dependent variable: patient safety perceptions)
Control variables (Models 1-3): Bedsize -.01, .02, .03; Hospital type -.02, -.04*, -.02; Ownership -.03, -.05**, -.06**; Staffing .60***, .62***, .64***.
Predictor variables: Handoff of patient information .35*** (Model 1); Handoff of personal responsibility .32*** (Model 2); Handoff of unit accountability .32*** (Model 3).
Change in R2 (Models 1-3): .069***, .049***, .054***. Total Adj R2 (Models 1-3): .76***, .74***, .745***.
Values in the table are standardized beta coefficients for n = 885 hospitals. * p < .05, ** p < .01, *** p < .001.

Table 3 Hierarchical regression analyses on handoffs (dependent variables: handoff of patient information, Models 1-2; handoff of responsibility, Models 3-4; handoff of unit accountability, Models 5-6)
Covariates (Models 1-6): Bedsize -.13***, -.20***, -.12***, -.01, -.14***, -.02; Hospital type -.01, .02, .05**, -.02, -.03, -.02; Ownership -.06***, .01, .03*, -.01, .05***, -.01; Staffing .07***, .38***, .15***, .48***, -.01, .46***.
Handoff transfer of: Patient information .51***; Responsibility .38***; Unit accountability .60***, .66***, .21***, .25***.
Patient safety culture: Communication openness .06 (Model 2); Feedback & communication on errors .34*** (Model 2); Teamwork within units .15*** (Model 4); Frequency of events reported .23*** (Model 4); Teamwork across units .74*** (Model 6); Management support for patient safety .01 (Model 6); Supervisor/Manager expectations & actions promoting patient safety -.10*** (Model 6); Nonpunitive response to error .01 (Model 6).
Change in R2 (Models 1-6): .420***, .107***, .295***, .078***, .368***, .288***. Total Adj R2 (Models 1-6): .862***, .539***, .813***, .594***, .848***, .768***.
Values in the table are standardized beta coefficients for n = 885 hospitals. * p < .05, ** p < .01, *** p < .001.

Model 1 in Table 3 reports that enhancing handoffs of responsibility and unit accountability enhances the handoff of patient information. Model 2 in Table 3 explores the relationship between communication culture and the handoff of information. The results in Model 2 show that while feedback and communication on error had a significantly positive effect on perceptions of effective handoff of patient information, communication openness had no influence. Thus, a strong communication culture only partially enhances the effective handoff of patient information. Model 3 in Table 3 shows that enhancing handoffs of patient information and unit accountability enhances the handoff of responsibility during shift changes. Model 4 in Table 3 shows that both teamwork within units and frequency of events reported had statistically significant positive influences on perceptions of effective handoff of responsibility in shift changes. Thus, a strong teamwork culture and a reporting culture enhance the handoff of responsibility during shift changes. Model 5 in Table 3 shows that enhancing handoffs of patient information and personal responsibility enhances the handoff of unit accountability. Model 6 in Table 3 shows that while teamwork between units had a positive and significant association with perceptions of the effective handoff of unit accountability, supportive management culture and non-punitive response to error had no effect on the handoff of accountability.
We also found that supervisor/manager expectations and actions promoting patient safety had a statistically negative influence on perceptions of unit accountability. The data indicate that a strong teamwork culture enhances the handoff of unit accountability, but this is not the case for management support.

Discussion
Most handoff studies have focused on communication issues. They generally recommend structured information handoffs, such as IPASS, as a solution to communication problems. Ours is the first to delineate and empirically test the relationships of three different handoffs, of information, responsibility, and accountability, on perceptions of patient safety. The results generally show that effective handoffs of patient information, personal responsibility during shift changes, and unit accountability for patient transfers are significantly related to patient safety perceptions. The results also show that each handoff influences the others, such that the improvement (or degradation) of one also improves (or erodes) the others. The data show that communication exchanges, individual behaviors, and organizational processes have to be addressed before shared beliefs and values on perceptions of patient safety can be formed [37].

The results indicate that each type of handoff is affected by different patient safety culture composites. Providing feedback and communication about errors enhanced perceptions of effective handoff of patient information. However, the results indicate that a strong communication culture only partially ensures the effective handoff of patient information. Since communication openness is highly correlated with feedback and communication about errors (r = 0.63, p < 0.01), this finding may be a simple result of measurement, since the effect of one cultural composite may mask the effects of the other. Future studies should start with a comprehensive definition of communication culture that includes having a minimum data set, the use of mnemonics for communicating relevant information, and a process that includes electronic means to support communication.

The data show that a strong teamwork culture and reporting culture enhance perceptions of the effective handoff of responsibility during shift changes. Demonstrating such professionalism may require providers to create protected time and space for the handoff during shift change, prepare rationales for plans of care and tasks to perform, and verify that the receiving provider has accurately understood the information received.

The data indicate that providers making the effort to ensure strong teamwork between units by demonstrating cooperation, collaboration, and coordination enhance the handoff of unit accountability. However, it was surprising that management support did not significantly enhance the handoff of unit accountability. Perhaps constant process improvement efforts can create fatigue, so that 'management support' is met with cynicism if resources to implement these efforts are insufficient. As well, frontline staff may not observe management support if the former do not routinely interact with the latter. Similarly, non-punitive responses to error are not observable if no actions were taken when errors were made. In short, management may need to exhibit the observable appropriate behaviors before unit accountability in handoffs can be enhanced.
The results indicate that we have to focus on specific cultural composites when designing training for healthcare professionals to improve specific types of handoffs. For example, in large hospitals or in complex medical systems, the high workload and the pressures of coordinating clinical care between different units with different experiences and expectations increase challenges to proper handoffs. Here, management may need to invoke the sense of professionalism of all healthcare providers by offering evidence on the causes and consequences of poor handoffs while providing incentives and recognition for performing good handoffs.

A strength of using the HSOPSC survey data is the large number of hospital participants, which provides robust and stable coefficients in the regression model [38]. The limitations include the following. First, the data are cross-sectional, from one time period. A better estimation technique would be to utilize a panel of data spanning several years, but that is not possible because the respondents are anonymous; a different dataset would need to be constructed. Second, physician representation in the data is low, and therefore one cannot generalize the responses or the implications of the results to physicians alone. Steps to incentivize physician participation will need to be taken for the data to represent all stakeholders in the hospital community. Third, no outcomes are reported from this dataset, such as the number of medical errors due to handoffs, the number of close calls during transitions, or hospital length of stay. Therefore, future studies involving interventions related to handoffs of information, responsibility, and accountability are needed to correlate the implications for handoff practice with actual outcomes, as there are none to date. Examples of such interventions may include having a minimum data set when handing over patient information, assessing the efficacy of inter-professional teamwork training on enhancing professionalism, and team-based governance reporting structures to improve unit accountability. Fourth, from a theoretical standpoint, we were limited by the way the constructs were operationalized in the survey and by the reliance on self-report data [38]. An opportunity clearly exists to develop comprehensive measures of these constructs in future studies by considering more fine-grained measures of information exchange and communication processes, of personal responsibility as it relates to learning and team behaviors, and of unit accountability related to systems improvement, training, and staff empowerment. Having noted all these limitations, we still believe that the study points us toward a richer and theoretically more robust way of conceptualizing handoffs.

Conclusions
The contribution of this study lies in the deconstruction of handoffs into information, responsibility, and accountability and in identifying the accompanying patient safety culture composites that differentially influence each type of handoff. We provided a more in-depth look at the cultural drivers of effective handoffs than the literature has thus far examined. The different and sometimes strong cultures between professional specialties can cause the fragmentation of shared values, making it difficult for such professionals to view themselves as part of an organization. If the organization does not have a formal process to help healthcare professionals perceive each other as a resource, the handoff process is carried out in 'silos'.
In order to help healthcare professionals navigate the tradeoff between efficiency and thoroughness, hospitals can build a strong culture of teamwork across units, while using other organizational development activities to bind their members to a common vision and shared mental model. The theory of planned behavior suggests that attitude is a key factor, which can be influenced by training and education [39]. Perhaps training healthcare professionals in handoff procedures and protocols can be used to influence a healthcare organization's patient safety culture. Other techniques include mentoring and leading by example, with a sharp focus on transitions of care as a central theme in a hospital's safety program [40-42]. The interactions between the different types of transitions we showed in this study suggest that spillovers into other aspects of patient safety are likely to occur. More importantly, defining patient safety culture in a specific form (transitions of care) attenuates ambiguity so that stakeholders can more clearly identify with the goals and process of patient safety improvement programs.

Additional files
Additional file 1: Psychometric Properties of the Variables. Descriptive statistics and reliability analyses of the items in each patient safety culture composite. (DOCX 15 kb)
Additional file 2: Frequency Distribution of Covariates. The frequency distribution for each covariate (control) variable used in the hierarchical regression model. This is reported to describe the sample characteristics. (DOCX 12 kb)
Additional file 3: Hospital Survey on Patient Safety Culture (HSOPSC) survey items for each Patient Safety Culture Composite. A list of the items and descriptions from the HSOPSC used in this study. (DOCX 13 kb)

Abbreviations
AHRQ, Agency for Healthcare Research and Quality; DHHS, Department of Health and Human Services (United States); HSOPSC, Hospital Survey on Patient Safety Culture

Acknowledgements
Not applicable.

Funding
Not applicable.

Availability of data and material
Data are available from the Agency for Healthcare Research and Quality (AHRQ) at http://www.ahrq.gov/research/data/dataresources/index.html (accessed: June 29, 2016).

Authors' contributions
SHL designed the study, conducted the literature review and statistical analysis, and drafted the manuscript. PP designed the study, participated in the statistical analysis, and helped draft the manuscript. TD interpreted the data and participated in the revision of the manuscript. SW acquired the data, interpreted the findings, and participated in the revision of the manuscript. PJP contributed to the conceptual development, interpreted the findings, and participated in the revision of the manuscript. All authors read and approved the final manuscript.

Competing interests
The authors declare that they have no competing interests.

Consent for publication
Not applicable.

Ethics approval and consent to participate
Not applicable. Research involved non-identifiable organization and respondent public domain data. See http://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html#46.101 (accessed: June 29, 2016).

Author details
1 Strome College of Business, Old Dominion University, Norfolk, VA, USA. 2 Carey Business School, Johns Hopkins University, 100 International Drive, Baltimore, MD 21202, USA.
3 School of Medicine, Johns Hopkins University, Baltimore, MD, USA.

Received: 26 January 2016. Accepted: 1 July 2016.

References
1. BMA. Safe handover: safe patients. Guidance on clinical handover for physicians and managers. Junior Doctors Committee. London: British Medical Association; 2004.
2. Institute of Medicine. The Future of Nursing: Leading Change, Advancing Health. http://www.nationalacademies.org/hmd/Reports/2010/The-Future-of-Nursing-Leading-Change-Advancing-Health.aspx; accessed 10 May 2016.
3. Borowitz SM, Waggoner-Fountain LA, Bass EJ, et al. Adequacy of information transferred at resident sign-out (in-hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17:6-10.
4. Manser T. Fragmentation of patient safety research: a critical reflection of current human factors approaches to patient handover. J Public Health Res. 2013;2(3):e33.
5. Joint Commission. Improving handoff communications: meeting national patient safety goal 2E. Jt Comm Perspect Patient Saf. 2006;6(8):9-15.
6. Joint Commission. Sentinel Event Data: Root Causes by Event Type. http://www.jointcommission.org/Sentinel_Event_Statistics/ (2014). Accessed: March 15, 2015.
7. Agency for Healthcare Research and Quality. Hospital patient safety culture surveys. http://www.ahrq.gov/qual/patientsafetyculture/hospsurvindex.htm (2010). Accessed: March 15, 2015.
8. Ajzen I. The theory of planned behavior. Org Beh and Human Dec Proc. 1991;50(2):179-211.
9. Boan DM, Nadzam D, Clapp JR. The impact of variance in perception of the organization on capacity to improve in hospital work groups. Group Dyn. 2012;16(3):206-2179.
10. Nieva VF, Sorra J. Safety culture assessment: a tool for improving patient safety in healthcare organizations. Quality and Safety in Healthcare. 2003;12 Suppl 2:17-23.
11. Arora V, Johnson J, Lovinger D, et al. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14:401-7.
12. Horwitz LI, Moin T, Krumholz H, et al. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168:1755-60.
13. Manser T, Foster S. Effective handover communication: an overview of research and improvement efforts. Best Pract Res Clin Anaesthesiol. 2011;25:181-91.
14. Apker J, Mallak LA, Gibson SC. Communicating in the "gray zone": perceptions about emergency physician-hospitalist handoffs and patient safety. Acad Emerg Med. 2007;14:884-94.
15. Riesenberg LA. Shift-to-shift handoff research: where do we go from here? J Grad Med Educ. 2012;4:4-8.
16. Manser T, Foster S, Gisin S, et al. Assessing the quality of patient handoffs at care transitions. Qual Saf Health Care. 2010;19:e44.
17. Lofgren RP, Gottlieb D, Williams RA, et al. Post-call transfer of resident responsibility: its effect on patient care. J Gen Intern Med. 1990;5:501-5.
18. Horwitz LI, Meredith T, Schuur JD, et al. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53:701-10.
19. Holly C, Poletick EB. A systematic review on the transfer of information during nurse transitions in care. J Clin Nurs. 2014;23(17-18):2387-95.
20. Arora VM, Manjarrez E, Dressler DD, et al. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4:433-40.
21. Hilligoss B, Cohen MD. Hospital handoffs as multi-functional situated routines: implications for researchers and administrators. Adv Health Care Manag. 2011;11:91-132.
22. Patterson ES. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17:4-5.
23. Raduma-Tomàs MA, Flin R, Yule S, et al. Doctors' handovers in hospitals: a literature review. BMJ Qual Saf. 2011;20:128-33.
24. Marx D. Patient Safety and the "Just Culture": A Primer for Health Care Executives. New York: Columbia University; 2001.
25. Dekker SWA. Just culture: who gets to draw the line? Cogn Tech Work. 2009;11:177-85.
26. Edmondson AC. Psychological safety and learning behavior in work teams. Adm Sci Q. 1999;44:350-83.
27. Weaver SJ, Rosen MA, DiazGranados D, et al. Does teamwork improve performance in the operating room? A multilevel evaluation. Jt Comm J Qual Patient Saf. 2010;36(3):133-42.
28. Reason J. Human error: models and management. BMJ. 2000;320:768-70.
29. Abbott A. The system of professions: an essay on the division of expert labor. Chicago: The University of Chicago; 1988.
30. ABIM Foundation. Medical professionalism in the new millennium: a physician charter. Ann Intern Med. 2002;136:243-6.
31. Chiovitti RF. Professionhood and professionalism as an educational aid for facilitating nursing students' development and renewal of self and profession. J Nurs Educ Pract. 2015;5(11):51-64.
32. Wohlauer MV, Arora VM, Horwitz LI, et al. The patient handoff: a comprehensive curricular blueprint for resident education to improve continuity of care. Acad Med. 2012;87:411-8.
33. Foster AJ, Dervin G, Martin Jr C, et al. Improving patient safety through the systematic evaluation of patient outcomes. Can J Surg. 2012;55:418-25.
34. Mayer CM, Cronin D. Organizational accountability in a just culture. Urol Nurs. 2008;28:427-30.
35. Smits M, Christiaans-Dingelhoff I, Wagner C, et al. The psychometric properties of the 'Hospital Survey on Patient Safety Culture' in Dutch hospitals. BMC Health Svc Res. 2008;8:2306.
36. Hair JF, Anderson RE, Tatham RL, et al. Multivariate data analysis. 5th ed. Delhi: Pearson Education; 1998.
37. Reason J. Managing the risks of organizational accidents. Aldershot: Ashgate; 1997.
38. Sorra JA, Dyer N. Multilevel psychometric properties of the AHRQ hospital survey on patient safety culture. BMC Health Svc Res. 2010;10:199.
39. Lazarus GS, Foulke G, Bell RA, et al. The effects of a managed care educational program on faculty and trainee knowledge, attitudes, and behavioral intentions. Acad Med. 1998;73(10):1107-328.
40. Siemsen IMD, Madsen MD, Pedersen LF, et al. Factors that impact on the safety of patient handovers: an interview study. Scand J Public Health. 2012;40:439-48.
41. Beach CB, Cheung DS, Apker J, et al. Improving interunit transitions of care between emergency physicians and hospital medicine physicians: a conceptual approach. Acad Emerg Med. 2012;19:1188-95.
42. Emanuel L, Berwick D, Conway J, et al. What exactly is patient safety? In: Henriksen K, Battles JB, Keyes MA, et al., editors. Advances in patient safety: new directions and alternative approaches. Rockville: Agency for Healthcare Research and Quality; 2008.
Commentary

Progress in Patient Safety: A Glass Fuller Than It Seems

American Journal of Medical Quality. 2014;29(2):165-169. © 2013 by the American College of Medical Quality. Reprints and permissions: sagepub.com/journalsPermissions.nav. DOI: 10.1177/1062860613495554. ajmq.sagepub.com

Peter J. Pronovost, MD, PhD (Johns Hopkins University, Baltimore, MD), and Robert M. Wachter, MD (University of California, San Francisco, CA)

Corresponding Author: Peter J. Pronovost, MD, PhD, 750 E Pratt Street, 15th Floor, Baltimore, MD 21202. Email: ppronovo@jhmi.edu

Despite well over a decade of efforts to improve patient safety, 3 recent studies found persistently high rates of medical injury.1-3 One study analyzed harm rates over a 5-year period and found no evidence of improvement,1 while another reported that 1 in 3 hospitalized patients suffers preventable harm.2 These studies, which all used the global trigger tool (GTT) to measure harm rates, prompted a flurry of media reports questioning whether patient care is safer today.

Although these findings are concerning, there is evidence that safety and quality are improving. High-quality intervention studies, involving teamwork training,4 simulation,5 bar coding,6 and practices for specific hazards such as falls,7 have demonstrated marked improvements.8 Interventions that combined checklists and culture change have substantially decreased central line–associated bloodstream infections (CLABSIs),9 hospital mortality,10 and surgical complications.11 A recent series of systematic reviews commissioned by the Agency for Healthcare Research and Quality also identified a number of safety practices with proven effectiveness in reducing rates of harm.12

Although the GTT is the most widely used global measure of patient safety, no one has critically evaluated whether it can validly measure progress in safety over time, or compare safety between organizations. In this article, we examine the validity of using the GTT for these purposes, review evidence of improvements in safety over time, and make recommendations for future work.

The Global Trigger Tool

Patient safety measures can either identify risks or events at single points in time (error reporting systems) or they can produce rates of events or harms. Only valid rate measures should assess changes over time or compare organizations.13 To be valid, rate-type measures must have clearly defined numerators (definition of the event), denominators (definition of those at risk for the event and time period, such as person-year), and surveillance systems to identify both events and at-risk populations.
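To make the numerator/denominator/surveillance framing concrete, the short sketch below computes a rate-based measure the way CLABSI results are commonly expressed, per 1,000 central-line-days. The counts are invented for illustration, and the helper function is ours rather than part of any standard surveillance toolkit.

```python
# Hypothetical example of a rate-based safety measure: CLABSIs per 1,000
# central-line-days. The event counts and line-day totals are invented.
def infections_per_1000_line_days(clabsi_count: int, central_line_days: int) -> float:
    """Numerator: confirmed CLABSIs. Denominator: patient-days with a
    central venous catheter in place (the population actually at risk)."""
    return 1000 * clabsi_count / central_line_days

# Same hospital, two surveillance periods (made-up numbers)
baseline = infections_per_1000_line_days(clabsi_count=14, central_line_days=8200)
follow_up = infections_per_1000_line_days(clabsi_count=6, central_line_days=7900)

print(f"Baseline:  {baseline:.2f} CLABSIs per 1,000 central-line-days")
print(f"Follow-up: {follow_up:.2f} CLABSIs per 1,000 central-line-days")
```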
Error reporting systems play a vital role in identifying hazards that can focus improvement efforts. Yet the biases in these reporting systems are well documented.14 Changes in event rates are more likely caused by variations in reporting thresholds and reporting subjects than by true changes in safety.14 Although the information in error reporting systems can guide where to focus improvement efforts, such information should not be used to make inferences about whether safety is improving. Events submitted to error reporting systems generally lack clearly defined numerators, denominators, and surveillance systems. Too often, hospital leaders and boards monitor error reporting data over time, assuming the trends reflect changes in safety, often unaware of how reporting bias affects changes over time.15 Conversely, rate-based measures such as rates of CLABSIs, while imperfect, can be used to monitor changes in harm rates over time and among organizations.

Although the GTT uses data from chart review rather than from self-reports by clinicians, the GTT is subject to biases similar to those of error reporting systems. Thus, it should be used more like an error reporting system (ie, to identify risks at a single point in time) than like a rate-based system (ie, to monitor changes in safety over time). Many of the events on the GTT lack clearly defined numerators, most lack clear denominators, and none have defined surveillance systems, rendering comparisons over time or among organizations biased.

To date, most rate-type measures assess a single type of harm, such as CLABSIs. In this case, the numerator is a CLABSI, the denominator is a patient-day with a central venous catheter, and the surveillance system is a trained infection preventionist who follows standardized definitions from the Centers for Disease Control and Prevention, often employing information technology, to monitor for infections. This is the rigor needed to confidently measure progress in safety, at least for an individual harm.

Although individual harm measures are important, patients, providers, and policy makers also seek global, comprehensive measures of safety. Unfortunately, developers of such measures face several daunting challenges, particularly if they seek to use these measures to assess progress over time or to compare institutions. First, researchers must compile a list of patient harms using robust methods to create broad consensus. Aside from intensive care unit (ICU) patients,16 this has not been done. Second, both harms and those at risk for harms must be explicitly defined and validly measured, yet most types of harm have poorly defined numerators and denominators. Third, a global measure of safety requires a common denominator, likely a patient admission or patient-day. For a generic patient-day to be a valid denominator, all patients must have equal risk for each harm type in the numerator. This is highly unlikely. Patient admission would be a biased denominator because patients hospitalized for a month have greater risks than those hospitalized for a day. Patient-day is equally problematic. For example, consider a global measure that included an elevated activated partial thromboplastin time (APTT) as a potential trigger. Heparin use is the major risk for an elevated APTT.
To be valid, the proportion of patients on heparin (the proportion of patients at risk for an elevated APTT) must be the same in each study population, which is unlikely. Fourth, surveillance methods to identify events must be similar over time or among hospitals, another improbability. For example, rates of venous thromboembolism differ markedly depending on the surveillance method.17 Although risk-adjusted overall hospital mortality seems like an attractive global safety measure, variations in the coding of risk factors and inadequate risk-adjustment methods create substantial bias, limiting its usefulness.18,19

The GTT, which screens a random sample of patient charts over time for over 50 triggers or clues to adverse events, also seems like an attractive global safety measure.20 Unfortunately, it faces all of the aforementioned challenges. First, it does not include all patient safety events and may miss important medical harms for specific diagnoses or procedures (eg, diagnostic errors, delirium).16 Second, most triggers are poorly specified and lack defined surveillance methods. For example, the tool searches for health care–acquired infections without requiring the validated approach used by the Centers for Disease Control and Prevention. One trigger (C-1: any procedure complication) fails to list and define these complications. The absence of explicit definitions makes the tool sensitive (detecting all potential events) rather than specific, an approach well suited to error reporting systems but problematic for rate-based measures of safety. The lack of specificity and variation in surveillance increases the likelihood that data from different hospitals or time periods will measure different things. For example, a recent study of interrater variation found that 2 independent review teams differed 69% of the time when the same charts were searched for adverse events using the GTT.21 Moreover, the teams classified the same events differently and, most important, made different inferences about changes in patient safety. The authors concluded that the measurement properties of the GTT needed further study and that the tool could not be used to measure progress in patient safety. Third, patients vary widely in their risk for events included in the GTT. An example is measure C-1, which includes in-hospital pneumothorax. In the absence of lung surgery, most patients who develop a pneumothorax do so iatrogenically, following a central line placement, thoracentesis, or biopsy. These procedures are performed on about 1 of every 100 patient-days in the hospital, and only a small fraction results in a pneumothorax. Thus, changes (over time) or variations (across institutions) in pneumothorax rates are more likely related to differences in the number of these procedures or in sampling strategies than to changes in safety. Most of the GTT measures are subject to this type of systematic and random error, and such errors are additive when the individual harms are rolled into a global measure. Fourth, while the GTT does outline the method of chart review, it does not define how to survey for events (whether a given event is recorded in the chart).
Thus, hospitals with more robust surveillance methods, such as those that perform routine ultrasound to screen for deep venous thrombosis, will have artificially inflated harm rates.22 These concerns are highlighted by the 3-fold variation (from 10% to 33%) in harm rates in the 3 GTT-based studies.1-3 Unless there are good reasons to explain why 1 set of hospitals is 3 times safer than another, such differences likely are caused by measurement error and not by true differences in safety. Notwithstanding these concerns, the GTT represents a useful tool to identify certain safety risks, and many events it identifies should be the focus of improvement opportunities. Whether the GTT is more effective and efficient than more traditional error reporting systems or simply asking staff “how the next patient will be harmed”23 remains to be determined. Still, the 3 published GTT studies are concerning because they tell us that harm is still occurring, and more frequently than anyone would like.

Evidence of Patient Safety Improvements

Shojania24 argues that we need to view safety through several different lenses to obtain a complete picture; he calls this “the elephant of patient safety.” Such a broader view suggests major improvements in safety over the past decade. The United Kingdom reported substantial reductions in some infections,25 and the Agency for Healthcare Research and Quality recently announced a 41% reduction in CLABSIs in American ICUs.26 The number of hospitals that meet the Leapfrog Group safety standards is increasing, and their in-hospital mortality is decreasing.27 Performance on The Joint Commission’s Core Measures has improved and has been associated with better outcomes.28 Moreover, The Joint Commission’s National Patient Safety Goals have prompted hospitals to concentrate on reducing harm from underdosing and overdosing of anticoagulants, hospital-acquired infections, teamwork and communication errors, handoff errors, and leadership failures. Although such process and structural changes do not guarantee improvements in safety outcomes, the explosion in implemented evidence-based safety practices is reassuring. Medical schools, residency programs, accrediting organizations, and funders have made substantial safety-oriented changes. Other organizations also have made key contributions. The National Quality Forum articulated a series of “safe practices,” and the Leapfrog Group is publicly reporting hospital compliance with these practices. Physician certification boards, professional societies, and the Accreditation Council for Graduate Medical Education are focused more aggressively on patient safety. Error reporting systems and structured methods to learn from medical errors and harms are ubiquitous in hospitals and many clinics. Many have implemented interventions to improve safety culture and teamwork. Hospital boards are increasingly engaged and strengthening accountability for hospital leaders and clinicians.29 Most medical schools and residency programs have patient safety training, and the science of safety is advancing rapidly, with a growing cadre of scholars supported by federal funders and foundations. Finally, hospitals and physicians face growing economic incentives and social pressure to provide safe care. Hospitals now must report over 250 performance measures for use in transparency and pay-for-performance programs.
Although these efforts are encouraging and demonstrate significant improvements in certain aspects of patient safety, the absence of a valid global measure to evaluate whether safety is broadly improving makes it difficult to support a claim that safety is improving overall—just as this absence makes it difficult to say that it is not. However, this literature paints a far more optimistic picture than the 3 GTT-based studies might indicate. Some kinds of harm have fallen, and rapid growth is seen in patient safety activities and practices throughout the American health care system. Our analysis provides a more optimistic counterweight to the pessimism that might be generated after reviewing the GTT-based literature. Bottom line, we really do not know whether safety is improving. To make this determination, we need a greater investment of time and money in the science of safety measurement and the implementation of systems to capture such measures.30 The federal Partnership for Patients initiative committed to a 40% reduction in 10 types of preventable harm31 but did not establish a robust measurement system to determine whether this goal has been met. Success in improving patient safety must be informed by science, embraced by clinicians, and guided by valid measures.

What’s Next?

We need more research on individual patient harms to determine which safety practices and strategies are worth pursuing, and which are lower priority. Additionally, we must develop valid, feasible, global measures of safety that can be tracked over time. To create them, the government, working with researchers, patients, and clinicians, must systematically define the universe of harms, develop rates for each type of harm, create valid methods to weight and aggregate individual harms, and test electronic methods for collecting the data. Such an approach is being pilot tested in the ICU.16 Ultimately, automating the measurement would allow efficient, real-time assessment of harm rates over time and across organizations. This would require greater collaboration than currently exists among researchers, clinicians, and electronic health record vendors. Sadly, when it comes to our national effort to improve patient safety, we do not know today whether the glass is half empty or half full. Although the 3 GTT-based studies are concerning and illustrate the need for additional efforts to reduce preventable harm, the tool’s limitations highlight the need for better measures of patient safety. Finding efficient and robust ways to determine whether medical care is safer should be a national priority.

Acknowledgment

The authors wish to thank Christine G. Holzmueller, BLA, for her assistance in editing the article. She is employed by one of the authors as a senior medical writer and editor.

Declaration of Conflicting Interests

The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Dr Pronovost reports receiving grant or contract support from the Agency for Healthcare Research and Quality, and the Gordon and Betty Moore Foundation for research related to measuring and improving patient safety; honoraria from various hospitals and health care systems, and the Leigh Bureau to speak on quality and safety; board membership with the Cantel Medical group; and book royalties from the Penguin Group.
Dr Wachter reports serving as the chair of the American Board of Internal Medicine (for which he receives a stipend); receiving a contract to his institution from the Agency for Healthcare Research and Quality for editing 2 patient safety Web sites; receiving compensation from John Wiley and Sons for writing a blog; receiving royalties from Lippincott Williams & Wilkins and McGraw-Hill for writing/editing several books; receiving support for helping to lead a leadership training program for IPC-The Hospitalist Company; serving on the scientific advisory boards for Patient Safe Solutions, CRISI, and EarlySense (for which he receives stock options); holding the Benioff Endowed Chair in hospital medicine from Marc and Lynne Benioff; and receiving funding for a sabbatical from the US-UK Fulbright Commission. He is also a member of the Board of Directors of Salem Hospital, Salem, Oregon, for which he receives travel reimbursement but no compensation.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

References

1. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363:2124-2134.
2. Classen DC, Resar R, Griffin F, et al. “Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30:581-589.
3. Office of Inspector General. Adverse Events in Hospitals: National Incidence Among Medicare Beneficiaries (Report No. OEI-06-09-00090). Rockville, MD: Department of Health and Human Services; 2010.
4. Neily J, Mills PD, Young-Xu Y, et al. Association between implementation of a medical team training program and surgical mortality. JAMA. 2010;304:1693-1700.
5. Shapiro MJ, Morey JC, Small SD, et al. Simulation based teamwork training for emergency department staff: does it improve clinical team performance when added to an existing didactic teamwork curriculum? Qual Saf Health Care. 2004;13:417-421.
6. Poon EG, Keohane CA, Yoon CS, et al. Effect of bar-code technology on the safety of medication administration. N Engl J Med. 2010;362:1698-1707.
7. Dykes PC, Carroll DL, Hurley A, et al. Fall prevention in acute care hospitals: a randomized trial. JAMA. 2010;304:1912-1918.
8. Pronovost PJ, Freischlag JA. Improving teamwork to reduce surgical mortality. JAMA. 2010;304:1721-1722.
9. Pronovost PJ, Marsteller JA, Goeschel CA. Preventing bloodstream infections: a measurable national success story in quality improvement. Health Aff (Millwood). 2011;30:628-634.
10. Lipitz-Snyderman A, Steinwachs D, Needham DM, Colantuoni E, Morlock LL, Pronovost PJ. Impact of a statewide intensive care unit quality improvement initiative on hospital mortality and length of stay: retrospective comparative analysis. BMJ. 2011;342:d219.
11. Haynes AB, Weiser TG, Berry WR, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009;360:491-499.
12. Agency for Healthcare Research and Quality. Making health care safer II: an updated critical analysis of the evidence for patient safety practices. http://www.ahrq.gov/research/findings/evidence-based-reports/makinghcsafer.html. Accessed May 3, 2013.
13. Pronovost PJ, Miller MR, Wachter RM. Tracking progress in patient safety: an elusive target. JAMA. 2006;296:696-699.
14. Pham JC, Colantuoni E, Dominici F, et al. The harm susceptibility model: a method to prioritize risks identified in patient safety reporting systems. Qual Saf Health Care. 2010;19:440-445.
15. Goeschel CA, Wachter RM, Pronovost PJ. Responsibility for quality improvement and patient safety: hospital board and medical staff leadership challenges. Chest. 2010;138:171-178.
16. Rogers RS, Pronovost P, Isaac T, et al. Systematically seeking clinicians’ insights to identify new safety measures for intensive care units and general surgery services. Am J Med Qual. 2010;25:359-364.
17. Haut ER, Pronovost PJ. Surveillance bias in outcomes reporting. JAMA. 2011;305:2462-2463.
18. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital-wide mortality rates. N Engl J Med. 2010;363:2530-2539.
19. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010;340:c2016.
20. Parry G, Cline A, Goldmann D. Deciphering harm measurement. JAMA. 2012;307:2155-2156.
21. Mattsson TO, Knudsen JL, Lauritsen J, Brixen K, Herrstedt J. Assessment of the global trigger tool to measure, monitor and evaluate patient safety in cancer patients: reliability concerns are raised. BMJ Qual Saf. 2013;22:571-579.
22. Haut ER, Schneider EB, Patel A, et al. Duplex ultrasound screening for deep vein thrombosis in asymptomatic trauma patients: a survey of individual trauma surgeon opinions and current trauma center practices. J Trauma. 2011;70:27-34.
23. Pronovost P, Weast B, Rosenstein B, et al. Implementing and validating a comprehensive unit-based safety program. J Patient Saf. 2005;1(1):33-40.
24. Shojania KG. The elephant of patient safety: what you see depends on how you look. Jt Comm J Qual Patient Saf. 2010;36:399-401.
25. Benning A, Dixon-Woods M, Nwulu U, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ. 2011;342:d199.
26. Agency for Healthcare Research and Quality. Eliminating CLABSI: A National Patient Safety Imperative. Final Report on the National on the CUSP: Stop BSI Project (Publication No. 12-0087-EF). Rockville, MD: Agency for Healthcare Research and Quality; 2012.
27. Brooke BS, Perler BA, Dominici F, Makary MA, Pronovost PJ. Reduction of in-hospital mortality among California hospitals meeting Leapfrog evidence-based standards for abdominal aortic aneurysm repair. J Vasc Surg. 2008;47:1155-1160.
28. Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability measures—using measurement to promote quality improvement. N Engl J Med. 2010;363:683-688.
29. Wachter RM, Pronovost PJ. Balancing “no blame” with accountability in patient safety. N Engl J Med. 2009;361:1401-1406.
30. Shojania KG, Thomas EJ. Trends in adverse events over time: why are we not improving? BMJ Qual Saf. 2013;22:273-277.
31. Partnership for Patients. About the partnership. http://partnershipforpatients.cms.gov/about-the-partnership/what-is-the-partnership-about/lpwhat-the-partnership-isabout.html. Accessed July 19, 2013.

Health Services Research. © Health Research and Educational Trust. DOI: 10.1111/1475-6773.12468
RESEARCH ARTICLE
Predictors of Gaps in Patient Safety and Quality in U.S. Hospitals
Lynn Unruh and Richard Hofler
Address correspondence to Lynn Unruh, Ph.D., R.N., L.H.R.M., Health Services Administration Program, Department of Health Management & Informatics, College of Health and Public Affairs, HPA-2, Rm 210-L, University of Central Florida, Orlando, FL 32816-2200; e-mail: lunruh@mail.ucf.edu. Richard Hofler, Ph.D., is with the Department of Economics, College of Business Administration, University of Central Florida, Orlando, FL.

Objective. To explore predictors of gaps between observed and best possible Hospital Compare scores in U.S. hospitals.
Data Sources.
American Hospital Association Annual Survey; Area Resource Files; Centers for Medicare and Medicaid Services Medicare Provider and Analysis Review; and Hospital Compare data.
Study Design. Using Stochastic Frontier Analysis and secondary cross-sectional data, gaps between the best possible and actual scores of Hospital Compare quality measures were estimated. Poisson regressions were used to ascertain financial, organizational, and market predictors of those gaps.
Data Extraction. Data were cleaned and matched based on hospital Medicare IDs. All U.S. hospitals that matched on analysis variables in 2007 were in the study (1,823–2,747, depending upon gap variable).
Principal Findings. Most hospitals have a greater than 10 percent gap in quality indicators. Payer mix, registered nurse staffing, size, case mix index, accreditation, being a teaching hospital, market competition, urban location, and region were strong predictors of gaps, although the direction of the association with gaps was not uniform across outcomes.
Conclusions. A significant percentage of hospitals have gaps between their best possible and observed quality scores. It may be better to use gap scores than observed scores in payment systems. More SFA research is needed to know how to lower gaps through changes in hospital and market characteristics.
Key Words. Stochastic Frontier Analysis, predictors of gaps in hospital quality

Public reporting and value-based purchasing systems use measures of quality based on averages or rankings of observed quality scores (Rosenthal and Frank 2006; Lindenauer et al. 2007; Ryan 2009). These methods provide a limited amount of information on how well hospitals are performing (Meddings and McMahon 2008). A method that provides information about how closely each hospital approaches its own best possible quality scores (or, alternatively, best possible outcomes) could provide additional information, especially to payers. Hospitals could be compared based on this rating, changes in their performance could be tracked over time, and benchmarks could be set based on the degree to which hospitals were achieving their own best possible quality scores given their resources. For payers, the method would be superior to one that ranks hospitals based on external criteria because it would not penalize hospitals that have fewer resources, but would instead reward them based on how they perform given their resources. In addition, once a hospital’s individualized quality ratings are determined, the results could also be used to assess factors that contribute to the gap in quality (the difference between the observed quality score and the best possible score) to reduce that gap.

One method for estimating gaps in quality is Stochastic Frontier Analysis (SFA). SFA assesses gaps in performance by analyzing the extent to which a producer makes the best use of a given set of inputs (capital, labor, and technology) in the productive process. A “productive process” is one that generates outcomes for a “producer”—the entity that engages in production. The measured gap between what the producer is capable of and its actual performance is called technical inefficiency, and it is due to unmeasured managerial decisions, employee actions, and technical problems within the firm. SFA deviates from standard regression models by having two error terms: the usual one, which indicates noise, and a second one, which represents the gap and is notated as u_i.
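In symbols, the specification just described can be sketched as follows; the linear-in-inputs form and the normal noise assumption are conventional illustrations added here, not details reported by the authors:

    y_i = x_i'\beta + v_i + u_i, \qquad v_i \sim N(0, \sigma_v^2), \qquad u_i \ge 0,

where y_i is hospital i's observed outcome (e.g., a 30-day readmission rate), x_i'\beta approximates the best possible (frontier) value given the hospital's inputs x_i, v_i is symmetric statistical noise, and u_i is the one-sided gap term whose estimate for each hospital is its quality gap.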
SFA starts with the idea that an actual outcome (e.g., a quality score such as a 30-day readmission rate) equals the sum of the best possible outcome and u_i. So, estimating a u_i for each hospital indicates the gap between what each hospital is capable of doing (its best possible outcome, also called the “frontier”) and its actual outcome. SFA is most commonly used to estimate efficiency, but it has also been used to estimate quality gaps in non–health care areas. Examples include performance gaps in investment (Reis 2005), management (Chen 2007), and national governance (Owen 2013). It has also been used to estimate other types of gaps (neither quality nor efficiency), such as market effort by women (Hofler and Sen forthcoming); employer and employee information in the labor market (Polachek and Yoon 1987); choosing recreational destinations (Hofler et al. 2010); alliances in biotechnology markets (Kinukawa and Motohashi 2010); optimism and pessimism (Groot and van den Brink 2007); and job satisfaction (Poggi 2010). In health care, SFA has estimated efficiency gaps in nursing homes and hospitals (Hofler and Folland 2001; Mutter, Rosko, and Wong 2008; Rosko and Mutter 2008), but to our knowledge, it has not been used to estimate quality gaps. This study uses SFA to estimate the gaps in seven Centers for Medicare and Medicaid Services (CMS) hospital quality and safety indicators (part one), and it explores the factors that contribute to those gaps (part two).

BACKGROUND

Use of Stochastic Frontier Analysis to Measure Gaps in Hospital Quality

Two factors must be at work for SFA to be applicable to estimating hospital quality gaps (Kumbhakar and Lovell 2003). First, hospitals must be utilizing inputs (capital, labor, and technology) to optimize (maximize or minimize) some outcome, such as minimizing 30-day readmissions. Second, hospital employees make decisions, take actions, and/or fail to do certain things that lead to a failure to optimize that outcome (technical inefficiency). Both of these conditions are met for estimating gaps in quality in hospitals. Capital, labor, and technology are used to maximize quality patient outcomes and minimize patient adverse events (Bazzoli et al. 2007, 2008). Capital inputs in hospitals are the physical and monetary resources of the organization. Labor inputs are the human resources such as physicians, nurses, technicians, and other staff. Technology is the state of clinical or scientific knowledge embodied in a hospital, as well as available diagnostic equipment and procedures. Capital inputs provide a well-designed physical infrastructure and the financial resources to maintain and expand physical, technical, and human resources (Bazzoli et al. 2008). Net revenue, profit margin, payer mix, and system membership are capital resources that have been linked to quality (Bazzoli et al. 2007; Mutter, Valdmanis, and Rosko 2010; Carretta et al. 2013; Eappen et al. 2013; Jiang, Friedman, and Jiang 2013).
Labor inputs enhance quality by putting the capital and technology inputs to use, and through their skills and expertise (Bazzoli et al. 2007, 2008). Total personnel, physician and RN/patient ratios, and nurse skill mix (RNs/total nurses) are indicators of labor inputs that could affect quality (Aiken et al. 2003; Unruh 2003; Cho and Yun 2009). Technology enhances quality by providing methods for employing the labor and capital inputs (Bazzoli et al. 2007). Technology indices have been linked to quality in some studies (Mukamel, Zwanziger, and Tomaszekwski 2001; Volpp et al. 2005; Bazzoli et al. 2007, 2008).

The attempt to maximize quality (minimize adverse events) may be negatively affected by factors such as administrative decision making and employee actions. These factors can be sources of inefficiencies in the use of the inputs in the production of quality. In other words, given input amounts, quality depends upon the decisions and actions taken in using those inputs. This is the source of the quality gap. Quality is also affected by factors outside the control of the organization. These are environmental (background) effects that can create either better or worse operating situations, which affects the ability to optimize the outcome. One example is additional reporting requirements by a governmental agency (e.g., CMS) that take time away from providing quality health care. Another example might be a requirement that a hospital prove meaningful usage of electronic health records, a time-consuming and nonproductive process. In regression analysis, these factors are considered to be noise and are captured in the error term. When assessing gaps in quality using SFA, these effects are not counted in the gap but rather, as in regression analysis, go into a symmetric error term.

Studies of Predictors of Quality and Safety

Although this is the first study of predictors of gaps in quality and safety in hospitals, a number of studies examine influences on quality and safety. Organizational characteristics of hospitals that are often included in studies of quality are case mix index (CMI), ownership, RN staffing, nurse skill mix, region of operation, size, teaching status, and urban or rural setting. CMI is thought to affect quality negatively, as a higher case mix indicates sicker patients with greater risk for poorer outcomes. However, if the quality measures are also risk-adjusted, it may not be significantly related to quality. Research tends to show a nonsignificant relationship (Bazzoli et al. 2007, 2008). Research results for ownership, size, teaching status, region of operation, and urban or rural setting have been mixed. These characteristics are positively related to quality in some studies (Gowrisankaran and Town 2003; Unruh 2003; Escarce, Jain, and Rogowski 2006; Rogowski, Jain, and Escarce 2007; Harrison, Lambiase, and Zhao 2010; Carretta et al. 2013; Jiang, Friedman, and Jiang 2013), but negatively or nonsignificantly related in other studies (Bazzoli et al. 2007, 2008; Rogowski, Jain, and Escarce 2007; Chukmaitov et al. 2009; Harrison, Lambiase, and Zhao 2010; Mutter, Valdmanis, and Rosko 2010; Maeng and Martsolf 2011; Smith et al. 2012). RN staffing is usually measured as the number of RNs relative to the number of patients, patient days, or hours of patient care. Nurse skill mix is the proportion of RNs to the total number of all nurses.
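As a concrete illustration of these two staffing measures, the snippet below computes them from hypothetical counts; the numbers and the scaling to 1,000 patient-days are illustrative choices, not values taken from the study.

    # Hypothetical FTE and patient-day counts; definitions follow the text and Table 1.
    rn_ftes = 420.0
    lpn_ftes = 60.0
    nursing_assistant_ftes = 120.0
    adjusted_patient_days = 150_000.0  # acuity- and outpatient-adjusted

    total_nurse_ftes = rn_ftes + lpn_ftes + nursing_assistant_ftes
    nurse_skill_mix = rn_ftes / total_nurse_ftes          # RNs / all nurses = 0.70
    rn_staffing = 1000 * rn_ftes / adjusted_patient_days  # RN FTEs per 1,000 adjusted patient-days = 2.80

    print(f"Nurse skill mix: {nurse_skill_mix:.2f}")
    print(f"RN staffing: {rn_staffing:.2f} RN FTEs per 1,000 adjusted patient-days")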
Greater values of these staffing measures are fairly consistently related to higher quality (Bazzoli et al. 2007; Cho et al. 2009; Harrison, Lambiase, and Zhao 2010; Mark et al. 2004; Unruh 2003). Joint Commission accreditation is not a common measure in studies of predictors of quality. Being accredited by the Joint Commission could be a factor in a hospital’s pursuit of quality as well as reimbursement, and therefore would be positively associated with quality. One study that examined this factor did find a positive relationship with quality (Gowrisankaran and Town 2003). Market factors that have appeared in studies of hospital quality include hospital competition, Health Maintenance Organization (HMO) market share or penetration, and the percentage of hospitals in the area that are for-profit. Hospital competition and HMO market share or penetration tend to show a positive or nonsignificant relationship with quality (Gowrisankaran and Town 2003; Rogowski, Jain, and Escarce 2007; Bazzoli et al. 2008; Harrison, Lambiase, and Zhao 2010; Mutter, Valdmanis, and Rosko 2010; Maeng and Martsolf 2011; Jiang, Friedman, and Jiang 2013). Results are mixed for the percentage of hospitals that are for-profit in the area (Bazzoli et al. 2007).

Conceptual Framework

Based on the studies above, we developed a framework for an SFA of gaps in hospital quality (part one of the study) and predictors of those gaps (part two). Figure 1 illustrates this model. In part one, the inputs into quality are capital, labor, and technology (Bazzoli et al. 2007). Capital inputs are net revenue, profit margin, payer mix, and system membership. Labor inputs that capture both the quantity and quality of labor are total personnel expenses/patient admission and nurse skill mix (RNs/total nurses). Technology inputs are represented by an index of services the hospital provides (e.g., angioplasty, cardiac catheterization) (Harrison and Sexton 2006). In part one of the study, given a hospital’s direct inputs, it could be expected to produce a certain level of quality. If that level of quality is not achieved, the hospital is operating in a way such that its actual quality and safety scores fall short of its estimated best possible values; that is, it has a quality gap. The SFA measures that gap between the expected best possible quality level, given inputs, and the actual level of quality.

[Figure 1. Gaps and Predictors of Gaps in Quality and Safety. The diagram shows capital inputs (net revenue, profit margin, payer mix, system membership), labor inputs (total personnel expenses, nurse skill mix), and technology (technology index) entering the gap analysis; with no inefficiencies they yield the best possible quality and safety, with inefficiencies they yield the observed quality and safety, and the difference is the gap. Predictors of the gap are hospital organizational characteristics (CMI, Joint Commission accreditation, ownership, region, RN staffing, size, teaching status, urban or rural location) and market characteristics (HHI, percentage of for-profit hospitals in the area).]

Other factors indirectly affect quality and safety through their influence on the process that leads to a gap. In our analysis, this relationship is analyzed in a separate second step after the gaps are estimated (part two of the study). Factors that are hypothesized to affect gaps in quality are the hospital’s organizational characteristics and its market environment (Bazzoli et al. 2007, 2008).
Hospital ownership, size, urban or rural setting, and other characteristics can affect production decisions and outcomes. For-profit hospitals, for example, may be less likely to use inputs in a way that promotes quality. Market factors affecting hospital quality could include HMO market share. Hospitals in areas with high HMO market share may compete to keep costs down, which may negatively affect the use of inputs and, therefore, quality (Bazzoli et al. 2007).

METHODS

Measures

Table 1 lists the measures for the two parts of this study, their operational definitions, and their data sources. First, the inputs and the observed quality and safety measures used in the SFA are listed. Next, the table presents the predictors of gaps in quality and safety, which are explanatory variables in the second step of the analysis. Finally, the estimated gaps in quality and safety from the SFA are listed. These form the response variables in the second step of the analysis.

Inputs and Observed Quality and Safety in SFA Analysis. Inputs were indicators of capital, labor, and technology. Capital indicators were net revenue/patient day, profit margin, payer mix (percent Medicare patients), and system membership (yes or no). For profit margin, we used the total margin: net income from total revenue/total revenue. Labor inputs were total personnel expenses/patient admission and RN full-time equivalents (FTEs)/total nurse FTEs. We were unable to include physician inputs due to lack of data. The technology indicator was a count of 12 technologies, which are listed in the table (Blegen, Vaughn, and Vojir 2008). To account for the possibility of a nonlinear relationship between inputs and quality, we included squared terms for all of the continuous variables (net revenue, profit margin, total personnel expenses, nurse skill mix). Observed quality and safety measures were seven CMS quality and safety measures from Hospital Compare: 30-day mortality and readmissions for heart failure and pneumonia, a composite patient safety indicator (PSI), and composite inpatient quality indicators (IQIs) for mortality due to surgical procedures and medical conditions (CMS 2010, 2013). Thirty-day mortality and readmissions for heart failure and pneumonia are the risk-adjusted deaths or readmissions in the hospital in the population at risk divided by the population at risk (Krumholz et al. 2006; Keenan et al. 2008). The measures have been used as outcomes in several studies (Krumholz et al. 2006; Lindenauer et al. 2007; Keenan et al. 2008). IQIs and PSIs are risk-adjusted hospital rates of mortality and patient safety events obtained from software that identifies the events based on ICD-9-CM diagnoses and procedures noted in the patient’s discharge record (Elixhauser, Pancholi, and Clancy 2005; Laditka, Laditka, and Cornman 2005).
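The following is a schematic sketch of the two-step analysis described above: step one estimates each hospital's gap with a stochastic frontier model, and step two regresses the gaps on hospital and market characteristics with a Poisson model (Table 1 follows). The simulated data, the half-normal frontier specification, the conditional-expectation gap estimator, the use of scipy/statsmodels, and the integerizing of gaps for the Poisson step are all illustrative assumptions; the article does not report its estimation code.

    # Illustrative two-step sketch with simulated data (not the authors' code or data).
    import numpy as np
    from scipy import optimize, stats
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500

    # Simulated stand-ins for the capital/labor/technology inputs and an outcome
    # where larger values are worse (e.g., a 30-day readmission rate).
    X = sm.add_constant(rng.normal(size=(n, 3)))
    true_beta = np.array([12.0, -0.5, 0.8, -0.3])
    u_true = np.abs(rng.normal(scale=1.5, size=n))   # one-sided gap (technical inefficiency)
    v_true = rng.normal(scale=0.7, size=n)           # symmetric noise
    y = X @ true_beta + v_true + u_true

    # Step 1: half-normal stochastic frontier, y = X*beta + v + u with u >= 0.
    def neg_loglik(params):
        beta, log_sv, log_su = params[:-2], params[-2], params[-1]
        sv, su = np.exp(log_sv), np.exp(log_su)
        sigma, lam = np.hypot(sv, su), su / sv
        eps = y - X @ beta
        ll = (np.log(2) - np.log(sigma)
              + stats.norm.logpdf(eps / sigma)
              + stats.norm.logcdf(eps * lam / sigma))
        return -ll.sum()

    start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [0.0, 0.0]])
    fit = optimize.minimize(neg_loglik, start, method="BFGS")
    beta_hat = fit.x[:-2]
    sv_hat, su_hat = np.exp(fit.x[-2]), np.exp(fit.x[-1])

    # Per-hospital gap estimate E[u_i | eps_i] (mean of the truncated normal).
    eps = y - X @ beta_hat
    sigma2 = sv_hat**2 + su_hat**2
    mu_star = eps * su_hat**2 / sigma2
    s_star = su_hat * sv_hat / np.sqrt(sigma2)
    gaps = mu_star + s_star * stats.norm.pdf(mu_star / s_star) / stats.norm.cdf(mu_star / s_star)

    # Step 2: Poisson regression of (integerized) gaps on hospital/market predictors.
    predictors = sm.add_constant(rng.normal(size=(n, 2)))  # e.g., CMI and RN staffing, simulated
    gap_counts = np.round(gaps * 10).astype(int)           # integerized only for this sketch
    poisson_fit = sm.GLM(gap_counts, predictors, family=sm.families.Poisson()).fit()
    print(poisson_fit.params)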
Table 1: Study Variables (each entry gives the variable, its definition, and the data source in parentheses)

Inputs in SFA
Capital
Net revenue: Net revenue/adjusted patient day (MCR)
Profit margin: Net income/revenue (revenue = total revenue from pt services + contributions + income investment) (MCR)
Payer mix: % Medicare patients (AHA)
System member: System membership, yes = 1, no = 0 (AHA)
Labor
Nurse skill mix: RN FTEs/total of all nurse FTEs (all nurses = RNs + licensed practical nurses + nursing assistants) (AHA)
Total personnel expenses: Total personnel expenses/patient admission (MCR & AHA)
Technology
Technology index: Count of 12 technologies: angioplasty, cardiac catheterization, open heart surgery, trauma center, lithotripter, hemodialysis services, CT scanner, diagnostic radio-isotope, MRI, PET scan, SPECT tomography, transplant surgery. The transplant surgery technology is comprised of 7 items. (AHA)

Observed quality and safety indicators in SFA
30-day mortality
Heart failure mortality: Risk-adjusted number of deaths in heart failure patients/number of heart failure patients (CMS)
Pneumonia mortality: Risk-adjusted number of deaths in pneumonia patients/number of patients with pneumonia (CMS)
30-day readmissions
Heart failure readmission: Risk-adjusted number of readmissions in heart failure patients/number of heart failure patients (CMS)
Pneumonia readmission: Risk-adjusted number of pneumonia readmissions/number of patients with pneumonia (CMS)
PSI composite
PSI composite of complications: Weighted average of reliability- and risk-adjusted rates of adverse events, such as pressure ulcers, postoperative respiratory failure, and postoperative sepsis (CMS)
IQI composites
IQI composite of mortality d/t surgical procedures (CMS)
IQI composite of mortality d/t medical condition (CMS)

Explanatory variables: Predictors of gaps in quality and safety
Organizational characteristics
CMI: CMS case mix index
Joint Commission: Accreditation by Joint Commission
Ownership: Whether hospital is for-profit, not-for-profit, private, government
Region: 1. New England: ME, NH, VT, MA, RI; 2. Mid-Atlantic: NY, NJ, PA; 3. South Atlantic: DE, MD, DC, VA, WV, NC, SC, GA, FL; 4. East North Central: OH, IN, IL, MI, WI; 5. East South Central: KY, TN, AL, MS; 6. West North Central: MN, IA, MO, ND, SD, NE, KS; 7. West South Central: AR, LA, OK, TX; 8. Mountain: MT, ID, WY, CO, NM, AZ, UT, NV; 9. Pacific: WA, OR, CA, AK, HI
RN staffing: RN FTEs/acuity- and outpatient-adjusted patient days
Size: Number of acute-care staffed beds
Teaching status: Member of Council of Teaching Hospitals of the Association of American Medical Colleges
Urban versus rural location: Urban–rural continuum, 9 levels based on size of population and relation to metropolitan (metro) area: 1—1 million or more in metro area; 2—250,000–1,000,000 in metro area; 3—
