Surg Endosc (2013) 27:2631–2637
DOI 10.1007/s00464-012-2771-9
A primer on standards setting as it applies to surgical education
Juan Cendan • Daryl Wier • Kevin Behrns
Received: 24 September 2012 / Accepted: 11 December 2012 / Published online: 26 January 2013
© Springer Science+Business Media New York 2013
Background Surgical technological advances in the past
three decades have led to dramatic reductions in the morbidity associated with abdominal procedures and permanently altered the surgical practice landscape. Significant
changes continue apace including surgical robotics, natural
orifice-based surgery, and single-incision approaches.
These disruptive technologies have on occasion been
injurious to patients, and high-stakes assessment before
adoption of new technologies would be reasonable.
Methods We reviewed the drivers for well-established
psychometric techniques available for the standards-setting process.
Results We present a series of examples that are relevant
in the surgical domain including standards setting for
knowledge and skills assessments.
Conclusions Defensible standards for knowledge and procedural skills will likely become part of surgical clinical practice. Understanding the methodology for determining standards should position the surgical community to assist in the process and lead within their clinical settings as standards are considered that may affect patient safety and credentialing.

J. Cendan (&)
Department of Medical Education, College of Medicine, University of Central Florida, 6850 Lake Nona Blvd., Suite 317, Orlando, FL 32827, USA

Department of Surgery, University of Central Florida, Orlando, FL, USA

Department of Surgery, University of Florida, Gainesville, FL, USA
Keywords Clinical assessment · Credentialing · High-stakes assessment · Performance metrics · Skill assessment · Standard setting
Surgical technological advances in the past three decades
have led to dramatic reductions in the morbidity associated
with abdominal procedures and permanently altered the
landscape of a practicing surgeon’s available armamentarium. Laparoscopy led the way in the late 1980s; however, significant changes continue apace, including surgical
robotics, natural orifice-based surgery, and single-incision
approaches. Although generally associated with patient
care improvements, these advances have brought temporarily disruptive changes; the initial deployments of the
technologies have on occasion been injurious to patients.
Patient safety concerns and a general intolerance for error
in our societal and medicolegal infrastructure suggest that
high-stakes assessment before adoption of new clinical
technologies will be inevitable.
Advances in innovative technical approaches and novel
devices are presented to surgeons who must then consider
clinical adoption of those devices and techniques with little
more than a description or brief skills laboratory session.
For both the clinician and the equipment developer,
eventual clinical adoption represents a balance of the
clinical relevance and the possible business marketability
(profitability) for the device. The training program
involved in widespread clinical adoption may not be the
highest-order priority in this process, which tends to be
driven by the surgical device industry. Thus, the surgical
community must help shape the process such that surgeons
adopting new techniques and processes are able to do so
through a defensible mechanism.
The principal stakeholders in any standards-setting
process are the consumers, funding organizations, and
professional representative bodies. In the case of surgeons, that would be, respectively, our patients, the insurance and governmental funding agencies that have an
interest in safe and cost-effective procedural delivery, and
those entities that license and accredit our clinical practice.
This latter group includes the hospital we operate in, the
American Board of Surgery, the Accreditation Council for
Graduate Medical Education (ACGME) and its residency
review committees, the American College of Surgeons
(ACS), and our partners and colleagues.
Surgical education in residency and beyond
The mechanisms currently in place for surgical education
broadly represent the concept of graded and supervised
responsibility. Overall this has been a success, and it is a
model replicated throughout the Americas and Europe. The
process begins with the undergraduate medical curriculum
(UGME) and follows with the residency (graduate medical
education, GME). After residency, a process of continuing
medical education (CME) comes into place, largely guided
by the certifying specialty and by state boards. Efforts to
expand tighter regulation of CME training are being
implemented to address the activities that are occurring at
the professional level when the active surgeon is no longer
working under the auspices of a controlled educational environment.

Standards in UGME have paralleled efforts in general
education with local and national standards procedures in
place for many topic areas. A longtime standard-bearer in
this arena has been the National Board of Medical Examiners, which, together with the Federation of State Medical
Boards, provides the United States Medical Licensing
Examination, a three-part examination consisting of
knowledge and skills performance assessment. The standards set in these examinations are routinely monitored and
continuously reviewed by nationally recognized experts.
In the GME arena, the ACGME is currently revising its
competence criteria to reflect developmental acquisition of
milestones. The acquisition of milestones should occur
over time and should build on each other in a progressive
manner. Over a period of time, a trainee would accumulate
knowledge and skill that would lead to competence in an area of practice.

The CME environment has lagged behind in adopting stringent processes to document the maintenance of existing skills and
competence in new skills. The majority of CME programs
remain at a level that reflects mere attendance at a function.
The ACS has initiated a program that classifies CME
activities according to five levels that reflect the involvement of the learner, ranging from class attendance (lowest)
to in situ supervision by an expert (highest level). CME is
evolving from attendance-based knowledge acquisition to a
process that includes attendance and self-assessment. The
American Board of Surgery Maintenance of Certification
(ABS-MOC, Part 2) requires 90 h of Category I CME over
the 3-year MOC cycle. Of the 90 h, 60 h must include a
self-assessment activity. The ABS also requires MOC, Part
4, which stipulates that diplomates must demonstrate
ongoing participation in a national, regional, or local outcomes database or a quality assessment program.
In GME and CME, there is a notion that an actual high-stakes assessment is unnecessary because at this stage of
professional development, the idea of continual improvement and self-education should be paramount. Optimally,
graduate students or professionals would identify an area of
weakness and develop a plan for self-education; implementation of the idea in clinical practice would be followed
by outcomes analysis. This cycle would be continually
repeated in areas in which the student or professional
exhibits weakness. However, this type of formative analysis and development does not exclude the possibility for
well-focused summative evaluations.
Surgical skills assessment has traditionally been an observational process wherein a resident is assessed in a clinical
environment; feedback is then provided, and over the
course of the resident’s training, goals of increasing complexity are achieved. This method has come under criticism
for being too subjective, prone to bias, and possibly not
representative of the resident’s entire skill set [2, 3]. The
surgical education community has made significant strides
toward objectifying the process, particularly since the
widespread adoption of surgical simulation platforms;
however, a recent meta-analysis by van Hove et al.
concluded that "most methods of skills assessment are valid
for feedback or measuring progress of training, but few can
be used for examination or credentialing." In order to sit
for the ABS qualifying examination, the applicant must not
only complete the general surgery training program, but
also demonstrate successful completion of ACLS, ATLS,
and FLS. It is worth noting that even for such a rigorous
and high-stakes assessment, the published data are scant;
there may be proprietary data, however, that are not generally available. Van Hove et al. note that of the nine
articles referencing the Fundamentals of Laparoscopic
Surgery (FLS) module, only one provides level 1b (prospective) data, with one other article presenting level 3b
(nonconsecutive) and the remaining seven having level 4
(nonmatched case series) data, despite the adoption of FLS
into the surgical certifying process in North America.
In the near future, applicants for the ABS Qualifying
Examination must be evaluated in two Clinical Assessment
and Management Exams (outpatient) (CAMEO) and two
Operative Performance Evaluations. Soon this will increase
from two evaluations for each over the 6 years to six
evaluations each over the 5 years. Additionally, applicants
will also need to demonstrate successful completion of
Fundamentals of Endoscopic Surgery (FES). This is much
like FLS; however, there is a prescribed curriculum, and
FES will be more time-intensive than FLS. In short, surgeons are likely to be subjected to high-stakes scrutiny as
part of our licensure, and just how competence will be
measured is of great consequence.
Defining standards of competence
The responsibility of determining who is competent to
practice surgery—or more generally to perform procedures—is pronounced. Particular to surgery, there are
perioperative concerns and intraprocedural variations that
complicate the standards-setting process. There is more to
the management of the operative patient than the mechanical
skill of completing the operation; the need for a justifiable,
documented, accountable, and defensible method for
declaring comprehensive competence is critical. Observational methodologies for assessing procedure-based and
nontechnical skills in context (in the operating room) are
currently being investigated. A balanced approach to any
assessment would reflect on the entire appropriateness of
care of the patient. That is, does the patient have the
appropriate indications for a procedure, and was the process
of evaluation and management leading up to the procedure
appropriate? In effect, the process of defining competence
can lead to fundamental exploration regarding how we
teach, what we want our learners to know, and how we want
them to behave.
It is worth noting that content and performance standards are not the same. Content standards refer to the
material and curriculum that learners are expected to know;
performance standards refer to the level of performance
expected from trainees. In either case, setting a standard
should pass those students who are competent and fail
those who are not. The procedure defining the standards
must include the input of experts; however, these judges
must be careful to set the passing mark at a level that is
reasonable. The process for agreeing on where that mark
should be set can be tremendously enlightening to all
involved and will generally be met with vigorous debate.

There is substantial discussion about how experts evaluate students who are deemed competent; although
tempting, it is not enough to paraphrase Supreme Court
Justice Potter Stewart in Jacobellis v. Ohio: "We know
it [competence] when we see it." Experts, in fact, may
judge students by mastery criteria rather than competence
criteria. The differences between competence criteria and
mastery criteria can be difficult to define, but the standards-setting process must be defensible, and a mastery assessment could be unfair to a surgeon who is trying to maintain
or enhance his or her practice in a competitive environment.
The marginal pass
An interesting concept in this process is the idea of the
learner who manages a marginal pass. Experts involved
in the standards-setting process must keep in mind the
examinees who are about to take the examination and the
likelihood that they will know about, or correctly perform, a particular question or skill. Furthermore, the
conceptual examinee should reflect a reasonable, or even
marginal, pass. That is, the judge cannot be thinking of the
likelihood that the "best" student or the "top 10" student
would know the required information, but rather the likelihood that someone who will be a reasonable and safe
practitioner of that specific task would know. In this process, we have found it useful to conceive of the resident
who will be safe, good in the operating room, and knowledgeable; a resident who will make sound decisions for the
patient but is not otherwise a superstar. The judges need to
couch the pass/fail mark in this reality, and all judgments
and decisions must flow from that visualization.
The difficulty in defining a marginal pass is an issue for
the ABS; the failure rate for the certifying examination has
increased to 28 %. Perhaps this failure rate has occurred
because of the difficulty in distinguishing a marginal pass
from a failure. Approaches to further standardize the
examination and evaluation are in process, but the tendency to include experts in decision-making panels may
complicate the process by pulling the expectations toward a
particular subject matter expert’s data set.
A central tenet is the idea that performance standards
must be defensible; this outcome can only be guaranteed
with a transparent and well-documented standards-setting
process. The process needs to be reasonable, thoughtful,
and systematic. In particular, the selection and training
of panelists, the sequencing of activities in the assessment,
and careful documentation are critical. Failure to be clear
on any one of these categories can lead to an invalid outcome declaration. Development of the judges is important;
for example, the judges need to understand the difference
between a passing score (e.g., the percentage correct score)
and the passing rate (e.g., the percentage of students who
pass the test at any given score). Background materials
may need to be reviewed in preparation for such a process.
The number of judges required for such a process varies in
the literature, but a minimum of six and a maximum of 12
is reflected in most topic reviews.
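The distinction the judges must grasp between a passing score and a passing rate can be made concrete. A minimal Python sketch, using hypothetical examinee scores:

```python
# Distinguish a passing score (the cut on the test) from the passing
# rate (the fraction of examinees at or above that cut).
# Hypothetical percentage-correct scores for 10 examinees.
scores = [58, 62, 66, 70, 74, 78, 80, 84, 88, 92]

def passing_rate(scores, passing_score):
    """Fraction of examinees whose score meets the cut."""
    return sum(s >= passing_score for s in scores) / len(scores)

print(passing_rate(scores, 70))  # 0.7: 7 of 10 examinees pass at a cut of 70
print(passing_rate(scores, 85))  # 0.2: raising the cut lowers the passing rate
```

The same cut score can produce very different passing rates on different cohorts, which is exactly why the judges must understand both quantities before setting a mark.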
There are a number of methods for setting performance
standards [11–13]. Generally performance can be assessed
by comparing one student to the others (relative), by setting
a specific cutoff of performance (absolute), or by creating
an assessment that reflects both an absolute and a relative
behavior—a compromise method. Additionally, in creating
a comprehensive assessment, the judges may face the need
to declare compensatory or noncompensatory standards
—that is, if the performance on isolated parts will be
averaged (poor performance can be compensated by good
performance elsewhere) or if there are components that are
critical to perform (must-pass components). In some cases,
there may be a need to mix these approaches within the
same assessment; if that is the case, each section will need
different performance metrics.
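The compensatory versus noncompensatory distinction can be sketched as follows; the section names, scores, and cut values are hypothetical:

```python
# Compensatory vs. noncompensatory aggregation of section scores.
# Hypothetical sections; "knot_tying" is flagged as a must-pass component.
sections = {"knowledge": 82, "knot_tying": 55, "suturing": 90}
must_pass = {"knot_tying": 60}   # critical components with their own cuts
overall_cut = 70

def compensatory(sections, cut):
    # Poor performance in one section can be offset elsewhere:
    # only the average matters.
    return sum(sections.values()) / len(sections) >= cut

def noncompensatory(sections, must_pass, cut):
    # Every critical component must clear its own cut in addition
    # to the overall average.
    if any(sections[name] < c for name, c in must_pass.items()):
        return False
    return compensatory(sections, cut)

print(compensatory(sections, overall_cut))                # True: mean is about 75.7
print(noncompensatory(sections, must_pass, overall_cut))  # False: knot_tying below 60
```

The same scores pass under the compensatory rule and fail under the noncompensatory one, which is why each section of a mixed assessment needs its own declared metric.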
The most familiar method of standards setting is norm
referencing. All of us experienced norm referencing in
secondary education classes, where an examination was
given and a curve was devised for the class. In essence, the
examinee is compared to a peer group and the score distribution curve determines excellent—or, conversely,
poor—performance. From the standpoint of the educator,
this process is simple and reproducible; however, the
mathematical process defines the fact that there will always
be poor performers (e.g., there is always a lowest quartile),
although the entire class may have performed exceedingly
well and in excess of a conceptual minimal standard. There
has been a shift away from this type of analysis, and we
will not provide an example of it here.
The Angoff method for establishing pass/fail standards has
a well-established history and can be easily incorporated
into assessment standards setting. The process is one where
the evaluators describe the characteristics of a borderline
examinee and share some examples of prior students. Each
judge then reviews the examination items and determines
the likelihood that the borderline student would know that
item on a 0–100 % scale, or even a binary scale (e.g., true/
false, yes/no). A recorder notes these observations on a
chart, and once complete, the entire group reviews these
metrics. The process for review must be systematic and not
favor a specific judge, such as a senior faculty member or
the chairman. It is best to have that process defined a priori.
If there are wide variations between expected results, then
the group must discuss the issue and come to a conclusion.
It is generally accepted that variations greater than 20 %
should be discussed. The average of the judges' scores for
that particular item then becomes the pass/fail mark for that
item, and the average across all items is the pass/fail mark for the examination.
The Hofstee method combines an a priori judgment about
performance on the examination with actual data from the
test takers. Judges are asked to define minimum and
maximum acceptable passing scores and failure rates.
Graphically, this creates a boundary set that is rectangular
in shape (Fig. 1), the midpoint of which becomes the
pass/fail mark. This method is best used with some prior
knowledge of performance on the examination; however, it
can be deployed from a subgroup analysis or even during
the assessment exercise using the graphical Hofstee variation. This method focuses on the performance of the
whole test, not the specific items. This is a practical method
when you have existing performance metrics and wish to
now set a pass/fail point for an examination.
Below we work through a sample case that will seem
familiar to anyone involved in surgical education, whether
at the UGME or GME level. We will demonstrate several
of the aforementioned techniques for standards setting.
Our program faced a recent challenge that forced us to
consider these steps. In particular, we were preparing to
start a series of new residencies that will incorporate
ACGME milestones criteria from inception. Several of the
milestones criteria required standardized patient (SP)
encounters and resident expertise with patient interviewing
skills. Our SP platform had not yet been developed to the
point where summative evaluation was possible, and we
had to complete a process of standards setting through a
combination of Angoff-style judgment-based reviews of
individual assessment items, a group declaration of Hofstee
limits for pass/fail, and construction of borderline regression curves for the individual stations, which is beyond the
scope of this article. We now have in place defensible
performance standards for these stations and feel confident
that residents who perform beyond a certain score should
be passed and those below should fail and thus require remediation.
Fig. 1 Rectangular boundary set indicating the pass/fail mark
Table 1 Statements used to tabulate the validity of a test according to the Hofstee method

Minimum fail rate: "The lowest acceptable percentage of residents that will fail the exam is…"
Maximum fail rate: "The highest acceptable percentage of residents that will fail the exam is…"
Minimum pass score: "The lowest acceptable percentage-correct score that allows a borderline resident to pass is…"
Maximum pass score: "The highest acceptable percentage-correct score that allows a borderline resident to pass is…"
You are asked by your chair to create a comprehensive,
high-stakes exam for residents that incorporates multiple
choice questions (knowledge) and procedures (skills). This
assessment will determine whether the resident can pass
from junior to senior level (high stakes, must be defensible). Where do you start? The program director wrote an
exam 2 years ago and has prior performance data but has
not formally declared a pass/fail mark for the assessment;
furthermore, she has decided to add a new 10-station skills
examination to this end-of-year evaluation. Thus, the entire
instrument will have 100 multiple-choice questions and 10
skills-station performance evaluations. This standards-setting procedure mirrors what surgeons face in the ABS
qualifying examination (multiple-choice questions with
one correct response) as well as the certifying examination
(oral exam questions that may pose a broader range of
acceptable behaviors). The process for determining pass/
fail standards is consistent with our description herein.
You need to convene a representative panel and are
fortunate to have 10 judges. The panelists must have a
reasonable understanding of the actual level of the learner
and a grasp of the range of behaviors that are observed at
that level. The panel could include the program director, a
chief resident, and a minimally invasive surgery fellow
with an interest in education, then be filled out with other
faculty members. The general faculty members could be
specialists as long as they fit the aforementioned caveat—
that is, realistic expectations specific to the learner and
considerate of the concept of the marginal pass.

Table 2 Tabulating the test validity of statements in Table 1 according to the Hofstee method
Because you have prior performance data for the
assessment, it is time to consider it. The panel learns that
the prior average was a 79.2 %, the minimum–maximum
range was 58–92 %, and all questions were answered
correctly by at least 40 % of test takers. The panel may
decide that this is a good exam; thus, they are not interested
in reviewing each of the 100 questions but would rather
take a global approach to the exam using the Hofstee
method. You should then ask the judges to declare an
answer to the four statements in Table 1 and
tabulate these according to Table 2.

Table 3 Example of the dichotomous method

Able to tie intracorporeal knot in <60 s: 0 (no) or 100 (yes), averaged across judges
. . . for every question on the assessment
Overall pass/fail: mean of means
You create a graphic that represents the boundaries of the
judgment ranges and overlay it with the actual cumulative
performance of the residents taking the exam. The responses
from your panel are represented in Fig. 1. The point at which a
diagonal line across the rectangle intersects the actual performance curve is the pass/fail point for the whole assessment.
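The Hofstee computation just described can be sketched in Python. The panel bounds and examinee scores below are hypothetical; the diagonal runs from (minimum passing score, maximum fail rate) to (maximum passing score, minimum fail rate), and the cut is taken where the observed cumulative failure curve crosses it:

```python
# Hofstee method: intersect the diagonal of the panel's boundary
# rectangle with the observed cumulative failure curve.
# Panel bounds (hypothetical), scores in percent correct.
min_score, max_score = 55.0, 75.0   # min/max acceptable passing score
min_fail, max_fail = 0.05, 0.30     # min/max acceptable fail rate

# Hypothetical examinee scores.
scores = sorted([58, 61, 64, 66, 69, 71, 74, 77, 80, 85, 88, 92])

def fail_rate(cut):
    # Fraction of examinees scoring below the cut.
    return sum(s < cut for s in scores) / len(scores)

def diagonal(cut):
    # Line from (min_score, max_fail) down to (max_score, min_fail).
    t = (cut - min_score) / (max_score - min_score)
    return max_fail + t * (min_fail - max_fail)

# Scan for the score at which the observed curve crosses the diagonal.
cut = min_score
while cut < max_score and fail_rate(cut) < diagonal(cut):
    cut += 0.5
print(round(cut, 1), fail_rate(cut))  # 64.5 0.25
```

With these data the crossing occurs at a cut of 64.5 % correct, which fails 25 % of the cohort; both values sit inside the panel's declared rectangle, which is the point of the method.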
It is possible that the panel decides to review each and every
question in order to determine the pass/fail point. The Angoff
methodology (demonstrated in the subsequent paragraphs) has
been used by psychometricians to address this need. Although
time-consuming, it can reveal wide ranges of expectations
among the panelists (important for the program to address) and
may also reveal areas of consistent expectations (which can be
developed into must-pass stations, if desired).
Creating a pass/fail for the skills portion of the examination leaves us with two new quandaries: first, the assessment
is new, and there are no prior performance metrics to rely on;
and second, the individual assessments are not in traditional
multiple-choice format. We can utilize the Angoff method
for setting standards for this new examination as long as the
judgment panel has a reasonable concept of what is being
measured and what the performance should be.
Sharing this information is the responsibility of the
assessment creator, and there are a number of ways to
address the problem in an informed manner. For example, a
station that utilizes a virtual reality simulator metric may
have published performance data that can inform the
process. For example, the time to complete a particular
activity on your simulator might be 60–90 seconds for the 25th–75th
percentile of all users. If this is the case, your judgment
panel may use that information in the context of your
specific residents. It is worth noting that this type of
information is useful, but final judgment requires local
reckoning with the actual level of the learner, access to
equipment, degree of supervision, and so on.
Some components of the skills examination may be
dichotomous (e.g., they tied the knot), and some may be
graded along a time or other scale. Scaled numbers can be
averaged across judges, whereas dichotomous items can be
assigned a 0 or 100 and thus averaged across the entire
assessment. An example of this method is shown in Table 3.
It is worth noting that the judges could decide that inability to
pass a particular dichotomous item translates to an overall
failure. This invokes a separate type of process that we will
not discuss further, but it also requires significant discussion
and agreement among the judges—a minimal skill set.
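The mean-of-means tabulation of Table 3 can be sketched in Python; the stations and judge ratings below are hypothetical, with dichotomous judgments coded 0 or 100 so that they can be averaged alongside scaled items:

```python
# Mixing dichotomous and scaled items in one skills standard
# (Table 3 style): each judge rates the borderline resident;
# dichotomous judgments are coded 0 or 100 so everything averages.
# Station names and ratings are hypothetical.
judgments = {
    "tie intracorporeal knot in <60 s": [100, 0, 100, 100],  # dichotomous
    "laparoscopic suture placement":    [60, 70, 65, 75],    # scaled 0-100
    "specimen extraction":              [80, 90, 85, 85],    # scaled 0-100
}

# Per-station cut: mean across judges.
station_cuts = {name: sum(v) / len(v) for name, v in judgments.items()}

# Overall pass/fail mark: mean of the station means.
pass_fail_mark = sum(station_cuts.values()) / len(station_cuts)

for name, cut in station_cuts.items():
    print(f"{name}: {cut:.1f}")
print(f"overall pass/fail mark: {pass_fail_mark:.1f}")  # 75.8
```

A must-pass variant would simply check the critical stations against their own cuts before applying the overall mark, as discussed above.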
The standards-setting process is laborious and can create
great discomfort. Professionals have long worked under the
premise that part of their charge was to self-regulate and,
by extension, accept the responsibility of their work
product and its consequences. The development and
adoption of novel surgical technologies and the confluence
of the patient safety movement, a societal move toward
regulation, and risk minimization lead us to consider the
process of skills assessment for high-stakes activities. The
surgical community should lead the way in defining
appropriate knowledge and skills metrics along the learning arc from UGME to GME and into the CME environment. This document can serve as a primer,
introducing some of the general concepts driving
the process. The principal concepts are that any standards-setting process should be (1) deliberate and open for critique and (2) purposefully fair to those being assessed,
while (3) continuing to couch the relevance of the process
within the higher goal of patient safety.
An entire psychometrics literature exists to support these
endeavors, and we review only the basics here. Although
the term marginal may seem pejorative, it is a critical
concept in the process, and defining that level crystalizes
our understanding of knowledge and performance expectations. We maintain that the standards-setting process can
be tremendously enlightening to everyone involved in
forming the curriculum, the assessing body, and eventually
the person being evaluated. Surgeons must participate in this process.
Disclosures Drs. Cendan, Wier, and Behrns have no conflicts of
interest or financial ties to disclose.
1. Southgate L, Hays RB, Norcini J, Mulholland H, Ayers B,
Woolliscroft J, Cusimano M, McAvoy P, Ainsworth M, Haist S,
Campbell M (2001) Setting performance standards for medical
practice: a theoretical framework. Med Educ 35:474–481
2. Darzi A, Smith S, Taffinder N (1999) Assessing operative skill
needs to become more objective. BMJ 318:887–888
3. Taffinder N, Sutton C, Fishwick RJ, McManus IC, Darzi A
(1998) Validation of virtual reality to teach and assess psychomotor skills in laparoscopic surgery: results from randomised
controlled studies using the MIST VR laparoscopic simulator.
Stud Health Technol Inf 50:124–130
4. van Hove PD, Tuijthof GJ, Verdaasdonk EG, Stassen LP,
Dankelman J (2010) Objective assessment of technical surgical
skills. Br J Surg 97:972–987
5. Searle J (2000) Defining competence—the role of standard setting. Med Educ 34:363–366
6. Marriott J, Purdie H, Crossley J, Beard JD (2011) Evaluation of
procedure-based assessment for assessing trainees’ skills in the
operating theatre. Br J Surg 98:450–457
7. Crossley J, Marriott J, Purdie H, Beard JD (2011) Prospective
observational study to evaluate NOTSS (non-technical skills for
surgeons) for assessing trainees’ non-technical performance in
the operating theatre. Br J Surg 98:1010–1020
8. Brennan (1964) Jacobellis v Ohio. Thomson Reuters. http://case
9. Zieky M (ed) (2001) So much has changed: how the setting of
cutscores has evolved since the 1980s. Lawrence Erlbaum
10. Hambleton RK, Brennan RL, Brown W, Dodd B, Forsyth RA,
Mehrens WA, Nellhaus J, Reckase M, Rindone D, van der Linden
WJ, Zwick R (2000) A response to "Setting reasonable and
useful performance standards in the National Academy of Sciences' Grading the Nation's Report Card". Educ Measure Issues
11. Downing SM, Tekian A, Yudkowsky R (2006) Procedures for
establishing defensible absolute passing scores on performance
examinations in health professions education. Teach Learn Med
12. Cusimano M (1996) Standard setting in medical education. Acad Med
13. Cusimano MD, Rothman AI (2003) The effect of incorporating
normative data into a criterion-referenced standard setting in
medical education. Acad Med 78(10):S88–S90
14. Hofstee W (1983) The case for compromise in educational
selection and grading. Jossey-Bass, San Francisco
15. DeGruijter D (1985) Compromise models for establishing
examination standards. J Educ Measure 22:263–269
16. Gormley G (2011) Summative OSCEs in undergraduate medical
education. Ulster Med J 80(3):127–132
17. von Websky MW, Vitz M, Raptis DA, Rosenthal R, Clavien PA,
Hahnloser D (2012) Basic laparoscopic training using the Simbionix LAP Mentor: setting the standards in the novice group.
J Surg Educ 69(4):459–467
Working Smart: a professional practice forum
Building Interoperability Standards
and Ensuring Patient Safety
By Michael Glickman, MSE, and Anna Orlova, PhD
ANYONE WHO HAS ever developed a standard knows well the
many challenges that must be surmounted. Once a standard is
published, however, it’s not the end but in many respects only
the beginning. Moving standards from specification to practice
requires an equivalent if not greater effort, as does ensuring
that standards are not stuck at a point in time but are “living”
and are periodically updated to reflect experience from users as
well as advances in the state-of-the-art health information and
communication technology. More importantly, individual standards have to work together to enable information sharing and
interoperability across various health information and communication technology (HICT) products.
Sixteen years of standards development have led the
International Organization for Standardization (ISO) Technical Committee 215, Health Informatics (ISO/TC 215), to the
practical realization that a "bundle" of individual standards
is required to create interoperable health information technology (health IT) standards that will ensure both adoption and patient safety.
Building Interoperability Standards
A bundle of individual standards that work together to enable
interoperability represents a high-level standard specification—an assembly of individual standards that move information from sender to receiver. Interoperability standards are
harmonized and integrated individual standards constrained
to meet healthcare and business needs for sharing information
among organizations and systems for a specific scenario (use
case) of health information exchanges.
48 / Journal of AHIMA November–December 15

According to the Health Level Seven (HL7) definition, interoperability comprises the following three components (pillars):
1. Semantic interoperability—shared content
2. Technical interoperability—shared information exchange
3. Functional interoperability—shared rules of information exchanges (i.e., business rules and information governance (IG), "the rules of the road")
Thus, the interoperability standard—a bundle or assembly of
individual standards—has to include individual standards from
these three components of interoperability. The concept of a "bundle" of individual standards working together was first introduced by the Health Information Technology Standards Panel (HITSP, www.hitsp.org) in 2005. HITSP operated as a public-private collaborative supported through a contract from the Office of the National Coordinator for Health IT (ONC) to the American National Standards Institute (ANSI). The HITSP bundle was formally called an "Interoperability Specification (IS)."
Between 2005 and 2009, HITSP developed 19 ISs for various national use cases including Electronic Health Record (EHR) Laboratory Result Reporting (IS 01), Biosurveillance (IS 02), Consumer Empowerment (IS 03), Quality (IS 06), and Consultation
and Transfer of Care (IS 09), among many others.
HITSP IS included specific individual standards grouped by
the following categories:
– Data Standards (vocabularies and terminology standards)
– Information Standards (reference information models,
information templates, and other)
– Information Exchange Standards (message- and document-based exchanges)
– Identifier Standards
– Privacy and Security Standards
– Functional Standards (requirements for health information and communication technology derived from
the analysis of the use case)
– Business Processes Standards (guidelines and best practices described in the use cases)
For example, HITSP Biosurveillance IS 02 included 110 individual standards (see Figure 1).1 This assembly of standards supported a charge formulated in the National Biosurveillance Use Case: transmitting "essential data from electronically enabled healthcare to authorized public health agencies in real-time."
Essential data included 40 data elements defined by the Centers for Disease Control and Prevention (CDC). The biosurveillance use case was the first of the three national use cases developed for HITSP by the American Health Information Community (AHIC), an ONC advisory committee that identified priorities for health IT interoperability and developed national use cases.
The first three use cases included biosurveillance, EHR laboratory result reporting, and consumer empowerment. A total of
152 national use cases were developed by AHIC between 2005
and 2009. These use cases served as business requirements for
the HITSP interoperability specifications.
Building on the HITSP methodology, ISO/TC 215 decided to move forward with developing interoperability standards. The working title for the ISO "bundle" is "Reference Standards Portfolio (RSP)." The first domain selected for developing an ISO RSP is clinical imaging. The work has been conducted in collaboration between ISO/TC 215 and DICOM, a standards development organization.2
ISO RSP includes standards for content and payload (semantic interoperability), transport (technical interoperability), and
rules (functional interoperability) (i.e., standards for information governance and information management practices—
which are strategic AHIMA imperatives).3,4,5,6
Critical constituents of the RSP bundle’s functional interoperability standards include standards that enable data capture (information availability), data quality validation (data integrity), data
protection (capture of patient consent for healthcare procedure as
well as information sharing; protection of privacy, confidentiality,
and security of information), and other standards for information
governance principles in healthcare defined by AHIMA.7
ISO RSP also defines conformance criteria, which are
statements that specify how various individual standards
should work together. These criteria will be used by vendors to test RSP and to deploy standards into their products.
They also will be used in HICT certification processes, so
users know that the product is compliant with the interoperability standard.
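Conformance criteria of this sort lend themselves to automated checking. The sketch below is purely illustrative: the `Bundle` class, the criterion wording, and the clinical-imaging example are hypothetical, not taken from the ISO RSP; only the three HL7 pillars come from the text above.

```python
# Hypothetical sketch: an interoperability "bundle" checked against
# simple conformance criteria (names and structure are illustrative).
from dataclasses import dataclass, field

PILLARS = ("semantic", "technical", "functional")  # HL7's three components


@dataclass
class Bundle:
    name: str
    standards: dict = field(default_factory=dict)  # pillar -> list of standards

    def conforms(self) -> list:
        """Return the criteria the bundle fails; an empty list means conformant."""
        failures = []
        for pillar in PILLARS:
            if not self.standards.get(pillar):
                failures.append(f"missing {pillar} interoperability standards")
        return failures


rsp = Bundle("Clinical Imaging RSP", {
    "semantic": ["DICOM content standards"],
    "technical": ["DICOM transport standards"],
})
print(rsp.conforms())  # -> ['missing functional interoperability standards']
```

A vendor-facing test harness would replace the presence check with criteria statements specifying how the individual standards must work together.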
Maintenance is important to keep the RSP up-to-date. To ensure that standards remain relevant, ISO has developed directives that govern all ISO standards. One of these is the compulsory five-year systematic review, in which standards developers determine whether a standard (a) is still relevant and in use, (b) is no longer needed and should be retired, or (c) is in use but should be revised to ensure its continued value to the industry. Adoption and continuing feedback from users regarding standards-based capabilities allow the developers to keep the standard updated.

[Figure 1: Number of Individual Standards Included in the HITSP Biosurveillance IS 02, counted across Semantic Interoperability (Content), Technical Interoperability (Transport), Information Exchange Standards, Privacy and Security Standards, Functional Interoperability (Rules), and Business Processes Standards (guidelines, best practices, use cases)]

Figure 2 on page 50 presents the interoperability standard framework with various RSP components and enablers.
Ensuring Patient Safety through Standardization
A key to health IT adoption is enabling patient safety in the use of standards-based health information and communication technology. Specific aspects of ensuring health technology safety through standardization are specified in the ISO/IEC 80001 standard, published in 2010.8 It came up for systematic review in 2015. This standard was born out of the
recognition that networked medical devices are increasingly
being deployed on general purpose IT infrastructure. Though
the manufacturers have to rigorously apply risk management
to identify and manage potential safety issues and receive regulatory clearance to place their technology on the market, once
the product is purchased, implemented, and placed in use, risk
management processes are rarely applied to the resulting network of integrated devices, health information and communication technology systems, and applications. Unintended consequences that compromise patient safety had been occurring
far too frequently, and overall confidence in the technology had been suffering accordingly.

[Figure 2: Interoperability Standard Framework: Components and Enablers]
Specific safety risks associated with non-interoperability of
health IT products include:
– Data quality: misidentification and integration of patient data from multiple sources (record matching on a patient represents a critical record management step, so that the information from one patient cannot be added to the chart of another patient)
– Data accuracy, availability, and integrity issues due to configuration, security, or IT operations failures
– Decision support failures due to incorrect or outdated medical logic, reference data, algorithms, or alert triggers
– Failures and inconsistencies in delivery, integration, or presentation of diagnostic information results
– Failures and inconsistencies in delivery, integration, or presentation of therapy information (such as radiotherapy information)
– Insufficient attention to workflow, human factors, change management, or training of clinicians
– Privacy breaches, data governance issues, or other causes that erode provider and consumer confidence
The risk management for health information and communication technology can be formulated in four questions:
1. What can go wrong?
2. How can it happen?
3. What can be done about it?
4. How do we know we have done enough?
The ISO/IEC 80001 standard, evaluated under the ISO systematic review process, demonstrated that:
– Yes, the ISO/IEC 80001 standard remains highly relevant, even more so given the increasingly complex health IT environments and the increased integration of medical devices and various health IT products
– No, the ISO/IEC 80001 standard has not been widely implemented; nevertheless, it is widely recognized as a key component of addressing safety and security when interoperable technology is deployed
– Yes, the state-of-the-art health information and communication technology has been advanced
– Yes, much has been learned about what is needed to ensure the safe use of information in healthcare

There is a need for a new understanding of medical device safety, and of the safety of any collection of objects running software and being connected, such as the Internet of Things, in the context of a specific use or use case.
The ISO/TC 215 Health Software Ad Hoc Group looked at the broader issue of health software safety standards, concluding that "while our initial focus was on health software, we have recognized that the architecture of health software safety standards must also address the safety of the broader health IT system, and the socio-technical environment of which health software is a component."
This "environment" includes not only the information technology (i.e., hardware, software, networks, interfaces to other systems and data), but also the:
– People (i.e., clinicians, patients, consumers, caregivers, administrators)
– Care processes (i.e., clinical workflow, decision algorithms, and care protocols)
– Organization (i.e., capacity, governance, configuration decisions about how health IT is applied)
– External environment (i.e., regulations, public opinion)
The group further focused on defining an end-to-end safety management strategy, leveraging standards in areas such as risk, quality, security, IT lifecycle, and information governance, and identifying gaps that need to be filled. The report, finalized during spring 2015, identified the technology lifecycle over which safety must be established and maintained. Eight key topics are integral to achieving health information and communication technology safety. Grouped under three categories—people, technology, and policies—they include:
1. Organization’s culture, roles, and competencies
2. Human factors, usability, and change management
3. Systems and software lifecycle processes
4. Safety management processes across software lifecycle
5. IT and information governance
6. Risk management
7. Quality management
8. Information privacy and security management
The ISO/IEC 80001 standard is an example of a standard that will be included in the ISO/TC 215 RSP (bundle) to ensure that standards included in the RSP properly address risks associated with the semantic, technical, and functional components of interoperability.
Notes
1. Health Information Technology Standardization Panel (HITSP). "Biosurveillance Interoperability Specification (IS) Number 02." 2009. www.hitsp.org/InteroperabilitySet_Details.aspx?MasterIS=true&InteroperabilityId=49&P
2. Digital Imaging and Communication in Medicine (DICOM). http://dicom.nema.org.
3. Tech Terms. "Payload definition." www.techterms.com/
4. Cohasset Associates and AHIMA. "A Call to Adopt Information Governance Practices: 2014 Information Governance in Healthcare." 2014. www.ahima.org/~/media/
5. Cohasset Associates and AHIMA. "Professional Readiness and Opportunity: 2015 Information Governance in Healthcare." 2015. www.ahima.org/~/media/AHIMA/Files/HIMTrends/IGSurveyWhitePaperCR_7_27.ashx?la=en.
6. AHIMA. "A Call to Adopt Information Governance… ."
8. International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). "ISO/IEC 80001-1:2010. Application of risk management for IT-networks incorporating medical devices—Part 1: Roles, responsibilities and activities." October 1, 2010. www.iso.org/iso/catalogue_detail.
Michael L. Glickman (MGlickman@CNAInc.com) is CEO of Computer Network Architects and chair of ISO/TC 215 Health Informatics. Anna Orlova
(firstname.lastname@example.org) is senior director for standards at AHIMA and an
ISO/TC 215 member.
Copyright of Journal of AHIMA is the property of American Health Information Management
Association and its content may not be copied or emailed to multiple sites or posted to a
listserv without the copyright holder’s express written permission. However, users may print,
download, or email articles for individual use.
Rozovsky, F. A., & Woods, J. R., Jr. (Ed.). (2011). The handbook of patient safety
compliance: A practical guide for health care organizations. San Francisco, CA:
Jossey-Bass. ISBN: 9781118086995.
• Read Chapter 5, “Medication Error Reduction: Voluntary and
Regulatory Oversight,” pages 64–95.
MEDICATION ERROR REDUCTION: Voluntary and Regulatory Oversight
David M. Benjamin
John P. Santell
“Modern health care presents the most complex safety challenge of any activity on earth. However, we
have failed to design our systems for safety, relying instead on requiring individual error-free
performance enforced by punishment, a strategy abandoned long ago by safer industries such as
aviation and nuclear power.” So say Lucian Leape and colleagues in a JAMA editorial (Leape and others,
1998). Indeed, when you stop to think about it, why is it that the connector fittings on intravenous (IV) lines and nasogastric (NG) tubes fit one another? And aren't you glad that anesthesia machines
can no longer deliver an anesthetic gas unless the oxygen is running?
Unfortunately, when it comes to health care, Murphy’s Law not only applies but leads one to the
inescapable conclusion of Murphy’s disciple, O’Brien, who said of his mentor, “Murphy is an optimist!”
The problem is that Leape is right; modern health care is so complex that physicians, nurses, and
pharmacists can no longer carry around in their heads all the information they need to practice their
professions. Moreover, the health care system is designed not to prevent error but rather in a manner
that places the responsibility of error prevention squarely on health care practitioners. There are still
many common practices that can lead to unintended tragedy, such as allowing nurses access to harmful
floor stock items like vials of concentrated potassium chloride (KCl), rather than restricting such items to
the pharmacy or operating room pharmacy satellites. Concentrated potassium chloride is one of the
three drugs used in lethal injection, and it definitely will adversely affect a patient in one’s institution if it
is given without proper dilution (see 2004 Patient Safety Goal 3a of the Joint Commission on Accreditation of Healthcare Organizations [JCAHO] in Exhibit 5.1). And there are other problem-prone
practices. Many hospital pharmacies stock medications alphabetically, with the result that different
products with similar-looking names (for example, Lamisil and Lamictal) are right next to each other on
the shelf, leading at times to the dispensing of the wrong drug. According to data collected by the two
medication error reporting programs of the United States Pharmacopeia (USP), confusion because of the
similarity of drug names when either written or spoken (look-alike or sound-alike) accounts for
approximately 5 to 15 percent of all reported errors (Hicks, Cousins, and Williams, 2003; USP, 2001). Still
not convinced? Remember when omeprazole first came out on the market for the treatment of gastric
hyperacidity? Its brand name then was Losec. After numerous prescriptions for Lasix 50 mg were
mistakenly and unfortunately dispensed as Losec 50 mg, the manufacturer changed the brand name of
the drug to Prilosec. This case signals a new awareness by manufacturers that product naming and
labeling play a role in medication errors. Medical and medication errors are emerging from their cloak of
invisibility, and health care is entering the age of self-critical analysis slowly. Organizations in the private
sector like the National Patient Safety Foundation, the Institute for Healthcare Improvement, and the
Leapfrog Group have joined governmental agencies like the Agency for Healthcare Research and Quality
(AHRQ) and the Food and Drug Administration (FDA) in jumping on the ambulance en route to treating
our ailing health care system.
What is the result of putting the practice of medicine under the microscope? Physicians, patient safety
experts, systems engineers, and software manufacturers have identified many of the most common
system errors and unsafe practices that occur in the health care setting and are in the process of
redesigning systems to decrease the likelihood of making an error. In addition, our legislators have proposed a new law, the Patient Safety and Quality Improvement Act, that not only
encourages the reporting of medical and medication errors to newly designated patient safety
organizations (PSOs) but also provides legal privilege for such documented reporting. According to this
proposed Act, in order to be certified as a PSO an organization must meet certain specified criteria. It
must conduct activities that improve patient safety and the quality of health care delivery. It must not
have any conflicts of interest with its providers. The PSO must have appropriately trained staff, including
licensed or certified medical professionals. The PSO cannot be a component of an insurance company. It
must operate independently, and it must collect patient safety data in a standardized manner that
permits valid comparisons of similar cases among similar providers.
EXHIBIT 5.1. 2004 JCAHO NATIONAL PATIENT SAFETY GOALS.

Improve the accuracy of patient identification.
– Use at least two patient identifiers (neither to be the patient's room number) whenever taking blood samples or administering medications or blood products.
– Prior to the start of any surgical or invasive procedure, conduct a final verification process, such as a "time out," to confirm the correct patient, procedure, and site, using active—not passive—communication techniques.

Improve the effectiveness of communication among caregivers.
– Implement a process for taking verbal or telephone orders or critical test results that requires a verification "read-back" of the complete order or test result by the person receiving the order or test result.
– Standardize the abbreviations, acronyms, and symbols used throughout the organization, including a list of abbreviations, acronyms, and symbols not to use.

Improve the safety of using high-alert medications.
– Remove concentrated electrolytes (including, but not limited to, potassium chloride, potassium phosphate, sodium chloride >0.9%) from patient care units.
– Standardize and limit the number of drug concentrations available in the organization.

Eliminate wrong-site, wrong-patient, wrong-procedure surgery.
– Create and use a preoperative verification process, such as a checklist, to confirm that appropriate documents (e.g., medical records, imaging studies) are available.
– Implement a process to mark the surgical site and involve the patient in the marking process.

Improve the safety of using infusion pumps.
– Ensure free-flow protection on all general-use and PCA (patient-controlled analgesia) intravenous infusion pumps used in the organization.

Improve the effectiveness of clinical alarm systems.
– Implement regular preventive maintenance and testing of alarm systems.
– Assure that alarms are activated with appropriate settings and are sufficiently audible with respect to distances and competing noise within the unit.

Reduce the risk of health care–acquired infections.
– Comply with current CDC hand hygiene guidelines.
– Manage as sentinel events all identified cases of unanticipated death or major permanent loss of function associated with a health care–acquired infection.

Source: JCAHO, 2004.
It is to be hoped that in the near future, practitioners and hospitals no longer will need to be concerned
that quality improvement investigations or root cause analyses will be used against them as evidence of
negligence in any subsequent legal proceeding. Moreover, PSOs will have opportunities to analyze error
reports and issue educational updates on the frequency and severity of reported medication errors.
Programs currently in place already provide hospitals a way to measure (or compare) their improvement
in medication safety and to determine where they have progressed and where more effort is still
needed. Such programs will be of greatest value only if and when legislative protection for submitted
data is enacted.
One such existing program is the U.S. Pharmacopeia’s MEDMARX error reporting program, which
facilitates analysis of medication errors in the institutional setting. This anonymous, confidential,
deidentified, Internet-accessible program allows hospitals and other health care facilities to report,
track, and share medication error data in a standardized format. MEDMARX uses the nationally
recognized National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP)
taxonomy, which includes an index for categorizing error events by severity and outcome and allows the
capture of both potential and near-miss events as well as harmful errors. This system for categorizing
medication errors is summarized in Exhibit 5.2.
Recently, USP summarized data findings for the three-year period covering January 1, 1999, to December 31, 2001 (Santell, Hicks, McMeekin, and Cousins, 2003). During that period, USP examined
154,816 medication error records that were reported to the MEDMARX database. Errors labeled as
Category C made up approximately 47 percent (72,000 out of 154,816) of the total. Category B was the
next most frequently cited error category (32 percent). Records citing Category E errors occurred only
2.2 percent of the time. Category F errors occurred only 0.4 percent of the time (4 times per 1,000
errors). The most serious medication errors, in categories G, H, and I, occurred only 1, 3, and 1 time(s) in
10,000, respectively, and medication errors in categories E through I, representing all harmful errors and
those requiring intervention or hospitalization, occurred 2.63 percent of the time overall.
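The headline rates above can be recomputed from the reported figures; a quick sketch in Python (only Category C's raw count is given in the text, so the other categories are expressed from their quoted rates):

```python
# Reproducing the quoted MEDMARX rates (1999-2001, 154,816 records).
total_records = 154_816
category_c = 72_000  # raw count given in the text

c_share = category_c / total_records
print(f"Category C: {c_share:.1%}")  # -> Category C: 46.5% ("approximately 47 percent")

# Category F is quoted as "4 times per 1,000 errors":
print(f"Category F: {4 / 1_000:.1%}")  # -> Category F: 0.4%

# Categories G, H, and I: 1, 3, and 1 per 10,000, respectively:
g_h_i_per_10k = (1, 3, 1)
print(sum(g_h_i_per_10k) / 10_000)  # -> 0.0005, i.e., 0.05% of records
```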
Unfortunately, many hospitals have medication safety reporting systems (whether for medication errors
or adverse drug reactions) that suffer from inadequate or inconsistent data capture. This results in great
underreporting and causes frustration for those charged with analyzing and communicating the findings
because key data elements often are missing. Other hospitals are diligent about collecting complete
data but are uncertain how best to use those data. To help transform data into useful information, USP
has constructed a medication safety initiative model for hospitals (Figure 5.1). The model is based on
USP’s work with practitioners and health systems, work with error reporting programs, and experience
with the issues of medication safety. It consists of four stages that deal with the environment and culture, data collection, data analysis, and assessment of the impact of actions taken in response to the findings.
EXHIBIT 5.2. NCC MERP ERROR CATEGORY INDEX FOR SEVERITY LEVELS AND OUTCOMES.

No error
Category A: Circumstances or events that have the capacity to cause error.

Error, no harm
Category B: An error occurred, but the error did not reach the patient.
Category C: An error occurred that reached the patient but did not cause patient harm.
Category D: An error occurred that reached the patient and required monitoring to confirm that it resulted in no harm to the patient or required intervention to preclude harm.

Error, harm
Category E: An error occurred that may have contributed to or resulted in temporary harm to the patient and required intervention.
Category F: An error occurred that may have contributed to or resulted in temporary harm to the patient and required initial or prolonged hospitalization.
Category G: An error occurred that may have contributed to or resulted in permanent patient harm.
Category H: An error occurred that required an intervention necessary to sustain life.

Error, death
Category I: An error occurred that may have contributed to or resulted in the patient's death.

Source: National Coordinating Council for Medication Error Reporting and Prevention, 2001.
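The NCC MERP index is straightforward to encode as a lookup table when tagging error reports. A minimal sketch: the dictionary and helper below are illustrative (the descriptions are paraphrases), with the harm cutoff at Category E taken from the USP analysis above, which treats categories E through I as the harmful range.

```python
# Illustrative lookup table for the NCC MERP severity index
# (descriptions paraphrased from Exhibit 5.2).
NCC_MERP = {
    "A": "Circumstances or events that have the capacity to cause error.",
    "B": "Error occurred but did not reach the patient.",
    "C": "Error reached the patient but caused no harm.",
    "D": "Error reached the patient; monitoring or intervention needed to preclude harm.",
    "E": "Temporary harm; intervention required.",
    "F": "Temporary harm; initial or prolonged hospitalization.",
    "G": "Permanent patient harm.",
    "H": "Intervention necessary to sustain life.",
    "I": "May have contributed to or resulted in the patient's death.",
}


def is_harmful(category: str) -> bool:
    """Categories E-I cover all harmful errors (per the USP analysis above)."""
    return category.upper() in ("E", "F", "G", "H", "I")


print(is_harmful("C"))  # -> False: reached the patient, but no harm
print(is_harmful("G"))  # -> True: permanent harm
```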
Published data from national error reporting programs such as MEDMARX, accompanied by safety
models and technology, are becoming more available as risk management tools to improve the safety of
health care. CEOs often seek a business case for patient safety and for supportive tools like MEDMARX.
But the real question they should ask is how can one afford not to have some of these new advances in
quality improvement and risk assessment? The next step is to get the word out to every practitioner and
hospital administrator that patient safety tools are available and need only be implemented in their
health care facilities and medical offices in order to work.
FIGURE 5.1. USP MEDICATION SAFETY INITIATIVE MODEL.
Source: USP, 2002a.
What about cost? How can one afford electronic medical records or computerized prescriber order
entry (CPOE) or clinical pharmacists on every floor to assist physicians with drug therapy decisions and
nurses with medication-related questions? Leape and others (1997) reported that the use of clinical
pharmacists to assist physicians in selecting and prescribing medications reduced adverse drug reactions
by 66 percent and was reported to have the potential to save one intensive care unit an estimated
$270,000 over the course of a year, based on an estimated saving of $4,685 per preventable adverse
drug event. In 1998, Bates and others demonstrated that CPOE could decrease adverse drug events (ADEs) by 84 percent and serious medication errors by 55 percent. Decreasing medication errors saves money for the hospital and the health care system in general, to say nothing of the costs avoided.
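The ICU figures above imply a concrete event count; a quick back-of-envelope check (dollar amounts as quoted from Leape and others, 1997):

```python
# How many preventable adverse drug events the quoted ICU saving implies.
annual_saving = 270_000             # estimated ICU saving per year, USD
saving_per_preventable_ade = 4_685  # estimated saving per preventable ADE, USD

implied_events = annual_saving / saving_per_preventable_ade
print(round(implied_events, 1))  # -> 57.6 (preventable ADEs per year)
```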
Medication Error Reduction in the Institutional Setting
In the institutional setting the medication use process (MUP) begins with writing the medication order
and ends with monitoring the effects of the prescribed drug on the patient. The process can be
schematized as illustrated in Figure 5.2. This schematic makes it apparent that getting the right drug to
the right patient requires excellent communication skills (both oral and written) among the members of
the health care team. Unfortunately, language is inherently inexact, and communication errors form the
basis for many medication errors and therapeutic misadventures (Benjamin, 2001b; USP, 2002b).
The danger of poorly written communication has been illustrated in several well-publicized cases. For
example, when Boston Globe health reporter Betsy Lehman received a fatal, fourfold overdose of her
cyclophosphamide chemotherapy, it opened everyone’s eyes to the dangers inherent in prescription
writing. The order had been written as, “4 g/sq m over four days” (that is, four grams per square meter
over 4 days). The intention was that one-quarter of the dose (1 g/sq m) be given once daily for four
days, but the order was misinterpreted, and 4 g/sq m were given as a single lethal dose (ECRI, 2000).
Complete, accurate, and appropriately communicated medication orders are essential in reducing
medication errors. One of JCAHO’s National Patient Safety Goals (Exhibit 5.1, Goal 2) advocates written
policies aimed at minimizing errors in the communication of medication orders, specifically in the areas
of verbal (or telephone) orders and the use of abbreviations. The goal recommends developing a
standardized list of abbreviations, including abbreviations that should not be used because they can be
confusing (see, for example, Table 5.1).
FIGURE 5.2. SCHEMATIC OF MEDICATION USE PROCESS IN THE INSTITUTIONAL SETTING.
In 2002, data from USP’s MEDMARX program (Hicks, Cousins, and Williams, 2003) tracked error reports,
listing seven different communication-related causes of error: (1) communication, (2) verbal order, (3)
brand names sound alike, (4) brand and generic names sound alike, (5) generic names sound alike, (6)
abbreviations, and (7) nonmetric units used. Over 13 percent (26,386 out of 192,477) of all reported
errors were associated with these communication selections, with the majority of the errors (nearly 63
percent) reaching the patient. Communication selections were reported in 7 out of every 20 reported
fatal errors (35 percent) during 2002.
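The 2002 communication-error shares can likewise be rederived from the counts in the text:

```python
# Communication-related causes in the 2002 MEDMARX data.
communication_related = 26_386
all_reported = 192_477

share = communication_related / all_reported
print(f"{share:.1%}")  # -> 13.7% ("over 13 percent")

# "7 out of every 20 reported fatal errors":
print(f"{7 / 20:.0%}")  # -> 35%
```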
The way health care professionals communicate drug information varies considerably from hospital to
hospital, floor to floor, and unit to unit. Patterns of communication in health care are primarily
conversational (casual) rather than computational (analytical) (Coiera, 2000)—a style that in some
situations undermines patient safety. Additional confounding factors, such as organizational processes,
policies, and procedures, further contribute to the extent and scope of the problem. In the institutional
setting the process of getting the medication “from the pen to the patient” may involve more than
twenty individual steps, in contrast to the simplicity of prescribing for a patient in the outpatient setting,
where the prescriber either calls, faxes, scans, or e-mails the prescription directly to the pharmacy or
gives the prescription directly to the patient for presentation at the pharmacy. If there are twenty steps
involved in getting the medicine to the patient, then there are twenty opportunities to make an error.
Simplifying the system immediately helps. Even reducing the number of steps from twenty to nineteen
reduces the possibility of making an error by 5 percent. Moreover, health care today is truly a team
effort. No single health care professional can carry out all the steps by himself or herself. Physicians,
nurses, and pharmacists must rely on each other as well as on specially trained clerks, nursing assistants,
pharmacy technicians, and other hospital employees to perform their duties.
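The "twenty steps, twenty opportunities" argument can be made concrete with a simple model. The per-step error probability below is a hypothetical, illustrative number; the point is that removing a step shrinks the overall chance that at least one error occurs, roughly in line with the 5 percent reduction in opportunities noted above:

```python
# Hypothetical model: each of n steps fails independently with probability p.
def p_at_least_one_error(n_steps: int, p: float) -> float:
    return 1.0 - (1.0 - p) ** n_steps


p = 0.01  # assumed 1% chance of error at any single step (illustrative only)
for n in (20, 19):
    print(n, round(p_at_least_one_error(n, p), 4))
# -> 20 0.1821
# -> 19 0.1738
```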
The complexity of the MUP in hospitals may be the reason why studies funded by the AHRQ revealed
that 39 to 49 percent of medication errors at large hospitals occurred at the physician-prescribing stage
of the process. Nursing administration was next at 26 to 38 percent, followed by transcription errors, 11 to 12 percent, and pharmacy dispensing errors, 11 to 14 percent (Bates and others, 1995; Leape and others, 1995).
TABLE 5.1. POTENTIALLY DANGEROUS ABBREVIATIONS.

Abbreviation | Intended Meaning | Potential Problem | Recommendation
.5 | 0.5 | May be read as 5 | Use leading zero and write "0.5"
> | Greater than | May be confused with < (less than) | Write out the meaning
< | Less than | May be confused with > (greater than) | Write out the meaning
µg | Microgram | May be interpreted as mg (milligrams) | Write "mcg"
¼ NS | Quarter-strength saline | May be interpreted as 0.45% saline | Write "0.225% sodium chloride"
½ NS | Half-strength saline | May be interpreted as 0.225% saline | Write "0.45% sodium chloride"
½ mg | 0.5 mg | 1.5 mg may be given to patient | Spell out "one-half" or use decimal ("0.5 mg")
1 Amp | One ampoule | May be interpreted as 1 milligram; may confuse size with strength | Write exact dose
12.0 | Twelve | May be read as 120 | Don't use trailing zero; write "12"
2–4 mg | 2 mg up to 4 mg of medication to be given | May be read as 24 mg | Don't use hyphen; write "2 mg to 4 mg"
3# | Three pounds | May be read as 31 pounds | Spell out "pounds"
40 of K (verbal order) | 40 mEq KCl (potassium chloride) | May be interpreted as Vitamin K 40 mg | Convey full name of product
5 ASA | 5-acetylsalicylic acid (mesalamine) suppository | May be interpreted as aspirin suppository, 5 grains | Write out generic name
AD | Right ear (route for administration) | Confused with optic (OD) | Write out "right ear"
AS | Left ear (route for administration) | Confused with optic (OS) | Write out "left ear"
AU | Both ears (route for administration) | Confused with optic (OU) | Write out "both ears"
CC or cc | Cubic centimeter | May be mistaken for "U" (units) when poorly written | Use metric "mL" or write out "milliliters"
CHG in saline | Chlorhexidine gluconate in saline | May be misinterpreted | Write out generic name
D/C or DC | Discharge or discontinue | May be misinterpreted | Write out "discharge" or "discontinue"
D/C PT or DC PT | Discontinue physical therapy or discharge patient | May be misinterpreted | Write out intended meaning
DTaP | Vaccine for diphtheria, tetanus, acellular pertussis | May be confused with DTP | Write out complete product name (generic or trade)
- | - | May be interpreted as floxuridine | Write out complete product name (generic or trade)
GM-CSF (Leukine) | - | May be entered as G-CSF (Neupogen) | Write out complete drug product name
- | - | May be interpreted as Humulin R | Write out complete product name
HC oint | Hydrocortisone ointment | May be confused with hydrocortisone | Write out complete product name (generic or trade)
HIB | Haemophilus influenzae B | May be confused with hepatitis B (HEP-B) | Write out complete product name
- | Haemophilus influenzae B | May be confused with hepatitis B immune globulin (H-BIG) | Write out complete product name
IV | Intravenous or (4) | May be interpreted with multiple meanings | Write out intended meaning
Hs or hs | Every night or half-strength | May be interpreted with multiple meanings; may be interpreted as every hour | Write out "every night" or "half strength"
Humalog, 2 unit qac | Humalog, 2 units, before each meal | May be read as TID | Write out completely
- | Humalog, 2 units, every morning (q.am) | - | Write out "once daily"
- | Lente insulin or Lantus insulin | May be misinterpreted | Write out complete product name (generic or trade)
MgSO4 | Magnesium sulfate | May be confused with morphine sulfate | Write out generic or trade name
MSO4 | Morphine sulfate | May be confused with magnesium sulfate | Write out complete product name (generic or trade)
- | Nitroglycerin ointment | May be read as "mo" | Write out full name
OD | Right eye or once daily | May be interpreted as once daily | Write out "right eye" or "once daily"
OJ | Orange juice | May be interpreted as OD or OS | Write out "orange juice"
per os | Oral route of administration | May be read as per OS (left eye) | Write "orally" or "by mouth"
- | Pharmacy to dose | May be confused with PTT | Write out complete instructions
Pm or PM | - | May be confused with prn | Write out intended meaning
- | Passy-Muir Valve | May be confused with prenatal vitamins | Write out intended meaning
PPN | Peripheral parenteral nutrition | May be interpreted as TPN (total parenteral nutrition) | Write out intended meaning
QD or qd | Every day | - | Write out "daily"
QOD or qod | Every other day | May be confused with qid (four times daily) or qD (every day) | Write out "every other day"
- | Every 6 hours | May be read as qid | Write "hours" after entry
QD-HS | Daily at hour of sleep | May be confused as QD & HS | Avoid hyphens and write out
- | Every day for 2 days | May be confused with bid (twice daily) | Write "daily for 2 days only"
QID or qid | 4 times daily | May be confused with once daily | Spell out frequency
Sq or sc | Subcutaneous | The "q" may be read as meaning "every" | -
SS or Ss | Sliding scale | May be confused with one-half | Write out "sliding scale"
TAC | Tetracaine, adrenaline, cocaine | May be confused with triamcinolone | Write out generic name
- | Triamcinolone | May be confused with tetracaine, adrenaline, cocaine | Write out generic name
- | Tablespoon | May be confused with tsp (teaspoon) | Write "15 mL"
- | Teaspoon | May be misinterpreted as tablespoon | Spell out or use "5 mL"
TID or tid | Three times daily | - | -
TIW or tiw | 3 times a week | May be interpreted as 3 times a day or twice a week | -
- | - | May be confused with PPD | -
- | Every 8 hours | May be misinterpreted | Write out "every 8 hours"
- | Unit or units | May be interpreted as O, 4, or CC | Write "unit" or "units"
- | Units per day | May be interpreted as 60 cc/hr or unit dose | -
ZnSO4 | Zinc sulfate | May be confused with ferrous sulfate (FeSO4) | Write out generic name
Z-PAK | Azithromycin pack | May be read as 2 packs | Write out generic name

Sources: United States Pharmacopeia, 2004a, 2004b.
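Several of the rules in Table 5.1 are mechanical enough to check in software. The sketch below is a hypothetical illustration, not part of USP's guidance; the pattern list covers only a few entries from the table, and the function name is an assumption.

```python
import re

# A few error-prone abbreviations from Table 5.1, mapped to the safer form.
# This subset, and the checker itself, are illustrative only.
DANGEROUS_PATTERNS = [
    (re.compile(r"\bqd\b", re.IGNORECASE), 'write out "daily"'),
    (re.compile(r"\bqod\b", re.IGNORECASE), 'write out "every other day"'),
    (re.compile(r"\bcc\b", re.IGNORECASE), 'use "mL" or write out "milliliters"'),
    (re.compile(r"\bu\b", re.IGNORECASE), 'write "unit" or "units"'),
    (re.compile(r"\d+\.0\b"), "drop the trailing zero (12.0 may be read as 120)"),
    (re.compile(r"(?<!\d)\.\d"), "use a leading zero (.5 may be read as 5)"),
]

def flag_dangerous_abbreviations(order_text: str) -> list[str]:
    """Return a warning for each dangerous abbreviation found in an order."""
    warnings = []
    for pattern, advice in DANGEROUS_PATTERNS:
        if pattern.search(order_text):
            warnings.append(f"'{pattern.pattern}' matched: {advice}")
    return warnings

# Example: an order using two notations the table warns against.
print(flag_dangerous_abbreviations("Coumadin 2.0 mg qd"))
```

A real screening tool would cover the full table and run inside the order-entry pathway, where the prescriber can correct the order, rather than after the fact.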
The following sections look more closely at specific parts of the medication use process.
Prescribing Process Issues: Benefits of an Integrated CPOE System in the Hospital
In paper-based (noncomputerized) prescriber ordering systems, a physician (or other authorized health
care provider) writes a medication order in a patient’s chart. A clerk may take a copy of that order and
fax it to the pharmacy, or put a hard copy of the order in an out-box to be picked up by a pharmacy
technician. A nurse will manually transcribe the physician’s order onto the patient’s medication
administration record (MAR), and later a different nurse will probably be the one giving the medication
to the patient. The key is to simplify this process and provide less opportunity for error. How? By
instituting computerized prescriber order entry (CPOE). The patient’s name and the drug’s name are
entered into the CPOE program, and the physician responds to several screen prompts about dosage,
duration, and route. Incorrect doses (those not presented as choices) and tenfold dosing errors that
occur because a decimal point was not recognized or a capital letter “U” (for units) was mistaken for a
zero are diminished or eliminated (see JCAHO’s 2004 Patient Safety Goal 2b in Exhibit 5.1 and also Table
5.1). Once the computer system approves and finalizes the order, that order simultaneously appears on
the patient’s MAR and goes to the pharmacy’s computer system for review, processing, and dispensing,
undergoing a second safety check there for drug interactions and contraindications.
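The screening step described above, checking a new order against dose limits and the patient's current medications, can be sketched as a small validator. Everything here (the drug names, dose ranges, and interaction pairs) is invented for illustration; a production CPOE system draws these rules from maintained clinical knowledge bases.

```python
# Hypothetical CPOE-style order check: doses outside the allowed range and
# known drug-drug interactions are flagged before the order reaches the MAR.
DOSE_LIMITS_MG = {"warfarin": (1, 10), "digoxin": (0.0625, 0.25)}  # illustrative
INTERACTIONS = {frozenset({"warfarin", "aspirin"})}                # illustrative

def check_order(drug: str, dose_mg: float, current_meds: list[str]) -> list[str]:
    """Return a list of problems; an empty list means the order passes."""
    problems = []
    low, high = DOSE_LIMITS_MG.get(drug, (None, None))
    if low is not None and not (low <= dose_mg <= high):
        problems.append(f"{drug} {dose_mg} mg is outside the {low}-{high} mg range")
    for med in current_meds:
        if frozenset({drug, med}) in INTERACTIONS:
            problems.append(f"{drug} interacts with {med}")
    return problems

# A tenfold overdose combined with an interacting medication is caught twice.
print(check_order("warfarin", 50, ["aspirin"]))
```

Note that a dose outside the presented choices is exactly the kind of error the screen prompts in the text are meant to prevent; the validator is simply the same rule applied after entry.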
One study indicated that CPOE decreased serious medication errors by 55 percent and reduced
potential adverse drug experiences (ADEs) by 84 percent (Bates and others, 1998). These numbers
translate directly to dollars. In 1992, Latter-day Saints (LDS) Hospital in Salt Lake City had 567 ADEs,
which cost the hospital $1.1 million in direct costs (not including the costs of injuries to patients
or legal costs). If half of these ADEs had been prevented, LDS Hospital would have saved over $500,000
(Classen and others, 1997). The AHRQ statistics on the effectiveness of instituting advances in reducing
medication errors and ADEs are impressive. CPOE saves money and lives (AHRQ, 2001).
AHRQ Findings: ADEs and Costs
Patients who experience ADEs have longer hospital stays.
A typical ADE costs $2,000 to $2,500; a preventable ADE costs approximately $4,500.
Additional costs from an ADE range from $1,000 to $2,000 per day.
Computerized monitoring systems can reduce medication errors by 28 to 95 percent.
Hospitals can save millions of dollars in direct costs by reducing ADEs.
However, if not carefully and thoughtfully implemented, the use of computers in health care can create new errors:
Computer entry was the fifth leading cause of errors reported to [USP’s MEDMARX error reporting
program] for both 2000 and 2001 [USP, 2002a, 2002b]…. This underscores concerns and cautions raised
by researchers and patient safety leaders that if not done carefully, implementation of CPOE and other
computer-based programs can result in new errors.
Approximately 10% (35,747/345,600) of all MEDMARX records from September 1998 through December
2002 documented computer entry as a cause of error. Of those, 2% (635 records) indicated the patient
was harmed [USP 2003]. In 2001, computer entry was listed as a cause in 11% of all reports where an
error did occur (N = 94,498) [see Figure 5.3] [USP, 2002b].
As shown in Table 5.2, the most frequently reported types of error associated with computer entry
errors were improper dose or quantity, omission error (failure to administer), and prescribing error
(owing to drug-drug or drug-food allergies, patient’s condition, or an incomplete order).
FIGURE 5.3. TOP TEN CAUSES OF ERROR.
Source: USP, 2002b; based on 94,498 records.
Reports to MEDMARX reveal the common errors resulting from computer entry (USP, 2003):
Dosing errors
- Look-alike strengths in close proximity on screen (e.g., 40,000 versus 4,000 units/mL)
- Multiple and differing sliding scales (e.g., potassium, insulin)
- Inadequate dosing algorithms or adjustments for renal failure
Wrong drug errors
- Numerous similar drug names within a single drug class (e.g., insulins)
- Incorrect drug selected for patient's condition or current drug therapy regimen
Wrong patient errors
- Incorrect patient selected from screen due to distractions or patients with similar names in the same
nursing unit (e.g., Smith, Ron vs. Smith, Robert) or similar names within an outpatient pharmacy
computer system
TABLE 5.2. TYPES OF ERROR ASSOCIATED WITH COMPUTER ENTRY.

Type of Error | Records | Percent
Improper dose/quantity | 8,868 | 28.4
Omission error | 6,526 | 20.9
Wrong patient | 2,148 | 6.9
Wrong drug preparation | - | -
Wrong dosage form | - | -
Wrong administration technique | - | -

Note: Data from MEDMARX records from September 1998 through December 2002 (N = 31,272).
a. New type added (2002).
Source: USP, 2003.
Preventing Computer Entry Errors
USP has disseminated the following general recommendations to reduce errors associated with CPOE and
other computer entry activities (USP, 2003):
Conduct a Failure Mode and Effects Analysis (FMEA) on the use of computers in the various stages of the
medication use process to identify potentially confusing abbreviations, dose designations, dosage forms,
drug names, and other problems that may be unique to the use of computers to convey information.
Standardize and simplify all dosing protocols, including sliding scales, to the extent possible prior to
implementing CPOE. Take into account complex or unique drug orders (e.g., "hold 4pm dose until …").
Allocate ample space in the data fields that are used to communicate patient names, drug names,
dosing units, routes of administration, and frequencies. Include properly spaced commas for dose
numbers expressed in thousands (e.g., 4,000 units).
Use USP standard abbreviations for dosage units to express weights and measures in a consistent
manner as follows:
m (lower case) = meter
kg = kilogram
g = gram
mg = milligram
mcg = microgram (do not use the Greek letter mu [µ] which has been misread as mg)
L (upper case) = liter
mL (lower/upper case) = milliliter (do not use cc which has been misread as U or the number 4)
mEq = milliequivalent
mmol = millimole.
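The abbreviation and formatting conventions above can be made the default rather than a habit. The helper below is a minimal sketch (the dictionary and function names are the author's assumptions, not USP artifacts) that applies two of the rules just listed: USP unit abbreviations such as "mcg" instead of µ, and properly spaced commas for doses in the thousands.

```python
# USP standard abbreviations from the list above (illustrative subset).
USP_UNITS = {
    "microgram": "mcg", "milligram": "mg", "gram": "g", "kilogram": "kg",
    "milliliter": "mL", "liter": "L", "milliequivalent": "mEq", "millimole": "mmol",
}

def format_dose(value: float, unit_name: str) -> str:
    """Format a dose with USP abbreviations and comma-grouped thousands."""
    abbrev = USP_UNITS[unit_name]
    # The "," in the format spec inserts thousands separators, so 4000
    # prints as "4,000" and is harder to confuse with 40,000.
    return f"{value:,g} {abbrev}"

print(format_dose(4000, "microgram"))  # -> 4,000 mcg
```

Routing every printed or displayed dose through one such formatter is one way to guarantee the convention is applied consistently across screens, labels, and reports.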
Carefully construct the clinical rules/decision support framework using appropriate in-house and outside
expertise. The quality of the clinical rules used will have a significant impact on the error-risk potential.
Establish the proper balance between sensitivity and specificity for computer warnings/alerts to reduce
… “false-alarm” fatigue [among practitioners] leading to frequent overrides of the warnings.
Interface CPOE with the medication administration record (MAR) as well as pharmacy and laboratory
computer systems to maximize the exchange of accurate and up-to-date patient information.
Establish a culture among prescribers and other practitioners that creates an openness and willingness
to transition to new operational methods for providing health care that uses electronic information.
General USP Recommendations for Preventing Prescribing and Transcribing Errors
Prescribers should submit orders electronically and avoid the need for additional handwritten orders.
Prescribers should order or select only standardized concentrations when ordering IV infusions.
Protocols for ordering IV infusions should be clear and should eliminate the possibility of calculation
errors.
Prescribers should avoid the use of unclear or unsafe abbreviations (Table 5.1).
Transcription of original orders to medication administration records should be timely, with minimal
interruption, and should be independently verified for accuracy. Optimally, transcription should be done
electronically, as the manual process (for example, recopying and assigning administration times) is
repetitive and has been shown to be error prone (Hicks, Cousins, and Williams, 2003).
Dispensing Process Issues
A medication order goes through numerous processing steps once it reaches the pharmacy. Initially, it
should be reviewed by a pharmacist, using clinical rules built into a computer system. This system should
evaluate the order against the patient’s other prescribed medications for potential drug-drug
interactions, contraindications, and inappropriate dose or route based on the patient’s age, disease
state(s), and clinical condition. If the order is deemed appropriate as written, the pharmacy prepares the
medication and delivers it to the nursing unit or patient care area, or ensures that a sufficient quantity
exists in the unit’s medication storage area or automated dispensing device (for example, Pyxis).
Delivery to the floor can be done in many ways. A pharmacist, pharmacy technician, or other designee
may bring medications to the floor and place them in a drop-off box, place them in a patient-specific
drug drawer, or simply place them in the medication room, cabinet, or automated dispensing machine.
Many larger hospitals use automation (for example, pneumatic tubes) to deliver drugs quickly to nursing
units spread across their expansive buildings. In addition to having various drug delivery modes, the vast
majority of hospitals use a combination of processes for balancing medication supply and demand in the
health care facility. The pharmacy may prepare and deliver a single, patient-specific, unit-dose package
for some types of orders and for others it may provide a twenty-four-hour supply. Frequently used
medications may be stocked routinely on the unit, and nurses may have to go to the medication storage
area, medication refrigerator, or automated dispensing machine to retrieve the proper medication for
the patient.
Each of these scenarios provides an opportunity for error. The ideal situation is to have the pharmacy
dispense all ordered medications in unit-dose form (or as close as possible to it) to minimize the need
for further calculation or manipulation by nursing staff. Labeling on unit-dose packages should contain
the patient’s name, nursing unit, room number, medication name, dose, frequency of administration,
and route of administration (Hicks, Cousins, and Williams, 2003).
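The required label content listed above can be captured as a record type, so that an incomplete label simply cannot be constructed. The class below is a sketch; the field names follow the USP list in the text, but the class itself and its render format are assumptions.

```python
from dataclasses import dataclass

@dataclass
class UnitDoseLabel:
    """Fields USP recommends on a unit-dose package label (Hicks et al., 2003)."""
    patient_name: str
    nursing_unit: str
    room_number: str
    medication_name: str
    dose: str
    frequency: str
    route: str

    def render(self) -> str:
        # Every required field is guaranteed present by the constructor.
        return (f"{self.patient_name} | {self.nursing_unit} rm {self.room_number}\n"
                f"{self.medication_name} {self.dose} {self.frequency} {self.route}")

label = UnitDoseLabel("Doe, Jane", "4W", "412", "metoprolol", "25 mg",
                      "twice daily", "oral")
print(label.render())
```

Because the dataclass constructor requires all seven fields, omitting (say) the route raises an error at label-creation time instead of producing a deficient label on the floor.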
General USP Recommendations for Preventing Dispensing Errors
Pharmacy services should stock or prepare standardized concentrations for all IV medications.
Intravenous solution bags should be properly labeled with the complete patient name and should
display the product(s) or ingredient(s) name, the dosage(s), the final concentration of the product(s),
and the infusion rate, as appropriate.
Commercially prepared solutions should be used whenever possible. Use of different strengths of the
same solution or medication in a facility should be limited and standardized.
The pharmacy should always dispense a unit-of-use (as opposed to a multidose) package, to the extent
possible, to avert the potential for improper dose or quantity errors (Hicks, Cousins, and Williams, 2003).
Administering and Monitoring Medication
According to data from USP’s 2002 MEDMARX data summary report (Hicks, Cousins, and Williams,
2003), approximately one-third of reported errors occur in the administering stage of the medication
use process and 1 percent in the monitoring stage. Wrong-patient errors rank seventh in frequency
among the fourteen different types of error. With a computerized bar-coding system, both the patient’s
and medication’s identification can be incorporated into the medication label. If the patient wears a barcoded identification bracelet, then the patient also can be identified more easily, and facilities will move
halfway toward satisfying the JCAHO’s first Patient Safety Goal for 2004—“Improve the accuracy of
patient identification”— by using “at least two patient identifiers … whenever administering
medications or blood” (see Exhibit 5.1). Both the FDA and the NCC MERP agree that placing bar codes on
drug packaging can improve patient safety (FDA, 2004). Since 1999, Veterans Administration (VA)
hospitals have employed the VA Bar Code Administration Project (BCAP), which has prevented an
estimated 378,000 medication errors (Cannistra, 2002). The FDA estimates that bar coding could save
hospitals $41.4 billion in direct costs and an additional $7.6 billion in administrative costs by preventing
50 percent of their medication errors (Becker, 2003).
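The bar-code check described above reduces to a simple rule: the scanned wristband and the scanned medication must both match an active entry on the patient's MAR before the dose is given. The sketch below illustrates that rule only; the MAR structure, patient IDs, and codes are invented for illustration.

```python
# Hypothetical bar-code medication administration (BCMA) check. In practice
# the wristband encodes at least two patient identifiers (e.g., name and
# date of birth); here a single record key stands in for both.
MAR = {  # medication administration record, keyed by wristband ID (illustrative)
    "P1001": {"name": "Doe, Jane", "dob": "1950-02-14", "due": {"NDC-0093-1712"}},
}

def verify_administration(wristband_id: str, medication_code: str) -> bool:
    """Both the patient and the medication must match before administering."""
    entry = MAR.get(wristband_id)
    if entry is None:
        return False  # unknown patient: stop and re-identify
    return medication_code in entry["due"]

print(verify_administration("P1001", "NDC-0093-1712"))  # -> True
print(verify_administration("P1001", "NDC-0000-9999"))  # -> False (wrong drug)
```

The value of the system is that the "five wrongs" discussed later in this chapter (wrong drug, dose, time, route, patient) are checked at the bedside, at the last moment an error can still be intercepted.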
General USP Recommendations for Preventing Administering and Monitoring Errors
Staff should be familiar with the institution’s policies and procedures for medication administration.
Preprinted and standardized infusion rate charts should be readily accessible and widely used. These
charts offer some protection against calculation errors.
Programmable infusion devices (for example, smart pumps) that offer customized settings to meet the
hospital’s guidelines for selected drug dosages for specific patient types and specialized clinical care
areas should be widely used. Variability of types and models of infusion devices should be limited to
avoid staff confusion.
Infusion pump settings for initiation of high-alert medications and for any required dosage adjustment
should be independently confirmed by two qualified individuals.
Documentation of the medication infusion, adjustments, and independent confirmation should be
maintained.
Infusion tubing should be traced from the infusion bag to the point of delivery (where it reaches the
patient). If multiple infusions and pumps are in use on a patient, each pump and its respective tubing
should be readily identifiable and labeled.
Free-flow errors can be avoided through proper and reasonable safety measures. IV administration
cassettes that offer anti-free-flow mechanisms should be used routinely. Free flow should be avoided
when the IV administration cassette has been removed from the pump (that is, during a gown change or
to clear air).
The patient’s clinical response to the medication infusion should be monitored according to critical
pathways that incorporate standardized flow sheets or monitoring protocols developed by
interdisciplinary committees. The therapeutic class of the medication, particularly of high-alert drugs,
should dictate the type of observation (for example, cardiopulmonary function or laboratory values)
(Hicks, Cousins, and Williams, 2003).
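The rate arithmetic behind the standardized infusion charts and smart pumps above can be made concrete. The conversion below is standard (dose in mcg/kg/min times weight times 60, divided by concentration), but the drug-free numbers and the hard limit are illustrative assumptions, not a clinical reference.

```python
def infusion_rate_ml_per_hr(dose_mcg_kg_min: float, weight_kg: float,
                            concentration_mcg_per_ml: float) -> float:
    """Convert a mcg/kg/min order into a pump rate in mL/hr."""
    mcg_per_hr = dose_mcg_kg_min * weight_kg * 60
    return mcg_per_hr / concentration_mcg_per_ml

def checked_rate(dose, weight, concentration, max_ml_per_hr=100.0):
    """Smart-pump style guardrail: reject rates beyond the configured hard limit."""
    rate = infusion_rate_ml_per_hr(dose, weight, concentration)
    if rate > max_ml_per_hr:
        raise ValueError(f"{rate:.1f} mL/hr exceeds the {max_ml_per_hr} mL/hr limit")
    return rate

# 5 mcg/kg/min for an 80 kg patient at 1,600 mcg/mL works out to 15 mL/hr.
print(checked_rate(5, 80, 1600))  # -> 15.0
```

A tenfold dose-entry slip (50 instead of 5) would compute to 150 mL/hr and be stopped by the guardrail, which is precisely the protection a customized smart-pump drug library provides.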
Medication Errors That Do Not Reach or Cause Harm to a Patient
Some individuals use the term near miss to refer to a medication error that did not reach the patient or
to describe a system that failed but did not cause harm to the patient. However, despite the fact that
the patient was not injured, an error did occur. That error was a signal that the system is not error-proof
and needs to be fixed. Excluding a near miss from an analysis of medication errors simply because a
patient was not injured prevents the risk management team from learning valuable information about
the system or process that failed. This information about near misses can be acted on to minimize or
prevent the recurrence of similar system failure–induced medication errors (Cohoon, 2003).
The Joint Commission on Accreditation of Healthcare Organizations recognizes the value of identifying
and analyzing near misses in its definition of a reportable sentinel event as “an unexpected occurrence
involving death or serious physical or psychological injury, or the risk thereof.” “Risk thereof ” means
that this definition includes “any process variation for which a recurrence would carry a significant
chance of serious adverse outcome” (JCAHO, 2002), indicating that if the process or system isn’t
reengineered, another error is likely to occur. Remember, those who misread history are doomed to
repeat the mistakes of the past!
Types of Drugs Most Commonly Involved in Medication Errors
JCAHO’s Sentinel Event Program has identified the following high-alert drugs (JCAHO, 1999).
TABLE 5.3. PRODUCT GROUPS CAUSING PATIENT HARM MOST COMMONLY REPORTED TO MEDMARX.

Product's Generic Name | Errors Involving Harm (Categories E–I)

Note: Product groups include all dosage forms and formulations.
Source: Hicks, Cousins, and Williams, 2003.
High-Alert Drugs Identified by JCAHO
Concentrated electrolytes, including KCl (potassium chloride), potassium phosphate, and NaCl (>0.9 percent)
Opiates, narcotics, and patient-controlled analgesia (PCA)
IV anticoagulants, such as heparin
USP’s MEDMARX program has identified a similar list of high-alert drugs, summarized in Table 5.3. In
addition a recent study (Benjamin and Pendrak, 2003) has summarized the drugs most frequently cited
in the PHICO Insurance Company’s Closed Claims project:
Drugs Involved in Claims at PHICO: Two out of Three Years Between 1996 and 1998.
TABLE 5.4. TYPES OF MEDICATION ERRORS TRIGGERING CLAIMS IN 1998.

Type of Error | Claims
Contraindicated drug given | -
IM technique issue | -
Wrong patient | 3
Not classified (includes failure to monitor and failure to prescribe) | 17

Source: Data from Benjamin and Pendrak, 2003.
The PHICO data also captured the types of medication errors that triggered claims. It is apparent from
these data (Table 5.4) that the same basic medication errors (sometimes called the five wrongs)
continue to be made: wrong drug, wrong dose, wrong time, wrong route, and wrong patient.
Inadequate monitoring or failure to follow up can also lead to adverse drug reactions. However, first on
PHICO’s list is allergic or adverse reaction, which constituted 25 percent of medication error claims,
meaning that patients who were allergic to a drug or had a history of not tolerating a drug well received
the same drug again. Second on the list is contraindicated drug, with 22 percent of claims due to that
problem. A drug is contraindicated when the patient has a peculiar sensitivity to it, when the patient is
taking other medications that can interact with the proposed drug and increase its toxicity or decrease
its effectiveness, when the patient is pregnant or nursing and should not receive the drug because of
maternal-fetal or maternal-infant transmission, or when the patient has some pathological condition
(for example, decreased renal or hepatic function) that makes the drug more toxic in that patient.
Usually no strict contraindication exists for using drugs in patients with decreased renal or hepatic
function, but the dose must be greatly reduced. In the reported claims, significant injury must have
occurred or the patient would not have had sufficient damages to sustain a lawsuit. Drug-drug
interactions have also been the primary cause of the withdrawal from the market of many well-known
drugs, such as Seldane (terfenadine), Hismanal (astemizole), Posicor (mibefradil), and Propulsid
(cisapride) (Benjamin, 2001a). Had CPOE been available, the computer could have alerted prescribers to
the potential drug-drug interactions, dosages could have been reduced, and both nationwide drug
recalls and patient injury could have been averted.
Medication Errors and Human Factors
People make errors. It is part of being human. Mental errors can occur during periods of high stress—
when the nursing unit or patient care area is under-staffed or the workload is unrealistically high—or
they can occur when people are distracted—by a phone call, page, banter, or ordering pizza. However,
the people we work with are not only staff, they are our colleagues and friends, and now more than
ever, health care is truly a team effort. The physician is the head of the team but needs assistance and
feedback from all team members in order to establish and maintain the needed culture of patient
safety. Each member of the team should collaborate with other members to establish a safety net
designed to catch errors and to identify system flaws (practices) that can lead to errors so these flaws
can be rectified before errors occur. Because of the large number of people and professions employed
in the institutional setting, professionals who supervise other employees are responsible for reviewing
their work and ensuring that they are adequately trained and performing at an acceptable level.
Pharmacists should check medications prepared and dispensed by technicians before they are delivered
to nursing units or floor stock or placed in automated dispensing devices. Nurses should care for sicker
patients and let nursing assistants care for patients who do not need to be monitored as closely. No one
should program an IV pump, PCA machine, or other piece of equipment unless he or she is well
educated about the equipment (see JCAHO’s 2004 Patient Safety Goal 5 in Exhibit 5.1). The American
Society for Healthcare Risk Management (ASHRM) discusses many aspects of human factors in
medication error reduction and reviews much of the recent medication error literature in its publication
Risk Management PEARLS for Medication Error Reduction (Benjamin and others, 2001).
Defining Medication Error
The National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) was
established in 1995 and includes representation from the U.S. Pharmacopeia, FDA, JCAHO, Institute for
Safe Medical Practices (ISMP), American Hospital Association (AHA), American Medical Association
(AMA), American Nursing Association (ANA), American Society for Healthcare Risk Management
(ASHRM), and American Society of Health-System Pharmacists (ASHP). NCC MERP has developed a
comprehensive medication error taxonomy and definition that has come to be widely recognized and
accepted by health care professionals. The council defines a medication error as “any preventable event
that may cause or lead to inappropriate medication use or patient harm while the medication is in the
control of the healthcare professional, patient, or consumer. Such events may be related to professional
practice, healthcare products, procedures, and systems, including prescribing; order communication;
product labeling, packaging, and nomenclature; compounding; dispensing; distribution; administration;
education; monitoring; and use” (NCC MERP, 2004). Of course, the most important word in the
definition is preventable.
What Small Facilities with Limited Resources Can Do to Prevent Medication Errors
The first thing for small facilities to do when they cannot afford to computerize or adopt other
innovative technology is to review the lists of high-risk drugs and common medication errors given in
this chapter. This is an important starting place. All of these drugs can cause serious injury or death to
patients. In order to reduce the likelihood of patient injury, prepare a memorandum containing the
names and classes of these drugs and circulate the memo to physicians, pharmacists, and nurses in your
facility. Announce to your staff that these are the drugs that can cause the most harm and that staff
should be cautious and vigilant when prescribing, dispensing, or administering them to patients.
To eliminate the risk of excessive doses, consider developing special protocols or standardized order
forms for high-risk drugs that list the lowest or most common doses, frequencies, and routes of
administration. Such preprinted order forms allow the prescriber to simply check off the desired
prescription requirements for the patient. Also include known drug-drug interactions and clinical
conditions and contraindications that need to be ruled out prior to initiation of therapy. Remember also
that the drugs listed in this chapter are not the only drugs that can cause unintended injury to a patient.
Aminoglycoside antibiotics and digoxin must be given at lower doses to patients with renal failure to
avoid toxicity. Antihypertensives, tricyclic antidepressants, alpha-blockers (now used to facilitate
urination in patients with benign prostatic hyperplasia), and narcotics all can cause fainting or
orthostatic (positional) hypotension and lead to falls in the hospital and at home.
Problems with ordering drugs can be ameliorated by implementing a training program for your
prescribers. An excellent and free resource for training prescribers can be found on the Tufts University
School of Medicine Web site. This Web-based teaching program, titled “Prescription Writing: A Mini
Learning Module” (Shader and Benjamin, 2001), has been used to retrain physicians who have lost their
prescribing privileges, medical students, nurse practitioners, and physician assistants. The course also
offers a review of abbreviations, acronyms, and symbols that should and should not be used in writing
prescriptions and provides guidance on how to write a prescription or drug order that communicates
the desired information in the most effective, least ambiguous way, as recommended in JCAHO’s 2004
Patient Safety Goal 2b (see Exhibit 5.1 and Table 5.1).
In addition to identifying faulty processes and potential system failures, health care facilities must also
update their philosophies for dealing with people and events that have led or could lead to medication
errors. Contemporary experts in reducing medication errors all stress developing a culture of safety and
dispensing with the anachronistic practice of finding someone to blame. Today’s objectives are to
identify high-risk practices and failure-prone areas of the medication use process and then fix them!
When a medication error occurs, do a root cause analysis (RCA), not because JCAHO requires it of
accredited hospitals but in order to figure out why the error occurred. Hold monthly quality
improvement meetings, and review the RCA results with staff. Also ask members of the medical,
nursing, and pharmacy staffs to bring areas of concern to the attention of the rest of the staff. You may
want to begin by having medicine, pharmacy, and nursing personnel meet separately and then bring the
entire group together after everyone is comfortable with the process.
Several years ago, one of the authors of this chapter (DMB) was asked to conduct a risk management
audit of a large Midwestern hospital that had experienced ten respiratory depressions (and two deaths)
over the past two years in patients receiving patient controlled analgesia. During the course of the site
visit, he asked the hospital risk manager if she ever got the doctors, the pharmacists, and the nurses
together to talk about any concerns any of them had regarding medication ordering, dispensing, or
administration. The answer was no. He recommended that the risk manager establish such a
committee. About a year later, the nurse risk manager called him to tell him she had
been accepted to law school and also mentioned that she had established a committee of doctors,
pharmacists, and nurses to discuss medication errors. She also said that as a result of some of his
specific recommendations for reducing the concomitant use of narcotics, Benadryl, hydroxyzine,
Phenergan, and a benzodiazepine “sleeping pill” in the same patient and his urging to “bring everything
out in the open” under the peer review function of the hospital, there had been no further respiratory
depressions (or deaths) in the last year. The lesson to be learned is, if you don’t identify the problem,
analyze the problem, and take steps to change those areas that lead to patient injury, you can’t prevent
that problem from occurring again.
Be very selective about adding new drugs to the hospital formulary. Newly approved drugs have been
tested in only approximately 5,000 patients. Safety data about rare but severe adverse drug reactions
like Stevens-Johnson syndrome, toxic epidermal necrolysis (TEN), and acute renal failure may not have
been reported or included in the labeling (package insert), and these reactions could occur even though
you don’t anticipate them (Benjamin, 1998). For this reason, it is also important to review a product’s
labeling every year to determine whether new warnings have been added because of postmarketing
reports of adverse drug reactions.
Lastly, review JCAHO's current National Patient Safety Goals (NPSGs) (for example, Exhibit 5.1) and
implement as many goals (or acceptable alternatives) as possible, whether or not your organization is
JCAHO accredited.
If you think your paper could be improved, you can request a review. In this case, your paper will be checked by the writer or assigned to an editor. You can use this option as many times as you see fit. This is free because we want you to be completely satisfied with the service offered.