
Enterprise Architecture and Information Technology Infrastructure

Enterprise Architecture

While the growth of IT provides opportunities for new business models and processes, management teams face many challenges in making sound IT investments. Investments in technology do not guarantee the viability and profitability of an organization. Too often, firms adopt a solution just because it uses the latest technology and then find that it is not a good fit for the organization.

The financial impact of a failed IT project can include not only the expenditures for

hardware and software but also the time spent implementing a failed solution, including

the time spent redefining business processes and training employees.

In previous weeks, we focused on how organizations analyze their environment, seek

competitive advantage, and set business strategy. Now it’s time to begin focusing on how

information systems fit into that picture. Organizations analyze their business and identify

processes for opportunities to improve profitability and performance with the use of

information technology.

Enterprise Architecture is the management practice of identifying an overall design that helps organizations understand, manage, and expand their IT infrastructure and systems. This strategic, high-level design looks at the organization’s business vision, strategy, and goals, and identifies how information technology fits into that design.

Enterprise architecture is composed of three major components: the application

architecture, the information architecture, and the technical architecture. The application

architecture is a breakdown of the business processes and shows which processes are

supported by which application systems and how these applications integrate and relate

to each other. The application architecture also includes the functional applications, such as finance and human resources.


The information architecture defines where and how the important information is

maintained and secured. Frequently, the information architecture includes information

about all the data, how the data relate to each other, and how data flows throughout the

organization and its systems.

The technical architecture (sometimes referred to as the IT infrastructure) describes the

hardware and software used to design and build the systems. The technical architecture

describes what is already in place in an organization and how the organization wants to

evolve technically. You could think of the technical architecture as a blueprint, much like the blueprint of a building. The blueprint shows where everything is

located and how it fits together. If a system were developed without consideration of the

technical architecture, the chances are very high that it would not work in the

environment. For example, if a web-based system were developed or acquired for an

organization with no internet access, the effort would be futile. Technical architecture also

defines the standards and protocols for the organization, including security requirements.

A fully developed enterprise architecture should be able to tell us anything we need to

know about the business processes, the data used, and the underlying technology and

how it supports the business strategy. A solid enterprise architecture includes everything

from documentation to business concepts to the components discussed above.

IT Infrastructure

The major components of the IT infrastructure are:

1. Services—the people or organizations that run, support, and manage the other

infrastructure components; can be internal staff or external contractors or service

providers.

2. Hardware—devices that perform the input, storage, processing, and output

functions.

3. Software—instructions that enable the hardware to perform its functions, enabling

these assets to meet the needs of the business; includes (1) operating systems that

control the hardware, (2) data management software that stores and provides access

to data, and (3) application software, which supports the business processes.

4. Telecommunications—the tools that provide connectivity and communication among

individuals, companies, governments, or hardware assets; includes networking

hardware and software and telecommunications services (audio, video, and data), including internet access.


5. Facilities—the buildings or spaces that house the equipment and staff that provide

service and support.

Individuals need to understand the basics of these components to help the organization

recognize what is necessary to effectively implement and maintain information systems.

Because a business IT infrastructure can be regarded as the organization’s “nervous system,” it is

imperative that it be stable, robust, secure, and flexible so that it can support business

requirements reliably, especially in times of heavy usage. Consistency with the

infrastructure and enterprise architecture is an important consideration in making IT

decisions. The infrastructure must be able to accommodate both changes in the business and radical changes in technology. Because technology changes constantly, the infrastructure must evolve to take advantage of those advances that provide a business benefit to the company. This must be part of the IT plan so that transitions to newer technology can be integrated smoothly, with no disruption or degradation of service.

Suppose a new computer is under evaluation to replace an aging computer to gain the

advantages of increased speed and more storage. The impact on all of the components of

the infrastructure must be considered:

Will our existing peripherals operate with the new computer?

Will our existing software work on the new computer?

If it does, will it still permit us to achieve the benefits of the new computer?

If not, will new software have to be purchased?

Will our applications run on the new computer, or will changes have to be made?

Will our communication protocols work?

Will our networks support the higher volume of data, or will there be a bottleneck

that will prevent the new computer from functioning as well as we planned?

Will users or the technical staff require training to support the new computer

hardware and software?

Will our physical facilities (which may or may not be a dedicated data center) have the power, cooling, and space required by the new computer?



Business Process Modeling

Before identifying requirements for an information technology solution to support a

process, it is critical to understand how a process is conducted currently—this is often

referred to as the “as-is” process. Frequently, people within a process only understand

their part of the process and even within the same group of users, the process may not be

consistently (or correctly) followed. An important first step is to gather representatives of

the process stakeholders to define collectively the current process. This information can

be gathered through stakeholder interviews and/or a face-to-face session where participants map out the process together on paper around the room. In

addition to understanding what is performed in each step, it is important to understand

why. For example, does the information need to be provided to another area in the

organization to enable a related process to be performed?

Once the current process is documented and understood, it’s time to focus on the best

way to carry out the series of steps needed to perform a task—this is referred to as the “to-be” process. Otherwise, it’s possible to implement a technology solution that only

succeeds in performing a bad process faster rather than actually gaining the

improvements desired to help achieve the organization’s strategy. The section Business

Processes provides a simple example of a before (as-is) process and then an improved (to-be) process for purchasing textbooks at a college bookstore.
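To make the as-is/to-be comparison concrete, here is a minimal Python sketch. The bookstore steps are invented for illustration (they are not the steps from the Business Processes section); the point is simply that documenting both versions as ordered step lists makes the improvement visible and countable.

```python
# Document an "as-is" process and a streamlined "to-be" process as
# ordered lists of steps (hypothetical steps for illustration only).

as_is = [
    "Student looks up required textbook on a printed list",
    "Student searches the shelves for the textbook",
    "Student waits in line at the register",
    "Clerk rings up the sale",
    "Student pays and receives the book",
]

to_be = [
    "Student selects the course on the bookstore website",
    "Site identifies the required textbook automatically",
    "Student pays online and picks up (or ships) the book",
]

def summarize(name, steps):
    """Print a numbered view of a process and its step count."""
    print(f"{name} process ({len(steps)} steps):")
    for number, step in enumerate(steps, start=1):
        print(f"  {number}. {step}")

summarize("As-is", as_is)
summarize("To-be", to_be)
print(f"Steps eliminated: {len(as_is) - len(to_be)}")
```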

Understanding how a process can best be accomplished lays the foundation for defining

requirements for a technology solution. Failure to clearly define all requirements can result in an incomplete solution, wasting resources and failing to deliver the expected benefits.



How Information Supports Decision Making

Now that you have been introduced to the basics of data, how it can be stored, and the

importance of data quality, let’s look at how data transformed into information supports

organizational decision making. In their simplest form, information systems are all about

getting the right information in the most usable format to the right people, at the right

time and place. Advances in integrated software applications, the internet, and better data

management practices provide businesses with better tools to support that goal.

A key competitive advantage of an organization is the ability to react to changes quickly.

Being able to make the right decision to address a potential threat or seize an opportunity

could make the difference in whether or not the company stays in business or continues

to increase profits. The key to making good decisions is having the relevant information readily available in the form that is needed. There are three basic levels of decision making in an organization: operational, managerial, and strategic, often pictured as a pyramid with operational decisions at the base and strategic decisions at the top.

Let’s look at the process of creating an invoice. An invoice contains several pieces of data,

such as customer name, number, address, shipping method, items ordered, and quantities.

This data is required at an operational level to update inventories, handle logistics, add to


accounts receivable, and so forth. At the mid-level of our pyramid, the management level,

the data from each individual invoice are not as important as the cumulative information

that many invoices can provide. For example, sales have increased 25% on product A,

orders for product B are shipping consistently behind schedule, and shipping costs with

shipper X are increasing more than with other shippers. With this information on trends or

patterns, management can investigate further and make decisions on production schedules, supplier relationships, or preferred shipping vendors.
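As a rough illustration of that roll-up, the following Python sketch aggregates operational invoice lines into cumulative, management-level information. The products, shippers, and numbers are invented for the example.

```python
from collections import defaultdict

# Hypothetical operational data: one record per invoice line.
invoice_lines = [
    {"product": "A", "units": 120, "shipper": "X", "days_late": 0},
    {"product": "A", "units": 150, "shipper": "Y", "days_late": 1},
    {"product": "B", "units": 80,  "shipper": "X", "days_late": 4},
    {"product": "B", "units": 95,  "shipper": "X", "days_late": 6},
]

units_by_product = defaultdict(int)    # management view: cumulative sales
delays_by_product = defaultdict(list)  # management view: shipping performance
for line in invoice_lines:
    units_by_product[line["product"]] += line["units"]
    delays_by_product[line["product"]].append(line["days_late"])

for product in sorted(units_by_product):
    delays = delays_by_product[product]
    print(f"Product {product}: {units_by_product[product]} units sold, "
          f"shipments an average of {sum(delays) / len(delays):.1f} days late")
```

The individual invoices matter at the operational level; the aggregated totals and averages are what management acts on.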

At the senior or executive level of an organization, the company leadership is less

concerned than middle management about the trends or patterns—their concerns are

strategic. Senior management looks at information, both from within the organization and

external. For example, suppose a key component needed in the manufacturing process is

petroleum-based. Rising oil prices, coupled with industry forecasts that prices will

continue to rise, call for addressing this situation at a strategic level. Senior management

might consider whether a price increase can be justified, how much of an increase the

market can bear, or whether there are alternatives that would not degrade the product.

A primary advantage of an information system is its ability to support and improve

decision making throughout the organization by turning data into useful information.

However, the system is just a tool and does not replace the human factor; people are still

required to make the choices involved in the decisions. Individuals at all levels of the

organization can use the information provided by the system as they make their decisions.

In the invoice example above, the creation and use of the invoice data could all be done

by hand, using paper invoices. However, the use of a system to capture, store, and share

that information throughout the organization significantly increases the efficiency and

effectiveness of the process and makes the information immediately and readily available

to those who need it to make their decisions.

We can see that information moves through the organization and is viewed for different

purposes by different levels within the organization. However, the data are captured at

the operational level (transaction-processing systems) and made available in appropriate

forms (summary of product, customer, geographic distribution differences, and so on) at

the various managerial levels.

It is important to note that information can flow both up and down the levels within an

organization. Information that is useful for monitoring (“How are we doing?”) typically

flows from the operational level upward. Control information (“Is business going as

planned?”) typically flows from the top level downward. For example, a senior manager

notes that sales figures are declining. She queries down through the organization to find

more information to control the declining sales. From mid-level management, she may


learn that only the Midwest region is experiencing a decline. From the operational level,

she may learn that the sales force in that region has had significant turnover and that 40

percent of its sales representatives have fewer than six months of experience.

More specifically, let’s look at some examples of possible types of information and decisions at different levels of the organization, based on data from an invoice-processing system:

Level | Types of Information | Area of Focus or Concern | Decision Example | Supporting Information from the IT System
--- | --- | --- | --- | ---
Strategic | Overall sales figures | Amount of increase in market share; monitor sales volume vs. projected sales. | Decide to discontinue under-performing products. | The system could produce a report of products where the sales volume is not meeting the projected volume.
Strategic | Overall sales figures | Determine manufacturing capacity requirements and resource utilization; identify increasing costs of raw materials due to increased oil prices. | Decide whether to reduce production of products that use significant petroleum-based ingredients. | The system could provide a report on products that include more than 10% petroleum-based ingredients.
Managerial | Monthly invoices | Plan monthly production schedule; schedule employees; plan maintenance schedules; manage inventory. | Decide to increase production schedule to meet increased demands on certain products. | The system would provide product sales volume information to indicate high-demand products.
Managerial | Monthly invoices | Impact on monthly payroll; overtime hours worked. | Decide to increase number of employees in certain departments to reduce excessive overtime. | The system could provide a report indicating where sales exceeded projected demand by 15%.
Operational | Invoice data | Update inventory; schedule production; coordinate shipping. | Decide to negotiate shipping rates with most-used shippers. | The system could produce a report of the volume of shipping done with each shipping vendor and their shipping rates.

To provide a more personal example, think about the information you can gain from your

online bank account system. The system can show your current balance, total of deposits,

total of withdrawals, pending payments (if you use online bill paying), etc. Then based on

information the system provides, you can make more informed decisions about your

budgeting and spending. If the system showed that last month your total ATM withdrawals had increased significantly, that on average you were hitting the ATM three or four times each week, and that withdrawals averaged $50 each, you could decide to limit yourself to once-a-week ATM withdrawals of no more than

$100. Further analysis of your spending habits could show a significant amount of money

being spent daily on eating lunch out. You could then decide to pack your lunch two days

a week. This shows how you could make fact-based decisions supported by information

from the banking information system.
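A minimal Python sketch of that kind of analysis, using invented withdrawal data, shows how raw transactions become the frequency and average figures described above:

```python
from datetime import date

# Hypothetical ATM withdrawals for one month: (date, amount in dollars).
withdrawals = [
    (date(2022, 1, 3), 60), (date(2022, 1, 5), 40),
    (date(2022, 1, 7), 50), (date(2022, 1, 12), 50),
    (date(2022, 1, 14), 60), (date(2022, 1, 18), 40),
    (date(2022, 1, 21), 50), (date(2022, 1, 26), 50),
]

count = len(withdrawals)
total = sum(amount for _, amount in withdrawals)
weeks = {day.isocalendar()[1] for day, _ in withdrawals}  # distinct ISO weeks

print(f"{count} withdrawals across {len(weeks)} weeks "
      f"({count / len(weeks):.1f} per week), "
      f"averaging ${total / count:.2f}, ${total} in total")
```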

Keep in mind that information technology is simply a tool. Knowing how to use the tool

correctly is instrumental to overall effectiveness. The key to using IT successfully is

knowing what data an information system contains and how the data can be converted


into useful information to support decision making at each level in the organization. This

helps organizations achieve their business strategy and maintain or increase their competitive advantage.



Networking and Communication

Introduction

In the early days of computing, computers were seen as devices for making calculations,

storing data, and automating business processes. However, as the devices evolved, it

became apparent that many of the functions of telecommunications could be integrated

into the computer. During the 1980s, many organizations began combining their once-

separate telecommunications and information-systems departments into an information

technology, or IT, department. This ability for computers to communicate with one

another and, maybe more important, to facilitate communication between individuals and

groups, has been an important factor in the growth of computing over the past several

decades.

Computer networking really began in the 1960s with the birth of the internet, as we’ll see

below. However, while the internet and web were evolving, corporate networking was

also taking shape in the form of local area networks and client-server computing. In the

1990s, when the internet came of age, internet technologies began to pervade all areas of

an organization. Now, with the internet a global phenomenon, it would be unthinkable to

have a computer that did not include communications capabilities. This reading will review

the different technologies that have been put in place to enable this communications

revolution and a key information systems component, networking communication.

A Brief History of the Internet

In the Beginning: ARPANET

The story of the internet, and networking in general, can be traced back to the late 1950s.

The United States was in the depths of the Cold War with the USSR, and each nation

closely watched the other to determine which would gain a military or intelligence

advantage. In 1957, the Soviets surprised the US with the launch of Sputnik, propelling us

into the space age. In response to Sputnik, the US government created the Advanced


Research Projects Agency (ARPA), whose initial role was to ensure that the US was not

surprised again. It was from ARPA, now called Defense Advanced Research Projects

Agency (DARPA), that the internet first sprang.

ARPA was the center of computing research in the 1960s, but there was just one problem:

Many of the computers could not talk to each other. In 1968, ARPA sent out a request for

proposals for a communication technology that would allow different computers located

around the country to be integrated together into one network. Twelve companies

responded to the request, and a company named Bolt, Beranek, and Newman (BBN) won

the contract. They began work right away and were able to complete the job just one year

later: in September 1969, the ARPANET was turned on. The first four nodes were at UCLA, the Stanford Research Institute, UC Santa Barbara, and the University of Utah.

The Internet and the World Wide Web

Over the next decade, the ARPANET grew and gained popularity. During this time, other

networks also came into existence. Different organizations were connected to different

networks. This led to a problem: The networks could not talk to each other. Each network

used its own proprietary language, or protocol (see “An Internet Vocabulary Lesson” for

the definition), to send information back and forth. This problem was solved by the

invention of transmission control protocol/internet protocol (TCP/IP). TCP/IP was

designed to allow networks running on different protocols to have an intermediary

protocol that would allow them to communicate. As long as your network supported

TCP/IP, you could communicate with all of the other networks running TCP/IP. TCP/IP

quickly became the standard protocol and allowed networks to communicate with each

other. It is from this breakthrough that we first got the term internet, which simply means

“an interconnected network of networks.”


An Internet Vocabulary Lesson

Networking communication is full of some very technical concepts based on some

simple principles. Learn the terms below, and you’ll be able to hold your own in a

conversation about the internet.

Packet: The fundamental unit of data transmitted over the internet. When a

device intends to send a message to another device (for example, your PC

sends a request to YouTube to open a video), it breaks the message down into

smaller pieces, called packets. Each packet has the sender’s address, the

destination address, a sequence number, and a piece of the overall message to

be sent.

Hub: A simple network device that connects other devices to the network and

sends packets to all the devices connected to it.

Bridge: A network device that connects two networks together and only

allows packets through that are needed.

Switch: A network device that connects multiple devices together and filters

packets based on their destination within the connected devices.

Router: A device that receives and analyzes packets and then routes them

toward their destination. In some cases, a router will send a packet to another

router; in other cases, it will send it directly to its destination.

IP Address: Every device that communicates on the internet, whether it is a

personal computer, a tablet, a smartphone, or anything else, is assigned a

unique identifying number called an Internet Protocol (IP) address. Historically,

the IP-address standard used has been IPv4 (version 4), which has the format

of four numbers between 0 and 255 separated by a period. For example, the

domain Saylor.org has the IP address of 107.23.196.166. The IPv4 standard

has a limit of 4,294,967,296 possible addresses. As the use of the internet has

proliferated, the number of IP addresses needed has grown to the point where

the use of IPv4 addresses will be exhausted. This has led to the new IPv6

standard. The IPv6 standard is formatted as eight groups of four hexadecimal

digits, such as 2001:0db8:85a3:0042:1000:8a2e:0370:7334. The IPv6

standard has a limit of about 3.4×10^38 possible addresses.

Domain name: If you had to try to remember the IP address of every web

server you wanted to access, the internet would not be nearly as easy to use.

A domain name is a human-friendly name for a device on the internet. These

names generally consist of a descriptive text followed by the top-level domain


(TLD). For example, Wikipedia’s domain name is

wikipedia.org; wikipedia describes the organization and .org is the top-level

domain. In this case, the .org TLD is designed for nonprofit organizations.

Other well-known TLDs include .com, .net, and .gov.

DNS: DNS stands for domain name system, which acts as the directory on the

internet. When a request to access a device with a domain name is given, a

DNS server is queried. It returns the IP address of the device requested,

allowing for proper routing.

Packet-switching: When a packet is sent from one device out over the

internet, it does not follow a straight path to its destination. Instead, it is

passed from one router to another across the internet until it reaches its

destination. In fact, sometimes two packets from the same message will take

different routes. Sometimes, packets will arrive at their destination out of

order. When this happens, the receiving device restores them to their proper order (the short sketch after this list illustrates this reassembly).

Protocol: In computer networking, a protocol is the set of rules that allow two

(or more) devices to exchange information back and forth across the network.
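To tie several of these terms together, here is a minimal Python sketch, not a real network stack: a message is split into sequence-numbered packets carrying sender and destination addresses, the packets arrive out of order (as with packet switching), and the receiver restores the order. The IP addresses come from reserved documentation ranges and are illustrative only.

```python
import random

def make_packets(src, dst, message, size=10):
    """Break a message into packets that carry the sender's address,
    the destination address, a sequence number, and a piece of data."""
    return [
        {"src": src, "dst": dst, "seq": seq, "data": message[i:i + size]}
        for seq, i in enumerate(range(0, len(message), size))
    ]

message = "Packets may arrive out of order at the destination."
packets = make_packets("192.0.2.10", "198.51.100.7", message)

random.shuffle(packets)               # simulate packets taking different routes
packets.sort(key=lambda p: p["seq"])  # receiver reorders by sequence number
assert "".join(p["data"] for p in packets) == message
print("Message reassembled correctly from", len(packets), "packets")
```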

[Figure: Worldwide internet use over a 24-hour period]

As we moved into the 1980s, computers were added to the internet at an increasing rate.

These computers were primarily from government, academic, and research organizations.

Much to the surprise of the engineers, the early popularity of the internet was driven by


the use of electronic mail (see “Email Is the ‘Killer’ App for the Internet” below).

Using the internet in these early days was not easy. In order to access information on

another server, you had to know how to type in the commands necessary to access it, as

well as know the name of that device. That all changed in 1990, when Tim Berners-Lee

introduced his World Wide Web project, which provided an easy way to navigate the

internet through the use of linked text (hypertext). The World Wide Web gained even

more steam with the release of the Mosaic browser in 1993, which allowed graphics and

text to be combined together as a way to present information and navigate the internet.

The Mosaic browser took off in popularity and was soon superseded by Netscape

Navigator, the first commercial web browser, in 1994. The internet and the World Wide

Web were now poised for growth.

The Dot-Com Bubble

In the 1980s and early 1990s, the internet was being managed by the National Science

Foundation (NSF). The NSF had restricted commercial ventures on the internet, which

meant that no one could buy or sell anything online. In 1991, the NSF transferred its role

to three other organizations, thus getting the US government out of direct control over

the internet and essentially opening up commerce online.

This new commercialization of the internet led to what is now known as the dot-com

bubble. A frenzy of investment in new dot-com companies took place in the late 1990s,

running up the stock market to new highs on a daily basis. This investment bubble was

driven by the fact that investors knew that online commerce would change everything.

Unfortunately, many of these new companies had poor business models and ended up

with little to show for all of the funds that were invested in them. In 2000 and 2001, the

bubble burst and many of these new companies went out of business. Many companies

also survived, including the still-thriving Amazon (started in 1994) and eBay (1995). After

the dot-com bubble burst, a new reality became clear: In order to succeed online, e-

business companies would need to develop real business models and show that they

could survive financially using this new technology.

Web 2.0

In the first few years of the World Wide Web, creating and putting up a website required

a specific set of knowledge: You had to know how to set up a server on the World Wide

Web, how to get a domain name, how to write web pages in HTML, and how to

troubleshoot various technical issues as they came up. Someone who did these jobs for a

website became known as a webmaster.


As the web gained in popularity, it became more and more apparent that those who did

not have the skills to be a webmaster still wanted to create online content and have their

own piece of the web. This need was met with new technologies that provided a website

framework for those who wanted to put content online. Blogger and Wikipedia are

examples of these early Web 2.0 applications, which gave anyone with something to say a

place to go and say it, without the need for understanding HTML or web-server

technology.

Starting in the early 2000s, Web 2.0 applications began a second bubble of optimism and

investment. It seemed that everyone wanted their own blog or photo-sharing site. Here

are some of the companies that came of age during this time: MySpace (2003),

Photobucket (2003), Flickr (2004), Facebook (2004), WordPress (2005), Tumblr (2006),

and Twitter (2006). The ultimate indication that Web 2.0 had taken hold was when Time

magazine named “You” its “Person of the Year” in 2006.

Email is the “Killer” App for the Internet

When the personal computer was created, it was a great little toy for technology

hobbyists and armchair programmers. As soon as the spreadsheet was invented,

however, businesses took notice, and the rest is history. The spreadsheet was the

killer app for the personal computer: people bought PCs just so they could run

spreadsheets.

The internet was originally designed as a way for scientists and researchers to share

information and computing power among themselves. However, as soon as

electronic mail was invented, it began driving demand for the internet. This wasn’t

what the developers had in mind, but it turned out that people connecting to people

was the killer app for the internet.

We are seeing this again today with social networks, specifically Facebook. Many

who weren’t convinced to have an online presence now feel left out without a

Facebook account. The connections made between people using Web 2.0

applications like Facebook on their personal computers or smartphones are driving growth yet again.


The Internet and the World Wide Web Are Not the Same Thing

Many times, the terms “internet” and “World Wide Web,” or even just “the web,” are

used interchangeably. But really, they are not the same thing at all. The internet is

an interconnected network of networks. Many services run across the internet:

electronic mail, voice and video, file transfers, and, yes, the World Wide Web.

The World Wide Web is simply one piece of the internet. It is made up of web

servers that have HTML pages that are being viewed on devices with web browsers.

It is really that simple.

The Growth of Broadband

In the early days of the internet, most access was done via a modem over an analog

telephone line. A modem (short for “modulator-demodulator”) was connected to the

incoming phone line and a computer in order to connect you to a network. Speeds were

measured in bits-per-second (bps), with speeds growing from 1200 bps to 56,000 bps

over the years. Connection to the internet via these modems is called dial-up access. Dial-

up was very inconvenient because it tied up the phone line. As the web became more and

more interactive, dial-up also hindered usage, as users wanted to transfer more and more

data. As a point of reference, downloading a typical 3.5 Mb song would take roughly 49 minutes at 1200 bps and about 2 minutes at 28,800 bps.

A broadband connection is defined as one that has speeds of at least 256,000 bps,

though most connections today are much faster, measured in millions of bits per second

(megabits per second, or Mbps) or even billions (gigabits per second, or Gbps). For the home user, a broadband connection

is usually accomplished via the cable television lines or phone lines (DSL). Both cable and

DSL have similar prices and speeds, though each individual may find that one is better

than the other for their specific area. Speeds for cable and DSL can vary during different

times of the day or week, depending upon how much data traffic is being used. In more

remote areas, where cable and phone companies do not provide access, home internet

connections can be made via satellite. The average home broadband speed is anywhere

between 3 Mbps and 30 Mbps. At 10 Mbps, downloading a typical 3.5 Mb song would take less than a second. For businesses that require more bandwidth and reliability,

telecommunications companies can provide other options, such as T1 and T3 lines.
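The download times above come from simple arithmetic: transfer time is file size in bits divided by connection speed in bits per second. A short Python sketch, assuming the song is 3.5 megabits and ignoring protocol overhead:

```python
def download_seconds(size_bits, bits_per_second):
    """Idealized transfer time, ignoring protocol overhead and congestion."""
    return size_bits / bits_per_second

song_bits = 3.5e6  # a 3.5 Mb song
for label, speed in [("1,200 bps dial-up", 1200),
                     ("28,800 bps dial-up", 28_800),
                     ("10 Mbps broadband", 10e6)]:
    seconds = download_seconds(song_bits, speed)
    print(f"{label}: {seconds / 60:6.1f} minutes ({seconds:,.2f} seconds)")
```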

Broadband access is important because it impacts how the internet is used. When a

community has access to broadband, its people can interact more online, and overall usage of digital tools increases. Access to broadband is now considered a basic human right


by the United Nations, as declared in their 2011 statement:

“Broadband technologies are fundamentally transforming the way we live,” the

Broadband Commission for Digital Development, set up last year by the UN

Educational Scientific and Cultural Organization (UNESCO) and the UN

International Telecommunications Union (ITU), said in issuing “The Broadband

Challenge” at a leadership summit in Geneva.

“It is vital that no one be excluded from the new global knowledge societies we

are building. We believe that communication is not just a human need—it is a

right.”

Wireless Networking

Today we are used to being able to access the internet wherever we go. Our smartphones

can access the internet; Starbucks provides wireless “hotspots” for our laptops or tablets.

These wireless technologies have made internet access more convenient and have made

devices such as tablets and laptops much more functional. Let’s examine a few of these

wireless technologies.

Wi-Fi

Wi-Fi is a technology that takes an internet signal and converts it into radio waves. These

radio waves can be picked up within a radius of approximately 65 feet by devices with a

wireless adapter. Several Wi-Fi specifications have been developed over the years,

starting with 802.11b in 1999, followed by the 802.11g specification in 2003, and

802.11n in 2009. Each new specification improved the speed and range of Wi-Fi, allowing

for more uses. One of the primary places where Wi-Fi is being used is in the home. Home

users are purchasing Wi-Fi routers, connecting them to their broadband connections, and

then connecting multiple devices via Wi-Fi.

Mobile Network

As the cell phone has evolved into the smartphone, the desire for internet access on these

devices has led to data networks being included as part of the mobile phone network.

While internet connections were technically available earlier, it was really with the release

of the 3G networks in 2001 (2002 in the US) that smartphones and other cellular devices

could access data from the internet. This new capability drove the market for new and


more powerful smartphones, such as the iPhone, introduced in 2007. In 2011, wireless

carriers began offering 4G data speeds, giving the cellular networks the same speeds that

customers were used to getting via their home connection.

Why Doesn’t My Cell Phone Work When I Travel Abroad?

As mobile phone technologies have evolved, providers in different countries have

chosen different communication standards for their mobile phone networks. In the

United States, both competing standards exist: Global System for Mobile

Communications (GSM; used by AT&T and T-Mobile) and Code-Division Multiple

Access (CDMA; used by the other major carriers). Each standard has its pros and

cons, but the bottom line is that phones using one standard cannot easily switch to

the other. In the United States, this is not a big deal because mobile networks exist

to support both standards. But when you travel to other countries, you will find that

most of them use GSM networks, with the one big exception being Japan, which has

standardized on CDMA. It is possible for a mobile phone using one type of network

to switch to the other type of network by switching out the SIM card, which

controls your access to the mobile network. However, this will not work in all cases.

If you are traveling abroad, it is always best to consult with your mobile provider to

determine the best way to access a mobile network.

Bluetooth

While Bluetooth is not generally used to connect a device to the internet, it is an

important wireless technology that has enabled many functionalities that are used every

day. When created in 1994 by Ericsson, it was intended to replace wired connections

between devices. Today, it is the standard method for connecting nearby devices

wirelessly. Bluetooth has a range of approximately 300 feet and consumes very little

power, making it an excellent choice for a variety of purposes. Some applications of

Bluetooth include connecting a printer to a personal computer, connecting a mobile

phone and headset, connecting a wireless keyboard and mouse to a computer, and

connecting a remote for a presentation made on a personal computer.

VoIP

A growing class of data being transferred over the internet is voice data. A protocol called

voice over IP (VoIP) enables sounds to be converted to a digital format for transmission

over the internet and then recreated at the other end. By using many existing


technologies and software, voice communication over the internet is now available to

anyone with a browser (think Skype, Google Hangouts). Beyond this, many companies are

now offering VoIP-based telephone service for business and home use.

Organizational Networking

LAN and WAN

[Figure: Local and wide area networks, showing the scope of business networks]

While the internet was evolving and creating a way for organizations to connect to each

other and the world, another revolution was taking place inside organizations. The

proliferation of personal computers inside organizations led to the need to share

resources such as printers, scanners, and data. Organizations solved this problem through

the creation of local area networks (LANs), which allowed computers to connect to each

other and to peripherals. These same networks also allowed personal computers to hook

up to legacy mainframe computers.

A LAN is (by definition) a local network, usually operating in the same building or on the

same campus. When an organization needed to provide a network over a wider area (with

locations in different cities or states, for example), they would build a wide area network

(WAN).

Client-Server


The personal computer originally was used as a stand-alone computing device. A program

was installed on the computer and then used to do word processing or number crunching.

However, with the advent of networking and LANs, computers could work together to

solve problems. Higher-end computers were installed as servers, and users on the local

network could run applications and share information among departments and

organizations. This is called client-server computing.
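As a minimal illustration of the client-server model, here is a sketch using Python's standard socket library: a server process waits for requests, and a client connects to it over TCP/IP. The host, port, and messages are arbitrary choices for the example, and a real server would of course handle many clients.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # hypothetical local address and port

server = socket.create_server((HOST, PORT))  # bind and listen before connecting

def serve_one_request():
    """Accept a single client, read its request, and send a reply."""
    conn, _addr = server.accept()
    with conn:
        name = conn.recv(1024).decode()
        conn.sendall(f"Hello {name}, this reply came from the server.".encode())

threading.Thread(target=serve_one_request, daemon=True).start()

# The client side: connect to the server over TCP/IP and make a request.
with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"workstation-42")
    print(client.recv(1024).decode())
server.close()
```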

Intranet

Just as organizations set up websites to provide global access to information about their

business, they also set up internal web pages to provide information about the

organization to the employees. This internal set of web pages is called an intranet. Web

pages on the intranet are not accessible to those outside the company; in fact, those

pages would come up as “not found” if an employee tried to access them from outside the

company’s network.

Extranet

Sometimes an organization wants to be able to collaborate with its customers or suppliers

while at the same time maintaining the security of being inside its own network. In cases

like this, a company may want to create an extranet, which is a part of the company’s

network that can be made available securely to those outside of the company. Extranets

can be used to allow customers to log in and check the status of their orders, or for

suppliers to check their customers’ inventory levels.

Sometimes, an organization will need to allow someone who is not located physically

within its internal network to gain access. This access can be provided by a virtual private

network (VPN). VPNs will be discussed further in the reading, Information Systems

Security.


Microsoft’s SharePoint Powers the Intranet

As organizations begin to see the power of collaboration between their employees,

they often look for solutions that will allow them to leverage their intranet to enable

more collaboration. Since most companies use Microsoft products for much of their

computing, it is only natural that they have looked to Microsoft to provide a

solution. This solution is Microsoft’s SharePoint.

SharePoint provides a communication and collaboration platform that integrates

seamlessly with Microsoft’s Office suite of applications. Using SharePoint,

employees can share a document and edit it together—no more emailing that Word

document to everyone for review. Projects and documents can be managed

collaboratively across the organization. Corporate documents are indexed and made

available for search. No more asking around for that procedures document—now

you just search for it in SharePoint. For organizations looking to add a social

networking component to their intranet, Microsoft offers Yammer, which can be

used by itself or integrated into SharePoint.

Cloud Computing

The universal availability of the internet combined with increases in processing power and

data-storage capacity have made cloud computing a viable option for many companies.

Using cloud computing, companies or individuals can contract to store data on storage

devices somewhere on the internet. Applications can be “rented” as needed, giving a

company the ability to quickly deploy new applications. You can read about cloud

computing in more detail in the reading Software.

Metcalfe’s Law

Just as Moore’s Law describes how computing power is increasing over time,

Metcalfe’s Law describes the power of networking. Specifically, Metcalfe’s Law

states that the value of a telecommunications network is proportional to the

square of the number of connected users of the system. Think about it this way: If

none of your friends were on Facebook, would you spend much time there? If no

one else at your school or place of work had email, would it be very useful to you?

Metcalfe’s Law tries to quantify this value.
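A quick sketch of the arithmetic behind Metcalfe's Law: with n users there are n(n-1)/2 possible pairwise connections, which grows roughly with the square of n.

```python
def possible_connections(n):
    """Number of distinct user-to-user links in a network of n users."""
    return n * (n - 1) // 2

for users in [2, 10, 100, 1000]:
    print(f"{users:>5} users -> {possible_connections(users):>7} possible connections")
```

Doubling the users roughly quadruples the possible connections, which is why each new member makes the network more valuable to everyone already on it.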


Summary

The networking revolution has completely changed how the computer is used. Today, no

one would imagine using a computer that was not connected to one or more networks.

The development of the internet and World Wide Web, combined with wireless access,

has made information available at our fingertips. The Web 2.0 revolution has made us all

authors of web content. As networking technology has matured, the use of internet

technologies has become a standard for every type of organization. The use of intranets

and extranets has allowed organizations to deploy functionality to employees and

business partners alike, increasing efficiencies and improving communications. Cloud

computing has truly made information available everywhere and has serious implications

for the role of the IT department.

Study Questions

1. What were the first four locations hooked up to the internet (ARPANET)?

2. What does the term packet mean?

3. Which came first, the internet or the World Wide Web?

4. What was revolutionary about Web 2.0?

5. What was the so-called killer app for the internet?

6. What makes a connection a broadband connection?

7. What does the term VoIP mean?

8. What is a LAN?

9. What is the difference between an intranet and an extranet?

10. What is Metcalfe’s Law?

References

United Nations News Center. (2011). UN sets goal of bringing broadband to half developing world’s people by 2015. Retrieved from http://www.un.org/apps/news/story.asp?Cr=broadband&NewsID=40191#.Ut7JOmTTk1J

Licenses and Attributions


Chapter 5: Networking and Communication (https://www.saylor.org/site/textbooks/Information%20Systems%20for%20Business%20and%20Beyond) from Information Systems for Business and Beyond by David T. Bourgeois is available under a Creative Commons Attribution 3.0 Unported (https://creativecommons.org/licenses/by/3.0/) license. © 2014, David T. Bourgeois. UMGC has modified this work and it is available under the original license.



Software

Introduction

Software, another key component of an information system, is the set of instructions that

tell the hardware what to do. Software is created through the process of programming.

Without software, the hardware would not be functional.

Types of Software

Software can be broadly divided into two categories: operating systems and application

software. Operating systems manage the hardware and create the interface between the

hardware and the user. Application software is the category of programs that do

something useful for the user.

Learning Resource

[Figure: Operating system and application software functions: the connection between the hardware and the user]

Operating Systems

The operating system provides several essential functions, including:

1. managing the hardware resources of the computer;

2. providing the user-interface components; and

3. providing a platform for software developers to write applications.

All computing devices run an operating system. For personal computers, the most popular

operating systems are Microsoft Windows, Apple OS X, and different versions of Linux.

Smartphones and tablets run operating systems as well, such as Apple iOS and Android.

Early personal-computer operating systems were simple by today’s standards; they did

not provide multitasking and required the user to type commands to initiate an action.

The amount of memory that early operating systems could handle was limited as well,

making large programs impractical to run. The most popular of the early operating

systems was IBM’s Disk Operating System, or DOS, which was actually developed for IBM

by Microsoft.


In 1984, Apple introduced the Macintosh computer, featuring an operating system with a

graphical user interface. Though not the first graphical operating system, it was the first

one to find commercial success. In 1985, Microsoft released the first version of Windows.

This version of Windows was not an operating system, but instead was an application that

ran on top of the DOS operating system, providing a graphical environment. It was quite

limited and had little commercial success.

It was not until the 1990 release of Windows 3.0 that Microsoft found success with a

graphical user interface. Because of the hold of IBM and IBM-compatible personal

computers on business, it was not until Windows 3.0 was released that business users

began using a graphical user interface, ushering us into the graphical-computing era. Since

1990, both Apple and Microsoft have released many new versions of their operating

systems, with each release adding the ability to process more data at once and access

more memory. Features such as multitasking, virtual memory, and voice input have

become standard features.

Mac vs. Windows

Are you a Mac or a PC user? Ever since its introduction in 1984, users of the Apple

Macintosh have been quite biased about their preference for the Macintosh

operating system (now called OS X) over Microsoft’s. When Microsoft introduced

Windows, Apple sued Microsoft, claiming that they copied the “look and feel” of the

Macintosh operating system. In the end, Microsoft successfully defended itself.

Over the past few years, Microsoft and Apple have traded barbs with each other,

each claiming to have a better operating system and software. While Microsoft has

always had the larger market share (see below), Apple has been the favorite of

artists, musicians, and the technology elite. Apple also provides a lot of computers

to elementary schools, thus gaining a following among the younger generation.


Why Is Microsoft Software So Dominant in the Business World?

If you’ve worked in the world of business, you may have noticed that almost all of

the computers run a version of Microsoft’s Windows operating system. Why is this?

On almost all college campuses, you see a preponderance of Apple Macintosh

laptops. In elementary schools, Apple reigns as well. Why has this not extended into

the business world?

Almost all businesses used IBM mainframe computers back in the 1960s and 1970s.

These same businesses shied away from personal computers until IBM released the

PC in 1981.

When executives had to make a decision about purchasing personal computers for

their employees, they would choose the safe route and purchase IBM. The saying

then was: “No one ever got fired for buying IBM.” So over the next decade,

companies bought IBM personal computers (or those compatible with them), which

ran an operating system called DOS. DOS was created by Microsoft, so when

Microsoft released Windows as the next iteration of DOS, companies took the safe

route and started purchasing Windows.

Microsoft soon found itself with the dominant personal-computer operating system

for businesses. As the networked personal computer began to replace the

mainframe computer as the primary way of computing inside businesses, it became

essential for Microsoft to give businesses the ability to administer and secure their

networks. Microsoft developed business-level server products to go along with their

personal computer products, thereby providing a complete business solution. And

so now, the saying goes: “No one ever got fired for buying Microsoft.”

A third personal-computer operating system family that is gaining in popularity is Linux

(pronounced “linn-ex”). Linux is a version of the Unix operating system that runs on a

personal computer. Unix is an operating system used primarily by scientists and engineers

on larger minicomputers. These are expensive computers, and software developer Linus

Torvalds wanted to find a way to make Unix run on less expensive personal computers.

Linux was the result. Linux has many variations and now powers a large percentage of

web servers in the world. It is also an example of open-source software, a topic we will

cover later in this reading.

Application Software


The second major category of software is application software. Application software is,

essentially, software that allows the user to accomplish some goal or purpose. For

example, if you have to write a paper, you might use Microsoft Word. If you want to listen

to music, you might use iTunes. To surf the web, you might use Chrome or Firefox. Even a

computer game could be considered application software.

The “Killer” App

[Figure: VisiCalc, the first personal-computer spreadsheet, running on an Apple II (public domain image)]

When a new type of digital device is invented, there are generally a small group of

technology enthusiasts who will purchase it just for the joy of figuring out how it works.

However, for most of us, until a device can actually do something useful, we are not going

to spend our hard-earned money on it. A “killer” application is one that becomes so

essential that large numbers of people will buy a device just to run that application.

For the personal computer, the killer application was the spreadsheet. In 1979, VisiCalc,

the first personal-computer spreadsheet package, was introduced. It was an immediate hit

and drove sales of the Apple II. It also solidified the value of the personal computer

beyond the relatively small circle of technology geeks. When the IBM PC was released,

another spreadsheet program, Lotus 1-2-3, was the killer app for business users.


Productivity Software

Suite | Word Processing | Spreadsheet | Presentation | Other
--- | --- | --- | --- | ---
Microsoft Office | Word | Excel | PowerPoint | Outlook (email), Access (database), OneNote (information gathering)
Apple iWork | Pages | Numbers | Keynote | Integrates with iTunes, iCloud, and other Apple software
OpenOffice | Writer | Calc | Impress | Base (database), Draw (drawing), Math (equations)
Google Drive | Document | Spreadsheet | Presentation | Gmail (email), Forms (online form data collection), Draw (drawing)

Along with the spreadsheet, several other software applications have become standard

tools for the workplace. These applications, called productivity software, allow office

employees to complete their daily work. Many times, these applications come packaged

together, such as in Microsoft’s Office suite. Here is a list of these applications and their

basic functions:


Word processing: This class of software provides for the creation of written

documents. Functions include the ability to type and edit text; format fonts and

paragraphs; and add, move, and delete text throughout the document. Most modern

word-processing programs also have the ability to add tables, images, and layout and

formatting features to the document. Word processors save their documents as

electronic files in a variety of formats. By far, the most popular word-processing

package is Microsoft Word, which saves its files in the DOCX format. This format

can be read/written by many other word-processor packages.

Spreadsheet: This class of software provides a way to do numeric calculations and

analysis. The working area is divided into rows and columns, where users can enter

numbers, text, or formulas. It is the formulas that make a spreadsheet powerful, allowing the user to develop complex calculations that change automatically when the input numbers change (see the short sketch after this list). Most spreadsheets also include the ability to create charts based

on the data entered. The most popular spreadsheet package is Microsoft Excel,

which saves its files in the XLSX format. Just as with word processors, many other

spreadsheet packages can read and write to this file format.

Presentation: This class of software provides for the creation of slideshow

presentations. Harkening back to the days of overhead projectors and

transparencies, presentation software allows its users to create a set of slides that

can be printed or projected on a screen. Users can add text, images, and other media

elements to the slides. Microsoft’s PowerPoint is the most popular software, saving

its files in PPTX format.
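To make the formula idea concrete, here is a tiny illustrative sketch in Python. It is not how spreadsheets are implemented; it only shows the recalculation behavior that formulas provide:

    # A toy illustration of spreadsheet recalculation: the derived value
    # changes automatically when an input cell changes.
    cells = {"A1": 100, "A2": 250}
    total = lambda: cells["A1"] + cells["A2"]   # plays the role of =A1+A2
    print(total())   # 350
    cells["A2"] = 300
    print(total())   # 400, recomputed from the new input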

Some office suites include other types of software. For example, Microsoft Office

includes Outlook, its email package; and OneNote, an information-gathering

collaboration tool. The professional version of Office also includes Microsoft Access,

a database package.

Microsoft popularized the idea of the office-software productivity bundle with its release of Microsoft Office. This package continues to dominate the market, and most

businesses expect employees to know how to use this software. However, many

competitors to Microsoft Office exist and are compatible with the file formats used by

Microsoft (see the table above). Recently, Microsoft has begun to offer a web version of its

Office suite. Similar to Google Drive, this suite allows users to edit and share documents

online using cloud-computing technology.

Utility Software and Programming Software


“PowerPointed” to Death

As presentation software, specifically Microsoft PowerPoint, has gained acceptance

as the primary method to formally present information in a business setting, the art

of giving an engaging presentation is becoming rare. Many presenters now just read

the bullet points in the presentation, which those in attendance can already read for

themselves.

The real problem is not with PowerPoint as much as it is with the person creating

and presenting. The software used to help you communicate should not be a substitute for the presentation you want to give; it should support it.

Software developers are becoming aware of this problem as well. New digital

presentation technologies are being developed, with the hopes of becoming “the

next PowerPoint.” One notable innovative presentation application is Prezi. Prezi is a

presentation tool that uses a single canvas for the presentation, allowing presenters

to place text, images, and other media on the canvas, and then navigate between

these objects as they present. Just as with PowerPoint, Prezi should be used to

supplement the presentation. And we must always remember that sometimes the

best presentations are made with no digital tools.


I Own This Software, Right? Well . . .

When you purchase software and install it on your computer, are you the owner of

that software? Technically, you are not! When you install software, you are actually

just being given a license to use it. When you first install a software package, you

are asked to agree to the terms of service or the license agreement. In that

agreement, you will find that your rights to use the software are limited. For

example, in the terms of the Microsoft Office Excel 2010 software license, you will

find the following statement: “This software is licensed, not sold. This agreement

only gives you some rights to use the features included in the software edition you

licensed.”

For the most part, these restrictions are what you would expect: you cannot make

illegal copies of the software and you may not use it to do anything illegal. However,

there are other, more unexpected terms in these software agreements. For example,

many software agreements ask you to agree to a limit on liability. Again, from

Microsoft: “Limitation on and exclusion of damages. You can recover from Microsoft

and its suppliers only direct damages up to the amount you paid for the software.

You cannot recover any other damages, including consequential, lost profits, special,

indirect or incidental damages.” What this means is that if a problem with the

software causes harm to your business, you cannot hold Microsoft or the supplier

responsible for damages.

Two subcategories of application software worth mentioning are utility software and

programming software. Utility software includes software that allows you to fix or modify

your computer in some way. Examples include antivirus software and disk

defragmentation software. These types of software packages were invented to fill

shortcomings in operating systems. Many times, a subsequent release of an operating

system will include these utility functions as part of the operating system itself.

Programming software is software whose purpose is to make more software. Most of

these programs provide programmers with an environment in which they can write the

code, test it, and convert it into the format that can then be run on a computer.

Applications for the Enterprise

As the personal computer proliferated inside organizations, control over the information

generated by the organization began splintering. Say the customer service department

creates a customer database to keep track of calls and problem reports, and the sales


department also creates a database to keep track of customer information. Which one

should be used as the master list of customers? As another example, someone in sales

might create a spreadsheet to calculate sales revenue, while someone in finance creates a

different one that meets the needs of their department. However, it is likely that the two

spreadsheets will come up with different totals for revenue. Which one is correct? And

who is managing all of this information?

Enterprise Resource Planning

In the 1990s, the need to bring the organization’s information back under centralized

control became more apparent. The enterprise resource planning (ERP) system

(sometimes just called enterprise software) was developed to bring together an entire

organization in one software application. Simply put, an ERP system is a software

application using a central database that is implemented throughout the entire

organization. Let’s take a closer look at this definition:

A software application: An ERP is a software application that is used by many of an

organization’s employees.

Using a central database: All users of the ERP edit and save their information from the same data source. What this means practically is that there is only one customer database, there is only one calculation for revenue, etc. (A toy sketch of this idea follows the list.)

That is implemented throughout the entire organization: ERP systems include

functionality that covers all of the essential components of a business. Further, an

organization can purchase modules for its ERP system that match specific needs,

such as manufacturing or planning.
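The following deliberately tiny Python sketch, with hypothetical module names, illustrates the central-database idea: two "modules" read and write one shared customer store, so a second, conflicting customer list never arises.

    # Toy sketch: a dict stands in for the ERP's single central database.
    customers = {}

    def sales_add_customer(cid, name):
        # Hypothetical sales module: creates the customer record.
        customers[cid] = {"name": name, "support_calls": 0}

    def support_log_call(cid):
        # Hypothetical support module: updates the very same record.
        customers[cid]["support_calls"] += 1

    sales_add_customer(1, "Acme Corp")
    support_log_call(1)
    print(customers)  # {1: {'name': 'Acme Corp', 'support_calls': 1}}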

ERP systems were originally marketed to large corporations. However, as more and more

large companies began installing them, ERP vendors began targeting mid-sized and even

smaller businesses. Some of the more well-known ERP systems include those from SAP,

Oracle, and Microsoft.


Y2K and ERP

The initial wave of software-application development began in the 1960s, when

applications were developed for mainframe computers. In those days, computing

was expensive, so applications were designed to take as little space as possible. One

shortcut that many programmers took was in the storage of dates, specifically the

year. Instead of allocating four digits to hold the year, many programs allocated two

digits, making the assumption that the first two digits were “19”. For example, to calculate how old someone was, the application would take the last two digits of the current year (for 1995, that would be “95”) and then subtract the two digits stored for the birth year (“65” for 1965). 95 minus 65 gives an age of 30, which is correct.

However, as the year 2000 approached, many of these “legacy” applications were

still being used, and businesses were very concerned that any software applications

they were using that needed to calculate dates would fail. To update our age-

calculation example, the application would take the last two digits of the current

year (for 2012, that would be “12”) and then subtract the two digits stored for the birth year (“65” for 1965). 12 minus 65 gives an age of -53, which would cause an error. In order to solve this problem, applications would have to be updated to

use four digits for years instead of two. Solving this would be a massive undertaking,

as every line of code and every database would have to be examined.
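A minimal sketch of the bug, written here in Python for illustration (the legacy systems in question were typically written in languages such as COBOL):

    # Two-digit year arithmetic, as many legacy programs did it.
    def age_two_digit(current_year, birth_year):
        return (current_year % 100) - (birth_year % 100)

    print(age_two_digit(1995, 1965))  # 30: correct
    print(age_two_digit(2012, 1965))  # -53: the Y2K failure mode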

This is where companies gained additional incentive to implement an ERP system.

For many organizations that were considering upgrading to ERP systems in the late

1990s, this problem, known as Y2K (year 2000), gave them the extra push they

needed to get their ERP installed before the year 2000. ERP vendors guaranteed

that their systems had been designed to be Y2K compliant, which simply meant that

they stored dates using four digits instead of two. This led to a massive increase in

ERP installations in the years leading up to 2000, making the ERP a standard

software application for businesses.

In order to effectively implement an ERP system in an organization, the organization must

be ready to make a full commitment. All aspects of the organization are affected as old

systems are replaced by the ERP system. In general, implementing an ERP system can take

two to three years and several million dollars. In most cases, the cost of the software is

not the most expensive part of the implementation: it is the cost of the consultants.


So why implement an ERP system? If done properly, an ERP system can bring an

organization a good return on its investment. By consolidating information systems across

the enterprise and using the software to enforce best practices, most organizations see an

overall improvement after implementing an ERP.

Customer Relationship Management

A customer relationship management (CRM) system is a software application designed to

manage an organization’s customers. In today’s environment, it is important to develop

relationships with your customers, and the use of a well-designed CRM can allow a

business to personalize its relationship with each of its customers. Some ERP software

systems include CRM modules. An example of a well-known CRM package is Salesforce.

Supply Chain Management

Many organizations must deal with the complex task of managing their supply chains. At

its simplest, a supply chain is the linkage between an organization’s suppliers, its

manufacturing facilities, and the distributors of its products. Each link in the chain has a

multiplying effect on the complexity of the process: if there are two suppliers, one

manufacturing facility, and two distributors, for example, then there are 2 x 1 x 2 = 4 links

to handle. However, if you add two more suppliers, another manufacturing facility, and

two more distributors, then you have 4 x 2 x 4 = 32 links to manage.

A supply chain management (SCM) system manages the interconnection between these

links, as well as the inventory of the products in their various stages of development. A

full definition of a supply chain management system is provided by the Association for

Operations Management: “The design, planning, execution, control, and monitoring of

supply chain activities with the objective of creating net value, building a competitive

infrastructure, leveraging worldwide logistics, synchronizing supply with demand, and

measuring performance globally” (Supply Chain Management System, n.d.). Most ERP

systems include a supply chain management module.

Mobile Applications

Just as with the personal computer, mobile devices such as tablet computers and

smartphones also have operating systems and application software. In fact, these mobile

devices are in many ways just smaller versions of personal computers. A mobile app is a

software application programmed to run specifically on a mobile device.


Smartphones and tablets are becoming a dominant form of computing, with many more

smartphones being sold than personal computers. This means that organizations will have

to get smart about developing software on mobile devices in order to stay relevant.

These days, most mobile devices run on one of two operating systems: Android or iOS.

Android is an open-source operating system that Google acquired and now develops and supports; iOS is Apple’s mobile operating system.

As organizations consider making their digital presence compatible with mobile devices,

they will have to decide whether to build a mobile app. A mobile app is an expensive proposition, and it will run on only one type of mobile device at a time. For example, if an

organization creates an iPhone app, those with Android phones cannot run the

application. Each app takes several thousand dollars to create, so this is not a trivial

decision for many companies.

One option many companies have is to create a website that is mobile-friendly. A mobile website works on all mobile devices, costs roughly the same as building a single app, and avoids the need to develop a separate version for each platform.

Cloud Computing

Historically, for software to run on a computer, an individual copy of the software had to

be installed on the computer, either from a disk or, more recently, after being downloaded

from the internet. The concept of “cloud” computing changes this, however.

To understand cloud computing, we first have to understand what the cloud is. The cloud refers to applications, services, and data storage delivered over the internet. Cloud service providers

rely on giant server farms and massive storage devices that are connected via internet

protocols. Cloud computing is the use of these services by individuals and organizations.

You probably already use cloud computing in some forms. For example, if you access your

email via your web browser, you are using a form of cloud computing. If you use Google

Drive’s applications, you are using cloud computing. While these are free versions of cloud

computing, there is big business in providing applications and data storage over the web.

Salesforce is a good example of cloud computing: its entire suite of CRM applications is offered via the cloud. Cloud computing is not limited to web applications: it can also be used for services such as phone or video streaming.

Advantages of Cloud Computing

No software to install or upgrades to maintain.


Available from any computer that has access to the internet.

Can scale to a large number of users easily.

New applications can be up and running very quickly.

Services can be leased for a limited time on an as-needed basis.

Your information is not lost if your hard disk crashes or your laptop is stolen.

You are not limited by the available memory or disk space on your computer.

Disadvantages of Cloud Computing

Your information is stored on someone else’s computer—how safe is it?

You must have internet access to use it. If you do not have access, you’re out of luck.

You are relying on a third party to provide these services.

Cloud computing can fundamentally change how organizations manage technology.

For example, why is an IT department needed to purchase, configure, and manage

personal computers and software when all that is really needed is an internet connection?

Using a Private Cloud

Many organizations are understandably nervous about giving up control of their data and

some of their applications by using cloud computing. But they also see the value in

reducing the need for installing software and adding disk storage to local computers. A

solution to this problem lies in the concept of a private cloud. While there are various

models of a private cloud, the basic idea is for the cloud service provider to section off

web server space for a specific organization. The organization has full control over that

server space while still gaining some of the benefits of cloud computing.

Virtualization

One technology that is used extensively as part of cloud computing is virtualization.

Virtualization is the process of using software to simulate a computer or some other

device. For example, using virtualization, a single computer can perform the functions of

several computers. Companies such as EMC provide virtualization software that allows

cloud service providers to provision web servers to their clients quickly and efficiently.

Organizations are also implementing virtualization in order to reduce the number of

servers needed to provide the necessary services.


Software Creation

How is software created? If software is the set of instructions that tells the hardware

what to do, how are these instructions written? If a computer reads everything as ones

and zeroes, do we have to learn how to write software that way?

Modern software applications are written using a programming language. A programming

language consists of a set of commands and syntax that can be organized logically to

execute specific functions. This language generally consists of a set of readable words

combined with symbols. Using this language, a programmer writes a program (called the

source code) that can then be compiled into machine-readable form, the ones and zeroes

necessary to be executed by the CPU. Examples of well-known programming languages

today include Java, PHP, and various flavors of C (Visual C, C++, C#). Languages such as

HTML and JavaScript are used to develop web pages. Most of the time, programming is

done inside a programming environment; when you purchase a copy of Visual Studio from

Microsoft, it provides you with an editor, compiler, and help for many of Microsoft’s

programming languages.
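As a minimal illustration of source code, here is a short Python program. Python first compiles this human-readable text to bytecode, which its virtual machine then executes; other languages, such as C, compile all the way to machine code:

    # Source code: readable commands that a toolchain translates into a
    # form the CPU can execute.
    def total_revenue(sales):
        """Sum a list of sale amounts."""
        return sum(sales)

    print(total_revenue([20, 5, 12]))  # prints 37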

Software programming was originally an individual process, with each programmer

working on an entire program, or several programmers each working on a portion of a

larger program. However, newer methods of software development include a more

collaborative approach, with teams of programmers working on code together.

Open-Source Software

When the personal computer was first released, it did not serve any practical need. Early

computers were difficult to program and required great attention to detail. However,

many personal-computer enthusiasts immediately banded together to build applications

and solve problems. These computer enthusiasts were happy to share any programs they

built and solutions to problems they found; this collaboration enabled them to more

quickly innovate and fix problems.

As software began to become a business, however, this idea of sharing everything fell out

of favor, at least with some. When a software program takes hundreds of man-hours to

develop, it is understandable that the programmers do not want to just give it away. This

led to a new business model of restrictive software licensing, which required payment for

software, a model that is still dominant today. This model is sometimes referred to as

closed source, as the source code is not made available to others.


There are many, however, who feel that software should not be restricted. Just as with

those early hobbyists in the 1970s, they feel that innovation and progress can be made

much more rapidly if we share what we learn. In the 1990s, with internet access

connecting more and more people together, the open-source movement gained steam.

Open-source software is software that makes the source code available for anyone to

copy and use. For most of us, having access to the source code of a program does us little

good, as we are not programmers and won’t be able to do much with it. The good news is

that open-source software is also available in a compiled format that we can simply

download and install. The open-source movement has led to the development of some of

the most-used software in the world, including the Firefox browser, the Linux operating

system, and the Apache web server. Many also think open-source software is superior to

closed-source software. Because the source code is freely available, many programmers

have contributed to open-source software projects by adding features and fixing bugs.

Many businesses are wary of open-source software precisely because the code is

available for anyone to see. They feel that this increases the risk of an attack. Others

counter that this openness actually decreases the risk because the code is exposed to

thousands of programmers who can incorporate code changes to quickly patch

vulnerabilities.

There are many arguments on both sides for the benefits of the two models. Some

benefits of the open-source model are:

The software is available for free.

The software source code is available; it can be examined and reviewed before it is

installed.

The large community of programmers who work on open-source projects leads to

quick bug-fixing and feature additions.

Some benefits of the closed-source model are:

By providing financial incentive for software development, some of the brightest

minds have chosen software development as a career.

The company that developed the software provides technical support.

Today there are thousands of open-source software applications available for download.

For example, as discussed previously, you can get the OpenOffice productivity suite.

One good place to search for open-source software is sourceforge.net, where thousands

of software applications are available for free download.


Study Questions

1. Come up with your own definition of software. Explain the key terms in your

definition.

2. What are the functions of the operating system?

3. Which of the following are operating systems and which are applications:

Microsoft Excel, Google Chrome, iTunes, Windows, Android, Angry Birds?

4. What is your favorite software application? What tasks does it help you

accomplish?

5. What is a “killer” app? What was the killer app for the PC?

6. How would you categorize the software that runs on mobile devices? Break

down these apps into at least three basic categories and give an example of

each.

7. Explain what an ERP system does.

8. What is open-source software? How does it differ from closed-source

software? Give an example of each.

9. What does a software license grant?

10. How did the Y2K (year 2000) problem affect the sales of ERP systems?

Summary

Software gives the instructions that tell the hardware what to do. There are two basic

categories of software: operating systems and applications. Operating systems provide

access to the computer hardware and make system resources available. Application

software is designed to meet a specific goal. Productivity software is a subset of

application software that provides basic business functionality to a personal computer:

word processing, spreadsheets, and presentations. An ERP system is a software

application with a centralized database that is implemented across the entire organization.

Cloud computing is a method of software delivery that runs on any computer that has a

web browser and access to the internet. Software is developed through a process called

programming, in which a programmer uses a programming language to put together the

logic needed to create the program. While most software is developed using a closed-

source model, the open-source movement is gaining more support today.


References

Supply Chain Management System. (n.d.). http://www.apics.org/dictionary/dictionary-information?ID=3984

Licenses and Attributions

Chapter 3: Software (https://www.saylor.org/site/textbooks/Information%20Systems%20for%20Business%20and%20Beyond) from Information Systems for Business and Beyond by David T. Bourgeois is available under a Creative Commons Attribution 3.0 Unported (https://creativecommons.org/licenses/by/3.0/) license. © 2014, David T. Bourgeois. UMGC has modified this work and it is available under the original license.

© 2022 University of Maryland Global Campus

All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity

of information located at external sites.


Business Processes

Introduction

The fourth component of information systems is process. But what is a process and how

does it tie into information systems? And in what ways do processes have a role in

business? This reading will look to answer those questions and also describe how business

processes can be used for strategic advantage.

What Is a Business Process?

We have all heard the term process before, but what exactly does it mean? A process is a

series of tasks that are completed in order to accomplish a goal. A business process,

therefore, is a process that is focused on achieving a goal for a business. If you have

worked in a business setting, you have participated in a business process. Anything from a

simple process for making a sandwich at Subway to building a space shuttle utilizes one or

more business processes.

Processes are something that businesses go through every day in order to accomplish

their mission. The better their processes, the more effective the business. Some

businesses see their processes as a strategy for achieving competitive advantage. A

process that achieves its goal in a unique way can set a company apart. A process that

eliminates costs can allow a company to lower its prices (or retain more profit).

Documenting a Process

Every day, each of us will conduct many processes without even thinking about them:

getting ready for work, using an ATM, reading our email, etc. But as processes grow more

complex, they need to be documented. For businesses, it is essential to do this because it

allows them to ensure control over how activities are undertaken in their organization. It

also allows for standardization: McDonald’s has the same process for building a Big Mac in

all of its restaurants.



The simplest way to document a process is to create a list. The list shows each step

in the process; each step can be checked off upon completion. For example, a simple

process, such as how to create an account on eBay, might look like this:

1. Go to ebay.com.

2. Click on “register.”

3. Enter your contact information in the “Tell us about you” box.

4. Choose your user ID and password.

5. Agree to User Agreement and Privacy Policy by clicking on “Submit.”

For processes that are not so straightforward, documenting the process as a checklist may

not be sufficient. For example, here is the process for determining if an article for a term

needs to be added to Wikipedia:

1. Search Wikipedia to determine if the term already exists.

2. If the term is found, then an article is already written, so you must think of another term. Go to step 1.

3. If the term is not found, then look to see if there is a related term.

4. If there is a related term, then create a redirect.

5. If there is not a related term, then create a new article.

This procedure is relatively simple—in fact, it has the same number of steps as the

previous example—but because it has some decision points, it is more difficult to track

with a simple list. In these cases, it may make more sense to use a diagram to document

the process:


[Figure: Wikipedia term search process, for determining whether a new term should be added to Wikipedia. Public domain.]
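The decision points in this process translate directly into conditional logic. Here is a minimal Python sketch; the function and its arguments are ours, for illustration only:

    # Encoding the Wikipedia-term process's decision points as conditionals.
    def next_action(term_exists, related_term_exists):
        if term_exists:
            return "think of another term and search again"  # steps 1-2
        if related_term_exists:
            return "create a redirect"                       # step 4
        return "create a new article"                        # step 5

    print(next_action(term_exists=False, related_term_exists=True))
    # -> create a redirect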

Managing Business Process Documentation

As organizations begin to document their processes, it becomes an administrative task to

keep track of them. As processes change and improve, it is important to know which

processes are the most recent. It is also important to manage the documentation so that it can be easily updated. The requirement to manage process documentation has been one of the driving forces behind the creation of the document management system. A document management system stores and tracks documents and supports the following functions (a small illustrative sketch follows the list):

Versions and timestamps. The document management system will keep multiple

versions of documents. The most recent version of a document is easy to identify

and will be served up by default.

Approvals and workflows. When a process needs to be changed, the system will

manage both access to the documents for editing and the routing of the document

for approvals.

Communication. When a process changes, those who implement the process need

to be made aware of the changes. A document management system will notify the

appropriate people when a change to a document is approved.
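As promised above, here is a small illustrative Python sketch of the versions-and-timestamps function; a real document management system would add approvals, workflows, and notifications on top of this:

    # Toy document store: keeps every version; serves the latest by default.
    from datetime import datetime

    class DocumentStore:
        def __init__(self):
            self.versions = {}  # document name -> list of (timestamp, text)

        def save(self, name, text):
            self.versions.setdefault(name, []).append((datetime.now(), text))

        def latest(self, name):
            return self.versions[name][-1][1]  # most recent version's text

    store = DocumentStore()
    store.save("returns-policy", "v1: returns accepted within 30 days")
    store.save("returns-policy", "v2: returns accepted within 14 days")
    print(store.latest("returns-policy"))  # v2: returns accepted within 14 days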


Of course, document management systems are used not only for managing business

process documentation. Many other types of documents are managed in these systems,

such as legal documents or design documents.

ERP Systems

An enterprise resource planning (ERP) system is a software application with a centralized

database that can be used to run an entire company. Let’s take a closer look at the

definition of each of these components:


A software application: The system is a software application, which means that it has

been developed with specific logic and rules behind it. It has to be installed and

configured to work specifically for an individual organization.

With a centralized database: All data in an ERP system is stored in a single, central

database. This centralization is key to the success of an ERP—data entered in one

part of the company can be immediately available to other parts of

the company.

That can be used to run an entire company: An ERP can be used to manage an entire

organization’s operations. If they so wish, companies can purchase modules for an


ERP that represent different functions within the organization, such as finance,

manufacturing, and sales. Some companies choose to purchase many modules;

others choose a subset of the modules.

An ERP system not only centralizes an organization’s data, but the processes it enforces

are the processes the organization adopts. When an ERP vendor designs a module, it has

to implement the rules for the associated business processes. A selling point of an ERP

system is that it has best practices built right into it. In other words, when an organization

implements an ERP, it also gets improved best practices as part of the deal!

For many organizations, the implementation of an ERP system is an excellent opportunity

to improve their business practices and upgrade their software at the same time. But for

others, an ERP brings them a challenge: Is the process embedded in the ERP really better

than the process they are currently utilizing?

And if they implement this ERP, and it happens to be the same one that all of their

competitors have, will they simply become more like them, making it much more difficult

to differentiate themselves?

This has been one of the criticisms of ERP systems: that they commoditize business

processes, driving all businesses to use the same processes and thereby lose their

uniqueness. The good news is that ERP systems also have the capability to be configured

with custom processes. For organizations that want to continue using their own processes

or even design new ones, ERP systems offer ways to support this through customization.

But there is a drawback to customizing an ERP system: organizations have to maintain the

changes themselves. Whenever an update to the ERP system comes out, any organization that has created a custom process will be required to carry that customization forward into the new version. This

will require someone to maintain a listing of these changes and will also require retesting

the system every time an upgrade is made. Organizations will have to wrestle with this

decision: When should they go ahead and accept the best-practice processes built into

the ERP system and when should they spend the resources to develop their own

processes? It makes the most sense to only customize those processes that are critical to

the competitive advantage of the company.

Some of the best-known ERP vendors are SAP, Microsoft, and Oracle.

Business Process Management


Organizations that are serious about improving their business processes will also create

structures to manage those processes. Business process management (BPM) can be

thought of as an intentional effort to plan, document, implement, and distribute an

organization’s business processes with the support of information technology.

BPM is more than just automating some simple steps. While automation can make a

business more efficient, it cannot be used to provide a competitive advantage. BPM, on

the other hand, can be an integral part of creating that advantage.

Not all of an organization’s processes should be managed this way. An organization should

look for processes that are essential to the functioning of the business and those that may

be used to bring a competitive advantage. The best processes to look at are those that

include employees from multiple departments, those that require decision-making that

cannot be easily automated, and processes that change based on circumstances.

To make this clear, let’s take a look at an example.

Suppose a large clothing retailer is looking to gain a competitive advantage through

superior customer service. As part of this, they create a task force to develop a state-of-

the-art returns policy that allows customers to return any article of clothing, no questions

asked. The organization also decides that, in order to protect the competitive advantage

that this returns policy will bring, they will develop their own customization to their ERP

system to implement this returns policy. As they prepare to roll out the system, they

invest in training for all of their customer-service employees, showing them how to use

the new system and specifically, how to process returns. Once the updated returns

process is implemented, the organization will be able to measure several key indicators

about returns that will allow them to adjust the policy as needed. For example, if they find

that many women are returning their high-end dresses after wearing them once, they

could implement a change to the process that limits the time (e.g., 14 days) after the

original purchase that an item can be returned. As changes to the returns policy are made,

the changes are rolled out via internal communications, and updates to the returns

processing on the system are made. In our example, the system would no longer allow a

dress to be returned after 14 days without an approved reason.
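The 14-day rule described above is exactly the kind of control that gets built into the system. A minimal sketch in Python (the function and its parameters are hypothetical, for illustration):

    # The returns rule as executable policy: the system, not the employee's
    # memory, enforces the 14-day window and the approved-reason exception.
    def return_allowed(days_since_purchase, has_approved_reason=False):
        return days_since_purchase <= 14 or has_approved_reason

    print(return_allowed(10))         # True: inside the window
    print(return_allowed(20))         # False: outside, no approved reason
    print(return_allowed(20, True))   # True: approved exception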

If done properly, business process management will provide several key benefits to an

organization, which can be used to contribute to competitive advantage. These benefits

include:

Empowering employees. When a business process is designed correctly and

supported with information technology, employees will be able to implement it on

their own authority. In our returns-policy example, an employee would be able to


accept returns made before 14 days or use the system to make determinations on

what returns would be allowed after 14 days.

Built-in reporting. By building measurement into the programming, the organization

can keep up to date on key metrics regarding their processes. In our example, these

can be used to improve the returns process and also, ideally, to reduce returns.

Enforcing best practices. As an organization implements processes supported by

information systems, it can work to implement the best practices for that class of

business process. In our example, the organization may want to require that all

customers returning a product without a receipt show a legal ID. This requirement

can be built into the system so that the return will not be processed unless a valid ID

number is entered.

Enforcing consistency. By creating a process and enforcing it with information

technology, it is possible to create consistency across the entire organization. In our

example, all stores in the retail chain can enforce the same returns policy. And if the

returns policy changes, the change can be instantly enforced across the entire chain.

Business Process Reengineering

As organizations look to manage their processes to gain a competitive advantage, they

also need to understand that their existing ways of doing things may not be the most

effective or efficient. A process developed in the 1950s is not going to be better just

because it is now supported by technology.

In 1990, Michael Hammer published an article in the Harvard Business Review entitled

“Reengineering Work: Don’t Automate, Obliterate.” This article put forward the thought

that simply automating a bad process does not make it better. Instead, companies should

“blow up” their existing processes and develop new processes that take advantage of the

new technologies and concepts. He states in the introduction to the article:

Many of our job designs, work flows, control mechanisms, and organizational

structures came of age in a different competitive environment and before the

advent of the computer. They are geared towards greater efficiency and

control. Yet the watchwords of the new decade are innovation and speed,

service, and quality.

It is time to stop paving the cow paths. Instead of embedding outdated

processes in silicon and software, we should obliterate them and start over.

We should “reengineer” our businesses: use the power of modern information


technology to radically redesign our business processes in order to achieve

dramatic improvements in their performance. (Hammer, 1990)

Business process reengineering (BPR) is not just taking an existing process and

automating it. BPR is fully understanding the goals of a process and then dramatically

redesigning it from the ground up to achieve dramatic improvements in productivity and

quality. But this is easier said than done. Most of us think in terms of how to do small,

local improvements to a process; complete redesign requires thinking on a larger scale.

Hammer (1990) provided some guidelines for how to go about doing business process

reengineering:

Organize around outcomes, not tasks. This simply means to design the process so

that, if possible, one person performs all the steps. Instead of repeating one step in

the process over and over, the person stays involved in the process from start to

finish.

Have those who use the outcomes of the process perform the process. Using

information technology, many simple tasks are now automated, so we can empower

the person who needs the outcome of the process to perform it. The example

Hammer gives here is purchasing: instead of having every department in the

company use a purchasing department to order supplies, have those who need the

supplies order them directly by using an information system.

Subsume information-processing work into the real work that produces the

information. When one part of the company creates information (like sales or

payment information), it should be processed by that same department. There is no

need for one part of the company to process information created in another part of

the company.

Treat geographically dispersed resources as though they were centralized. With the

communications technologies in place today, it becomes easier than ever to not

worry about physical location. A multinational organization does not need separate

support departments (such as IT, purchasing, etc.) for each location anymore.

Link parallel activities instead of integrating their results. Departments that work in

parallel should be sharing data and communicating with each other during their

activities instead of waiting until each group is done and then comparing notes.

Put the decision points where the work is performed, and build controls into the

process. The people who do the work should have decision-making authority, and

the process itself should have built-in controls using information technology.

Capture information once, at the source. Requiring information to be entered more

than once causes delays and errors. With information technology, an organization


can capture it once and then make it available whenever needed.

These principles may seem like common sense today, but in 1990 they took the business

world by storm. Hammer (1990) gave example after example of how organizations

improved their business processes by many orders of magnitude without adding any new

employees, simply by changing how they did things (see “Reengineering the College

Bookstore” below).

Unfortunately, business process reengineering got a bad name in many organizations. This

was because it was used as an excuse for cost cutting that really had nothing to do with

BPR. For example, many companies simply used it as an excuse for laying off part of their

workforce. Today, however, many of the principles of BPR have been integrated into

businesses and are considered part of good business-process management.


Reengineering the College Bookstore

The process of purchasing the correct textbooks in a timely manner for college

classes has always been problematic. And now, with online bookstores such as

Amazon competing directly with the college bookstore for students’ purchases, the

college bookstore is under pressure to justify its existence.

But college bookstores have one big advantage over their competitors: They have

access to students’ data. In other words, once a student has registered for classes,

the bookstore knows exactly what books that student will need for the upcoming

term. To leverage this advantage and make use of new technologies, the

bookstore wants to implement a new process that will make purchasing books

through the bookstore advantageous to students. Though it may not be able to

compete on price, it can provide other advantages, such as reducing the time it

takes to find the books and the ability to guarantee that the book is the correct one

for the class. In order to do this, the bookstore will need to undertake a process

redesign.

The goal of the process redesign is simple: to capture a higher percentage of

students as customers of the bookstore. After diagramming the existing process and

meeting with student focus groups, the bookstore comes up with a new process. In

the new process, the bookstore utilizes information technology to reduce the

amount of work the students need to do in order to get their books. In this new

process, the bookstore sends the students an email with a list of all the books

required for their upcoming classes. By clicking a link in this email, the students can

log into the bookstore, confirm their books, and purchase the books. The bookstore

will then deliver the books to the students.


[Figure: Business process reengineering, the college bookstore process redesign.]

ISO Certification


Many organizations now claim that they are using best practices when it comes to

business processes. In order to set themselves apart and prove to their customers (and

potential customers) that they are indeed doing this, these organizations are seeking out


an ISO 9000 certification. ISO is the short name of the International Organization for Standardization.

This body defines quality standards that organizations can implement to show that they

are, indeed, managing business processes in an effective way. The ISO 9000 certification

is focused on quality management.

In order to receive ISO certification, an organization must be audited and found to meet

specific criteria. In its simplest form, the auditors perform the following review:

Tell me what you do (describe the business process).

Show me where it says that (reference the process documentation).

Prove that this is what happened (exhibit evidence in documented records).

Over the years, this certification has evolved, and many branches of the certification now

exist. ISO certification is one way to separate an organization from others.

Summary

The advent of information technologies has had a huge impact on how organizations

design, implement, and support business processes. From document management systems

to ERP systems, information systems are tied into organizational processes. Using

business process management, organizations can empower employees and leverage their

processes for competitive advantage. Using business process reengineering, organizations

can vastly improve their effectiveness and the quality of their products and services.

Integrating information technology with business processes is one way that information

systems can bring an organization lasting competitive advantage.


Study Questions

1. What does the term business process mean?

2. What are three examples of business process from a job you have had or an

organization you have observed?

3. What is the value in documenting a business process?

4. What is an ERP system? How does an ERP system enforce best practices for

an organization?

5. What is one of the criticisms of ERP systems?

6. What is business process reengineering? How is it different from incrementally

improving a process?

7. Why did BPR get a bad name?

8. List the guidelines for redesigning a business process.

9. What is business process management? What role does it play in allowing a

company to differentiate itself?

10. What does ISO certification signify?

References

Hammer, M. (1990). Reengineering work: Don’t automate, obliterate. Harvard Business Review, 68(4), 104–112.

Licenses and Attributions

Chapter 8: Business Processes (https://www.saylor.org/site/textbooks/Information%20Systems%20for%20Business%20and%20Beyond) from Information Systems for Business and Beyond by David T. Bourgeois is available under a Creative Commons Attribution 3.0 Unported (https://creativecommons.org/licenses/by/3.0/) license. © 2014, David T. Bourgeois. UMGC has modified this work and it is available under the original license.

© 2022 University of Maryland Global Campus


All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity

of information located at external sites.


Hardware

Introduction

The physical parts of computing devices—those that you can actually touch—are referred

to as hardware. In this reading, we will take a look at this component of information

systems, learn a little bit about how it works, and discuss some of the current trends

surrounding it.

As stated above, computer hardware encompasses digital devices that you can physically

touch, such as the following:

desktop computers

laptop computers

mobile phones

tablet computers

e-readers

storage devices, such as flash drives

input devices, such as keyboards, mice, and scanners

output devices, such as printers and speakers

Besides these more traditional computer hardware devices, many items that were once

not considered digital devices are now becoming computerized themselves. Digital

technologies are now being integrated into many everyday objects, so the days of a

device being labeled categorically as computer hardware may be ending. Examples of

these types of digital devices include automobiles, refrigerators, and even soft-drink

dispensers. Let’s explore digital devices, beginning with defining the term.

Digital Devices



A digital device processes electronic signals that represent either a one (“on”) or a zero

(“off”). The on state is represented by the presence of an electronic signal; the off state is

represented by the absence of an electronic signal. Each one or zero is referred to as a bit

(a contraction of binary digit); a group of eight bits is a byte. The first personal computers

could process 8 bits of data at once; modern PCs can now process 64 bits of data at a

time, which is where the term 64-bit processor comes from.


Understanding Binary

As you know, the system of numbering we are most familiar with is base-ten

numbering. In base-ten numbering, each column in the number represents a power

of 10, with the far-right column representing 10^0 (ones), the next column from the

right representing 10^1 (tens), then 10^2 (hundreds), then 10^3 (thousands), etc.

For example, the number 1010 in decimal represents: (1 x 1000) + (0 x 100) + (1 x

10) + (0 x 1).

Computers use the base-two numbering system, also known as binary. In this

system, each column in the number represents a power of two, with the far-right

column representing 2^0 (ones), the next column from the right representing 2^1

(twos), then 2^2 (fours), then 2^3 (eights), etc. For example, the number 1010 in

binary represents (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1). In base ten, this evaluates to 10.
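A quick check of the arithmetic above, in Python (the second argument of int is the number base):

    # The same digits, 1010, evaluated in base ten and in base two.
    decimal_value = 1*1000 + 0*100 + 1*10 + 0*1   # 1010
    binary_value = 1*8 + 0*4 + 1*2 + 0*1          # 10
    print(decimal_value, binary_value)            # 1010 10
    print(int("1010", 10), int("1010", 2))        # 1010 10, via the built-in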

As the capacities of digital devices grew, new terms were developed to identify the

capacities of processors, memory, and disk storage space. Prefixes were applied to

the word byte to represent different orders of magnitude. Since these are digital

specifications, the prefixes were originally meant to represent multiples of 1024 (which is 2^10), but have more recently been rounded to mean multiples of 1000.

A List of Binary Prefixes

Prefix   Represents     Example
kilo     one thousand   kilobyte = one thousand bytes
mega     one million    megabyte = one million bytes
giga     one billion    gigabyte = one billion bytes
tera     one trillion   terabyte = one trillion bytes


Tour of a PC

All personal computers consist of the same basic components: a CPU, memory, circuit

board, storage, and input/output devices. It also turns out that almost every digital device

uses the same set of components, so examining the personal computer will give us insight

into the structure of a variety of digital devices. So let’s take a “tour” of a personal

computer and see what makes it function.

Processing Data: The CPU

As stated above, most computing devices have a similar architecture. The core of this

architecture is the central processing unit, or CPU. The CPU can be thought of as the

“brains” of the device. The CPU carries out the commands sent to it by the software and

returns results to be acted upon.

The earliest CPUs were large circuit boards with limited functionality. Today, a CPU is

generally on one chip and can perform a large variety of functions. Today there are many

manufacturers of CPUs for personal computers; the leaders are Intel and Advanced Micro

Devices (AMD).

The speed (“clock speed”) of a CPU is measured in hertz. A hertz is defined as one cycle per second. Using the prefixes mentioned above, we can see that a kilohertz (abbreviated kHz) is one thousand cycles per second, a megahertz (MHz) is one million cycles per second, and a gigahertz (GHz) is one billion cycles per second. The CPU’s

processing power has increased at an amazing rate (see “Moore’s Law,” below). Besides a

faster clock speed, many CPU chips now contain multiple processors per chip. These chips,

known as dual-core (two processors), quad-core (four processors), etc., increase the

processing power of a computer by providing the capability of multiple CPUs.


Moore’s Law

We all know that computers get faster every year. Many times, we are not sure if we

want to buy today’s model of smartphone, tablet, or PC because next week it won’t

be the most advanced any more. Gordon Moore, one of the founders of Intel,

recognized this phenomenon in 1965, noting that microprocessor transistor counts

had been doubling every year (Moore, 1965). His insight eventually evolved into

Moore’s Law, which states that the number of transistors on a chip will double every

two years. This has been generalized into the concept that computing power will

double every two years for the same price point. Another way of looking at this is to

think that the price for the same computing power will be cut in half every two

years. Though many have predicted its demise, Moore’s Law has held true for over

40 years.


[Figure: A graphical representation of Moore’s Law from 1971 to 2011. CC BY-SA: Wgsimon]

There will be a point, someday, where we reach the limits of Moore’s Law, where we

cannot continue to shrink circuits any further. But engineers will continue to seek

ways to increase performance.

Motherboard


[Figure: A computer’s main circuit board]

The motherboard is the main circuit board on the computer. The CPU, memory, and

storage components, among other things, all connect into the motherboard.

Motherboards come in different shapes and sizes, depending upon how compact or

expandable the computer is designed to be. Most modern motherboards have many

integrated components, such as video and sound processing, which used to require

separate components.

The motherboard provides much of the bus of the computer (the term bus refers to the

electrical connection between different computer components). The bus is an important

determiner of the computer’s speed: the combination of how fast the bus can transfer

data and the number of data bits that can be moved at one time determine the speed.

Random-Access Memory

When a computer starts up, it begins to load information from the hard disk into its

working memory. This working memory, called random-access memory (RAM), can

transfer data much faster than the hard disk. Any program that you are running on the

computer is loaded into RAM for processing. In order for a computer to work effectively,

some minimal amount of RAM must be installed. In most cases, adding more RAM will

allow the computer to run faster. Another characteristic of RAM is that it is “volatile.” This

means that it can store data as long as it is receiving power; when the computer is turned

off, any data stored in RAM is lost.
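The speed gap between RAM and disk can be felt directly. Below is a rough Python sketch (a demonstration, not a rigorous benchmark; operating-system caching will affect the numbers) that compares an in-memory copy with a disk round trip:

import os
import tempfile
import time

payload = os.urandom(64 * 1024 * 1024)  # 64 MB of random bytes

# In-memory copy: RAM to RAM.
start = time.perf_counter()
in_ram = bytes(payload)
ram_time = time.perf_counter() - start

# Disk round trip: write, flush to the device, then read back.
path = os.path.join(tempfile.gettempdir(), "ram_vs_disk.bin")
start = time.perf_counter()
with open(path, "wb") as f:
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())  # ask the OS to actually push the data to disk
with open(path, "rb") as f:
    from_disk = f.read()
disk_time = time.perf_counter() - start
os.remove(path)

print(f"RAM copy : {ram_time:.4f} s")
print(f"Disk trip: {disk_time:.4f} s")  # typically far slower than the RAM copy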


[Figure: Dual-inline memory module (DIMM), the means by which RAM is installed in a personal computer]

RAM is generally installed in a personal computer through the use of a dual-inline memory

module (DIMM). The type of DIMM accepted into a computer is dependent upon the

motherboard. As described by Moore’s Law, the amount of memory and speeds of DIMMs

have increased dramatically over the years.

Hard Disk

[Figure: Computer hard disk enclosure, where long-term data is stored]


While the RAM is used as working memory, the computer also needs a place to store data

for the longer term. Most of today’s personal computers use a hard disk for long-term

data storage. A hard disk is where data is stored when the computer is turned off and

where it is retrieved from when the computer is turned on. It is called a hard disk because

it consists of a stack of disks inside a hard metal case. A floppy disk (discussed below) was

a removable disk that, in some cases at least, was flexible, or “floppy.”

Solid-State Drives

A relatively new component becoming more common in some personal computers is the

solid-state drive (SSD). The SSD performs the same function as a hard disk: long-term

storage. Instead of spinning disks, the SSD uses flash memory, which is much faster.

SSDs are currently quite a bit more expensive than hard disks. However, the use of flash

memory instead of disks makes them much lighter and faster than hard disks. SSDs are

primarily utilized in portable computers, making them lighter and more efficient. Some

computers combine the two storage technologies, using the SSD for the most accessed

data (such as the operating system) while using the hard disk for data that is accessed less

frequently. As with any technology, Moore’s Law is driving up capacity and speed, and

lowering prices of SSDs, which will allow them to proliferate in the years to come.

Removable Media

Besides fixed storage components, removable storage media are also used in most

personal computers. Removable media allow you to take your data with you. And just as

with all other digital technologies, these media have gotten smaller and more powerful as

the years have gone by. Early computers used floppy disks, which could be inserted into a

disk drive in the computer. Data was stored on a magnetic disk inside an enclosure. These

disks ranged from 8″ in the earliest days down to 3 1/2″.


[Figure: Floppy disks (8″ to 5 1/4″ to 3 1/2″), removable storage used in early computers. Public domain]

Around the turn of the century, the USB flash drive was developed (more about the USB

port later in the chapter), and beginning in the late 1990s, the universal serial bus (USB)

connector became standard on all personal computers. As with all other storage media,

flash drive storage capacity has skyrocketed over the years, from initial capacities of 8

megabytes to current capacities of 64 gigabytes and still growing.
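The growth from 8 megabytes to 64 gigabytes amounts to thirteen doublings, as a short Python check shows (the capacities come from the text; the arithmetic treats a gigabyte as 1024 megabytes):

import math

doublings = math.log2((64 * 1024) / 8)  # 64 GB vs. 8 MB, both in MB
print(doublings)  # 13.0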

Network Connection

When personal computers were first developed, they were stand-alone units, which

meant that data was brought into the computer or removed from the computer via

removable media, such as the floppy disk. Beginning in the mid-1980s, however,

organizations began to see the value in connecting computers together via a digital

network. Because of this, personal computers needed the ability to connect to these

networks. Initially, this was done by adding an expansion card to the computer that

enabled the network connection, but by the mid-1990s, a network port was standard on

most personal computers. As wireless technologies began to dominate in the early 2000s,

many personal computers also began including wireless networking capabilities. Digital

communication technologies will be discussed further in Networking and Communication.

Input and Output


[Figure: USB connector, a connector for input and output devices]

In order for a personal computer to be useful, it must have channels for receiving input

from the user and channels for delivering output to the user. These input and output

devices connect to the computer via various connection ports, which generally are part of

the motherboard and are accessible outside the computer case. In early personal

computers, specific ports were designed for each type of output device.

The configuration of these ports has evolved over the years, becoming more and more

standardized over time. Today, almost all devices plug into a computer through the use of

a USB port. This port type, first introduced in 1996, has increased in its capabilities, both

in its data transfer rate and power supplied.

Bluetooth

Besides USB, some input and output devices connect to the computer via a wireless-

technology standard called Bluetooth. Bluetooth was invented in the 1990s and

exchanges data over short distances using radio waves. Bluetooth generally has a range of

100 to 150 feet. For devices to communicate via Bluetooth, both the personal computer

and the connecting device must have a Bluetooth communication chip installed.

Input Devices

All personal computers need components that allow the user to input data. Early

computers used simply a keyboard to allow the user to enter data or select an item from a

menu to run a program. With the advent of the graphical user interface, the mouse

became a standard component of a computer. These two components are still the primary

input devices to a personal computer, though variations of each have been introduced

with varying levels of success over the years. For example, many new devices now use a

touch screen as the primary way of entering data.

Besides the keyboard and mouse, additional input devices are becoming more common.

Scanners allow users to input documents into a computer, either as images or as text.

Microphones can be used to record audio or give voice commands. Webcams and other


types of video cameras can be used to record video or participate in a video chat session.

Output Devices

Output devices are essential as well. The most obvious output device is a display, visually

representing the state of the computer. In some cases, a personal computer can support

multiple displays or be connected to larger-format displays such as a projector or large-

screen television. Besides displays, other output devices include speakers and printers.


What Hardware Components Contribute to the Speed of a Computer?

The speed of a computer is determined by many elements, some related to

hardware and some related to software. In hardware, speed is improved by giving

the electrons shorter distances to traverse to complete a circuit. Since the first CPU

was created in the early 1970s, engineers have constantly worked to figure out how

to shrink these circuits and put more and more circuits onto the same chip. And this

work has paid off—the speed of computing devices has continuously improved ever

since.

The hardware components that contribute to the speed of a personal computer are

the CPU, the motherboard, RAM, and the hard disk. In most cases, these items can

be replaced with newer, faster components. In the case of RAM, simply adding more

RAM can also speed up the computer. The table shows how each of these

contributes to the speed of a computer. Besides upgrading hardware, there are

many changes that can be made to the software of a computer to make it faster.

Component   | Speed measured by  | Units  | Description
CPU         | Clock speed        | GHz    | The time it takes to complete a circuit.
Motherboard | Bus speed          | MHz    | How much data can move across the bus simultaneously.
RAM         | Data transfer rate | MB/s   | The time it takes for data to be transferred from memory to system.
Hard Disk   | Access time        | ms     | The time it takes before the disk can transfer data.
Hard Disk   | Data transfer rate | Mbit/s | The time it takes for data to be transferred from disk to system.
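Here is a worked example in Python tying the table's two hard-disk rows together; the figures (12 ms access time, 120 MB/s transfer rate, a 500 MB file) are assumed for illustration rather than quoted from the text:

access_time = 12 / 1000        # 12 ms before the disk can transfer
transfer_rate = 120 * 10 ** 6  # 120 MB per second
file_size = 500 * 10 ** 6      # a 500 MB file

total = access_time + file_size / transfer_rate
print(f"{total:.2f} s")  # about 4.18 s to deliver the whole file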


Other Computing Devices

A personal computer is designed to be a general-purpose device. That is, it can be used to

solve many different types of problems. As the technologies of the personal computer

have become more commonplace, many of the components have been integrated into

other devices that previously were purely mechanical. We have also seen an evolution in

what defines a computer. Ever since the invention of the personal computer, users have

clamored for a way to carry them around. Here we will examine several types of devices

that represent the latest trends in personal computing.

Portable Computers

[Figure: Mac laptop, an Apple computer]

In 1983, Compaq Computer Corporation developed the first commercially successful

portable personal computer. By today’s standards, the Compaq PC was not very portable;

weighing in at 28 pounds, this computer was portable only in the most literal sense: it

could be carried around. But this was no laptop; the computer was designed like a

suitcase, to be lugged around and laid on its side to be used. Besides portability, the

Compaq was successful because it was fully compatible with the software being run by

the IBM PC, which was the standard for business.

In the years that followed, portable computing continued to improve, giving us laptop and

notebook computers. The “luggable” computer has given way to a much lighter clamshell

computer that weighs from 4 to 6 pounds and runs on batteries. In fact, the most recent

advances in technology give us a new class of laptop that is quickly becoming the


standard: extremely light and portable, and using less power than their larger

counterparts. The MacBook Air is a good example of this: it weighs less than three pounds

and is only 0.68 inches thick!

Finally, as more organizations and individuals have moved much of their computing to the

internet, many laptops use “the cloud” for all of their data and application storage. These

laptops are also extremely light because they do not need a hard disk. Samsung’s

Chromebook is a good example of this type of laptop (sometimes called a netbook).

Smartphones

The first modern-day mobile phone was invented in 1973. Resembling a brick and

weighing in at two pounds, it was priced out of reach for most consumers at nearly

$4000. Since then, mobile phones have become smaller and less expensive and are a

modern convenience available to all levels of society. As mobile phones evolved, they

became more like small computers. These smartphones have many of the same

characteristics as a personal computer, such as an operating system and memory. The first

smartphone was the IBM Simon, introduced in 1994.

In January 2007, Apple introduced the iPhone. Its ease of use and intuitive interface made

it an immediate success and solidified the future of smartphones. Running on an operating

system called iOS, the iPhone was really a small computer with a touch-screen interface.

In 2008, the first Android phone was released, with similar functionality.

Tablet Computers

A tablet computer is one that uses a touch screen as its primary input and is small enough

and light enough to be carried around easily. Tablets generally have no keyboard and are

self-contained inside a rectangular case. The first tablet computers appeared in the early

2000s and used an attached pen as a writing device for input. These tablets ranged in size

from small personal digital assistants (PDAs), which were handheld, to full-sized, 14-inch

devices. Most early tablets used a version of an existing computer operating system, such

as Windows or Linux.

These early tablet devices were, for the most part, commercial failures. In January 2010,

Apple introduced the iPad, which ushered in a new era of tablet computing. Instead of a

pen, the iPad used the finger as the primary input device. Instead of using the operating

system of their desktop and laptop computers, Apple chose to use iOS, the operating

system of the iPhone. Because the iPad had a user interface that was the same as the

iPhone, consumers felt comfortable and sales took off. The iPad has set the standard for


tablet computing. After the success of the iPad, computer manufacturers began to

develop new tablets that utilized operating systems that were designed for mobile

devices, such as Android.

The Rise of Mobile Computing

Mobile computing has had a huge impact on the business world. The use of smartphones

and tablet computers is replacing the use of PCs for many purposes. It is expected that

the use of PCs will continue to decline as mobile computing increases.

Integrated Computing

Along with advances in computers themselves, computing technology is being integrated

into many everyday products. From automobiles to refrigerators to airplanes, computing

technology is enhancing what these devices can do and is adding capabilities that would

have been considered science fiction just a few years ago. The smart house and the self-

driving car are two of the latest ways that computing technologies are being integrated

into everyday products.

The Commoditization of the Personal Computer

Over the past 30 years, as the personal computer has gone from technical marvel to part

of our everyday lives, it has also become a commodity. The PC has become a commodity

in the sense that there is very little differentiation between computers and the primary

factor that controls their sale is their price. Hundreds of manufacturers all over the world

now create parts for personal computers. Dozens of companies buy these parts and

assemble the computers. As commodities, there are essentially no differences between

computers made by these different companies. Profit margins for personal computers are

razor-thin, leading hardware developers to find the lowest-cost manufacturing.

There is one brand of computer for which this is not the case—Apple. Because Apple does

not make computers that run on the same open standards as other manufacturers, they

can make a unique product that no one can easily copy. By creating what many consider

to be a superior product, Apple can charge more for their computers than other

manufacturers. Just as with the iPad and iPhone, Apple has chosen a strategy of

differentiation, which, at least at this time, seems to be paying off.

The Problem of Electronic Waste


[Figure: Electronic waste, discarded electronic equipment. Public domain]

Personal computers have been around for more than 35 years. Millions of them have been

used and discarded. Mobile phones are now available in even the remotest parts of the

world and, after a few years of use, they are discarded. Where does this electronic debris

end up?

Often, it gets routed to any country that will accept it. Many times, it ends up in dumps in

developing nations. These dumps are beginning to be seen as health hazards for those

living near them. Though many manufacturers have made strides in using materials that

can be recycled, electronic waste is a problem for all of us.


Summary

Information systems hardware consists of the components of digital technology that you

can touch. We reviewed the components that make up a personal computer, with the

understanding that the configuration of a personal computer is very similar to that of any

type of digital computing device. A personal computer is made up of many components,

most importantly the CPU, motherboard, RAM, hard disk, removable media, and

input/output devices. We also reviewed some variations on the personal computer, such

as the tablet computer and the smartphone. In accordance with Moore’s Law, these

technologies have improved quickly over the years, making today’s computing devices

much more powerful than the devices of just a few years ago. Finally, we discussed two of

the consequences of this evolution: the commoditization of the personal computer and

the problem of electronic waste.

Study Questions

1. Write your own description of what the term information systems hardware

means.

2. What is the impact of Moore’s Law on the various hardware components

described in this chapter?

3. Write a summary of one of the items mentioned in the “Integrated Computing”

section.

4. Explain why the personal computer is now considered a commodity.

5. The CPU can also be thought of as the _____________ of the computer.

6. List the following in increasing order (slowest to fastest): megahertz, kilohertz,

gigahertz.

7. What is the bus of a computer?

8. Name two differences between RAM and a hard disk.

9. What are the advantages of solid-state drives over hard disks?

10. How heavy was the first commercially successful portable computer?

References


Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics Magazine, p. 4.

Licenses and Attributions

Chapter 2: Hardware

(https://www.saylor.org/site/textbooks/Information%20Systems%20for%20Business%20and%20Beyond) from Information Systems for Business and Beyond by David T.

Bourgeois is available under a Creative Commons Attribution 3.0 Unported

(https://creativecommons.org/licenses/by/3.0/) license. © 2014, David T. Bourgeois.

UMGC has modified this work and it is available under the original license.

© 2022 University of Maryland Global Campus

All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity

of information located at external sites.


Respond to the discussion and include a citation.

Isaac

Fire department training report tracking process:

The instructor teaches a class and records the time the class started and ended.

After completing the class, the instructor logs into a Fire Department PC that is connected to the department intranet.

Opens a web browser and goes to the Emergency Reporting website.

 

The instructor logs into Emergency Reporting with a username and password.

Selects the “Training 3.0” tab on the left-hand side.

Selects “add class” at the top to enter new class information.

Enters the class name and the date and time when the class took place.

Selects “add class.”

In the Info window, enters the class category, the station where the class took place, the instructors giving the class, the training codes (these contain the class duration), the location (which can differ from the station), the evaluation method(s), and the objective. (Everything in red is required information.)

In the Narrative tab, enters the details of what took place during the class.

In the Files tab, attaches all files and source materials used in the class.

In the People tab, adds all agency personnel who participated in the class, including non-agency personnel. Selects whether each person passed or failed, the grade of any test taken, the total hours each individual was present, and each person’s pay grade.

In the Authorize tab, enters a password and authorizes the class once the Info, Narrative, Files, and People tabs are green (Elkin Fire Department, 2015).

After the instructor finishes entering the record into Emergency Reporting, it is transferred to the Assistant Chief (AC) of Training for review.

The AC of Training reviews the entered class record.

The AC of Training then makes a copy of the class record on the desktop and uploads it into the Training section of the department SharePoint.

 

1)  Explain why you picked that process. 

I selected this process because I am an instructor at my firehouse and teach other firefighters various subjects. All fire department personnel have annual training hours that must be met under NFPA guidelines. The NFPA, the National Fire Protection Association, is the governing body for firefighter certification and regulations (NFPA, 2022).

2)  Explain the steps you might take to analyze how to improve the process.

The only improvement I would make is connecting Emergency Reporting to the Fire Department SharePoint, which would streamline storing records in a secondary location without manually transferring data.

3)  Who should be involved with you? 

The Assistant Chief of Training and one's supervisor are needed to teach the process to new instructors using the software. Once an instructor is competent with the software, the AC of Training reviews reports for accuracy and gives final approval of the record.

4)  What are some of the questions you should ask about the current process? 

Currently, I have no questions about the process, as it is a great improvement over records being manually entered as Word documents and then printed. One copy was stored on department hard drives, while the paper copy was stored in filing cabinets, taking up space. The current process for tracking training is only five years old at my department.

5)  How will you know if the process was actually improved?

It is already improved over previous methods of manual documentation with paper records. In the old process, we lost many records to failing hard drives and to paper copies that were damaged or destroyed by time or hazards. For a long time, the paper copies were carbon copies, which fade easily.

References (Elkin Fire Department is a different department than where I work):

Elkin Fire Department. (2015, July 13). Emergency Reporting: Entering a training report.

NFPA. (2022). NFPA 1401: Recommended practice for fire service training reports and records. Retrieved January 27, 2022, from

https://www.nfpa.org/codes-and-standards/all-codes-and-standards/list-of-codes-and-standards/detail?code=1401

Respond to the discussion and include a citation.

Deborah

Nonappropriated Funds Central Cashier

If there are revenue-generating activities at an Air Force installation, there is a central cashier.

The central cashier has a list of duties; I will focus on only one: deposits.

The central cashier:

Receives deposits from all revenue-generating activities on the base using AF Form 1876

Verifies their totals and issues a receipt (AF Form 2557)

After each transaction, transcribes the totals to an automated system that tracks the number and denominations of all currency

Totals the drawer against the automated tracking system at the end of the day

Prepares AF Form 1877 and reports to the Shared Service Center (SSC) in San Antonio, Texas, only if both the tracking system and the central cashier balance. If not, all cash must be re-verified.

When the deposit is complete, the AF Form 1877 and all supporting documents are stored for seven years in a file cabinet (USAF, 2019).

1. Explain why you picked this process. I picked this process because I performed this duty daily 10 years ago. The system has been updated since then: there was no automated system, and the AF Form 1877 was copied, faxed, and printed at the SSC on a copier.

2. Explain the steps you might take to analyze how to improve the process. Quantify the time it takes to file the forms versus the time it takes to scan the information into a cloud-based system. Sometimes time cannot be measured, but for this process it should be.

3. Who should be involved with you? The people who should be involved with me are the Air Force Services Agency Chief (a GS-15 in San Antonio), the Deputy, and the Chief Resource Manager for Air Force Services.

4. What are some of the questions you should ask about the current process?

Why aren’t we looking at ways to save space in our offices?

Can we find a system that will interface with the bank’s system to view deposits the same day? 

5. How will you know if the process was actually improved? When a new process is introduced, it is advertised. I am not a central cashier anymore, but I know people who work for Services who supervise a central cashier. I would definitely ask whether there are changes to how they do business.

References

USAF. (2019, October 2). Air Force Manual 34-209. USAF E-Publishing. Retrieved from

https://static.e-publishing.af.mil/production/1/af_a1/publication/afman34-209/afman34-209

 
Respond to the discussion and include a citation.

Donye

There are currently 165 board of directors (BOD) members in my organization who receive benefits for technology (computers, cellphones, etc.), home internet, and other travel-related items while performing their official duties. Each benefit has a maximum amount that can be reimbursed during a three-year term. My department is tasked with tracking the benefits and ensuring the BOD receives only the allotted amounts, using an Excel spreadsheet. The process works as follows: the department receives a requested expense reimbursement from a BOD member via email. It is then assigned to a team member, selected based on workload, to audit and record the expenses in the spreadsheet; a different person may process the same board member's next request. Once the auditing process is complete, the team member inputs the expenses into the accounting system for payment.

1)  Explain why you picked that process. 

I chose to highlight this because it is a process, but not an effective one. It does not help us ensure that the BOD receives only the benefits they are entitled to. Benefit data is not accurately captured, which does not satisfy the organization's goal of preventing overpayment of benefits (Business Processes, n.d., p. 1, para. 1-2). Lastly, the turnaround time to process the expenses is three days; due to the volume, benefits sometimes are not entered into the spreadsheet at all.

 2)  Explain the steps you might take to analyze how to improve the process.

Look at how the BOD reimbursements are distributed among the team. Determine team members' comfort level with Excel. Lastly, take a hard look at whether Excel is the most efficient solution given the need for accuracy, the volume of input, and the number of people who use the spreadsheet; a small sketch of the kind of automated check that could replace manual tracking follows below.
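As a minimal, hypothetical Python sketch (using pandas; the column names and the cap amount are invented for illustration and are not drawn from the post), an automated check could flag overpayments that the manual spreadsheet misses:

import pandas as pd

CAP = 5000  # assumed per-term reimbursement cap (illustrative only)

# Invented sample data standing in for the audited expense log.
expenses = pd.DataFrame({
    "member": ["A", "A", "B"],
    "amount": [3000, 2500, 1200],
})

# Total each member's reimbursements and flag any over the cap.
totals = expenses.groupby("member")["amount"].sum()
print(totals[totals > CAP])  # member A: 5500 -> flagged for review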

 3)  Who should be involved with you? 

 The people who should be involved are our manager, the team responsible for auditing and updating the spreadsheet, accounting, and the IT manager familiar with the organization’s platforms.

4)  What are some of the questions you should ask about the current process? 

 Is there quality control being used to ensure the benefits are correctly captured? Are the BOD receiving more or fewer reimbursements due to the process being used? What measurement is being used to determine if we are reaching our goal? 

 5)  How will you know if the process was actually improved?

A business process management (BPM) plan should be established to chart the path forward. Currently, there is no specific documented process, and no real training on or communication of the process (Business Processes, n.d., p. 5, para. 1). One way to gauge improvement would be for everyone to use the same process; each individual doing things their own way is one of the most significant issues. Consistency will minimize errors, leading to higher rates of accuracy.

 Reference:

 University of Maryland Global Campus. (n.d.). Business Processes. Document posted in UMGC IFSM 300 6384 online classroom, archived at 

https://learn.umgc.edu
