PUTTING ISSUES INTO PERSPECTIVE
Advances in artificial intelligence impact EMS
By A.J. Heightman, MPA, EMT-P
In the early 1980s, Jack Stout wrote a series in JEMS introducing the concepts of fractile response times, system status
management (SSM) and high-performance
EMS. For the next 15 years, he worked tirelessly to explore new thinking about how EMS was delivered.1
Jack showed us that knowledge is gained by capturing data, and that the resulting insights could give us powerful information and make us more efficient.
SSM was revolutionary because Jack showed us all how to correlate computer-aided dispatch data, incident dates, times and locations to predict peak times in order to schedule the right number of ambulances in the proper geographic post locations to manage call volume in an efficient and cost-effective manner.
SSM merged multiple data reports, often from multiple computer systems, into one readable report that calculated accurate, actionable information. It was then up to system status managers to use the data to program computers, often manually, to deploy their resources.
If you think SSM was a powerful tool for EMS, wait until we introduce you to the use of drones and artificial intelligence (AI) in EMS at EMS Today: The JEMS Conference in Charlotte, N.C. It's like SSM taken to an entirely new level.
We’ll be introducing AI in a comprehensive, full-day preconference workshop
on Tuesday, Feb. 20, and we’ll also present
an amazing, fast-paced session on the same
technologies on Wednesday, Feb. 21, from 10:00–11:30 a.m.
For those thinking this is like Star Wars and
not applicable to their EMS system, hold on
because I’m going to take you on a fast, explanatory ride into this new technology galaxy.
AI & MACHINE LEARNING
AI and machine learning (ML) are going to allow us to do millions of complex things in EMS.
Although the terms are often used interchangeably, they're different. AI is the broader
concept of computers with the ability to act
intelligently enough to perform tasks usually
attributed to humans. ML is often referred to
as a subset of AI, and is what most of us know
as the current state-of-the-art.2 Although AI encapsulates a host of different approaches to solving complex tasks, ML rests on a specific premise: the system is able to learn on its own from data.
ML is all about transitioning toward a paradigm where the data speaks for itself through
the models, allowing us to capture knowledge
that wouldn’t otherwise be apparent to humans.
AI is classified into two groups: vertical and
general. Vertical AI exists today and includes
systems that can intelligently trade stocks and
shares, or operate autonomous vehicles. They’re
designed to excel at a specific purpose, but they
don’t generalize to a broader set of problems.
General AIs aren’t yet available. The
concept is focused on the idea of a “thinking”
machine. At some point in the future, systems
will actually process information independently
like a brain would, and handle complex tasks
independent of human action—learning from their mistakes.2
Today’s AI and ML solutions are already extremely powerful. These systems can store and process more information and research than traditional computers can today. Consequently, they can work faster and in a more comprehensive manner than the human brain.
The result: AI and ML are now able to
predict (i.e., tell us independently) what the
patient’s problem is before a full set of vital
signs is taken.
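To make the idea of predicting a patient's problem from limited data concrete, here is a minimal sketch of the simplest kind of ML: a 1-nearest-neighbor classifier. Every name, training case and threshold below is invented for illustration; a real clinical model would be trained on thousands of records and rigorously validated.

```python
# Toy sketch of "learning from data": guess a patient's likely
# problem from two early readings (heart rate, systolic BP) by
# finding the closest previously seen case. Hypothetical data only.
from math import dist

# Invented training cases: (heart_rate, systolic_bp) -> label
TRAINING = [
    ((72, 120), "stable"),
    ((68, 118), "stable"),
    ((128, 82), "possible shock"),
    ((135, 78), "possible shock"),
]

def predict(heart_rate, systolic_bp):
    """Return the label of the training case closest to this patient."""
    point = (heart_rate, systolic_bp)
    _, label = min(TRAINING, key=lambda case: dist(case[0], point))
    return label

print(predict(130, 80))   # nearest to the "possible shock" cases
print(predict(70, 119))   # nearest to the "stable" cases
```

The point is that no rule like "HR > 120 means shock" was ever written down; the pattern is carried entirely by the data, which is the paradigm shift the article describes.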
This concept isn’t new. What’s new is that
we’ve finally developed ways to implement it.
A recent article in The New York Times discussed how new technologies were testing the
limits of computer semiconductors and how
researchers have been able to closely mimic specific brain functions.3
The first challenge engineers faced was the
enormous amount of computing power needed
to move into the AI era. Computers, though
powerful, were maxed out in their capabilities
and necessitated a whole new way of thinking.
Engineers and programmers have now
learned how to mimic the brain and turn computers into multicenter processing systems that
can store incredible amounts of data, analyze it,
perform multiple simultaneous processes and
calculations from different sensory and stimulus areas, blend/merge them together into
an incredible “data milkshake,” and conduct
incredibly complex actions that we take for
granted—all in a nanosecond.
Take a 1-year-old child, not yet mature
enough to know that a hot pot on a stove is
dangerous. He climbs up to the stove and touches
the hot pot. The infant’s beautiful but uneducated, preprogrammed brain, biologically
loaded with an amazing amount of data, processes what’s occurring, and instantly sends a
report (corrective action) to multiple pathways.
In a nanosecond, the infant’s brain senses
the danger, processes the location and extent
of the hazard, charts a path for escape, and
sends signals to multiple programmable action
pathways that allow independent biological
systems to take immediate actions: the infant
rapidly pulls back his hand, cries out to alert
his parents of his plight, turns his body, flees
the hazard area and runs to his parents for care.