FROM THE EDITOR
Engineers now recognize the limits of conventional computers and are building AI machines modeled on the human brain, in which a central brain stem oversees the nervous system and offloads tasks, like hearing and seeing, to the surrounding cortex. 3
What’s amazing is that these AI machines run on systems that allow them to navigate the physical world on their own.
THE BIG CHANGE
For five decades, computer makers built systems around a single, do-it-all chip: the central processing unit (CPU). Next-generation systems capable of AI now divide work into tiny pieces and spread them among vast “farms” of simpler, specialized chips that consume less power. 3
For example, Google’s servers now have
enormous banks of custom-built chips that
work alongside the CPU, running the computer algorithms that drive speech recognition
and other forms of AI. 3
In 2011, a team of Google engineers explored the idea of neural networks: computer algorithms that can learn tasks on their own, like recognizing words spoken into phones or faces in a photograph.
To support this innovation, Google had to
more than double its data center capacity. An
engineer proposed that Google build its own
computer chip just to run this kind of AI.
Experts now know that machines that spread computations across a vast number of tiny, specialized, low-power chips make more efficient use of the energy at their disposal, allowing them to do many more things seamlessly.
For example, Microsoft builds software that runs on an Intel CPU. Windows can’t reprogram the chip because it’s hardwired to perform only certain tasks.
Newer chips, called field-programmable gate arrays (FPGAs), can be reprogrammed for new jobs on the fly, much like reprogramming an EMS officer’s radio in the field.
In the fall of 2016, a team of Microsoft
researchers built a neural network that could
recognize spoken words more accurately than
the average human could. 3
The leading internet companies now train their neural networks with help from another type of low-power chip, the graphics processing unit (GPU), which can process the math required by neural networks far more efficiently than a CPU. 3
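Why GPUs help is easy to see: most of the “math” of a neural network is large, highly parallel matrix arithmetic. Here’s a minimal Python sketch of a single network layer; the sizes and data are arbitrary, chosen only for illustration:

```python
import numpy as np

# One layer of a neural network is mostly one large matrix multiplication:
# thousands of independent multiply-add operations with no dependencies
# between them, which is exactly what a GPU's many small cores excel at.
batch = np.random.rand(64, 1024)     # e.g., 64 audio frames, 1,024 features each
weights = np.random.rand(1024, 512)  # one layer's learned parameters
activations = np.maximum(0, batch @ weights)  # matrix multiply + ReLU
print(activations.shape)             # (64, 512)
```

Each of the 64 x 512 output values can be computed independently of the others, which is the kind of workload a GPU’s thousands of small cores are built for.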
WHAT DOES IT MEAN FOR EMS?
If you watch Jeopardy, you may know that
IBM programmed Watson, its AI machine,
to play against the game show’s top winners
of all time.
Watson won, hands down. Why? Because
Watson was loaded with more facts in an
hour than the average brain can absorb and
process in a lifetime.
When 8,000 research papers on cancer are published around the world every day, it’s impossible for even a team of exceptional physicians to keep up with it all. On a segment of 60 Minutes, IBM illustrated how it taught Watson to read medical literature in one week. With that skill, Watson then read 25 million papers on cancer in another week, and then scanned the internet for open clinical trials. 4
With this information, Watson was able
to present new treatment options to learned
physicians at a molecular tumor board meeting. AI can hold more, process more, recognize and recommend actions faster—and
perhaps better—than the best medical minds
in the country.
Andreas Cleve will be at EMS Today to present on the AI powering Corti, an augmentation platform for emergency dispatchers that’s presently in use in the Copenhagen EMS communications center in Denmark.
A Watson-like AI system, Corti helps the
call-taker come to fast and precise conclusions
by finding patterns in the caller’s description
of what’s going on. Corti can do this because
it can process audio 70 times faster than real
time, allowing for advanced live computations. It’s like having an additional dispatcher
on every call.
Corti analyzes the full spectrum of the audio signal: the acoustic signal itself, symptom descriptions, the caller’s tone and sentiment, background noises and voice biomarkers. These distinctive features of the call are immediately and automatically sent through multiple layers of artificial neural networks that look for patterns that might be useful for the dispatcher.
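To make that flow concrete, here’s a minimal sketch of a features-through-stacked-layers pipeline. Every detail in it (the features, the network shape, the alert threshold) is an illustrative assumption, not Corti’s actual design:

```python
import numpy as np

def extract_features(audio_window: np.ndarray) -> np.ndarray:
    """Hypothetical stand-ins for call features; a real system would use
    acoustic features, transcribed keywords, tone/sentiment scores, etc."""
    return np.array([
        audio_window.mean(),         # stand-in for a spectral feature
        audio_window.std(),          # stand-in for energy/agitation
        np.abs(audio_window).max(),  # stand-in for a loudness spike
    ])

def score_call(features: np.ndarray, layers: list) -> float:
    """Send features through stacked layers (a plain feed-forward network)."""
    x = features
    for w in layers[:-1]:
        x = np.maximum(0, x @ w)     # hidden layers with ReLU activations
    logit = float(x @ layers[-1])    # final scalar score
    return float(1.0 / (1.0 + np.exp(-logit)))  # squash to a 0-1 confidence

rng = np.random.default_rng(0)
layers = [rng.normal(size=(3, 8)), rng.normal(size=(8, 8)), rng.normal(size=8)]
confidence = score_call(extract_features(rng.normal(size=16_000)), layers)
if confidence > 0.9:  # assumed alert threshold
    print("Flag for dispatcher: pattern of interest detected")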
There are three distinct types of actions that
Corti can immediately initiate or propose:
1. Question-answer patterns: What to ask next
to uncover the worst-case scenario;
2. Detections: When the model is confident,
it can alert the dispatcher to potential situations
such as stroke or cardiac arrest;
3. Extractions: It can automatically pull
information from the call (e.g., address
detection and validation) and immediately
send it to other systems, as sketched below.
This can be invaluable in the case of terrorist
or mass shooting events when callers are
unsure of where they are.
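As a toy illustration of the extraction idea (the pattern and the hand-off below are invented for this example and aren’t Corti’s implementation):

```python
import re

def extract_address(transcript: str):
    """Toy stand-in for address detection; a production system would pair a
    learned extractor with validation against a geocoding service."""
    pattern = r"\b\d{1,5}\s+\w+(?:\s\w+)?\s+(?:Street|St|Avenue|Ave|Road|Rd)\b"
    match = re.search(pattern, transcript, flags=re.IGNORECASE)
    return match.group(0) if match else None

transcript = "He collapsed outside 1420 Elm Street, please hurry"
address = extract_address(transcript)
if address:
    # Hand off to dispatch/CAD systems (hypothetical integration point).
    print(f"Location candidate for validation: {address}")
```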
Corti can also transcribe calls in real time,
and it not only understands different dialects, but also helps the dispatcher understand
what’s being said.
Corti deploys synthetic voice technology to help public safety answering points (PSAPs), particularly those that are extremely busy or understaffed, convert the passive waiting time some calls encounter into active triaging time. Corti can answer basic caller questions while a call is on hold and then send this information to the dispatcher when they’re available.
Today’s PSAPs record calls, but the recordings often end up on a server, only to be heard in rare cases. Corti’s platform has a built-in recording solution embedded with AI models that can analyze every call and, as call volume increases, predict which calls should be the focus of additional training for dispatchers, which calls should be checked for quality assurance, and which calls potentially hold patterns formerly unknown to Corti’s models.
Imagine how useful it would be to have Corti provide dispatchers with a monthly training session where they listen to and train on the five to 10 calls that hold the most learning potential. This could improve dispatch quality with very little effort.
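The selection step itself is simple once each call carries a score. A minimal sketch, assuming the analysis models have already assigned each call a hypothetical “learning potential” value:

```python
from dataclasses import dataclass

@dataclass
class Call:
    call_id: str
    learning_potential: float  # assumed score produced by the analysis models

def pick_training_calls(calls, k=10):
    """Return the k calls most worth reviewing in a monthly training session."""
    return sorted(calls, key=lambda c: c.learning_potential, reverse=True)[:k]

calls = [Call("A-101", 0.42), Call("A-102", 0.91), Call("A-103", 0.77)]
for call in pick_training_calls(calls, k=2):
    print(call.call_id, call.learning_potential)
```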
Google recently announced a headset that uses AI to instantly translate between languages, as well as an image recognition app that will allow us to point at objects and instantly retrieve information; both may prove to be new tools invaluable to EMS crews in the field. 5
FIND SEPSIS PATIENTS &
Nashville-based Intermedix, also exhibiting at EMS Today, has a data science arm (formerly WPC Healthcare) that’s developed an amazing machine learning (ML) system. The system distilled 90,000 articles related to sepsis and correlated 2,200 risk attributes (i.e., variables). Using this knowledge, it can rapidly analyze each patient encounter in a single facility over the last 24 months to score whether a patient has a high likelihood of sepsis, based solely on a few data parameters entered at