It was definitely hot in Long Beach during the annual SIIM conference: the outside temperature reached over 100 degrees, a record since 1970. There were a couple of hot topics during the sessions and on the exhibit floor as well, although overall there was much less than I expected and have become used to. I personally think that SIIM has lost its niche and has definitely not gotten its “mojo back,” as one of the other writers claimed, but I’ll talk about that some more later, after listing the topics I found worthwhile. So, first of all, here is my “hot topic” list:
A typical view from the exhibit hall
1. Dashboards:
I always like a conference when I can learn about new tools and tricks. My
latest favorite tool is Splunk. I had not even heard about it prior to this
conference, but this tool (which is free if you use it for a small amount of
data) allows one to create an amazing dashboard in almost no time. At this
particular SIIM workshop, which was surprisingly lightly attended, we used a test data set in Excel spreadsheet format containing patient-visit-related events and built an impressive dashboard in less than 30 minutes. One of the real-life issues is how to get access to the data. An elegant solution is to take a simple HL7 feed from an interface engine into an open-source database, which Splunk then indexes. Another advantage of staging the data in such “middleware,” i.e. a database that functions as a data warehouse, is that it cuts down on the amount of work Splunk has to do, and the related cost, as the software license depends on the amount of data it indexes.
Dashboards have been talked about for many years, but I feel that the tools to really implement them are now available; I highly recommend that anyone wanting to do some type of dashboarding check this out.
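To make the staging idea concrete, here is a minimal sketch in Python, assuming a hypothetical pipe-delimited HL7 ADT feed with made-up field values; in real life the messages would arrive from the interface engine over MLLP rather than as strings:

```python
import sqlite3

# Hypothetical HL7 v2 ADT admit message (segments separated by carriage returns).
SAMPLE_HL7 = (
    "MSH|^~\\&|ADT|HOSP|DASH|HOSP|20140528103000||ADT^A01|12345|P|2.3\r"
    "PID|1||MRN001||DOE^JOHN\r"
    "PV1|1|I|ICU^101^A\r"
)

def parse_visit_event(message):
    """Pull only the fields a visit dashboard needs from one ADT message."""
    segments = {s.split("|")[0]: s.split("|") for s in message.strip().split("\r")}
    msh, pid, pv1 = segments["MSH"], segments["PID"], segments["PV1"]
    return {
        "timestamp": msh[6],     # MSH-7: message date/time
        "event": msh[8],         # MSH-9: e.g. ADT^A01 (admit)
        "patient_id": pid[3],    # PID-3: patient identifier
        "location": pv1[3],      # PV1-3: assigned location
    }

# Stage events in an open-source database (the "middleware"), so the indexer
# only ever sees the fields it needs -- keeping indexed volume, and cost, down.
db = sqlite3.connect("visit_events.db")
db.execute("CREATE TABLE IF NOT EXISTS events (timestamp, event, patient_id, location)")
db.execute(
    "INSERT INTO events VALUES (:timestamp, :event, :patient_id, :location)",
    parse_visit_event(SAMPLE_HL7),
)
db.commit()
```

Splunk (or any other dashboarding tool) can then index the staged table instead of the raw feed.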
2. Decision support:
There was a lot of talk about “big data,” but not a lot about what to do with it, and I think that is not quite clear yet anyway. At the conference, using big data for decision support was mostly considered in terms of placing the appropriate orders, which is valuable, but there are many more opportunities than order placement alone. Interestingly enough, just sharing the data about who orders what for which condition already seems to be making an impact on the behavior of ordering physicians, as it allows them to see how they compare with their peers. One of the main objectives for decision support has traditionally been to save money: why order a CT if a chest X-ray would suffice for a particular case? However, a more important reason for decision support is to increase the quality of care, as it supports selecting the appropriate procedure for the specific condition of the patient. In some cases it could justify a CT scan instead of a regular chest X-ray and improve the diagnosis, and thus not necessarily save money. Decision support using big data is still in its infancy; it will be a while before mature implementations become mainstream.
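To make the peer-comparison idea concrete, here is a minimal sketch in Python with made-up data and hypothetical column names, computing each physician’s CT-ordering rate per indication against the peer average:

```python
import pandas as pd

# Made-up order log: one row per placed order.
orders = pd.DataFrame({
    "physician": ["A", "A", "B", "B", "B", "C"],
    "indication": ["chest pain"] * 6,
    "procedure": ["CT", "XR", "CT", "CT", "XR", "XR"],
})

# Fraction of each physician's orders, per indication, that were CTs.
ct_rate = (
    orders.assign(is_ct=orders["procedure"] == "CT")
          .groupby(["indication", "physician"])["is_ct"]
          .mean()
)
peer_avg = ct_rate.groupby("indication").mean()  # the peer benchmark

print(ct_rate)
print("peer average:", peer_avg["chest pain"])
```

Simply showing a physician how far he or she sits from that benchmark is the kind of feedback credited with changing ordering behavior.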
3. Analytics:
One of the main challenges with doing analytics is that a lot of the data is
collected and measured for the wrong reasons. For example, a common yardstick is the report turnaround time, i.e. the time between completion of a diagnostic procedure and availability of the report to the physician. A typical goal might be anywhere between 15 and 30 minutes, which is quite achievable using
speech recognition and good workflow support by the PACS. However, this time as
a measure is useless if you don’t put it into clinical context, i.e. what is
important is that the information is available at the right point, at the right
time, which in many cases is the time of the patient’s physician appointment.
If the appointment is not for another two hours, a turnaround time of 30 minutes sounds great but provides no additional value over a turnaround time of 90 minutes. However, if it concerns an in-patient in the ICU
or someone just admitted to the ER while the physician is waiting to get the
results, 15 minutes might even be too long. The problem is that a radiology
department typically does not have access to appointment data, unless it has
access to the EMR or hospital information system. Therefore, agreements with
the corporate IT department have to be made to allow access to that
information. Another lesson learned is that for analytics one should preferably access a copy of the data, basically building one’s own data warehouse, as hitting the real-time data with additional queries could significantly impact the performance of the operational system. So, analytics in a vacuum, i.e. on a
department level, is often not useful and has to be extended to the enterprise,
which requires the corresponding information access.
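As a small illustration of putting turnaround time into clinical context, here is a minimal sketch in Python with hypothetical timestamps, measuring the slack between report availability and the appointment rather than the raw turnaround alone:

```python
from datetime import datetime

def turnaround_in_context(completed, report_ready, appointment):
    """Return the raw turnaround plus the slack before the result is needed."""
    turnaround = report_ready - completed
    slack = appointment - report_ready  # time to spare before the visit
    return turnaround, slack

completed = datetime(2014, 5, 28, 9, 0)
report_ready = datetime(2014, 5, 28, 9, 30)
appointment = datetime(2014, 5, 28, 11, 30)

turnaround, slack = turnaround_in_context(completed, report_ready, appointment)
# 30 min turnaround with 2 h of slack: a 90 min turnaround would have served
# this appointment just as well, while an ICU case may need the result sooner.
print(f"turnaround: {turnaround}, slack before appointment: {slack}")
```

The appointment timestamp is exactly the piece of data a radiology department typically lacks without access to the EMR, which is why the enterprise-level agreements mentioned above matter.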
4. Quality Improvement: An important quality improvement is the avoidance of
Left/Right mix-ups. The result of a mix-up can be a procedure performed on the wrong extremity (operating on the left knee instead of the right one) or on an incorrect part of the body (a biopsy of the wrong lung or the incorrect breast). These errors are not uncommon and can potentially be avoided by a simple Left/Right check in the report. This is implemented by scanning the report for the words “left” and “right,” which are subsequently highlighted in the report on the screen to provide a double-check for the radiologist. The speaker at the conference showed a plug-in for the reporting software that provides a “check PACS” button in the menu, upon which these words are highlighted. A study using this feature was done over a period of 7 months, during which 140,000 exams were reported; 45,000 of them had the check performed (for some reason, not all users used the feature). There were 32
left/right mix-ups identified in this sample. One should realize that not all
errors are covered, for example, a not uncommon case whereby a procedure is
performed on a different side of the body than was ordered would not be
detected. The extra mouse click seems to cause some users to either forget or be unwilling to take the time to perform the check. However, it would not be hard for a reporting vendor to automate this as part of the regular workflow, e.g. highlighting all of these words as part of the sign-off. In any case, during this trial 32 errors were prevented by an amazingly simple mechanism.
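Here is a minimal sketch in Python of how such a laterality scan could work (the markers are stand-ins for what a viewer would render in color):

```python
import re

# Match whole-word "left"/"right" in any capitalization.
LATERALITY = re.compile(r"\b(left|right)\b", re.IGNORECASE)

def highlight_laterality(report_text):
    """Wrap every laterality word in markers for the radiologist to verify."""
    return LATERALITY.sub(lambda m: f">>{m.group(0).upper()}<<", report_text)

report = "Findings: small effusion in the left knee; right knee is normal."
print(highlight_laterality(report))
# Findings: small effusion in the >>LEFT<< knee; >>RIGHT<< knee is normal.
```

Building this into the sign-off step, rather than behind an extra button, would remove the mouse click that kept some users from running the check.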
5. Ultrasound structured reporting: Ultrasound structured reporting has been around for
more than 10 years. It works as follows: a technologist does measurements on
the ultrasound image, which are standardized and structured, i.e. the same measurements are always taken for certain exams, for example, measuring the circumference of a fetus’ head to monitor growth on a monthly basis. Instead of the radiologist having to read the measurements from the screen, or from a piece of paper the technologist used to record them, and re-enter or dictate them using speech recognition, these measurements can be exported by the ultrasound unit in a standard DICOM template, i.e. a Structured Report (SR), that can be interpreted by the reporting software and used to automatically fill the information into the reporting template. Doing
this in an automated manner provides about a 30 percent improvement in report
time for certain procedures, which does not take into account the time saved for
the technologist, who no longer has to fill out worksheets. Pretty much every ultrasound unit exports these SRs today, but unfortunately, very few reporting voice recognition (VR) systems have an SR input. This seems remarkable to me: why wouldn’t every radiologist use this and save 30 percent of his or her time, or go home early because the work is so much more efficient? I don’t know the answer to that; I do know that there is at least one vendor that sells middleware that takes the SR data and fills in the information using a proprietary interface to the most popular speech recognition system. Implementing DICOM SR is not for the faint of heart, as it is not trivial, and there is quite a bit of variation among ultrasound vendors in what they report,
something that the DICOM committee is currently addressing by coming up with a
more simplified template. However, these weaknesses should not keep any vendor
from implementing SR. Hopefully through more user pressure, the VR vendors will
start implementing it.
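For a feel of what interpreting such an SR involves, here is a minimal sketch using pydicom (a generic approach, not any particular vendor’s interface; the file name is hypothetical), walking the SR content tree and collecting the numeric measurements a report template would consume:

```python
from pydicom import dcmread

def collect_measurements(item, results):
    """Recursively gather NUM content items as (name, value, unit) tuples."""
    if getattr(item, "ValueType", "") == "NUM":
        name = item.ConceptNameCodeSequence[0].CodeMeaning
        measured = item.MeasuredValueSequence[0]
        results.append((
            name,
            measured.NumericValue,
            measured.MeasurementUnitsCodeSequence[0].CodeValue,
        ))
    for child in getattr(item, "ContentSequence", []):
        collect_measurements(child, results)

ds = dcmread("us_measurements.dcm")  # hypothetical SR file exported by the scanner
measurements = []
collect_measurements(ds, measurements)
for name, value, unit in measurements:
    print(f"{name}: {value} {unit}")  # e.g. "Head Circumference: 301.2 mm"
```

The vendor variation mentioned above shows up mainly in which coded concept names appear in the tree, which is exactly what the simplified DICOM template aims to pin down.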
6. DICOMweb and HL7 FHIR: One of the major barriers to implementing new applications
that use and exchange medical imaging and related information is the complexity and overhead of the data formats and protocols. HL7 version 3 definitely did not help in that regard with its verbosity and complexity, and DICOM has traditionally been somewhat hard to understand, especially for novice users.
In this day and age, developers want to build new apps using the 5-5-5 rule,
i.e. 5 seconds to find the document on the computer or internet, 5 minutes to
grasp what it is all about, and 5 hours to build a prototype. That is how you
can build an app today, for example, for your smartphone. However, that is not how you build an image-enabling application for your EMR. That is why both DICOM
and HL7 are working on simplifying their protocols and formats using standard
web-based technologies such as REST. The HL7 FHIR specification is in the draft
stage; with about 100-150 reusable objects that the committee thinks are
needed, there are already 50 built and ready to be used (see more details on
FHIR here). DICOM is in the process of specifying the necessary new services as well: there is a retrieve called WADO (Web Access to DICOM Objects), which now has a version using REST technology (WADO-RS); a query, called QIDO-RS; and a store, or “post,” in the form of STOW-RS. One of the issues with reporting is coordinating the different jobs using the appropriate work lists, which the DICOM committee is addressing with the development of the Unified Procedure Step. You might ask whether this is going to change the installed base, for example how a CT talks to a PACS system today.
The answer is no, but if the PACS wants to post images to an EMR, communicate with smartphones or iPads so a radiologist can share his images with a patient, or post an anonymized image on Facebook for consultation, then the answer is very likely yes. This will allow DICOM and HL7 to go mainstream, as developers without domain knowledge, who worked on banking, manufacturing, or billing software before entering the medical field, can pick up the thread very quickly and easily and start implementing these new applications.
As a matter of fact, at the SIIM “hack-a-thon,” people could actually get a
feel for how to build one of these apps very quickly.
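In that spirit, here is a minimal sketch of a DICOMweb (QIDO-RS) study query in Python; the server base URL and patient ID are hypothetical:

```python
import requests

QIDO_BASE = "https://pacs.example.org/dicom-web"  # hypothetical endpoint

def find_studies(patient_id):
    """Query studies for a patient via QIDO-RS, returned as DICOM JSON."""
    resp = requests.get(
        f"{QIDO_BASE}/studies",
        params={"PatientID": patient_id, "includefield": "StudyDescription"},
        headers={"Accept": "application/dicom+json"},
    )
    resp.raise_for_status()
    return resp.json()  # one JSON object per study, keyed by DICOM tag

for study in find_studies("PID-12345"):
    uid = study["0020000D"]["Value"][0]  # StudyInstanceUID
    desc = study.get("00081030", {}).get("Value", ["(no description)"])[0]
    print(uid, desc)
```

A handful of lines like these, rather than a full DICOM toolkit, is what makes the 5-5-5 rule plausible for imaging applications.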
7. Digital Breast Tomosynthesis: The session on digital breast tomosynthesis (DBT) did not really reveal any new information, except for the fact that it seems to be
going mainstream. One of the issues noted was that there are no really good hanging protocols available yet in the vendors’ PACS systems for viewing the many thin slices being created. However, radiologists are getting better and more efficient at interpreting them, and it appears that the average time to interpret a DBT exam is now only about one-and-a-half to two times that of a regular mammogram, much less than when this technology was first introduced. DBT is supposed to detect more cancers, but one has to
realize that as of now the patient gets two exams, the regular four-view and
the additional DBT exam, which increases the radiation dose exposure. There is
still a lot of work to be done by the vendors, for example, one user reported
that their software has trouble distinguishing between the regular mammogram and
DBT synthetic reconstructed images, something which is addressed in the DBT IHE
profile, which is apparently not yet widely supported. This was a hot topic last year and still is, as the technology is becoming mainstream but the PACS viewers have apparently not yet caught up.
Vendor presentations in the exhibit hall
I can typically list at least 10 hot topics from every conference, but not from this one, which says something about SIIM 2014: there was not much new to show. It appears to me that SIIM wants to be all things to too many people, i.e. different professional groups with different skill levels, and in the process it becomes too widespread and scattered. As an example, this year the program included general sessions, hot topic sessions, scientific sessions, five different learning tracks, learning labs, innovation theatre sessions, roundtable discussions, and study groups. That sounds confusing to me; I would rather have a few well-defined choices.
One of the main benefits of these types of meetings is the opportunity for networking; however, if the audience is not large enough, that opportunity is limited. Even though attendance was a little higher than last year, it is still less than half of what it was a few years back, and I missed many of my colleagues who did not attend.
I always enjoy meeting my old students, here with Omar, who attended my training in Cairo in 2009
It was apparent that SIIM wanted to please the vendors by having sessions that consisted of vendors only. I believe that most attendees (myself included) did not really care to hear how a particular vendor has the latest and greatest solution, but would rather have seen users giving testimonials about what works and what does not.
One of the better-attended booths was EchoPixel, showing their 3-D display, next to a well-visited PARCA booth
My most memorable part of SIIM was the 5K run along the bay; here with my running partner Mohamed Shoura
It is hard to look into a crystal ball and predict how SIIM
is going to evolve over the next several years, but I believe that they need a
new direction to get their old mojo back. The next meeting is in Washington, DC; I might skip a year (or two). Hopefully the slides dating back to 2007 will have been updated by then (but who knows).