Thursday, January 30, 2014

Will FHIR make the Electronic Health Record look like Facebook?

Being able to attend a working group meeting in San Antonio in January is one of the
best benefits that membership in the Health Level Seven International (HL7) organization offers. Imagine walking along the River Walk, temperatures in the 70s, plenty of good places to eat; what else does your heart desire?

The conference center was, virtually speaking, on fire (spelled FHIR in HL7 terminology), as FHIR was the hot topic of the meeting Jan. 19-24. FHIR stands for Fast Healthcare Interoperability Resources, which is basically using web technology to exchange medical information. It is touted as the new interface standard that will eventually replace versions 2 and 3. But before describing what this is all about, let’s take a step back and find out how HL7 got to this new venture.

HL7 has a lot of experience when it comes to defining new interface standards, and there have been good learning experiences. The first widely implemented iteration, version 2.x, has been a huge success from an implementation perspective, as it has pretty much become the way that the vast majority of healthcare applications exchange information in the US and many other countries. The main complaint is that “if you have seen one interface, you have seen one interface,” meaning that no two implementations are alike. Also, even though the messaging is kind of ugly and ancient, people know the weaknesses and use interface engines extensively to map the messages and provide interoperability. So, in short, “it kind of works.”

Version 3 was supposed to solve many of the flaws in version 2; however, except for a few implementations in Canada and the UK, it has gained very little traction. The main complaints about version 3 have been its complexity and verbosity. Take the specification of patient sex as an example. In version 2 one would place the value “F” for female in the eighth field of the Patient Identification (PID) segment. In version 3 this same element would have to be encoded in XML, basically specifying that, from a particular code system with a specific code name, the administrativeGenderCode is “F,” with the display name being “Female,” etc. In other words, what takes a single character in version 2 takes at least three lines of text in version 3 to describe exactly the same information. Therefore, early implementers found that going from version 2 to version 3 would choke the bandwidth of the system due to the lengthy message exchanges. This is in addition to the fact that there are not very many version 3 tools, and that the standard is complex. Furthermore, its information model (the Reference Information Model or RIM) tries to cover all of the possible constraints, use cases, and specializations, which makes it very cumbersome. In a nutshell, it is great for academics, but a pain for implementers.
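For those who like to see it side by side, here is a minimal sketch of the two encodings; the segment and the XML fragment are simplified illustrations, not excerpts from an actual conformance profile.

```python
# Illustrative comparison of patient sex encoding in HL7 v2 vs v3.
# The fragments below are simplified examples, not complete messages.

# HL7 v2.x: a single character in field 8 of the PID segment.
pid_segment = "PID|1||123456^^^HOSP^MR||Doe^Jane||19700101|F"
sex_v2 = pid_segment.split("|")[8]      # -> "F"

# HL7 v3: the same fact expressed as a coded XML element.
gender_v3 = """
<administrativeGenderCode code="F"
    codeSystem="2.16.840.1.113883.5.1"
    displayName="Female"/>
""".strip()

print("v2 encoding:", sex_v2)
print("v3 encoding:", gender_v3)
```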

There are quite a few implementations of the Clinical Document Architecture (CDA) in the US, as the Meaningful Use (MU) requirements for Electronic Health Record (EHR) implementations have made it a major priority. However, the level of interoperability for these documents has been a challenge. Yes, the definitions of templates such as the Consolidated CDA (CCDA) have been a major help, but the documents are still relatively verbose and complex to create and parse. As with other v3 components, the standard does not have very many friends among implementers.

What does Fast Healthcare Interoperability Resources (FHIR) do that you can’t do by using either version 2, version 3, or a CDA? First of all, the specification started with a clean slate. Instead of attempting to model the world of healthcare, which is what the RIM did for version 3, and trying to cover every possible use case, it started bottom-up, defining the entities, or what FHIR calls resources, that are needed to exchange information. The maximum number of these resources is estimated to be about 150; the recently published draft standard for trial use (DSTU) starts with about 50. FHIR then puts restrictions on these resources by stipulating that they should cover 80 percent of the common scenarios, no less, but definitely no more. The remaining 20 percent that is critical to achieve interoperability is covered by profiling and extensions. The requirement is that these extensions be published, and therefore be accessible to anyone who is trying to exchange information.
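To give a feel for the resource-plus-extension idea, here is a rough sketch of what a patient resource with a locally defined extension could look like; the element names are simplified and the extension URL is made up, so consult the published DSTU for the exact representation.

```python
# Rough sketch of the FHIR "resource + extension" idea.
# Element names are simplified and the extension URL is hypothetical;
# the published DSTU defines the exact wire format.
import json

patient = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:example:mrn", "value": "123456"}],
    "name": [{"family": ["Doe"], "given": ["Jane"]}],
    "gender": "female",
    # The core elements above cover the common 80 percent; anything local
    # goes into a published extension such as this made-up one:
    "extension": [{
        "url": "http://example.org/fhir/extensions/preferred-contact-time",
        "valueString": "evenings",
    }],
}

print(json.dumps(patient, indent=2))
```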

The requirements for FHIR are that it should be very easy and fast to implement, that it leverages web technologies, that messages are human readable, and that it supports multiple architectures. There are several reference implementations and test servers available, all in the public domain. There is also a big library of examples. Collections are represented using the Atom syndication standard. Without going into too much technical detail, Atom is how Facebook and Twitter feeds work, and it has become a semi-standard in the web environment.

I initially thought to myself that FHIR was equivalent to REST (Representational State Transfer), in other words the equivalent of the recent DICOM extensions for web access to images (WADO-RS). REST considers everything to be a web resource, which can be identified by a uniform resource identifier (URI), i.e. an internet identification. However, FHIR is more than that: there are actually four different interoperability options, which are called paradigms. Yes, it includes REST, but also a document standard, a messaging standard, and a set of services. REST provides standard operations using HTTP, leveraging the widespread experience of implementers in this domain. The documents defined in FHIR are similar to CDA and consist of a collection of resources. Example resources are a patient, practitioner, allergy, family history, care plan, etc. Resources can be sent as an Atom feed, which also allows for sign-in and authentication. Messaging is very similar to v2 and v3. The services are based on a Service Oriented Architecture (SOA). Remember that regardless of which paradigm is used, the resource, i.e. the content of the information, is still the same, so it can be shared using different paradigms. Compare this with sending a letter from A to B using FedEx and then from B to C using UPS. Similarly, the content might be a lab result that is received in a message and forwarded in a discharge document. The profile definitions that are critical for interoperability are also shared among the different paradigms.
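To illustrate the RESTful paradigm, the sketch below reads a single patient resource over plain HTTP; it assumes the third-party Python requests library and a hypothetical FHIR test server base URL.

```python
# Minimal sketch of the FHIR RESTful paradigm: read one resource over HTTP.
# Assumes the third-party "requests" package and a hypothetical test server.
import requests

FHIR_BASE = "http://fhir-test.example.org/fhir"   # hypothetical base URL

def read_patient(patient_id: str) -> dict:
    """GET {base}/Patient/{id}, asking the server for a JSON representation."""
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    patient = read_patient("example")
    print(patient.get("resourceType"), patient.get("id"))
```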

Given the many advantages, one might wonder why there are no implementations of FHIR (yet) to speak of. First of all, it is brand new, and there is obviously a learning curve for implementers (albeit very small compared with version 3 or CDA implementations). Second, there is no incentive to replace a huge installed base of v2 and a decent number of CDA implementations. However, for new domains where healthcare is just starting to get traction, such as mobile implementations, it is a great tool. It would also fit social media applications very well. To the extent that new applications are being driven by implementers, there could definitely be a major push towards its adoption.


FHIR has a lot of potential, not necessarily as a replacement for existing implementations but to augment new applications that are based on web technologies. Its many examples, public domain reference implementations, and test servers are a major draw. During the recent FHIR connectathon at the HL7 meeting, it became obvious that it is relatively easy to create a simple implementation very quickly. FHIR might become the backbone of the next generation of healthcare IT implementations, but then… who knows what might come along in another five years or so. Time will tell; in the meantime, there is no question that there was a lot of smoke around FHIR at the HL7 meeting.

Wednesday, January 29, 2014

RSNA 2013: My top 10 on what’s old and what’s new, part 3.

View of the exhibit floor
There are two types of people: those who hate the RSNA Annual Meeting and those who love it. Most “haters” despise the weather, the cab drivers, the expensive food, the long lines, and the running back and forth from meeting to meeting while trying to text your next appointment that you are running late (assuming there is still juice left in your cell phone and not too many people are using their phones simultaneously, overloading the network). I count myself in the category of a love/hate relationship: despite the negatives there are always new things to see and new people to meet, if not on the bus to the conference, then while waiting in line. In any case, here is my third and final installment of this year’s review of what’s new and what’s old, concentrating on what’s old.

What’s old:
PACS is "old news"
1.    PACS is old news: PACS systems are mature. Even the catch-phrases were rehashed, from uni-viewer to VNA, cloud to zero-footprint, workflow to image enabling. The good news is that there are still new entrants coming into the PACS market, especially from outside the USA. I have seen vendors from Canada, the Middle East, South Korea, China, and Germany, and I even talked with vendors from Latin America, notably Uruguay and Mexico, who have just received FDA clearance or are in the process of filing for their PACS systems. There is no question that if you can start from scratch, as many of these vendors have done, you can use the latest software technologies, tools, and development methods, which will challenge the established vendor community. Users will see these slick implementations and ask their vendors why they can’t have these features. Also, don’t forget that these vendors are well positioned to address the needs of emerging countries. In Saudi Arabia alone, there is a market of 50 to 100 brand-new hospitals to be built, in addition to converting existing institutions from film to digital. As another example, I visited Kenya this fall, and the number of PACS systems installed there is minimal. In any case, there are still plenty of opportunities, either for replacements and upgrades to PACS 2.0 architectures (a write-up on that will be coming up) in the US and Western Europe or for initial installs in the developing world.

Data entry, not in RIS but in CPOE
2.    The stand-alone Radiology Information System (RIS) is dead: RIS systems have traditionally provided ordering, scheduling, modality worklists, report distribution, and operational support. Several of these functions are being taken over by other systems: Computerized Physician Order Entry (CPOE) systems are used as part of the EMR or as a physician portal, reports are uploaded directly into the EMR, modality worklists are provided by the PACS, which has a direct HL7 interface, and many users opt for a RIS module as part of the HIS or EMR. In addition, many smaller imaging centers forgo the RIS entirely and only have a practice management system, interfacing their PACS directly to their Admission, Discharge and Transfer (ADT) system and again relying on the worklist provided by their (mini-) PACS. Anyone who is considering upgrading their PACS and/or implementing an EMR might want to have a close look at their RIS and determine whether they should keep it.
Hawaii or an X-ray room?

3.    It is all about the ambiance: The master of ambiance is obviously Philips, with its long-term presence in the consumer electronics and lighting business. But there are other (smaller) vendors that are starting to provide innovative solutions to reduce patient anxiety and claustrophobia. If you think that this is just “window dressing,” I suggest you wait until you have an MRI or CT done yourself and stare at the naked ceilings, which are often the drop-down office type stained with brown condensation marks from sweating air-conditioning vents. If adults feel somewhat anxious, you can imagine how youngsters might feel. I have seen a lot of hospitals, and most children’s institutions do a pretty good job, but those are the exceptions. Hopefully more institutions will pay attention to their ambiance, as it has been proven to make a difference and have an impact on the healing process.

One beam, multiple image sources
4.    Operating theatre integration: Last year there were demonstrations of “video over IP” whereby multiple imaging sources could be “mashed” onto a single screen. This is now pretty much a given for any cath lab or OR where images and other information, such as waveforms, from different sources are being integrated. A surgeon can see real-time measurements and the feed from a laparoscopic or endoscopic camera showing the actual pathology while looking at a previous exam or images from different modalities. There is a significant after-market for upgrading old multiple-monitor systems.

Hawking gadgets
5.    Ultrasound massage: I missed the massage chairs this year, as well as the ladies with the skin cream and the booth selling a small handy-cam to be used as a Christmas gift. The only “off-side” product I could find was an ultrasound massager, for a conference special of only $175 (online it is listed for $250). I should have been smart, tweeted it right from the show, and taken orders on my PayPal account with a 25 percent markup, which might have paid for my dinner. Oh well, next year I’ll be better prepared.

Wireless at the bedside
6.    Wireless: Wireless has penetrated the imaging display chain, whether to show images, provide access to an EMR, browse the web for teaching file cases, or show an imaging study to patients. In addition to the wireless display, much of digital X-ray acquisition has become wireless. There is no need for a digital radiography plate to be connected to a cable anymore, as the information is transferred within seconds to a review monitor, where a technologist can review it and send it on to a physician.

7.    Image sharing: RSNA was showing off the image sharing initiative again at a dedicated booth and even at a so-called “town meeting” (when did RSNA get into politics?) that allowed participants to present their experiences. It is touted as a pilot project, funded by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and administered by RSNA. And a pilot project it is, as its scale is surprisingly small: five major sites are participating and the number of enrolled patients is less than 10,000. One would think that people are starting to get tired of importing and exporting images on CDs and carrying them around with all the related risks of losing them, not being able to read them because of “rogue” CD generators, etc. I bet that it will be shown again next year, hopefully with reports of more participating institutions and patients signing up. One would also think that for a grant of more than $10 million, which is what was awarded to RSNA, there would be more people interested in participating.

Small CT for training
8.    Educational CT scanner: I have to admit that I have a weakness for everything related to CT, as I started my early medical career writing software for one of the first Philips CT scanners built (the Tomoscan 300). Therefore, I found this product very interesting: it is based on optical scanning but works the same way as it would if X-ray were used as the source. There is even a complete set of exercises to go with it, so any teaching institution can take it and include it in its training program. Very cool.

Smart phones becoming medical devices
9.    Yes, there is an app for that: There is a plethora of medical applications out there for everything ranging from taking your pulse to, eventually, measuring your glucose level. I wouldn’t be surprised if at one of the next trade shows someone comes up with a wireless ultrasound probe talking back to an oversized phone (“phablet”), which by the way can also post images on Facebook and pull down any comparisons from a Health Information Exchange. The FDA is actually getting rather concerned with the lack of oversight and has recently published guidance on the use of these devices for medical applications. In the meantime, most of these apps are used for accessing data for decision support and for Google image searches. I had firsthand experience when my physician, after looking at my ultrasound, did a simple Google image search for similar cases as a comparison. I am sure this category of applications is going to explode and provide tools that were unheard of a few years back, and even today.

Xmas in Chicago at the Magnificent Mile
10.  The Magnificent Mile: Last but not least, nothing beats walking the Magnificent Mile at night. And although not inexpensive, there is good food to be found, and it is fun to walk at least a mile. It was somewhat cold, so I, being unprepared, had to run into a local Walgreens and get a hat (we don’t carry or wear those in Texas). It seems as if the number of designer stores has increased, which puts shopping definitely out of my price range, but people-watching and having a good dinner is always a joy, and by the time it gets too cold, I’ll take a cab back to the hotel.


Well, that was RSNA 2013: smaller, uneventful except for a few incremental product improvements, mostly “old news,” but fun as always. At least I still enjoy it, even if this was already my 30th year (I think they should give ribbons for that as well). See you at RSNA 2014, and if you won’t be able to attend, you can enjoy my write-up and get a taste (and feel the cold) of a little bit of Chicago!

Thursday, January 16, 2014

Implementing a VNA: Challenges and things to look for.

The Vendor Neutral Archive (VNA) was initially touted as the greatest thing since sliced bread. It promises an end to data migrations, the ability to manage your own data, and freedom from being held hostage by a PACS vendor’s proprietary implementations of annotations, measurements, key images, and non-standard compression algorithms for the image data. In addition, a VNA promises to be able to handle non-DICOM data, communicate with outside Health Information Exchanges (HIEs) using standard protocols as defined by IHE, and, last but not least, allow access by a uni-viewer through a standard interface.

But then, what happened? Reality struck, and it appeared not to be as easy as people thought. Many users are going through some serious growing pains and are on the “bleeding edge” with early implementations. I have gathered the ten most commonly heard issues here so you can be prepared when you are ready to implement a VNA in your organization.

1.       Not all VNAs are equal: despite the fact that there has been an effort to categorize and label the different VNAs according to their level of implementation (see related article), this has not been universally accepted. I found several vendors at the latest RSNA meeting advertising that they have a “level 5” VNA; however, it appears that vendors do not like to advertise that they have a level 4, 3, 2 or even only a level 1. This makes sense, as they would rather not advertise what they don’t have. In any case, many institutions are struggling with a VNA that misses the functionality needed to allow it to work optimally because it has been either improperly labeled or not labeled at all, creating unrealistic expectations.

2.       Synchronization has become a major issue: this is related to the missing functionality mentioned under item (1), i.e. the VNA does not provide a mechanism for a PACS to synchronize updates and changes with it. In one particular example, the VNA has to be updated manually with all the changes that are made on the PACS. Imagine an image being deleted or updated, or a patient and/or study being merged or split. A PACS administrator knows to do this on the PACS, but if he or she has to do it again on the VNA, it might be forgotten completely or not done in a timely fashion. This leads to the information being out of sync, and if these are critical updates, such as names or left/right indicators, there is a potential safety and legal liability. IOCM (Imaging Object Change Management) support will go a long way toward resolving this.

3.       IOCM support is a requirement: IOCM is a standard that specifies how local changes that are applied to existing imaging objects can be communicated to other devices managing those objects. A PACS connected to a VNA is a good example of two such systems. IOCM communicates not only the rejection of images for quality or patient safety reasons, or because of incorrect patient selection resulting in misidentification, but also their expiration due to data retention requirements. The information is exchanged using rejection notes that are encoded as DICOM Key Object Selection documents, thereby following existing mechanisms for encoding and communicating this information. Not every VNA supports this yet.
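As a rough illustration of how such a rejection note could be put together, the sketch below uses the pydicom library to build a bare-bones Key Object Selection dataset that references a rejected instance; it is deliberately incomplete (no file meta information, no full content tree), and the rejection reason code shown reflects my reading of the DICOM "DCM" coding scheme, so verify the details against the IOCM profile before relying on it.

```python
# Rough sketch of an IOCM-style rejection note as a DICOM Key Object
# Selection (KOS) dataset, using pydicom. Deliberately incomplete:
# no file meta information and no full SR content tree.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence
from pydicom.uid import generate_uid

def rejection_note(study_uid, series_uid, rejected_sop_class, rejected_sop_instance):
    kos = Dataset()
    kos.SOPClassUID = "1.2.840.10008.5.1.4.1.1.88.59"  # Key Object Selection Document
    kos.SOPInstanceUID = generate_uid()
    kos.StudyInstanceUID = study_uid
    kos.SeriesInstanceUID = generate_uid()
    kos.Modality = "KO"

    # Document title carries the rejection reason (DCM coding scheme,
    # per my reading of the IOCM profile).
    title = Dataset()
    title.CodeValue = "113001"
    title.CodingSchemeDesignator = "DCM"
    title.CodeMeaning = "Rejected for Quality Reasons"
    kos.ConceptNameCodeSequence = Sequence([title])

    # Reference to the rejected image instance.
    ref_image = Dataset()
    ref_image.ReferencedSOPClassUID = rejected_sop_class
    ref_image.ReferencedSOPInstanceUID = rejected_sop_instance
    ref_series = Dataset()
    ref_series.SeriesInstanceUID = series_uid
    ref_series.ReferencedSOPSequence = Sequence([ref_image])
    ref_study = Dataset()
    ref_study.StudyInstanceUID = study_uid
    ref_study.ReferencedSeriesSequence = Sequence([ref_series])
    kos.CurrentRequestedProcedureEvidenceSequence = Sequence([ref_study])
    return kos

note = rejection_note(
    study_uid="1.2.3.4", series_uid="1.2.3.4.5",
    rejected_sop_class="1.2.840.10008.5.1.4.1.1.2",   # CT Image Storage
    rejected_sop_instance="1.2.3.4.5.6",
)
print(note.ConceptNameCodeSequence[0].CodeMeaning)
```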

4.       Tag morphing is not simple: tag morphing involves changing the image header to make a PACS and its workstations behave in a way that facilitates efficient workflow and preferences. There are typically two steps: the first is “normalizing” the data on the inbound channel into the VNA to fix any violations and/or standardize patient and related study information. The second step involves accommodating, on the outbound channel, the specific needs and peculiarities of a particular PACS. The most important tags to be changed relate to the decisions on what to prefetch as prior studies and to configuring the proper hanging protocols. This is especially critical when using a VNA for multiple institutions, as the wide variety of non-standardized procedure descriptions, series descriptions, and scan protocols will become obvious. That, in addition to the peculiar behavior of certain PACS workstations, makes proper tag morphing a challenge that might severely impact workflow. Hopefully, with better PACS workstation implementations and further standardization of header parameters such as the protocols and other information, this will become less of an issue.
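As a simple illustration of the two steps, the sketch below uses pydicom to apply an inbound normalization pass followed by an outbound, PACS-specific mapping; the mapping table, default values, and destination name are made-up examples, not a recommended rule set.

```python
# Simplified two-step tag morphing sketch using pydicom.
# The mapping table and tag choices are made-up examples only.
from pydicom.dataset import Dataset

SERIES_DESCRIPTION_MAP = {          # hypothetical local-to-standard mapping
    "SKULL AP": "HEAD AP",
    "CHEST PA ERECT": "CHEST PA",
}

def normalize_inbound(ds: Dataset) -> Dataset:
    """Step 1: normalize the data on the way into the VNA."""
    desc = str(getattr(ds, "SeriesDescription", "")).strip().upper()
    ds.SeriesDescription = SERIES_DESCRIPTION_MAP.get(desc, desc)
    # Record the issuer of the patient ID for later MPI use (hypothetical default).
    if not str(getattr(ds, "IssuerOfPatientID", "")):
        ds.IssuerOfPatientID = "MAIN_HOSPITAL"
    return ds

def morph_outbound(ds: Dataset, destination: str) -> Dataset:
    """Step 2: accommodate the peculiarities of a particular destination PACS."""
    if destination == "LEGACY_PACS":          # hypothetical PACS name
        series_desc = str(getattr(ds, "SeriesDescription", ""))
        if series_desc:
            # This PACS hangs studies by StudyDescription, so copy it over.
            ds.StudyDescription = series_desc
    return ds

ds = Dataset()
ds.SeriesDescription = "Skull AP"
print(morph_outbound(normalize_inbound(ds), "LEGACY_PACS").StudyDescription)  # HEAD AP
```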

5.       A universal worklist is becoming a necessity: the universal or aggregate worklist provides a physician with a list of studies to be read from different PACS systems. Realize that the connection between the PACS (or in some cases RIS) workflow manager and the vendor workstations is proprietary and very tight, as the worklist manager needs to coordinate, for example, multiple readers all reading from the same list, which can be sorted by modality, body part, or other user preferences. This allows for synchronization so that, for example, as soon as a chest radiologist picks a study to read, a colleague who reads the same specialty will see in his or her worklist that the study is already being read. On the other hand, they would not see the nuclear medicine or PET/CT exams, as those might be reserved in another worklist for another physician to read. A VNA, by contrast, allows access from multiple PACS systems, but there is no easy way to synchronize this access between different radiologists. There is an existing DICOM standard for workstation worklists, but it is rarely implemented. Therefore, using the VNA as a source for day-to-day reading is not quite possible yet.

6.       Migration is not trivial: a typical scenario would be for an institution to purchase a VNA, migrate the bulk of the data from the existing PACS to the VNA, and then replace the PACS, which is connected to the VNA for access to all studies older than, let’s say, 3 to 6 months. Depending on how “dirty” the initial data is, the number of studies that cannot be migrated ranges from 1 to 5 percent. It is not uncommon for a typical mid-size institution to end up with anywhere from 5,000 to 10,000 unreconciled studies as a result of the migration process, which will take a full-time person several months to resolve. This activity is often underestimated or forgotten. A good practice is to have a data analysis done whereby a subset of your old PACS data is evaluated for its “dirtiness” so you can plan accordingly, as sketched below. Most migration companies will provide this service as part of their migration contract.
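A minimal sketch of such an analysis is shown below, assuming the legacy studies are available as DICOM files on disk and using pydicom; a real migration vendor will look at far more attributes than these.

```python
# Minimal "dirtiness" scan over a directory of legacy DICOM files,
# using pydicom. Real migration analyses check far more attributes.
import os
from collections import Counter
import pydicom

REQUIRED = ["PatientID", "PatientName", "AccessionNumber", "StudyInstanceUID"]

def scan(root: str) -> Counter:
    problems = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                ds = pydicom.dcmread(path, stop_before_pixels=True)
            except Exception:
                problems["unreadable_file"] += 1
                continue
            for attr in REQUIRED:
                if not str(getattr(ds, attr, "")).strip():
                    problems[f"missing_{attr}"] += 1
    return problems

if __name__ == "__main__":
    for issue, count in scan("/data/legacy_pacs").items():   # hypothetical path
        print(f"{issue}: {count}")
```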

7.       Prefetching is hard: studies need to be retrieved from the VNA based on certain criteria, which are encoded in the header. Proper prefetching requires both proper tag morphing, to make sure the information needed for the prefetch decisions is present, AND a set of sophisticated rules that can be programmed by a user. The prefetch rules depend on the modality; for example, for mammography they might include a certain number of previous studies, which could depend on user preferences. Rules also depend on body part; for example, it does not make sense to prefetch a previous MRI of a knee to compare with a head CT, but it does make sense for a head MRI. This again assumes that we have standard protocol and body part descriptions, because a routing engine might not know that a skull is the same as a head.
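A toy version of such a rules engine might look like the sketch below; the rules table, the body part synonym list, and the study record layout are invented purely for illustration.

```python
# Toy prefetch rules engine. The rules, synonyms and record layout are
# invented for illustration; a real engine would be far more elaborate.
BODY_PART_SYNONYMS = {"SKULL": "HEAD", "BRAIN": "HEAD", "THORAX": "CHEST"}

# (modality of the current study, body part) -> number of priors to prefetch
PREFETCH_RULES = {
    ("MG", "BREAST"): 3,   # e.g. three prior mammograms
    ("MR", "HEAD"): 1,
    ("CT", "HEAD"): 1,
}

def normalize_body_part(body_part: str) -> str:
    body_part = body_part.strip().upper()
    return BODY_PART_SYNONYMS.get(body_part, body_part)

def select_priors(current: dict, history: list) -> list:
    """Pick prior studies of the same, normalized body part, newest first."""
    key = (current["modality"], normalize_body_part(current["body_part"]))
    how_many = PREFETCH_RULES.get(key, 0)
    candidates = [
        s for s in history
        if normalize_body_part(s["body_part"]) == key[1]
    ]
    candidates.sort(key=lambda s: s["study_date"], reverse=True)
    return candidates[:how_many]

history = [
    {"modality": "MR", "body_part": "SKULL", "study_date": "20120105"},
    {"modality": "CT", "body_part": "CHEST", "study_date": "20130310"},
]
current = {"modality": "MR", "body_part": "HEAD", "study_date": "20140110"}
print(select_priors(current, history))   # the prior "skull" MR is matched as a head study
```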

8.       Report storage should not be forgotten: reports have always been kind of a stepchild but are very important, especially if there is no easy access to prior images, or if a report contains detailed quantitative information such as the measurements used in ultrasound and cardiology. Some PACS systems store reports in the PACS archive, some rely on the RIS, some rely on a report server, and some have them stored in a broker. It makes sense to migrate these reports and store them in the same study as the images, typically identified with a different series description. A potential solution would be to migrate them using an HL7 outbound interface and map them into a DICOM structured report. Again, this is a non-trivial effort, especially since the corresponding image study cannot always be easily identified because of mismatches.
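A heavily simplified sketch of that mapping is shown below: it pulls the narrative text out of the OBX segments of an HL7 ORU message and wraps it in a minimal pydicom dataset shaped like a Basic Text Structured Report; the identifiers are hypothetical and the result is not a complete, valid SR object.

```python
# Heavily simplified HL7 ORU -> DICOM SR mapping sketch using pydicom.
# Identifiers are hypothetical and the result is NOT a complete, valid SR.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence
from pydicom.uid import generate_uid

def oru_report_text(hl7_message: str) -> str:
    """Concatenate the observation value (field 5) of all OBX segments."""
    texts = []
    for line in hl7_message.replace("\r", "\n").split("\n"):
        if line.startswith("OBX"):
            fields = line.split("|")
            if len(fields) > 5:
                texts.append(fields[5])
    return "\n".join(texts)

def report_as_sr(hl7_message: str, study_uid: str) -> Dataset:
    sr = Dataset()
    sr.SOPClassUID = "1.2.840.10008.5.1.4.1.1.88.11"   # Basic Text SR
    sr.SOPInstanceUID = generate_uid()
    sr.StudyInstanceUID = study_uid                    # must match the image study
    sr.SeriesInstanceUID = generate_uid()
    sr.Modality = "SR"
    sr.SeriesDescription = "Migrated report"           # hypothetical label

    text_item = Dataset()
    text_item.RelationshipType = "CONTAINS"
    text_item.ValueType = "TEXT"
    text_item.TextValue = oru_report_text(hl7_message)
    sr.ContentSequence = Sequence([text_item])
    return sr
```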

9.       Patient ID and accession number reconciliation is a must: it is not uncommon for a cardiology PACS to use a different patient ID than a radiology PACS, even within the same institution. If the VNA is used to serve multiple facilities, including outpatient clinics that might not be on the same registration system, it is a given that the IDs will be different and non-unique, and reconciling them is needed. Therefore, some kind of Master Patient Index (MPI) functionality is required, i.e. reconciling patients that have different IDs and assigning an internal unique number. The same applies to accession numbers, as these are typically only unique within the so-called “filler order” numbering issued by a department scheduling system. Many PACS systems require unique accession numbers; therefore, part of the normalization and/or tag morphing could include modifying them. A common fix for the accession number would be to prefix it with a two-character origination code, assuming that the original number does not exceed 14 characters (the maximum accession number length is 16 characters).
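As a trivial illustration, the sketch below prefixes accession numbers with a two-character site code while enforcing the 16-character limit; the site codes are made up.

```python
# Trivial sketch: make accession numbers unique across sites by prefixing
# a two-character origination code, within the 16-character limit.
SITE_CODES = {"radiology_main": "RM", "cardiology": "CA"}   # made-up codes

MAX_ACCESSION_LENGTH = 16

def prefix_accession(accession: str, site: str) -> str:
    code = SITE_CODES[site]
    if len(code) + len(accession) > MAX_ACCESSION_LENGTH:
        raise ValueError(
            f"'{code}{accession}' exceeds {MAX_ACCESSION_LENGTH} characters"
        )
    return code + accession

print(prefix_accession("20140123001", "cardiology"))   # -> CA20140123001
```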

10.   Foreign exam management needs to be addressed: CDs are typically imported locally into a PACS, which assigns the correct patient ID, makes sure the patient information is consistent with the existing information in the local system, and preferably stores the change history in the DICOM header according to the IHE profile for import reconciliation. That study is then sent to the VNA, which applies its own normalization rules and tag morphing. It is easier, and makes more sense, to do the import/export directly on the VNA, thereby bypassing this first step.


The issues listed above are the ones that have surfaced so far. I expect that as the VNA gains a more crucial role in image distribution, additional issues will come up with deploying uni-viewers, implementing Health Information Exchanges, and connecting other “ologies” beyond the most common radiology and cardiology. As an example, most level 5 VNA implementations provide an XDS-I image exchange capability; however, there have been very few “takers” actually using this feature, as image exchange is still very much done through cloud services and brokers instead. In any case, I suggest that users and vendors first address the issues listed above and then take the next steps towards further integration, being prepared to address image exchange issues as they will definitely arise.