Tuesday, February 26, 2013

A primer on troubleshooting tools for healthcare imaging and IT.


Many healthcare imaging and IT professionals have to deal with troubleshooting, testing, and validating connectivity and interoperability. There can be multiple objectives for this testing:

· A software engineer might test his or her newly developed software,
· An integration engineer needs to test connectivity between different devices,
· Application engineers test interoperability,
· Service and support people try to determine why something does not work or stopped working, and
· System administrators need to deal with finger pointing between different vendors and locate the problem source.
Another important test activity is acceptance testing by a user, often represented by a consultant, to determine whether the system works as specified and meets the initial requirements.
A common categorization of the different test tools is as follows: test systems, simulators, validators, sniffing tools, and test data sets. Many of them are available for free or as open source; some require a modest licensing fee. Test data is generated by either standards organizations or trade associations. The sections below outline the characteristics of these tools: when they are used and which tools I recommend, followed by a list of where to download them and find tutorials on how to use them.

1. Test systems:

Test systems are either a copy of the system to be diagnosed or a system with equivalent or very similar behavior. For example, if you have a PACS, you might have a “test server,” which is another license of the same database as used for the production PACS. The test system could run on a high-powered, stand-alone mini-server, which could possibly store images for a week or so. Most users negotiate for a test system to be included as part of a new purchase. A recent OTech survey showed that about 40 percent of PACS users have a test system. Another option, in case you don’t have a test system, is to use a free or open source PACS application such as Conquest or DCM4CHEE.
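As an illustration of the kind of basic connectivity check one would run against such a test PACS, here is a minimal DICOM verification (C-ECHO, the “DICOM ping”) sketch in Python using the open source pynetdicom library. The host, port, and AE titles are placeholders; substitute whatever your own Conquest or DCM4CHEE installation is configured with.

    # Minimal DICOM C-ECHO ("DICOM ping") against a test PACS,
    # using the open source pynetdicom library.
    from pynetdicom import AE

    ae = AE(ae_title="TEST_SCU")
    # 1.2.840.10008.1.1 is the Verification SOP Class UID
    ae.add_requested_context("1.2.840.10008.1.1")

    # Placeholder address, port, and called AE title
    assoc = ae.associate("127.0.0.1", 11112, ae_title="DCM4CHEE")
    if assoc.is_established:
        status = assoc.send_c_echo()
        if status:
            print("C-ECHO status: 0x{0:04X}".format(status.Status))
        assoc.release()
    else:
        print("Association rejected or aborted")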

In addition to the PACS “back-bone,” one should always have at least one, and preferably two, additional test viewers from different vendors for displaying images. Examples of such viewers are K-PACS, ClearCanvas, and OsiriX for Macs, among several others; all of these are freely available.

For EMRs, I found that it is uncommon for users to have a test system available, which is kind of surprising. In many cases, a production server might be loaded with test data at system installation time, but as soon as the user training is complete and the system goes operational, this information is typically wiped to get ready for the production data. EMRs also differ quite a bit in their functionality and interfaces; therefore, a free or open source EMR might not be as useful as a test PACS. One could, however, use the CPRS/VistA EMR, which was developed by the Department of Veterans Affairs and is available as open source.

The best-known interface engine that is available for free is Mirth. It maps between several different interface protocols, but it is at its best when used for HL7 Version 2 message mapping. I found it somewhat hard to use, but paid support is available for anyone who needs help configuring the mapping rules.
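Mirth itself is configured through its own channels and transformers, but to give an idea of what such a mapping rule operates on, here is a hedged sketch that parses an HL7 v2 message in Python with the open source “hl7” package and pulls out two fields (PID-3 and PID-5) that mappings typically touch. The sample message content is made up for illustration.

    # Parse a (fictitious) HL7 v2 ADT message and read the fields a
    # typical mapping rule would touch, using the "hl7" package
    # (pip install hl7). Segments are separated by carriage returns.
    import hl7

    message = "\r".join([
        "MSH|^~\\&|CPOE|HOSP|PACS_BROKER|RAD|20130226120000||ADT^A01|MSG0001|P|2.3",
        "PID|1||123456^^^HOSP^MR||DOE^JOHN||19600101|M",
        "PV1|1|I|RAD^^^HOSP",
    ])

    msg = hl7.parse(message)
    pid = msg.segment("PID")

    print("Patient ID   (PID-3):", pid[3])
    print("Patient name (PID-5):", pid[5])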

One can use a test system to test new modality connections to a PACS, to test new interfaces (e.g. lab or pharmacy) to an EMR, or to reproduce certain errors. In the case of a new image acquisition modality connection, one could create test orders that show up on a test worklist (the DCM4CHEE PACS has this capability) and query that worklist from the modality. This allows the mapping from the orders to the DICOM worklist to be tested, and any additional configuration to be tuned to make sure that the worklist does not contain too many or too few entries. The same applies to external interfaces, e.g. lab or pharmacy to an EMR. It is usually better to test connectivity prior to actually going live.
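For reference, here is a sketch of what such a modality worklist query (a DICOM C-FIND) looks like in Python with pydicom and pynetdicom (1.5 or later). The host, port, AE titles, and query keys are placeholders for your own test setup.

    # Query a test worklist provider (e.g. DCM4CHEE) for today's
    # scheduled CT procedures using a DICOM C-FIND.
    from pydicom.dataset import Dataset
    from pynetdicom import AE
    from pynetdicom.sop_class import ModalityWorklistInformationFind

    ae = AE(ae_title="TEST_MODALITY")
    ae.add_requested_context(ModalityWorklistInformationFind)

    # Query/return keys: empty values are returned, filled values filter
    ds = Dataset()
    ds.PatientName = ""
    ds.PatientID = ""
    ds.ScheduledProcedureStepSequence = [Dataset()]
    item = ds.ScheduledProcedureStepSequence[0]
    item.Modality = "CT"
    item.ScheduledProcedureStepStartDate = "20130226"

    assoc = ae.associate("127.0.0.1", 11112, ae_title="DCM4CHEE")
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(ds, ModalityWorklistInformationFind):
            # 0xFF00/0xFF01 are "pending" responses carrying a match
            if status and status.Status in (0xFF00, 0xFF01) and identifier:
                print(identifier.PatientName, identifier.PatientID)
        assoc.release()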

There are those who use these test systems as the basis for their production environment, i.e. as their primary clinical system. For example, it is not inconceivable to use VistA as the EMR, the Mirth interface engine as the HL7 router, DCM4CHEE as the PACS and modality worklist provider, and ClearCanvas for image viewing. However, there are potential liability issues with using non-FDA-approved and/or non-certified software for medical purposes, especially if it is used for primary diagnosis of human patients. For veterinary use, on the other hand, these open source PACS systems are in relatively widespread clinical use. I would not recommend using any of these in a production environment unless you have a strong IT background or can rely on a strong IT department or consultant.

2. Simulators:

A simulator is a hardware and/or software device that looks, to the receiver, identical or similar to the device it is simulating. An example would be a modality simulator that issues a worklist query to a scheduler, such as provided by a RIS, and can send images to a PACS. If the simulator assumes the same addressing (AE title, port number, IP address) as the actual modality, such as an MRI, and sends a copy of the same images, the receiver treats the data exactly as if the transaction came from the actual device. The same can be done with a lab simulator talking to an EMR, exchanging orders and results, or with a CPOE simulator sending orders and arrival messages. The advantage is that these simulators provide a “controlled” environment while providing extensive logging.
These simulators are typically used to test connectivity before an actual operational system is available, and to simulate and resolve error conditions. They can also be used for stress testing and for evaluating performance issues. One should note, however, that a simulator does not exactly reproduce the behavior of the device it is intended to simulate. If there are timing-related or semi-random problems, one should keep the original configuration intact as much as possible and use sniffers instead to find out what is going on. I use an HL7 CPOE simulator and a DICOM modality simulator, available from OTech. One could also use the various DVTK simulators, but these are not trivial to use and are therefore almost exclusively used by test and integration engineers. The DVTK simulation tools are also programmable using a proprietary script, which makes them very useful for exception, performance, and error testing and simulation.
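The core of a modality simulator, stripped of worklist queries, MPPS, delays, and logging, is simply a DICOM C-STORE of a test image. Here is a minimal sketch with pydicom and pynetdicom; the file name, host, port, and AE titles are placeholders.

    # Read a sample DICOM file and "act as a modality" by sending it to
    # a (test) PACS with a DICOM C-STORE.
    from pydicom import dcmread
    from pynetdicom import AE

    ds = dcmread("sample_ct_image.dcm")   # any valid test image

    ae = AE(ae_title="CT_SIMULATOR")
    # Request a presentation context matching the image's SOP class
    ae.add_requested_context(ds.SOPClassUID, ds.file_meta.TransferSyntaxUID)

    assoc = ae.associate("127.0.0.1", 11112, ae_title="TEST_PACS")
    if assoc.is_established:
        status = assoc.send_c_store(ds)
        if status:
            print("C-STORE status: 0x{0:04X}".format(status.Status))
        assoc.release()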

3. Validators:

A validator is a software application that validates a protocol or data messaging format against a standard set of requirements or good practices. These are extremely useful for testing by development and integration engineers, especially for new releases and new products.
I am amazed by how many errors I find when running a simple DICOM image against a validator. I personally believe that there is no excuse for these errors as these tools are available for free in the public domain. DICOM protocol and data formats can be validated using DVTK.
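DVTK is the tool of choice for thorough validation; purely as an illustration of the kind of checks involved, the sketch below uses pydicom to flag a few missing or empty attributes in a DICOM file. The attribute list and file name are illustrative only, and this is in no way a replacement for a real validator.

    # Flag a few obviously missing or empty attributes in a DICOM file.
    from pydicom import dcmread

    REQUIRED = [  # a small, illustrative subset of required attributes
        "SOPClassUID", "SOPInstanceUID", "StudyInstanceUID",
        "SeriesInstanceUID", "PatientID", "Modality",
    ]

    ds = dcmread("image_to_check.dcm")
    for keyword in REQUIRED:
        value = getattr(ds, keyword, None)
        if value in (None, ""):
            print("Missing or empty:", keyword)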

Another useful tool provided by DVTK is “file compare.” If there is any suspicion about data integrity, i.e. whether a vendor adds information to or removes information from a header, which could cause problems, one can simply compare the original with the “processed” version to see the differences. In addition, this compare tool can be configured to filter out certain attributes and highlight the ones that one is looking for. I have used this tool to verify that a software upgrade which supposedly did not impact the data format indeed left the header unchanged, by running it against the same image before and after the upgrade.
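In the same spirit, a header compare can be sketched in a few lines of Python with pydicom: list the data elements that differ between an original and a “processed” file, skipping attributes that are expected to change. The file names and the ignore list are placeholders.

    # Compare the headers of two DICOM files and print the differences.
    from pydicom import dcmread

    IGNORE = {"SOPInstanceUID", "InstanceCreationDate", "InstanceCreationTime"}

    before = dcmread("original.dcm")
    after = dcmread("processed.dcm")

    for tag in sorted(set(before.keys()) | set(after.keys())):
        name = before[tag].keyword if tag in before else after[tag].keyword
        if name in IGNORE:
            continue
        old = before.get(tag)   # DataElement or None
        new = after.get(tag)
        if old is None or new is None or old.value != new.value:
            print(tag, name, ":", getattr(old, "value", "<absent>"),
                  "->", getattr(new, "value", "<absent>"))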

The HL7 Messaging Workbench, developed by the late Pete Rontey of the VA, is a great test, simulation, and validation tool for HL7 Version 2 messaging.

For information exchanges between EMRs, the CDA data format is emerging as the standard. This is an area where we can expect a lot of potential issues in the near future as these EMRs are being rolled out. The data format and its compliance with the required templates can be verified online on the NIST website.
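Before submitting a document to a full validator such as the NIST tool, a quick local sanity check can confirm that the expected templateId is at least declared. The sketch below does this with Python's standard XML library; the file name and template OID are placeholders, so use the OID of the template required by the relevant HL7 or IHE guide.

    # Check which templateIds a CDA document declares in its header.
    import xml.etree.ElementTree as ET

    NS = {"cda": "urn:hl7-org:v3"}
    EXPECTED_TEMPLATE = "2.16.840.1.113883.10.20.22.1.1"   # placeholder OID

    root = ET.parse("discharge_summary.xml").getroot()
    declared = [t.get("root") for t in root.findall("cda:templateId", NS)]

    print("Declared templateIds:", declared)
    print("Expected template present:", EXPECTED_TEMPLATE in declared)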

4. Sniffing software:

Sniffing software requires access to the information that is exchanged. This can be arranged by installing the software on one of the devices involved in the connection to be monitored (either the sender or the receiver of the information), by tapping in at a network switch, or by connecting the sniffer to the link through a simple hub. This can be somewhat of an issue, as many institutions clamp down on their networks and do not allow a “listening” device to be connected, fearing that it compromises network integrity. The de-facto standard for sniffing and analyzing DICOM connections is Wireshark, which used to be called Ethereal.

One does not have to do the capturing oneself, however: the network engineer can provide the so-called .cap file, which can be captured with any of the available commercial sniffer and network management applications, and the analysis can then be done separately using Wireshark. Sniffers are used to detect semi-random and not easily reproduced errors, to troubleshoot situations where the error logs are either incomprehensible or inaccessible, or to prove that data is changed before the information is sent. A combination of a sniffer and a validator is especially powerful; for example, one can load a capture file into the DVTK analyzer/validator and analyze both the protocol and the data format.

Using a sniffer is often the last resort, but it is an essential tool for those hard-to-diagnose problems. For example, I have used one to diagnose a device that would randomly issue an abort, causing part of a study to fail to transfer, to examine the errors exchanged in the status codes of the DICOM responses, to find query responses that did not quite match the requests, and to resolve many other semi-random problems. One can easily configure the sniffer to capture all of the traffic from a certain source or destination, store it in a rotating buffer, and start analyzing the information when the problem occurs.
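As a sketch of such a rotating-buffer capture, the snippet below starts tshark (the command-line companion of Wireshark) from Python with a ring buffer of ten files of roughly 100 MB each, capturing traffic on port 104 (the traditional DICOM port). The interface name, port, and sizes are assumptions to adjust to your environment, and capturing usually requires administrative rights and the network team's blessing.

    # Start a ring-buffer capture of DICOM traffic with tshark.
    import subprocess

    cmd = [
        "tshark",
        "-i", "eth0",                 # interface to listen on
        "-f", "tcp port 104",         # capture filter: default DICOM port
        "-w", "dicom_capture.pcap",   # base name of the capture files
        "-b", "filesize:100000",      # rotate after ~100 MB (value in kB)
        "-b", "files:10",             # keep at most 10 files (ring buffer)
    ]
    subprocess.run(cmd, check=True)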

5. Test data

If a problem occurs with clinical data, it is often hard to determine whether the problem is caused by corrupt or incorrectly captured data, or whether it is a result of the communication and processing of the information. Therefore, having a “gold standard” set of data is essential. Imagine a radiologist complaining that an image looks “flat,” too “dark” or too “light,” or just does not have the characteristics he or she is used to seeing. In that case, being able to pull up a reference image is invaluable. There are not only sample images but also sample presentation states, structured reports, and CDA documents available.

Most of the test data objects are created by IHE to test conformance with one of their profiles. For example, there are extensive data sets available to test the proper display of all of the different position indicators (and there are quite a few) on digital mammography images, together with the correct mapping of CAD marks.

The same applies to testing the imaging pipeline, for which there are more than a hundred different test images encoded using almost every possible combination and permutation of pixel size and photometric interpretation, including presentation states. The nice thing is that the data is encoded such that the displayed result should always be identical. For example, one image may have a header that says the information should be inverted, with the pixel data stored inverted as well, so that the net effect is that the image looks the same as the non-inverted test sample.
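To make the inversion example concrete, here is a sketch of the photometric-interpretation step of a display pipeline in Python with pydicom and NumPy: MONOCHROME1 pixel data is inverted before display, MONOCHROME2 is used as-is, so matching test images from such a set should end up looking identical on screen. The file name is a placeholder, and a real pipeline would of course also apply windowing, VOI LUTs, presentation states, and so on.

    # Photometric-interpretation handling in a (simplified) display pipeline.
    import numpy as np
    from pydicom import dcmread

    ds = dcmread("pipeline_test_image.dcm")   # placeholder file name
    pixels = ds.pixel_array.astype(np.float64)

    if ds.PhotometricInterpretation == "MONOCHROME1":
        # Invert: the lowest stored value is displayed as the brightest
        pixels = pixels.max() - pixels

    # ...windowing, VOI LUT, and presentation state would follow here...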

It is easy to load all of these images onto a workstation, where you will see almost immediately for which images the pipeline is broken. This is a great test to run as part of an acceptance test or after a new software upgrade is installed on your workstation, and you would be surprised how many systems do not render all of these correctly.

For verifying display and print consistency, the AAPM has created a set of recommendations and test images, both clinical and synthetic, which are invaluable for determining whether your display or printer supports the Grayscale Standard Display Function, also referred to as “the DICOM curve,” and, if so, whether it is properly calibrated according to that standard. A simple visual check of whether certain parts of the test pattern are visible indicates compliance or the potential need for recalibration. Even if one uses non-medical-grade displays, there is no reason NOT to calibrate a monitor or printer according to this standard (there are after-market devices and software available to do this) and to make sure they stay in calibration.
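For reference, the Grayscale Standard Display Function is defined in DICOM PS3.14 as luminance versus a Just Noticeable Difference (JND) index running from 1 to 1023. The sketch below implements that formula in Python; the coefficients are reproduced here from memory of PS3.14, so verify them against the standard before using this for anything beyond illustration.

    # Grayscale Standard Display Function ("DICOM curve") from DICOM PS3.14.
    import math

    def gsdf_luminance(j):
        """Return luminance in cd/m^2 for JND index j (1 <= j <= 1023)."""
        a, b, c = -1.3011877, -2.5840191e-2, 8.0242636e-2
        d, e, f = -1.0320229e-1, 1.3646699e-1, 2.8745620e-2
        g, h, k, m = -2.5468404e-2, -3.1978977e-3, 1.2992634e-4, 1.3635334e-3
        x = math.log(j)   # natural logarithm of the JND index
        num = a + c * x + e * x**2 + g * x**3 + m * x**4
        den = 1 + b * x + d * x**2 + f * x**3 + h * x**4 + k * x**5
        return 10 ** (num / den)

    # The curve spans roughly 0.05 to 4000 cd/m^2 over the full JND range:
    print(gsdf_luminance(1), gsdf_luminance(1023))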

In conclusion, I am convinced that any connectivity issue can be made visible, located, and resolved by using the right set of test, simulation, and validation tools together with a wide variety of test data. It is just a matter of learning how to use these tools and applying them in the appropriate circumstances. In addition, they are invaluable for acceptance testing and for preventing potential issues. Healthcare IT systems are not plug-and-play and never will be; healthcare imaging and IT professionals will therefore need to master these tools to ensure data integrity.

Where to find information and how to access the tools mentioned above (all of them are free or open source unless noted otherwise):

Monday, February 4, 2013

AA: Risks of a make-over.


Living in the Dallas Metroplex, I am highly dependent on American Airlines for all my air travel and therefore follow its developments closely. When I left recently for a meeting, I spotted at DFW airport one of the first airplanes that had gone through a make-over, i.e. with new colors and a new logo. Rumor has it that the flight attendants will also be outfitted with brand new uniforms. I guess the reason for the make-over is to create the perception that this is a new beginning, supposedly resulting in a more customer-focused corporation.

Here are some of my recommendations for AA to become more customer focused:

-Be focused on leaving on time: One time, there were not enough meals, which caused the crew to call the catering representative, who had to have his supervisor re-count the meals, and then the supervisor of the supervisor recount them as well, until finally, 45 minutes later, three additional meals were brought in and we could leave. This might have caused some of my fellow travelers to almost miss their connections. On another airline, I have seen the captain himself run across the hall to get a few hamburgers from McDonald's to cover for the missing meals, and we left on time.

-Keep your bathrooms clean: Anyone who has ever flown a 10+ hour flight knows that the toilets look like a battlefield at the end of the flight. If you are lucky, there is still toilet paper left and the sink is not disgusting, but the chances are small. Now take Japan Airlines: at any time during the flight, the toilet is spotless and the paper is even nicely folded. What a difference.

-Get rid of those ancient planes as soon as you can: imagine a large-screen monitor with more movie channels than I have on my cable at home, nice leather seats, excellent food, and free beer and wine on international flights. That describes most other airlines, but Delta and Emirates are the best in my opinion.

There are many more recommendations; the bottom line is that the risk of trying to create a new perception top-down is that, if there are no actual changes “inside,” it might actually backfire. This applies to any company. To be honest, in this particular case, I have not noticed any difference flying American this week, and I am skeptical that changes are “in the air.” But hopefully I am wrong.

IHE Connectathon: standardization over the top?

The ultimate plug-fest for healthcare IT

The IHE (Integrating the Healthcare Enterprise) has organized so-called Connectathons since 1997. These events, also sometimes called plug-fests, provide an opportunity to test and demonstrate device and system interoperability, and to resolve compatibility issues. 

This particular event has grown steadily, and this year in Chicago more than 500 engineers and monitors worked diligently for a week to test 163 healthcare IT systems representing 101 vendors from all over the globe. The number of tests that were verified was close to 3,000.

This year, however, the number of attendees was flat compared with last year, which raises the question: why? The healthcare IT industry is definitely booming, there is a big emphasis on implementing healthcare IT as part of the $20 billion American Recovery and Reinvestment Act (ARRA) incentives for deploying electronic health records, and healthcare is one of the fields where new innovations are definitely happening. Here are my guesses for the reasons behind the stall:

1.    We have reached the max with regard to standardization - As of now there are twelve domains ranging from radiology to pathology, including eye care, oncology, lab, cardiology, dentistry, pharmacy, infrastructure, patient care coordination, patient care devices, and public health. Each domain has numerous profiles based on specific use cases. The IHE website states that there are 350 members with 2,000 volunteers who work on various committees. Maybe IHE has become too big and the effort has become too much, too soon, too fast.
2.    Vendors can’t keep up - Imagine you are a vendor and you have to update your software pretty much annually to meet the various IHE integration requirements. Market-driven companies might decide that there are other priorities that should be addressed before interoperability. The only way this would ever change is through customer pressure, i.e. if supporting an integration profile is either high on the requirement list, or, even better, a make-or-break purchasing requirement. However, despite the initiatives by RSNA and HIMSS to promote and advertise the need for interconnectivity and, in particular the need to support new IHE profiles, interconnectivity still appears to be low on the list of customer requirements.
3.    The need for formal certification - It might be that the idea of a volunteer-driven, loosely managed activity is getting outdated and is being replaced by more formal testing using standard guidelines, resulting in officially certified systems. The pros and cons of certification versus the more or less informal plug-fests are discussed elsewhere (see link). So maybe it is time to switch gears.
4.    Standards have become too complex - If you have ever looked at a sample clinical document (CDA), for example the one used to capture information for a discharge from an emergency department, you will have seen that it is a very verbose XML document including ten pages of coding, which is not easy to interpret. It literally requires one either to be involved with the HL7 standardization effort or to go through intensive training to be able to work with these documents. One could argue that other standards (DICOM comes to mind) are not that easy either, but people seem to have learned to work with them through the years. However, these new standards bring complexity to another level, which definitely creates a barrier to implementation and deployment.
5.    Standards are still not tight enough - Despite the complexity and the extensive encoding used, there is still a connectivity problem, as semantic interoperability is still missing. It is not sufficient to merely exchange pieces of data; if the context is interpreted differently, adding this information to a database might even be dangerous. If that means that we can only convey certain information in its original context and information model, we could resort to just sending, for example, a PDF document, which would make the current encoded information exchanges massive overkill.
6.    Vendors are not motivated - It is not always in the commercial interest of a company to support standards; it may rather prefer to keep its systems proprietary. A good example is the evolution of PACS systems. Even though some of the components, notably the image archive in the form of a VNA, were extracted from the monolithic PACS systems, the workstations are still very tightly connected with the image manager of these same systems, despite the presence of workstation worklist standards. Except for a few open source products, vendors have been very hesitant to implement these standards, wanting to protect their turf from third-party workstation vendors.

Back to the Connectathon, I noticed the actual testing and validation process has definitely improved. There are relatively robust validation tools and plenty of samples available. In addition, we used to look for information structures in the past, now new tests and checks are available that also evaluate the content. This is overall a major positive development. However, the questions still remain whether or not we have reached a plateau or even the top.  Of course, I could be wrong and next year’s attendance could be back up proving that 2013 was just a fluke or temporary dip. But, I don’t think so; maybe we will get a better idea of the reason behind this by then.