Monday, December 7, 2015

My 2015 RSNA top ten.

View from the cafe to the exhibition floor
This year's annual radiology tradeshow at McCormick Place in Chicago drew about 10 percent fewer attendees than last year. That is most likely attributable to the unfavorable dollar exchange rate, which makes the meeting expensive for non-US attendees, and to the fact that last year was a banner year, as it marked the 100th anniversary meeting.

It was my 32nd meeting and I consider this year's one of the uneventful ones. I had a hard time finding any new products, let alone innovations. Of course, vendors would like you to believe otherwise, but I could not get excited about a new release of an existing device or product, or about vendors catching up with the competition or with technology that was already introduced several years back. In any case, here is my top ten list of noteworthy observations:

1.       VNA is out for the PACS vendors: Several PACS vendors have started to realize that the Vendor Neutral Archive (VNA) is here to stay, and that instead of trying to offer one themselves, it is better to accept the presence of specialized VNA vendors in the marketplace and work either around or with them. Offering a VNA as part of your PACS product line does not make that much sense anyway, as it defeats the purpose of uncoupling the image management and archiving component in a "vendor neutral" offering if all of the components are yours. A PACS vendor has no incentive to uncouple these; instead of implementing an open standard for synchronizing the PACS and VNA, such as IOCM (Image Object Change Management), it will continue to tightly couple them and use its own proprietary communication anyway.

As a consequence, it seems as if the ground between PACS and VNA vendors has been divided and both are starting to live with it. It was also interesting to see that some of the archive vendors are starting to provide dashboards and add analytic tools to create additional value, which makes sense: it is all about using "big data," and remember, digitization of imaging has been widespread for at least 15 years, while EMR implementations in the US only started to take off during the past five years, so there is potentially much more data to work with.

2.       What happened with the deconstructed PACS? The good news is that the doom and gloom portrayed during the recent SIIM meeting, where PACS was declared dead and a best-of-breed solution, coined a "deconstructed PACS," was presented as the way forward, was absent during this meeting. It takes a lot of work to tie all the pieces together, and system integration is challenging; it is exactly what PACS vendors have been doing for many years. The ability to provide image exchange between multiple vendors, with access and prefetching from multiple old, obsolete and retired PACS systems, is what providers are looking for. That includes access to imaging from different specialties and departments, which can be challenging, as some of the large institutions might have 50 or more locations and/or departments that create images. The consensus seems to be that a "constructed" PACS makes more sense than a deconstructed one for most institutions.

3.       Do we need yet another certification? During the event, the new RSNA Image Share Sequoia project was launched with a presentation by Dr. David Mendelson from Mount Sinai. This is an activity sponsored by RSNA that will initially focus on validating image sharing between different institutions using IHE profiles such as XDS, XCA and XPHR. It is kind of a follow-on to the eHealth Exchange, which is mainly for exchanging documents and was quoted as being used by 35 percent of all US hospitals. Talking with some of the vendors, there seems to be a concern about yet another venue for testing and validation in addition to the annual IHE connectathon, which happens in multiple locations (USA, Europe, Asia) and where a vendor can be IHE certified using the ICSA lab tools. The good news is that the core of the Sequoia tools is going to be based on the MESA toolset as used by the IHE connectathon; nevertheless, I can understand the concerns from a vendor perspective. Time will tell which method or venue, if any, will eventually prevail.

The "O-Arm" (Pac man?)
4.       The evolution of the C-arm: Most innovations are not really revolutionary but rather evolutionary. A good example is the evolution of the C-arm, intended to provide limited fluoroscopy and also basic x-ray imaging in the ER and OR. There are now variations such as the "O-arm," which has a movable opening, and the "G-arm," which combines two detectors and sources to take two views at once.


Specialized orthopedic unit
5.       Cone beam CT scanners are becoming more popular: Cone beam CT scanners, which take volumetric scans using only a single rotation, use a flat detector instead of the narrow, cylindrical detector arrangement used in a conventional CT scanner. They are predominantly used for dental applications, as the high resolution is very well suited to create data for dental implants, and they are relatively affordable for this specialty ($100-200k). Their application is now being extended to head imaging as well as extremities. At least two vendors showed them for orthopedic applications, and vendors are also promoting them for more extensive head and sinus imaging. The dose reduction compared with a regular CT is significant, i.e. a factor of 4 to 5; as a matter of fact, one vendor markets it as a low-dose CT system.



6.       Thermography is becoming mainstream:
New Thermography therapy modality
This is a good example of a relatively new modality that is being applied not only as a diagnostic device, which has been around for at least five years, but now also as a therapy tool. It is typically used in addition to radiation therapy to treat cancers, and apparently heating the tissue using microwaves gives better radiation treatment results. It is applied using several heat sources arranged in a circle so that the treated organ can be targeted from different directions, very much like regular radiation therapy, while the skin is cooled by a water bag touching the surface.

Promotion of DBT claiming that 39% of women are covered by DBT
7.       Digital Breast Tomosynthesis (DBT) is becoming the norm: Hologic has set the standard in this field and other vendors are still trying to catch up, but there is no question that DBT will replace conventional 2-view breast x-ray imaging. I would have expected breast MRI to be a better candidate to replace it, but I guess the cost and contrast issues make it not yet a viable alternative. Finally, most PACS vendors now facilitate the new DICOM objects that are created by the DBT modality, instead of having to deal with the proprietary solutions that were used initially. But handling these studies, especially prefetching them, is still challenging, as they are about ten times the size of a conventional mammogram study. In addition, despite the fact that the radiology community has accepted that this technology probably yields a couple more findings for every thousand studies, I have not found anyone who really likes to read and report on them, mostly because it takes 3 to 4 times longer to read these studies.
Affordable wireless DR plate technology

8.       DR is starting to get affordable: Most vendors are not even displaying their CR systems in their booths anymore, as they are really pushing their DR plate technology. I would argue that for countries outside the US, when converting to digital for the first time, a CR solution is still not only more affordable but also less risky, as dropping a DR plate can cost $40,000 or more. But in the US, most customers opt for digital plate solutions, as plates are getting more affordable, with wireless plates going for around $30k (and insurance coverage for plate drops being available). Plate vendors are reporting record sales, i.e. growth of 50 percent a year for these plates.


9.       3-D models replacing films? 3-D printing has been demonstrated for several years, but the technology is now becoming mainstream as it makes its way into the consumer world; you can buy a 3-D printer at Amazon or Walmart for under $1,000. There was a lot of talk about this technology at this year's event. Interestingly, computer visualization apparently is not as good as the real-world, touchable models that are used by surgeons, for example, to determine the best way to operate on a defective heart. This use of 3D printed models was recently featured by CNN as being especially useful for unusually complicated procedures.

Medical casters? Really?
10.   Medical grade what? One of the first lessons I learned when I started working in the medical field is that the "medical grade" label sells. It is often just a marketing tool. For example, there are medical grade CDs and DVDs, which are basically the same as what you can buy at Best Buy or Walmart but with a different label or color. The company I worked for at one time used to simply spray-paint its standard computers (DEC PDP-11s at that time) white to be able to charge more for medical grade. Of course, if there is indeed a difference, such as with medical grade monitors, which typically allow for DICOM calibration and automatic luminance control, the term makes sense. But in general, I'd always be suspicious when I see these labels on what appear to be commodity products.

Macy's window display.
RSNA belongs to Chicago: RSNA would not be RSNA without the grumpy (and that is an understatement) cab drivers, expensive food, and almost unaffordable lodging that you find in Chicago.
An added feature this year, however, were the protests spurred by the release of a video of a shooting that had occurred more than a year ago; they shut down Michigan Avenue on the Friday before the event and again during the week. And then, there is nothing better than walking in the brisk air coming off Lake Michigan, the window displays at Macy's on State Street, and meeting people from all over the world on the bus going from your hotel to the event. Even though there might not be many new things again next year, I am pretty sure I'll be there. In case I missed you this year, I hope to see you then!



Monday, November 2, 2015

To pre-check or not to pre-check TSA? That’s the question.

This is what the pre-check line looks like, assuming there is one close by...
When you travel as much as I do (2 million plus miles on American alone) and also have a Global Entry membership, allowing you to zoom through customs, there is a good chance you qualify for TSA pre-check.

The idea is that you can use a special check-in line at the airport, you do not have to get an X-ray taken but rather go through a metal detector, and you don't have to take off your shoes, jacket or hat or take your computer out of your carry-on. It is supposed to take less time and be more convenient, at least that is the idea. But that is often not how it turns out, especially if you fly out of DFW airport. First of all, not every security entry point at DFW has a TSA pre-check lane, so you need to know where these are; somehow, I always seem to end up at the wrong gate. For example, let me share my most recent check-in experience.

I was getting my boarding pass and checking in my luggage at C-39. As you might have noticed, gate agents have all but disappeared from the counters. You check in at a machine and have to go through several menus asking whether you want to spend a few extra dollars on upgrades, buy frequent flyer miles (no idea why anyone would ever want to do that), or will have a young child in your lap. Then a luggage tag and boarding pass are printed by the kiosk, and you have to put the tag on and bring the luggage to a person whose only job is to put it onto the belt (seems redundant to me). Then I walk to the check-in area, which has a sign showing "closest pre-check at gate 20." Decisions to make: less hassle, but more walking. I have time, so I decide to walk the half mile or so back to gate 20.

At gate 20, there are about 20 people in the pre-check line and one or two in the regular line. Well, I walked all the way to gate 20, so I might as well choose the hassle-free line, and I join the long one. From experience, there are always a couple of people in the wrong lane anyway, and indeed, a young couple as well as two gentlemen who seemed kind of lost had to be rerouted to the non-pre-check lane after they reached the front of the line.

Finally, I am assisted by a firm-looking female TSA agent who smiles at me and signs off on my boarding pass after having a good look at my mug-shot driver's license picture. Yes! I can put my backpack and carry-on with my computers on the belt and zoom straight through, sneakers and all. I read an article that the floor in these metal detectors and/or X-ray machines carries a very high bacterial count; as a matter of fact, I don't understand why people (mostly women, it seems) often go through with bare feet, without socks or those little surgical socks that some airports provide. I guess a good reason for the pre-check after all.

But my carry-on does not make it through unnoticed. Not uncommon, as I carry a lot of electronics that I need for my training class: my own personal laptop, one training laptop that serves as a server for students to connect to and simulate a healthcare IT system ("PACS-in-a-box"), and an additional one that simulates a student laptop for hands-on practice exercises. If I take all three out at the check-in, it typically annoys my fellow travelers; I sometimes joke "you can never have enough laptops," which seems to alleviate the annoyance factor somewhat. So the TSA agent takes my bag, takes out two of the three laptops plus the wireless router, which I also use in my class to let students set up a virtual hospital network, and puts it all through the x-ray machine again. After that delay, I collect my things, put my computers and router back into the carry-on, fetch my phone and belt from my backpack, and walk another half mile back to gate 39.

It took me 15 minutes longer to use the pre-check line, so after making a quick phone call I am somewhat late in boarding, but still in time to get in the priority check-in lane to make sure I can put my luggage into the overhead bins and don't have to check it (which, based on experience, increases the chance that it will not make it to the destination on the same flight). The good news is that I got to keep my sneakers on and got extra exercise (helps my fitbit step counter), but is it worth it? I think next time I'll go for the "regular" check-in lane; it allows me an extra few minutes to get a cup of Starbucks coffee and a newspaper.

Tuesday, October 6, 2015

Challenges with X-ray Dose Reporting.

In my travels teaching healthcare imaging and IT in other countries, I notice certain areas that are definitely ahead of other regions and also some that are lagging behind. The US is finally catching up with widespread EMR implementations due to the ARRA incentives, which require practitioners to meet Meaningful Use requirements unless they want to be penalized by future reimbursement cuts. We are still lagging behind on wide-scale implementations of Health Information Exchanges, especially as incentives are running out for creating public ones, but the private ones set up by large provider groups are really taking off.

From a regulatory perspective, the HIPAA regulations were a game-changer and had a major impact, not only on policies and procedures, but also on the way that systems implement security and privacy, for example, tracking activity through audit trails. Here a lot of countries follow the US example. The same is now happening with the requirement to track X-ray dose: unlike the US, few other countries and/or states are actively enforcing this yet.

However, as with many regulatory requirements, technology is running behind and trying to catch up to the requirement to track dose, especially from CT scans, but also from fluoroscopy, angiographic and cardiology procedures. Several incidents a few years ago, whereby hundreds of patients were severely overdosed at several major medical centers, triggered stricter dose regulations. Before that, it was common to track, or at least record, the dose only for radiation therapy and digital mammography.

In radiation therapy, there is a thin line between over-exposing, and therefore hurting the patient's recovery too much, and under-exposing, thereby not killing all the cancer cells and risking a recurrence; therefore, dose registration has been a practice all along. For digital mammography, there is a similar concern that too much and too frequent screening could cause, rather than prevent, cancer, and even though I am not aware that the dose information has actually been tracked, it has been practice to at least record it with the images in the DICOM header.

For CT, fluoroscopy, angiography and cardiology procedures, recording dose was not common until it became a recent requirement. The good news is that new modalities are now being shipped with the capability to export detailed dose information in the form of DICOM Structured Reports. The bad news is that these represent a relatively small portion of the installed base, and that the infrastructure to archive and register this information is not there yet. The result is a patchwork of different solutions to capture this information: using the DICOM header as is done for digital mammography, capturing it as part of the Modality Performed Procedure Step (MPPS), or as a screen save, aka Secondary Capture.
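To make the first option concrete, here is a minimal sketch, assuming the pydicom library, of harvesting dose-related attributes from image headers. The tag selection and file name are illustrative assumptions; which attributes are actually populated varies widely by vendor and modality.

```python
# A minimal sketch of the "DICOM header" capture option using pydicom.
from pydicom import dcmread

# Dose-related attributes a modality may (or may not) fill in:
DOSE_KEYWORDS = [
    "CTDIvol",                             # (0018,9345) CT dose index
    "ImageAndFluoroscopyAreaDoseProduct",  # (0018,115E) DAP, fluoro/angio
    "OrganDose",                           # (0040,0316) used in mammography
    "EntranceDoseInmGy",                   # (0040,8302)
]

def extract_dose_info(path):
    """Return whatever dose values are present in one image header."""
    ds = dcmread(path, stop_before_pixels=True)  # header only, skip pixels
    return {kw: getattr(ds, kw) for kw in DOSE_KEYWORDS if kw in ds}

print(extract_dose_info("ct_image.dcm"))  # hypothetical file name
```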

The MPPS solution is hampered by sparse implementations for a variety of reasons: lack of knowledge and understanding in the user community, resistance to activating it by the modality vendors, as it is more work and potentially increases support issues, and finally, poor implementations. An example of the latter is a major PACS vendor that closes the study upon receiving the MPPS complete message, thereby causing all images sent after that to become "unverified," requiring intervention. And even if you were able to make MPPS work, the dose information in this transaction does not have a lot of detail and does not necessarily reflect the complete exposure, especially for fluoroscopy exams, where the part of the exam that does not create images is not included in the dose recording.

The screen-save solution is somewhat better, but also limited with regard to the information that is recorded, and it requires some type of intelligence in the form of Optical Character Recognition (OCR) or screen scraping to collect the dose information in a format that can be electronically stored in a database.
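Here is a hedged sketch of what that screen scraping can look like, assuming pydicom, Pillow and the pytesseract wrapper around the Tesseract OCR engine. It further assumes an uncompressed, 8-bit grayscale Secondary Capture dose screen; the file name is hypothetical and the vendor-specific parsing of the recognized text is left out.

```python
# OCR the pixel data of a Secondary Capture dose screen (sketch only).
from pydicom import dcmread
from PIL import Image
import pytesseract

ds = dcmread("dose_screen.dcm")           # hypothetical file name
img = Image.fromarray(ds.pixel_array)     # pixel data to a PIL image
text = pytesseract.image_to_string(img)   # raw text, still needs parsing
print(text)
```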

Talking about a database: as of today, there are few PACS vendors that offer a dose management solution as part of their PACS system. Users might have to rely on third parties to collect that data and, if needed, forward it to a regional or national registry. There is an open source solution called RADIANCE (see footnote), but as with all open source software, there could be concerns about support and sustainability.

So, despite the requirement for dose registration and dose management, which is only going to be extended to more modalities and specialties in the future, the technology is still catching up, and the healthcare imaging and IT professionals who are tasked with installing and supporting it are being challenged. Hopefully, as more Structured Report solutions and dose management applications become available, this will become easier.

Footnote: Journal of Digital Imaging 2013 Aug; 26(4): 663-667: An Interactive RADIANCE Toolkit for Customizable CT Dose Monitoring and Reporting

Saturday, September 19, 2015

Truths and myths about a deconstructed PACS.

The term "deconstructed PACS" has been floating around, and it might be confusing and/or new to many; hence this attempt to shine some light on the topic.

In the past, other adjectives have been used for the PACS system, and they came and went; for example, the integrated PACS, the RIS (Radiology Information System) driven PACS, the PACS-driven workflow, and several others. In the meantime, the RIS is pretty much dead, as most hospitals now have an EMR (Electronic Medical Record) with order entry capability in the form of a CPOE (Computerized Physician Order Entry) function, so there is no use talking about an integrated RIS/PACS or a RIS-driven PACS anymore.

So, what is the deconstructed PACS in my opinion? It is nothing other than using a best-of-breed approach or commoditization for the individual PACS components. It is a logical evolution of what has been happening since PACS started all along.

Going back in PACS history, the initial PACS systems were a "package deal" containing hardware and software for the archive and the viewing stations, including dedicated, expensive video cards and monitors. This was true until the hospital IT departments got involved, especially at the large providers, who had a contract with, for example, HP, Dell or IBM to provide all of their hardware, which could amount to literally thousands (or more) of computers a year. Most of them can buy the hardware much cheaper than the PACS vendors can, and therefore they wanted to provide their own hardware for the viewing stations.

The same thing happened with the archives. For example, an IT shop that had all EMC hardware, with its support and maintenance agreement, would be very hesitant to bring in another vendor just for the PACS. Remember that the PACS, from a hospital perspective, is just another piece of the healthcare IT puzzle, together with the other department systems, the HIS, billing, etc. To make the point: when I visit a hospital computer room I can appreciate their perspective, as the PACS servers take up just a few of the many cabinets in one of the several rows of hardware.

Some of the first vendors very smartly addressed that requirement and started to sell PACS software licenses only while specifying minimum hardware specs such as required CPU, memory and OS version, which did shake up the industry and pretty much everyone started to follow. We are now at the point that you can provide your own hardware, including the medical grade monitors, video boards, computers, servers and storage devices for all PACS components.

After that, the VNA or Vendor Neutral Archive was introduced. Users were getting tired of having to migrate data every time they changed PACS vendors, so they wanted to take control over their data, purchase the archive from a different vendor, and use the PACS archive mainly as a cache to allow for fast access to the most recent studies.

In the meantime, more "ologies" wanted to manage their images electronically, and again, the CIOs were not allowing each department to buy yet another archive for, let's say, cardiology, dentistry, surgery and other images. The users also found out that not all images are in a DICOM format; for example, speech pathology, physical therapy and dermatology were all storing native JPEGs and MPEGs, and ophthalmology has been creating PDFs containing very detailed and specific results. Moreover, the push to share this information in a standard manner with Health Information Exchanges (HIEs) forced these VNAs to become a gateway to the outside world using standard protocols such as XDS. And if you need access to all of the patient's images to create a patient-centered view, you might as well access the VNA instead of all the individual PACS systems, using a zero footprint viewer.

An additional problem with uncoupling the archive, i.e. the VNA, from the PACS is synchronization. If an image has to be modified or deleted on the PACS, you don't want to have to do that twice; voila, the IHE IOCM (Image Object Change Management) profile provides that functionality. So we have arrived at an archive device (VNA) that supports DICOM and non-DICOM objects, can talk XDS, supports the PACS synchronization, and has a standard DICOM viewer interface. At least, this is a "true VNA" in my definition.
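For a feel of how IOCM works on the wire, below is a minimal, non-production sketch, using pydicom, of the general shape of the Key Object Selection "rejection note" that carries the change notification. The referenced UIDs are made up, only a subset of the required KOS attributes is shown, and the document title codes should be taken from the IOCM profile (DICOM CID 7011) rather than from this example.

```python
# Sketch of an IOCM-style rejection note (Key Object Selection document).
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid

kos = Dataset()
kos.SOPClassUID = "1.2.840.10008.5.1.4.1.1.88.59"  # Key Object Selection
kos.SOPInstanceUID = generate_uid()
kos.Modality = "KO"

title = Dataset()                      # document title, per IOCM / CID 7011
title.CodeValue = "113001"
title.CodingSchemeDesignator = "DCM"
title.CodeMeaning = "Rejected for Quality Reasons"
kos.ConceptNameCodeSequence = [title]

ref = Dataset()                        # the instance being rejected
ref.ReferencedSOPClassUID = "1.2.840.10008.5.1.4.1.1.2"  # CT Image Storage
ref.ReferencedSOPInstanceUID = "1.2.3.4.5.6.7.8.9"       # hypothetical
series = Dataset()
series.SeriesInstanceUID = "1.2.3.4.5.6.7.8"
series.ReferencedSOPSequence = [ref]
study = Dataset()
study.StudyInstanceUID = "1.2.3.4.5.6.7"
study.ReferencedSeriesSequence = [series]
kos.CurrentRequestedProcedureEvidenceSequence = [study]
```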

So far we have deconstructed the PACS hardware and software, the archive and the viewer, which actually caused some "experts" to announce that PACS is dead. That brings us to myth number one: that the deconstructed PACS means "PACS is dead," which is definitely a misstatement. We still need PACS systems that manage the department workflow, provide very fast access to images, and efficiently provide very specific tools and image presentation in the form of hanging protocols to deal with specialties such as cardiology, mammography, dentistry, nuclear medicine, and last but not least, general radiography. This is in addition to features such as peer review capabilities and critical results reporting.

Going down the path of deconstruction, we also need so-called workflow managers that manage the workstations. These started out as a solution for radiologists who serve multiple hospitals, each of which might have a different PACS system. Instead of having to log into different worklists for each individual PACS, the workflow manager combines or aggregates these worklists into a single one, while synchronizing the reading between multiple users and PACS systems. The step from different PACS systems to a single archive (VNA) that has images from different sources is not that hard to make, hence the next step, i.e. a workflow manager from a different vendor. The step to use a best-of-breed viewer, from yet another vendor, is then easy to make.

So far we have deconstructed the archive, workflow manager and viewer; however, we need additional middleware to make this work. In particular, we need to clean up the data as it comes from different sources, such as series and procedure descriptions, especially if the images are created at different institutions with their own terminology. For this, a DICOM cleaner or "tag morphing" software is needed. In addition, if you don't have a universal ID, you need a Master Patient Index or MPI capability. Last but not least, to get some of the details needed from the orders, you need an interface engine that consolidates all of the HL7 feeds.
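As an illustration of the tag-morphing idea, here is a minimal sketch using pydicom. The mapping table and the rule key (series description only) are simplifying assumptions; real cleaners are rule engines that also key off the sending AE Title, procedure codes, and more.

```python
# Normalize series descriptions to one enterprise lexicon (sketch).
from pydicom import dcmread

SERIES_DESCRIPTION_MAP = {            # local term -> enterprise standard term
    "CHEST PA/LAT": "XR CHEST 2 VIEWS",
    "CXR 2V":       "XR CHEST 2 VIEWS",
    "Thorax 2 pr.": "XR CHEST 2 VIEWS",
}

def clean_tags(path):
    ds = dcmread(path)
    original = ds.get("SeriesDescription", "")
    ds.SeriesDescription = SERIES_DESCRIPTION_MAP.get(original, original)
    ds.save_as(path)
```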

As of now we have five components, the VNA, workflow manager, viewer, DICOM router/tag morpher, and HL7 interface engine, plus an optional MPI provider. You can purchase each one of those from a different vendor, which is the best-of-breed approach.

This brings us to myth number two: that a deconstructed PACS is less expensive, which is not necessarily true. The reason is the effort involved with integrating, testing and maintaining such a diverse system. Assuming that you have a strong IT department and educated imaging and biomedical engineering resources, it could definitely be less expensive and provide best of breed, i.e. a solution that better meets your specific requirements and workflow. The truth is that for many institutions this is not an option. They might want to deconstruct the PACS only to the level of the VNA, and even in that case they might be better off purchasing the VNA from the same vendor as their PACS. If you do so, however, I would strongly recommend requiring standards support at each level, so that you can replace any of the components, either for business reasons or because the vendor simply does not keep up with new developments. As an example, you would be surprised how many vendors still don't support 3-D breast tomosynthesis images, or even the new enhanced multi-frame CT and MR, in their viewer, in which case you might be better off looking for a different solution.

The last myth is that a deconstructed PACS is something new, which is incorrect. It is just a logical evolution of what started 15 years ago, when the PACS was opened up with standard interfaces, allowing the replacement of its several parts by different vendors. So, the truth is that there is nothing new under the sun, which might take some of the marketing hype away from the vendors, but that is OK; they just have to come up with another term to sell what is basically the same old thing.

In the meantime, as a user, it is important to make sure that any new PACS purchase allows for deconstruction, by requiring standards support and by verifying and testing it, and to continue to get educated and stay knowledgeable in order to provide the more sophisticated support these systems need.

Thursday, September 10, 2015

What's new from HIMSS ASIAPAC15

There was a pretty good-size exhibition hall and it was well attended
The HIMSS Asia Pacific conference was held Sept. 6 to Sept. 10 in Singapore this year and, despite a good attendance (1,700 professionals), most of the attendees were local, which might not have been such a surprise if you looked at the speaker roster, as most of the speakers were from the local region as well.

In my opinion, it would have been better to call it HIMSS Singapore, as that would have reflected the venue more accurately. The good news is that I learned quite a bit about the initiatives and challenges in Singapore, which basically come down to the fact that there are going to be too many elderly people too soon to be supported by too few workers, which will require a smarter approach to how the country conducts its business. Hence the term "Smart Nation," introduced by the Singapore government last year, which includes smart transportation, infrastructure, and homes, but foremost smart healthcare. However, even though the incentive for implementing digital technologies in healthcare is different from that in the US and other countries, the objective is identical, i.e. doing things smarter and therefore more efficiently, safely and effectively, and the same solutions can be used.

Singapore still has a long way to go with regard to electronic health record implementations: they are about five years behind where the US is right now. With regard to imaging (PACS) implementations, they are about 2-3 years behind. They have home-grown EMR systems that are not interoperable, and they are just starting to think about enterprise image archiving using a Vendor Neutral Archive. The standards-based approach to healthcare IT implementation using IHE is still not well understood and/or appreciated. The good news is that they can learn from implementations in the US, as well as in countries with advanced healthcare IT implementations such as Canada, the UK and other western countries, and use the lessons learned to jump-start their own. That will very likely start happening in the next one to three years, which would be expected if they are serious about "smart healthcare."

There were three major takeaways from the conference: the potentially disruptive impact of wearables, the use of demographic and social overlays on top of clinical data, and the use of "big data" in a cognitive manner.

First, wearables. When people think about wearables, they might think of the Apple Watch and other devices you can wear that measure the number of steps, hours of sleep, and even your pulse. But sensors are now getting very thin and can be incorporated into clothes or other wearables to measure additional physiological parameters such as glucose level. This information can be uploaded into a personal health record, and a healthcare practitioner can monitor the data and potentially intervene if needed. With most people now having smartphones, they can be connected in real time, allowing for 24/7 monitoring. The impact of this could be huge, despite the "big brother" connotation. Sometimes, having a "big brother" watching over you could be a lifesaver or, at a minimum, prevent unnecessary ER or doctor visits. And if a visit is necessary, it could very well be done remotely in many cases.

The second potentially disruptive innovation is the use of demographic data overlaid on clinical information from the EMR. For example, overlaying the number of re-admissions, diabetes patients, or people with any other condition that might be socially or culturally related on top of a map showing the geographical distribution of the patients can give clues as to how to address such health issues. This means that patients are no longer treated only as individuals, but in their social and cultural context. Let's say that a specific area has a high incidence of diabetes; you might want to do an educational session about nutrition in that region's community center.

The third thing I learned has to do with the use of "big data." While the reality hasn't lived up to the hype so far, there are applications that use big data effectively in a cognitive manner to assist physicians in making treatment decisions. A good example is the IBM Watson application, which is used by major cancer centers to assist in the treatment of oncology patients. The patient's condition and characteristics, which are in the EMR, are compared with a database of thousands of similar cases and tens of thousands of published articles to come up with a suggested treatment plan. You might think that patients do not always fit the statistics, but then the next step is to take the genomics data into account to come up with a truly personalized treatment. The power of this process lies in using the intelligence of IBM Watson, which runs in the cloud.

So, in conclusion, HIMSS ASIAPAC15 had some insights and learning experiences to offer, and it was good to know what is going on in Singapore. However, it is definitely a regional meeting, and there were a lot of vendor-sponsored presentations that talked about products rather than new technologies. As an educational event, you might be better off traveling to the HIMSS Annual Conference & Exhibition in Las Vegas, Feb. 29 - March 4, 2016.


Thursday, September 3, 2015

An update on PACS professional certification.

PACS certification for imaging and information professionals has been around for about 10 years.
PARCA was established on January 1, 2005, and the CIIP certification by ABII (American Board of Imaging Informatics) was created not too long after that. Since then, it is estimated that more than 2,000 people have been PACS certified, and certification is still going strong as new people enter this field or experienced professionals look to update their skills and get certified in their profession.
The titles CIIP, CPAS, CPSA and CPIA are becoming well known and used by professionals in this field. When talking with recruiters, they agree that certification is definitely an advantage when looking for a job opportunity, as it shows that one has gone through the effort to study this subject matter and earn certification through a rigorous exam.

Certification is a requirement when you work in a clinical field, for example, as a radiological technologist, nurse or physician. Interestingly enough, it is not required when you work in the healthcare imaging and informatics field. However, potential legislation has been considered, in particular in the state of Texas, which would require everyone involved with maintaining and supporting medical devices to be certified. And yes, PACS is a medical device under the US federal FDA guidelines requiring specific clearance to be allowed on the market.

There are several other certifications in the healthcare imaging and informatics field; for example, HL7 certification is quite popular among interface analysts and developers. More than 4,500 people are HL7 certified as of now, more than twice as many as those who hold PACS certifications. The CPHIMS (Certified Professional in Healthcare Information and Management Systems) certification is also well known and highly regarded in the IT community.

With the increase of PACS implementations in the Middle East and the Asia Pacific region, there has also been an uptick in the number of PACS certificates issued in these regions. OTech just did the first PACS certification training in Singapore and has performed several in the Middle East over the past few years, responding to the increasing demand for training in these areas. However, the number of PARCA certificates issued in Asia is still only about 2 percent of the overall total, compared with the 23 percent share of CPHIMS certifications in that region. The Middle East has an 11 percent share of the CPHIMS certifications, while for PARCA it has a 5 percent share.

The statistics for PARCA certification holders are shown in the table below.

PARCA Certifications by country (as of 9/1/15)

Region                        Count    Share
USA                             973      80%
Canada                          119      10%
Middle East                      62       5%
Asia                             28       2%
Europe                           23       2%
Australia and New Zealand        11       1%
Latin America                     3       0%
Total                         1,219     100%


As you can see, 90 percent are in the US and Canada; however, the international portion is expanding fast. One of the main reasons for the international following of the PARCA certification is its international board and governance, as well as the capability to take a certification exam with on-line proctoring, which makes it accessible from literally any place in the world at any time.

Certification is definitely a good thing to have. And remember, it is not just about the piece of paper you can show, which could be an advantage if you want to change jobs, but more about the journey, i.e. the fact that you have to upgrade your skill set and learn about topics that you might not get involved with in your regular day-to-day work but that are important to know and understand.



Thursday, August 20, 2015

Top ten DICOM Do’s and Don’ts-part 2.

When configuring a modality, view station, or PACS, many might be tempted to leave its default factory configuration as-is; however, that might be a big mistake, as it can impact your workflow, create current or future incompatibilities, or violate what would be considered "best practices." Following up on my initial list (see link), here are some additional tips:

1.       Configure MPPS - I typically do an informal poll of my students in our DICOM training and ask how many of them use MPPS. The response is that about 20% of the audience use it, another 20% tried it but had issues, 20% did not want to bother, and 40% had no clue what I was talking about. The people who did not want to bother are often service engineers setting up a new modality, who did not have access to the destination of the MPPS to configure it and did not want to wait for the local IT and/or PACS service engineer to coordinate this feature, as it requires additional work and testing. The people who use MPPS are typically enthusiastic about it, as it eliminates additional steps in the workflow, in particular the need for a technologist to check the image count at the PACS, close the exam at the RIS, or make changes to the procedure manually at the RIS, instead of relying on the MPPS to convey that information electronically. There is a minority, which I estimate might be 20%, where the MPPS workflow does not quite match the current workflow, mostly for specific modalities, such as ultrasound, where you want to be able to complete a study on another modality, which could be challenging; but again, these are exceptions. For the vast majority of cases, MPPS is underutilized, and using it could improve your data integrity and your efficiency. Therefore I strongly recommend configuring MPPS "on" at your modalities, PACS and RIS (see the first sketch after this list).

2.       Configure Storage Commitment (STC) - This is another workflow-enabling DICOM capability that is often turned off. One of the reasons it is not always used is identical to the reason for not using MPPS: it might be too much trouble for the service engineer to set up, or it caused issues, which can often be traced back to not configuring it correctly. STC hands off the responsibility for archiving a DICOM object, in most cases the images, to a PACS archive, for example. It could also be used very effectively to hand off that responsibility from a PACS to cloud storage or to an enterprise archive and/or VNA. A problem occurs when STC is configured at a modality but not at the destination, e.g. the PACS. Because the sending device does not allow images to be automatically deleted from its local storage unless a storage commitment acknowledgement has been received, its local storage might get full at a certain point, preventing new acquisitions. If this happens, the short-cut is to configure STC "off." This could potentially lead to images being lost due to the lack of a handshake, or require a technologist to manually check whether all of the images actually made it to the destination, which is again extra work and subject to human error. There is one more configuration parameter related to STC that specifies whether the Storage Commitment request and response are processed on the same DICOM Association or a different one. The DICOM protocol allows for either option, requiring the configuration of the sender and receiver to match behaviors (see the second sketch after this list).

3.       Switch off promiscuous behavior by a receiver - In the context of networking, being configured as promiscuous means that a device will talk with anyone who happens to use its IP address and port number. Some devices are only promiscuous for certain features; for example, a PACS might allow someone to query its database but not allow a device to retrieve images unless it is configured to do so. I would argue that best practices require that any device that wants to communicate with, for example, a PACS system must be configured accordingly: if a new "strange" modality wants to exchange information with a device, it is not allowed to do so unless it is added to its DICOM tables. The reason people use the promiscuous mode is, again, that it is easy and less work, as additions and changes do not have to be updated in the configuration files. However, being strict about who is allowed access is good security practice (see the third sketch after this list).

4.       Create only DICOM compliant exchange media - When burning a CD with patient images or copying a study to a flash drive, make absolutely sure that you create a DICOM compliant medium. Using a file copy command will not do that, as there are very specific requirements about the encoding of the images (using the Explicit VR Little Endian Transfer Syntax for most use cases) and the presence of a directory (DICOMDIR). Some vendors even provide the option to create a proprietary format that can only be viewed with the embedded viewer on the CD, which is useless in case the images need to be imported into another PACS system, so never do that. Your best bet is to use an after-market CD reader and writer program, which is typically more user friendly than what is provided by the PACS, but also much more careful about creating DICOM compliant CDs. And in case someone created a non-compliant CD, these after-market systems often can read the proprietary formats as well (see the fourth sketch after this list).

5.       Use compression only for specific use cases - Compression can be applied at a modality, a gateway, or an archive. Lossless compression at a long-term archive definitely makes sense, as it reduces the amount of storage needed by a factor of at least 2, and often 3. Compression at the modality makes sense for ultrasound, especially for cardiology echo, where several series can be acquired that would otherwise result in large studies requiring a lot of storage and potentially slow transmission over the network; lossy compression can reduce the file sizes by a factor of 10 to 15. For angio and cardiology X-ray it also makes sense to use compression, as a typical study could contain up to 10 runs. The DICOM standard allows for quite a list of different compression schemes, but I suggest sticking to basic JPEG for lossy and lossless compression, as its implementation is widespread and incompatibility issues are relatively rare (see the fifth sketch after this list). For endoscopy and surgery, the encoding is determined at the source, as the cameras that capture the images have already converted the data to an MPEG file; one needs to make sure that MPEG is supported by the receiver, in addition to the correct MPEG version (MPEG-2 or MPEG-4).

6.       Use PDF instead of Secondary Capture - When scanning documents into the PACS, or when capturing a screen such as from a bone densitometry device, most systems create a bitmap as a Secondary Capture DICOM object. These objects can be quite large. Using a PDF is a much better choice because of its efficient encoding: DICOM specifies a so-called Encapsulated PDF SOP Class, which basically wraps a PDF file with a DICOM header (see the last sketch after this list). You need to make sure that the receiver supports these, but many of them do today. This is another example of a more efficient use of your bandwidth and storage.
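To make the first tip concrete, here is a minimal sketch, assuming the pynetdicom library, of the modality side of MPPS: an N-CREATE announces the exam "in progress," and an N-SET reports it completed. The host names, AE Titles and the stripped-down attribute set are assumptions for illustration; a real MPPS message carries many more attributes.

```python
# MPPS from the modality side (sketch only).
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid
from pynetdicom import AE
from pynetdicom.sop_class import ModalityPerformedProcedureStep

ae = AE(ae_title="CT_SCANNER_01")
ae.add_requested_context(ModalityPerformedProcedureStep)
assoc = ae.associate("ris.hospital.org", 11112, ae_title="RIS_SCP")

mpps_uid = generate_uid()
ds = Dataset()
ds.PerformedProcedureStepID = "PPS0001"
ds.PerformedProcedureStepStatus = "IN PROGRESS"
status, _ = assoc.send_n_create(ds, ModalityPerformedProcedureStep, mpps_uid)

# ... images are acquired and sent, then the step is closed:
done = Dataset()
done.PerformedProcedureStepStatus = "COMPLETED"
status, _ = assoc.send_n_set(done, ModalityPerformedProcedureStep, mpps_uid)
assoc.release()
```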
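Next, a hedged sketch of the sender side of Storage Commitment with pynetdicom: an N-ACTION request on the Storage Commitment Push Model. The addressing and referenced UIDs are hypothetical; note that the archive answers asynchronously with an N-EVENT-REPORT, often on a separate association, which is exactly the sender/receiver behavior match discussed in point 2.

```python
# Storage Commitment request from the sender side (sketch only).
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid
from pynetdicom import AE
from pynetdicom.sop_class import StorageCommitmentPushModel

ae = AE(ae_title="CT_SCANNER_01")
ae.add_requested_context(StorageCommitmentPushModel)
assoc = ae.associate("pacs.hospital.org", 11112, ae_title="ARCHIVE")

ds = Dataset()
ds.TransactionUID = generate_uid()   # correlates the later N-EVENT-REPORT
item = Dataset()
item.ReferencedSOPClassUID = "1.2.840.10008.5.1.4.1.1.2"  # CT Image Storage
item.ReferencedSOPInstanceUID = "1.2.3.4.5.6.7.8.9"       # hypothetical
ds.ReferencedSOPSequence = [item]

# Action Type 1 = request commitment; the class has a single well-known
# SOP Instance (1.2.840.10008.1.20.1.1).
status, _ = assoc.send_n_action(ds, 1, StorageCommitmentPushModel,
                                "1.2.840.10008.1.20.1.1")
assoc.release()
```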
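The third sketch shows what non-promiscuous behavior can look like in a pynetdicom based storage receiver: only the AE Titles in its "DICOM table" are allowed to open an association. The AE Title list is, of course, hypothetical.

```python
# A non-promiscuous storage SCP (sketch only).
from pynetdicom import AE, evt, AllStoragePresentationContexts

KNOWN_AE_TITLES = ["CT_SCANNER_01", "MR_SCANNER_02"]   # the DICOM table

def handle_store(event):
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(ds.SOPInstanceUID + ".dcm", write_like_original=False)
    return 0x0000  # Success

ae = AE(ae_title="PACS_ARCHIVE")
ae.supported_contexts = AllStoragePresentationContexts
ae.require_calling_aet = KNOWN_AE_TITLES  # reject anyone not on the list

ae.start_server(("0.0.0.0", 11112),
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```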
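For point 4, pydicom's FileSet class offers one way to build compliant media, as it writes the mandatory DICOMDIR and applies the media file-naming rules; the file and staging paths below are hypothetical.

```python
# Building a DICOM compliant media file-set (sketch only).
from pydicom import dcmread
from pydicom.fileset import FileSet

fs = FileSet()
for path in ["image1.dcm", "image2.dcm"]:   # the study to put on the CD
    fs.add(dcmread(path))
fs.write("/media/cd_staging")   # creates DICOMDIR plus the image files
```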
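For point 5, rather than showing an actual codec, here is a sketch of constraining a pynetdicom receiver to uncompressed plus the basic JPEG transfer syntaxes, in line with the advice to stick to the widely implemented JPEG codecs; whether this restriction fits your modality mix is site specific.

```python
# Restrict accepted transfer syntaxes to uncompressed + basic JPEG (sketch).
from pynetdicom import AE, AllStoragePresentationContexts
from pydicom.uid import (ExplicitVRLittleEndian, JPEGBaseline8Bit,
                         JPEGLosslessSV1)

ACCEPTED_SYNTAXES = [ExplicitVRLittleEndian,
                     JPEGBaseline8Bit,      # basic lossy JPEG
                     JPEGLosslessSV1]       # basic lossless JPEG

ae = AE(ae_title="PACS_ARCHIVE")
for cx in AllStoragePresentationContexts:
    ae.add_supported_context(cx.abstract_syntax, ACCEPTED_SYNTAXES)
ae.start_server(("0.0.0.0", 11112))
```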
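Finally, a minimal sketch of creating an Encapsulated PDF object with pydicom. Only the attributes central to the encapsulation are shown; a real implementation must fill in the patient and study modules from the order, and the file names are made up.

```python
# Wrap a PDF as a DICOM Encapsulated PDF object (sketch only).
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

ENCAPSULATED_PDF_STORAGE = "1.2.840.10008.5.1.4.1.1.104.1"

with open("report.pdf", "rb") as f:
    pdf_bytes = f.read()
if len(pdf_bytes) % 2:                  # DICOM values must have even length
    pdf_bytes += b"\x00"

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = ENCAPSULATED_PDF_STORAGE
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = meta
ds.is_little_endian = True
ds.is_implicit_VR = False
ds.SOPClassUID = ENCAPSULATED_PDF_STORAGE
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.Modality = "DOC"
ds.MIMETypeOfEncapsulatedDocument = "application/pdf"
ds.EncapsulatedDocument = pdf_bytes
ds.save_as("report_encapsulated.dcm", write_like_original=False)
```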

In conclusion, several of these tips will definitely facilitate a better workflow; it makes sense to follow these rules when installing new devices, or even to go back and make some changes to existing configurations. Again, I am sure there are more tips, and I would love to hear from you so I can expand this list for the DICOM community to use. If some of this goes a little beyond your understanding of the DICOM domain, you might want to check our core DICOM web-based training or face-to-face training in Dallas, which goes over these subjects in great detail.



Monday, August 3, 2015

Top DICOM Configuration Do’s and Don’ts.

When configuring a modality, view station, or PACS, many might be tempted to leave its default factory configuration as-is; however, that might be a big mistake, as it can impact your workflow, create current or future incompatibilities, or violate what would be considered "best practices."

Here are some of the most important best practices.

1.       Set your SOP Class configuration at a modality to the new SOP Classes - Many modalities allow images to be encoded as different SOP Classes, which are often configurable. The reason is that over the past 20 or so years, as DICOM matured, new versions of the initial SOP Classes were defined to solve problems caused by incomplete, vague and inconsistent data formats, and to facilitate new technology developments. One of the major changes was the increasing use of codes instead of text strings in the DICOM headers, e.g. for body parts and view codes, to allow for better and more reliable pre-fetching and hanging protocols of prior studies. There is one caveat: one needs to make sure that the receiving system, e.g. the PACS and/or workstation, also supports these new SOP Classes, otherwise one creates incompatibilities. Apart from that, there is no reason NOT to use the latest and greatest SOP Classes, which provide better functionality and more information. To be specific, always configure any digital radiography unit with "DX for presentation" instead of "CR," any CT, MR, XA and RF with the "enhanced" instead of the "regular" SOP Class, and any tomosynthesis breast imaging unit with "DBT" instead of any of the Secondary Capture or, worse, CT objects (see the first sketch after this list).

2.       Set your Transfer Syntax configuration at modalities and PACS to Explicit VR Little Endian (ELE) - Always configure a sender to send ELE and a receiver to accept the same in case multiple Transfer Syntaxes are proposed as part of the DICOM negotiation. Explicit VR has the advantage that the Value Representation (VR) is explicitly specified as part of the DICOM header, which is "good practice." The alternative encoding, Implicit VR, is the default transfer syntax and could potentially be a source of problems if someone were to use it to create a CD without converting to Explicit VR (the DICOM media exchange format requires Explicit VR). Note that instead of Little Endian, you could also send images in Big Endian (BE) encoding, which is relatively rare and was mostly found on older Unix/Linux based modalities. If you run into an old system that supports BE, make sure that LE is configured instead, as BE's sparse implementation could create compatibility or display issues later on (see the second sketch after this list).

3.       Use the "officially assigned" port number - Port 104 is probably the most used port number for DICOM connections; however, because of certain system limitations, vendors have been using different ones, such as ports 5004, 6000, and others. A newer DICOM port number, 11112, was assigned quite a while back by the IANA authority to the DICOM committee. Good practice would be to standardize on this port number to make it easy to manage ports at the network/router level. I would not go back and change all of your existing port numbers, but for every new device and/or upgrade, using 11112 is good practice (see the third sketch after this list).

4.       Use sensible AE Titles - With the increasing consolidations and mergers causing more image sharing, it is even more important to have non-duplicate AE Titles or, at a minimum, a system in place that allows for easy identification of the sources and destinations. An AE Title of "WORKSTATION_14" is not as helpful as, for example, "BLR_ER_CTWSTN_05," which might indicate that this is workstation 05, connected to the CT, in the ER at Baylor Medical Center. This is another example of using "good practices," also illustrated in the third sketch after this list.
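As a first sketch, here is a small helper that flags studies still arriving as legacy SOP Classes. The UIDs are real DICOM UIDs, but the legacy-to-preferred pairing reflects the advice in this post, not an official DICOM designation; ds is assumed to be a pydicom Dataset.

```python
# Flag datasets that use an older SOP Class (sketch only).
LEGACY_TO_PREFERRED = {
    "1.2.840.10008.5.1.4.1.1.1":  ("CR Image Storage",
                                   "Digital X-Ray - For Presentation"),
    "1.2.840.10008.5.1.4.1.1.2":  ("CT Image Storage",
                                   "Enhanced CT Image Storage"),
    "1.2.840.10008.5.1.4.1.1.4":  ("MR Image Storage",
                                   "Enhanced MR Image Storage"),
    "1.2.840.10008.5.1.4.1.1.7":  ("Secondary Capture Image Storage",
                                   "Breast Tomosynthesis Image Storage"),
}

def check_sop_class(ds):
    entry = LEGACY_TO_PREFERRED.get(str(ds.SOPClassUID))
    if entry:
        legacy, preferred = entry
        print(f"Note: received {legacy}; consider configuring {preferred}")
```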
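The second sketch, assuming pydicom, shows re-encoding an uncompressed dataset to Explicit VR Little Endian, e.g. when staging files for media creation; the file names are hypothetical.

```python
# Re-encode an uncompressed dataset to Explicit VR Little Endian (sketch).
from pydicom import dcmread
from pydicom.uid import ExplicitVRLittleEndian

ds = dcmread("image_implicit.dcm")      # e.g. Implicit VR Little Endian
ds.file_meta.TransferSyntaxUID = ExplicitVRLittleEndian
ds.is_implicit_VR = False               # write with explicit VRs
ds.is_little_endian = True
ds.save_as("image_explicit.dcm", write_like_original=False)
```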
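The third sketch combines points 3 and 4, assuming pynetdicom: a storage SCP listening on the IANA-assigned DICOM port 11112 with a structured AE Title following the naming convention proposed above (and remember that AE Titles are limited to 16 characters).

```python
# Storage SCP on the officially assigned DICOM port with a sensible AE Title.
from pynetdicom import AE, AllStoragePresentationContexts

ae = AE(ae_title="BLR_ER_CTWSTN_05")    # Baylor, ER, CT workstation 05
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112))     # officially assigned DICOM port
```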

In conclusion, by configuring your imaging devices and PACS using good practices, one can be more efficient, facilitate a better workflow, and prevent issues down the line when having to migrate and/or pre-fetch these images. It makes sense to follow these rules when installing new devices, or even to go back and make some changes to existing configurations. I listed the most important tips for configuring your DICOM capability here; I am sure there are more, and I would love to hear from you so I can expand this list for the DICOM community to use. If some of this goes a little beyond your understanding of the DICOM domain, you might want to check our core DICOM web-based training, which goes over these subjects in great detail.



Friday, July 3, 2015

If replacing your PACS could just be as easy as buying a new car.

I have a 2005 Ford Expedition with 160k miles on it, and every year or so I ask myself whether it makes sense to get a newer vehicle to replace it. I consider the cost of acquiring a new vehicle, buying or leasing it, the potential gain from having a more fuel-efficient car, and what the maintenance and repair of my old vs. a new car would cost me.

All of this is relatively easy to calculate for a car; for a PACS, however, many more factors play a role. First of all, when you buy a PACS, you have pretty much sunk all of your investment into a big hole: it is almost impossible to recover any of it, let alone get a trade-in. The PACS software is licensed to you and you alone; in most cases you are not allowed to resell it (check your software license agreement). So, when you depreciate it, there is little or no residual value; at best you might be able to reuse some of your high-resolution medical monitors, but that is about it.

One of the risks, therefore, is that if your business model and/or application changes before you have depreciated it, or before your lease ends, you are at the mercy of your vendor to work with you to upgrade the system. For example, you might decide that you want to use a Vendor Neutral Archive (VNA) as a backbone instead of relying on the PACS archive, or you want to add cloud storage to make images available to physicians, or add 3-D mammography (tomosynthesis), requiring either server-side rendering or another architectural solution to allow for prefetching and processing of these huge images, or any other major change that your current PACS cannot facilitate. A major upgrade or even a new purchase might be in order in that case.

Keep in mind that there are several hidden costs involved with changing a PACS: the cost of migrating data, of running two PACS systems in parallel for a while (maybe 3-12 months), which also requires two service contracts if they are from different vendors, training costs and lost time, etc.
At the same time, there may be hidden cost savings. The old system might be less efficient; if so, you should be able to quantify the efficiency increase (less staff, more procedures). There could also be "soft benefits," such as better quality and less chance for errors, which are hard to measure. Some of the current PACS systems are getting unreliable and buggy as the hardware gets older. One of my recent students told me that his PACS goes down at least once a week, which is a source of frustration and lost time for sure. He is obviously looking for a replacement to reduce those downtime costs.

Talking with Michael Cannavo, an experienced PACS consultant, he noted that most users underestimate many of the changeover costs. It is therefore prudent to get all of the facts on the table before deciding whether a replacement makes sense. As a matter of fact, you might decide to hang on to your current system for a while, or the other way around, whatever makes financial sense.

Replacing a PACS also takes a lot of planning and a certain amount of lead-time. Going back to my car: if that breaks down, I can go to the dealership and, after haggling with a salesman for an hour or two, drive away with a replacement. Replacing a PACS is not as easy; it requires at least a few months for the purchasing process, followed by months of work for migration and implementation, including training.

Another factor to take into account is the maturity of the changeover and/or upgrade. If you like to be on the leading edge, that is fine, as long as you are prepared for what is sometimes called the "bleeding edge" effect. My rule of thumb is to never go for the first version of a new release, e.g. release 6.0; wait until at least 6.1 or 6.2. I know of hospitals that hold out for a few years and wait until version 3 or later before upgrading. It is not that vendors do a poor job of testing new releases, most of them test well (shame on the few who don't and dump insufficiently tested upgrades), but it is almost impossible to test all permutations of system configurations, modalities, RIS/HIS/EMRs, voice recognition systems and the many plug-ins and additional applications that people run on their systems. So there are bound to be problems, bugs and errors.

In conclusion, changing a PACS is similar to buying a new car, but only to a limited degree; you need to do a comprehensive financial assessment and look at your return on investment, and in the PACS case the devil is in the details, so make sure you have all costs covered. In the meantime, I've decided to drive my 2005 Expedition for another two years. It is paid for and it serves our purpose, i.e. hauling at least five of our grandkids and pulling my 3,000-pound RV with ease. But I do the math every year just to make sure, something you should consider doing for your PACS on a regular basis as well.


Monday, June 8, 2015

SIIM2015, My top ten of what’s New?

Typical light traffic at the exhibition floor
The May 28-30 meeting of the Society for Imaging Informatics in Medicine (SIIM), held at National Harbor, Maryland (just south of DC), was well attended. It seems the decline in attendance has stabilized, and there were, by my estimate, between 500 and 700 attendees. This is the only PACS meeting in the US and it is a good opportunity to network and find out what is new. I attended all three days; however, I had a difficult time finding any new developments, technologies and/or products. Much of what was talked about had been said or published before. Anyway, here are some of the questions that came up during the meeting:

1.       Is PACS dead? The opening session by Donald Dennison, one of the SIIM directors, was a rehash of his article in the SIIM publication Journal of Digital Imaging called "PACS 2018: an autopsy." It was actually the journal's most downloaded article last year, which might be due more to the controversial title, aimed at scaring PACS professionals, than to any new information it shed on the future of PACS. Yes, there will be VNAs that provide access to physicians, and yes, there will be image-enabled EMRs, but these are just replacing the clumsy, non-patient-centric physician viewing capabilities that have been part of PACS for many years. The advent of VNAs and EMRs does not mean that we are ready to perform an autopsy on a dead PACS. I also found it interesting that the keynote speech of a PACS conference talks about the "death of PACS"; I surely hope that is not the case.

2.       When are people going to understand what a VNA is all about? VNAs are still touted as the next greatest thing, especially by vendors. What is missing is an honest discussion about the issues with early implementations and experiences. One of the major issues is that a VNA is yet another place where images are managed and archived, so you had better make sure that the information is synchronized: if you delete an image at the PACS, it should be automatically deleted at the VNA without manual intervention. There is a standard for that, defined as an IHE profile called IOCM (Imaging Object Change Management), which has not been widely implemented (yet). Second, it has become clear that DICOM metadata is not sufficient to manage the images at an enterprise level; additional information is needed as defined by the XDS profile, but storing that information in the VNA image database defeats the purpose of having a VNA to start with, as it is again yet another proprietary database implementation that requires knowledge of the database tables to get that information out. Lastly, there is a lot of talk about VNA access by viewers using open standards, but I have found only one US institution so far that really implements XDS-I image access to do this. So, it appears that even though VNAs are sold as the greatest thing since sliced bread, there are still many issues to solve.
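
To make the IOCM mechanism concrete: synchronization is driven by a DICOM Key Object Selection (KOS) document, a “rejection note” whose coded title tells the receiving archive what to do with the referenced instances. Here is a minimal, simplified sketch using pydicom; the referenced UIDs are placeholders, and a production object needs the complete content required by the profile.

# Minimal sketch of an IOCM "rejection note": a DICOM Key Object Selection
# (KOS) document whose title states why the referenced images were rejected.
# Simplified for illustration; all referenced UIDs below are placeholders.
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid

KOS_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.88.59"  # Key Object Selection Document

kos = Dataset()
kos.SOPClassUID = KOS_SOP_CLASS
kos.SOPInstanceUID = generate_uid()
kos.Modality = "KO"

# Document title "Rejected for Quality Reasons" (DCM 113001) is what tells
# the VNA to hide or delete the referenced instances.
title = Dataset()
title.CodeValue = "113001"
title.CodingSchemeDesignator = "DCM"
title.CodeMeaning = "Rejected for Quality Reasons"
kos.ConceptNameCodeSequence = [title]

# Reference the instance(s) being rejected (hypothetical UIDs).
ref = Dataset()
ref.ReferencedSOPClassUID = "1.2.840.10008.5.1.4.1.1.6.1"  # US Image Storage
ref.ReferencedSOPInstanceUID = "1.2.3.4.5.6.7.8.9"         # placeholder
series = Dataset()
series.SeriesInstanceUID = "1.2.3.4.5.6.7.8"               # placeholder
series.ReferencedSOPSequence = [ref]
study = Dataset()
study.StudyInstanceUID = "1.2.3.4.5.6.7"                   # placeholder
study.ReferencedSeriesSequence = [series]
kos.CurrentRequestedProcedureEvidenceSequence = [study]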

3.       What is a MERR? I heard a new term, the Multimedia Enhanced Radiology Report or MERR. What I understood is that it is basically a report with pictures and graphs. I could not really figure out what the novelty is, as mammography reporting has done this for many years, and measurements such as those from ultrasound are already captured through DICOM Structured Reports and represented accordingly. Sounds like a marketing ploy to me.

4.       What about non-DICOM data? The fact that most VNAs advertise that they can manage non-DICOM objects seems to make for a free-for-all for storage of all types of objects, especially from the non-radiology specialties, which were referred to as the “LTFFT” or “Left To Fend For Themselves” objects. Yes, these objects are in many cases “orphans,” as they are often stored and managed locally. And there are many of these still-to-be-discovered image sources, ranging from medical photos, to pathology, to videos that are taken to monitor gait for orthopedic patients, and many more. However, I have yet to find an object and/or encoding that cannot be encapsulated into a DICOM format, including MPEG videos, PDF documents, and even waveforms. Therefore, the suggestion made several times to store these objects using new formats such as MPEG-7 should be strongly discouraged.
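
To illustrate how easy such encapsulation is, here is a minimal sketch that wraps a PDF into a DICOM Encapsulated PDF object, assuming pydicom 2.x; the demographics and file names are hypothetical, and a real object needs complete study and series context.

# Minimal sketch: wrapping a PDF into a DICOM Encapsulated PDF object so an
# "orphan" document can live in the regular DICOM archive (pydicom 2.x).
# Demographics and file names below are hypothetical placeholders.
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

ENCAPSULATED_PDF = "1.2.840.10008.5.1.4.1.1.104.1"  # Encapsulated PDF Storage

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = ENCAPSULATED_PDF
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = meta
ds.is_little_endian = True        # pydicom 2.x encoding flags
ds.is_implicit_VR = False
ds.SOPClassUID = ENCAPSULATED_PDF
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.Modality = "DOC"
ds.PatientName = "Doe^Jane"       # placeholder demographics
ds.PatientID = "123456"
ds.StudyInstanceUID = generate_uid()
ds.SeriesInstanceUID = generate_uid()
ds.MIMETypeOfEncapsulatedDocument = "application/pdf"

with open("gait_report.pdf", "rb") as f:  # hypothetical source document
    ds.EncapsulatedDocument = f.read()

ds.save_as("gait_report.dcm", write_like_original=False)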

5.       What about non-radiology imaging that is DICOM? There is a wide proliferation of POC (point-of-care) devices that create images. A survey at Duke Medical Center showed that only 19% of all ultrasound exams are performed in radiology, meaning 81% are created elsewhere, ranging from OB/GYN offices, to the OR, the ER, labor and delivery, specialty clinics, etc. The challenge is how to capture all these images in a useful way in the electronic health record so they can be available to practitioners outside these departments. It appeared that only 45% of these ultrasound devices support DICOM and therefore have a way to export the data in a standard manner, and of those, only 75% have DICOM worklist capability to allow for patient demographic and order capture at the device. This is going to be a challenge, requiring device upgrades and user education to make sure the information is captured.
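
For context, the worklist capability in question is the DICOM Modality Worklist. A minimal sketch of a device-side query using pynetdicom could look as follows; the AE titles and host name are hypothetical.

# Minimal sketch: a point-of-care device querying a Modality Worklist so
# patient demographics and order data are captured at the device instead of
# typed in by hand. AE titles and host are hypothetical.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind

ae = AE(ae_title="US_CART_1")
ae.add_requested_context(ModalityWorklistInformationFind)

query = Dataset()
query.PatientName = ""                      # return keys: SCP fills these in
query.PatientID = ""
sps = Dataset()
sps.Modality = "US"                         # match only ultrasound procedures
sps.ScheduledStationAETitle = "US_CART_1"
query.ScheduledProcedureStepSequence = [sps]

assoc = ae.associate("ris.hospital.example", 104, ae_title="RIS_MWL")
if assoc.is_established:
    for status, item in assoc.send_c_find(query, ModalityWorklistInformationFind):
        if status and status.Status in (0xFF00, 0xFF01) and item:
            print(item.PatientName, item.PatientID)
    assoc.release()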

6.       How do we implement mobile technologies? The installed base of healthcare information and imaging devices predominantly uses DICOM and HL7 for communication between the different devices. Neither of these standards lends itself well to access over the web, such as used by mobile devices; therefore we need the new “web services” versions of both standards, called DICOMweb and HL7 FHIR. The good news is that these web services are relatively easy to implement - there was actually a hackathon as part of the conference where the true geeks could show how easily an application can be developed using these tools. The SIIM presentation about these new services is exactly the type of information that PACS professionals want to learn about. Too bad that there were so few of these, and that the next level of integration using these services, i.e. by using the appropriate IHE profiles, was not discussed at all.
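
To show just how approachable this is, here is a minimal sketch of a DICOMweb study search (QIDO-RS) over plain HTTP, the kind of call a mobile viewer would make; the endpoint is hypothetical, and real deployments would add authentication.

# Minimal sketch: searching for studies via DICOMweb QIDO-RS.
# The base URL is a hypothetical server.
import requests

BASE = "https://pacs.hospital.example/dicomweb"  # hypothetical endpoint

resp = requests.get(
    f"{BASE}/studies",
    params={"PatientID": "123456", "ModalitiesInStudy": "US"},
    headers={"Accept": "application/dicom+json"},
)
resp.raise_for_status()

for study in resp.json():
    # QIDO-RS returns DICOM JSON: attributes keyed by tag, values in "Value".
    uid = study["0020000D"]["Value"][0]      # StudyInstanceUID
    desc = study.get("00081030", {}).get("Value", ["(no description)"])[0]
    print(uid, desc)

Retrieving the actual images is then a similar GET against the study path (WADO-RS), which is what makes this model such a good fit for tablets and phones.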

7.       What is XDS anyway? XDS is yet another buzzword, meaning the IHE profile that provides Cross-Enterprise Document Sharing. It was used in the context of VNA and PACS as well as EMR integration; however, I am convinced that very few actually understand the details, and stating “XDS support” is as useless as stating that a system supports “DICOM” or “HL7.” One needs to be very specific about which actor one supports: does a device create documents or images (acting as a “source”)? Is it a repository, a registry, or a consumer? And what about the patient identity feeds, where are they generated? To understand the workflow and identify any gaps or overlap, it is strongly recommended that you create a diagram with all of the IHE actors in your system. This might be a good exercise for the next SIIM meeting, which was dearly lacking content on IHE anyway.
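
Short of a full diagram, even a simple inventory of which system plays which actor will expose gaps and overlaps. A minimal sketch, with hypothetical systems and actor assignments:

# Minimal sketch: inventory your systems against the XDS/XDS-I actors they
# claim, as a first step toward the actor diagram recommended above.
# The systems and assignments are hypothetical examples.
actors = {
    "PACS":        ["Imaging Document Source"],
    "VNA":         ["Imaging Document Source", "Document Repository"],
    "HIE gateway": ["Document Registry"],
    "EMR viewer":  ["Imaging Document Consumer"],
    "ADT/MPI":     ["Patient Identity Source"],
}

# A quick gap check: a deployment needs a registry and a patient identity
# feed, and at least one source and one consumer.
all_roles = [role for roles in actors.values() for role in roles]
for required in ("Document Registry", "Patient Identity Source",
                 "Imaging Document Source", "Imaging Document Consumer"):
    count = all_roles.count(required)
    print(f"{required}: {count} system(s)" + (" <-- gap!" if count == 0 else ""))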

8.       When will DICOM finally become plug-and-play? After all these years (DICOM was introduced in 1993), it amazes me that there are still issues with DICOM connectivity. One speaker said that his institution cannot pull images back from a PACS to the ultrasound for contrast processing. The solution is a simple matter of configuring the device to issue a query/retrieve to the PACS, upon which the PACS initiates an association back to the modality to store the images. Sounds simple, and it is; however, apparently it is not in practice, which I expect is due to a lack of training and understanding, resulting in finger-pointing between the vendors. Amazing but true.
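
In DICOM terms this is a standard C-MOVE. Here is a minimal sketch of the modality side using pynetdicom; the AE titles, host and study UID are hypothetical, and the ultrasound must also run a Storage SCP to accept the images the PACS sends back.

# Minimal sketch: the ultrasound asks the PACS to C-MOVE a prior study back
# to its own AE title. AE titles, host and UID are hypothetical.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelMove

ae = AE(ae_title="US_CART_1")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelMove)

ident = Dataset()
ident.QueryRetrieveLevel = "STUDY"
ident.StudyInstanceUID = "1.2.3.4.5.6.7"    # placeholder prior-study UID

assoc = ae.associate("pacs.hospital.example", 104, ae_title="PACS")
if assoc.is_established:
    # move_aet tells the PACS where to send the images: back to us.
    responses = assoc.send_c_move(
        ident, move_aet="US_CART_1",
        query_model=StudyRootQueryRetrieveInformationModelMove,
    )
    for status, _ in responses:
        print(f"C-MOVE status: 0x{status.Status:04X}" if status else "timeout")
    assoc.release()

The usual failure mode is exactly what the speaker ran into: the destination AE title is not configured on the PACS side, so the retrieve fails and each vendor blames the other.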

9.       Is it show time for mammography tomosynthesis yet? This new modality, which produces a set of 20-30 image slices of the breast instead of, or in addition to, the traditional two-view images, is poised to be introduced in many facilities because of marketing pressure driven by supposedly better outcomes when it is used for routine breast screening. Based on comments from the audience during the special session on this topic, it was clear that there are still growing pains. One attendee reported that the reading takes 20 times (!) longer than conventional mammography screening; another attendee had major problems with the hanging protocols, obviously unaware of the IHE profile that explicitly addresses these issues. Given the number of attendees who were planning to install this modality in the upcoming year, it is somewhat scary that there are still many technical issues to be resolved, including the required bandwidth and storage capacity.
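
To put the storage concern in perspective, here is a back-of-the-envelope estimate; the matrix size, bit depth and view count are my own assumptions, so substitute your vendor’s actual numbers (and compression ratio).

# Minimal sketch: rough storage estimate for a tomosynthesis screening exam.
# Matrix size, bit depth and counts are assumptions, not vendor specs.
ROWS, COLS = 2457, 1996        # assumed detector matrix
BYTES_PER_PIXEL = 2            # 12-16 bit grayscale -> 2 bytes
SLICES_PER_VIEW = 25           # the "20-30 slices" quoted in the session
VIEWS = 4                      # CC + MLO, both breasts

bytes_per_slice = ROWS * COLS * BYTES_PER_PIXEL
exam_gb = bytes_per_slice * SLICES_PER_VIEW * VIEWS / 1024**3
print(f"~{bytes_per_slice / 1024**2:.0f} MB/slice, ~{exam_gb:.1f} GB/exam")
# ~9 MB/slice, ~0.9 GB/exam: roughly 25x the size of a conventional
# four-image 2D screening study, before compression.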

10.   What to do if the PACS is down? Many institutions are relying on cloud solutions for a backup. The increasing use of DICOMweb, which makes images easily available from the cloud on mobile devices, now provides a backup solution as well: practitioners can access images from their PCs and, increasingly, their tablets. Mobile access is evolving into another good complement to a redundancy and backup strategy.

So, yet another PACS meeting with, as I mentioned, not much new; in my opinion it was light on technical content, but not poorly attended, so I guess it has its niche audience. The location (National Harbor) was kind of a bummer: so close to Washington, DC and yet too far to get to the city easily unless you want to take a taxi. Next year it is going to be in Portland, OR, kind of far out in the northwest corner of the US. I’ll probably attend; hopefully there will be more to learn by then.