Wednesday, December 6, 2017

My 2017 RSNA top ten.

The atmosphere was very positive in Chicago among the 50,000 or so visitors during the 2017 RSNA radiology tradeshow. As one of the vendors mentioned: “Everyone seems to be upbeat,” which is good news for the industry and end users.
Here are my top ten observations from this year’s meeting at McCormick Place, held in balmy (for Chicago; I did not need my thick coat this year) weather:

Dr. Al Naqvi, director of SAIMAH
1.      The AI hype is in full swing - Artificial Intelligence, deep learning, or whatever it is called, has gotten the attention of radiologists, who packed sessions, and of the trade press, as shown by all the front-page coverage and the many vendors touting this technology. It is fueled in part by fear that computers will replace radiologists, which in my opinion won’t happen for many years to come. In the meantime, there will be additional tools that might assist a radiologist by eliminating some of the mundane screening exams that can definitely be labeled as “negatives.” But there is still a lot of work to be done, and many of the so-called AI tools are nothing more than sophisticated CAD (Computer Aided Diagnosis) tools that have been around for many years.

There is also a new society being established for AI, the Society of Artificial Intelligence in Medicine and Healthcare (SAIMAH). I guess every new technology needs its champion and corresponding non-profit to promote its use and efficacy.

Ultrasound CAD
2.      Speaking of CAD, it has become rather commonplace in the US for digital mammography screening, which interestingly enough is not the case in many other countries, especially in Europe, where they do double-reads for mammography. Algorithms are now becoming available for other modalities as well, for example for ABUS (Automated Breast Ultrasound System). One company showed its CAD algorithm, which will become available on the market as soon as it gets FDA approval. I expect that CAD for several other modalities and body parts will follow suit, in addition to the already commercialized CAD for lung nodule detection in chest radiographs and CT, and for breast MRI.
Ultrasound robot

3.      The robots are coming! Not only are radiologists being threatened by AI, technologists might also become obsolete and be replaced by robots. Well, maybe not quite, but there is definitely potential. One of the large robot manufacturers was showing its device performing ultrasound exams; it can be used for remote “tele-ultrasound” procedures when no local technologist is available, while producing repeatable results by applying uniform pressure over the whole sweep of the robotic arm. It will be interesting to see what new applications become possible using these robotic devices.

4.      Virtual currency (Bitcoin) in medicine? Blockchain technology has become the main vehicle propelling the popularity of virtual currency to new heights. The underlying blockchain technology is very useful for managing public records, which need to be secured against unauthorized changes. The records are automatically duplicated on tens of thousands of computers and accessed through an extension of a common browser that connects to the blockchain network, in this case Ethereum.

A demonstration of a possible application that manages the licensing of physicians in the state of Illinois was shown at the meeting. This technology might have certain niche applications in healthcare IT. However, for managing medical records, which must stay private, it would definitely be a problematic solution (unless the data are encrypted, which defeats the purpose of public access), and the same goes for images, which are definitely too large for this application.
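
To make the tamper-resistance idea concrete, here is a minimal, hypothetical sketch (in Python, with invented record fields) of how a chain of hashed blocks makes unauthorized changes detectable:

```python
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers the record and the previous block's hash."""
    block = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Hypothetical public record: a physician license entry
chain = [make_block({"license": "IL-12345", "status": "active"}, prev_hash="0" * 64)]
chain.append(make_block({"license": "IL-12345", "status": "renewed"}, chain[-1]["hash"]))

# Tampering with an earlier record breaks every later link, which is what
# the thousands of duplicated copies of the chain would immediately detect.
chain[0]["record"]["status"] = "revoked"
payload = {k: v for k, v in chain[0].items() if k != "hash"}
recomputed = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
print("chain still valid:", recomputed == chain[1]["prev_hash"])  # False
```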

VR using wrap-around goggles
5.      VR is getting more traction. I counted three vendors (there could have been more) who were demonstrating 3-D stereoscopic displays, which could be especially useful for surgery applications. It still looks kind of weird to see users waving a wand in space while wearing these large wrap-around glasses, and I think it might take a few years for this application to mature beyond its “gadget” state into real practical use. But this is a field where the gaming industry has provided some real spin-offs into practical applications that might potentially benefit patient care.

6.      Cloud-phobia is getting stronger - Moving your images to the cloud for image sharing was one of the previous years’ hot topics, especially as cloud providers (Amazon, Google, Microsoft and others) have been offering very competitive rates for their storage capabilities. Despite the fact that the data are probably safer and better protected at one of these cloud providers than at many corporate healthcare IT facilities, there is still concern among users about potential HIPAA violations and hackers looking for patient demographics, which, if accessed, could be downloaded and resold on the black market. As an alternative, some of the image sharing service companies are starting to provide secure point-to-point transfers managed by their own patient-driven portal. They provide an access key to the patient, who then controls access for physicians and, obviously, for themselves as well. This looks to be a good alternative if you don’t trust the cloud.

Lightweight probes with processing in a dedicated, customized tablet
7.      Ultrasound units are becoming commoditized. Miniaturization and ever more powerful tablets allow ultrasound units to be carried in your pocket, facilitating easy bedside procedures and also giving workers in the field, particularly in remote areas, the capability to do basic ultrasound exams.

For potentially high-risk pregnancies this has become a great asset. These units range from US$10k to $25k depending on the functionality and the number of probes you want to have.
Heavy (wireless) probe, standard tablet
There are two different approaches to this technology: the first puts all the processing in the tablet, allowing for very lightweight probes; the second is the opposite, putting the technology and processing in the probe, which can even be wireless and use a standard tablet (iOS or Android). The latter results in probes that produce heat and are rather heavy, especially if they need to contain a battery to be wireless.

SPECT lounge
8.      Ergonomics is gaining traction. Patients can be intimidated by these large diagnostic radiology machines, having to lie on a table staring at the ceiling while being scanned. Being able to sit in a comfy chair while your scan takes place could provide a more pleasant experience, while allowing for eye contact with the technologist as well. Case in point: a SPECT scanner offered by one of the vendors.

Who is this company again?
9.      Mergers and acquisitions are still the order of the day (or year?). When walking into one of the exhibit halls I was puzzled by a major booth from “Change Healthcare,” a name that confused me but probably cost its owner a lot of money at a market branding company. I had to ask one of the booth attendants, who explained, “yeah, we used to be McKesson.” Similarly, Acuo went from Perceptive to become Hyland, Toshiba is now Canon, Merge disappeared to become IBM, and others were swallowed by different vendors, spun off, or re-branded themselves. The good news is that there were probably as many “first-time vendors” as there were “last-time vendors,” showing that there is still room for new start-ups bringing in fresh ideas and innovative products. But having worked for one of these “giants” (notably Kodak) for a few years in my past life, it is always with a little bit of nostalgia that I see these names disappear.

Shopping downtown Chicago
10.    Last but not least, Chicago (still) rocks. After attending 30 RSNAs I stopped counting, but every year it gets better. The food prices and lodging costs are still exorbitant, but instead of the notorious cab drivers who talk non-stop on their cell phones in a language you don’t understand and don’t get out of the car to help you load your luggage, there is now Uber or Lyft to bring you where you want to go for half the price. Even better, there is the opportunity to provide feedback on your driver from your phone after your ride (what a great concept!). And it is always fun to watch the many international participants at the show and try to guess where they are from based on their gestures and clothing (the Italians always stick out!). This was another great year: new hype, good vibes and fun. I am looking forward to next year already!
  


Monday, October 30, 2017

What is the future of PACS?

I get this question a lot, i.e., where is PACS headed? It comes from different professionals: from decision-makers ready to spend another large sum of money on the next-generation PACS, and from those who made PACS a career, such as the PACS administrators who come to my training and want to make sure that their newly acquired skills and/or PACS professional certification will still be of use 5 or 10 years from now.

I also get it from users who are often frustrated by limitations and/or issues with their current system, and from start-up companies that are planning to spend a lot of time and energy developing yet another and better PACS in the already crowded marketplace. Where do I think PACS is going, and is there still a future in this product and market? I don’t have a crystal ball, but based on what I have seen in my interactions with PACS professionals, here is my assessment and prediction:

When we talk about a PACS (Picture Archiving and Communication System) in the traditional sense as a “product,” instead of as a “functionality,” then yes, the PACS product is indeed the equivalent of a gas-powered car that needs a dedicated driver at the steering wheel, doomed to disappear in favor of electric, self-driving cars using AI technology developed by Google and others. Every car manufacturer is scrambling to get on the bandwagon and change its product development to meet the new demands, out of fear of going the same route as Kodak. Similarly, if I were a truck driver who operates a vehicle mostly on interstate highways, I would be worried about my long-term career path.

PACS viewed as a “function,” however, will still be around, as the need to interpret and manage images and related information will continue. But many of those functions will become more autonomous using AI. The Wall Street Journal recently proclaimed AI to be the latest Holy Grail for the tech industry, and there is definitely going to be a spillover into the field of healthcare imaging and IT.

Self-learning systems, using algorithms like those developed by Facebook and Amazon that know which friends or products you might want to follow or purchase next, will anticipate your steps and tasks: reducing mouse clicks and anticipating what information you want to consult, and in what form and presentation (think self-learning hanging protocols), allowing you to become more efficient and effective. This will address the number one complaint that users currently voice about their PACS, i.e., that it does not support their preferred workflow well.
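
As a toy illustration of what such a self-learning hanging protocol could look like under the hood, here is a minimal frequency-based sketch; the exam types and layout names are invented:

```python
from collections import Counter, defaultdict

# Learn which screen layout a radiologist picks most often per exam type,
# then propose it automatically. All names below are hypothetical.
layout_history: dict = defaultdict(Counter)

def record_choice(exam_type: str, layout: str) -> None:
    """Remember the layout the user actually chose for this exam type."""
    layout_history[exam_type][layout] += 1

def suggest_layout(exam_type: str, default: str = "1x1") -> str:
    """Propose the historically most frequent layout, or a default."""
    counts = layout_history[exam_type]
    return counts.most_common(1)[0][0] if counts else default

record_choice("CHEST CT", "2x2_axial_coronal")
record_choice("CHEST CT", "2x2_axial_coronal")
record_choice("CHEST CT", "1x2_axial_prior")
print(suggest_layout("CHEST CT"))  # -> 2x2_axial_coronal
```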

PACS will give up its autonomy regarding the workflow. In several institutions the workflow is starting to shift from being PACS- or RIS-driven to being EMR-driven. Unlike PACS, the traditional RIS is quickly becoming obsolete: order entry is shifting to CPOE functionality in the EMR, and even the modality worklists are starting to become available there. Not every EMR, however, is quite ready to incorporate the entire process; consequently, there are many holes that are covered with interface engines, routers, brokers, workflow managers, etc. from the several “middleware” vendors who are bridging the gaps and integrating these systems smoothly. If I were to invest in healthcare imaging and IT, that is the niche where I would bet my money.
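
To illustrate the kind of glue this middleware provides, here is a minimal, hypothetical sketch that maps a simplified HL7 v2 order into a DICOM Modality Worklist item using the pydicom library; real brokers handle far more fields, and the accession number mapping in particular is site-specific:

```python
from pydicom.dataset import Dataset

# A simplified, hypothetical HL7 v2 order, as a broker might receive it.
orm = (
    "MSH|^~\\&|EMR|HOSP|BROKER|RAD|201712061200||ORM^O01|123|P|2.3\r"
    "PID|1||MRN12345||DOE^JANE\r"
    "OBR|1|ACC98765||71020^CHEST 2 VIEWS|||201712061230"
)
seg = {line.split("|")[0]: line.split("|") for line in orm.split("\r")}

# Map the order into a DICOM Modality Worklist item.
item = Dataset()
item.PatientName = seg["PID"][5]      # HL7 '^' components match DICOM PN format
item.PatientID = seg["PID"][3]
item.AccessionNumber = seg["OBR"][2]  # site-specific: often OBR-2/3 or OBR-18
sps = Dataset()
sps.Modality = "CR"
sps.ScheduledProcedureStepStartDate = seg["OBR"][7][:8]  # YYYYMMDD
item.ScheduledProcedureStepSequence = [sps]
print(item)
```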

Another major application for AI will be the elimination of the majority of negative findings from screening exams. Early experience has shown that AI can eliminate perfectly “normal” mammography images and reduce the caseload that needs to be reviewed by a person to about 20 or 30 percent. Computer Aided Diagnosis (CAD) will also become a mainstay, not just in the current niches in breast imaging but in other types of exams as well.

At the periphery, i.e., the acquisition side, we will also see a shift as new modalities are introduced and/or existing modalities are replaced. Mammography screening exams could be replaced by low-cost MRI combined with ultrasound and potentially thermography imaging. We can already look inside arteries and veins with IV-OCT (Intravascular Optical Coherence Tomography) through a small catheter; who knows what we will be able to visualize next, maybe the brain?

Note that this transition assumes a “deconstructed PACS,” of which the core is stripped down to an image cache of a few months with diagnostic viewing stations tightly coupled to it, backed by an enterprise VNA image manager/archive (which could be from another vendor) that is driven by the EMR, all tied together for now by multiple routers and prefetching gateways. Some institutions will opt to archive their images in the cloud, which will become very inexpensive as cloud storage rapidly transforms into a commodity, with Google, Amazon, Microsoft and others all vying for your business. If nothing else, the cloud will replace the many tape libraries that are still out there. View stations will become super-fast as solid-state memory replaces disk drives, so we will finally be able to improve on today’s requirement of a “3 second maximum image retrieval” at a workstation, which has been the semi-gold standard for the past 25 years.

Unlimited image sharing is going to be common practice; CD image exchange will go the way of floppy disks, or the large 14-inch optical disks we used to have for image storage. At my last company we used to turn these big optical disk platters into wall clocks; I still have one in my office. I should save a CD as well to hang next to it. Accessing information across organizational boundaries will use web services, much like what you see on an Amazon web page right now, where you can purchase a product from Amazon or from an external vendor that is seamlessly linked.

Compare that with the physician portal: he or she can access the local lab results or jump to an outside lab that provides the lab results in a nice graph, while image access in the local or remote VNA is also just a click away. And of course, access to many educational on-line resources and good practices are all simple apps on that same desktop, or should I say dashboard, which also displays the current wait time in the ER, the number of unread reports in the queue, and report turn-around time, in addition to the weather forecast and the radiology Facebook share page.

So, do I think that PACS is dead, as some people are declaring? I don’t think so, especially if you consider PACS as a function. Some see radiology as a doomed career requiring fewer radiologists (think truck drivers?), but I believe their functional role will shift to that of a consultant, and the job will be less focused on cranking out reports, many of which are “normals” that will be automated. Likewise, PACS will continue as an important function in clinical decision-making.


Finally, what about the people who support these sophisticated systems, i.e., the PACS administrators? Their role will shift too: many of the mundane jobs will be automated, and they will be able to focus on re-engineering workflows, planning, and solving tricky integration problems. So, the future of PACS is bright in my opinion, but it will be a different color of bright, and as always with transitions, there will be people and companies that anticipate and embrace these changes, and others that will have blinders on and be left out.

Friday, September 29, 2017

Vendor Neutral PACS Administrator training

A red light on my dashboard suddenly came on saying “no charging.” The battery indicator still showed at least 12 volts, so I chose to continue my errand and take care of it when I got back home. That was a mistake, as I found out when my car stalled at a red light in a busy intersection. I should have turned around right away and/or gone to a garage to take care of my broken alternator. This event caused me to think: all of us are taught to drive a car before getting a license, but we aren’t taught basic troubleshooting of issues that might occur, hence these kinds of events can happen to anyone.

The same can be said of training as a PACS administrator. Similar to when a car salesman explains where to find the blinker and light switch, and possibly even how to set the clock on your car, there is little vendor training about how a PACS functions, what can go wrong, and how to interpret the “error messages.”

The good news is that cars have gotten pretty reliable; you don’t need to be a part-time mechanic anymore to be able to operate them. The bad news is that this is not the case when supporting a PACS. These are complex software applications, which definitely can have bugs, and are subject to many user errors and/or integration issues, which can cause images and related information to be unavailable or incorrectly presented to a physician.

Even though one is trained on a PACS from a specific vendor at a particular release, it does not mean that you are taught the fundamentals. For example, what happens if the PACS rejects an image because it has a duplicate Accession Number, Study Instance UID, Series UID, or SOP Instance UID?

Vendor-specific training does not cover what the cause could have been or how to fix it. Nor does it cover a “DICOM error,” how to interpret the log files, or what to do if a modality does not display a worklist. What if images are randomly “dropped” when sent from a modality to the PACS? The easy answer is: call the vendor. But what if there is finger-pointing going on between the modality, RIS or PACS vendors, or what if the vendor is not going to be on-site for another 4 hours and your PACS is refusing to display any images?
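
As an example of the kind of basic troubleshooting such training enables, here is a minimal sketch (using the pydicom library, against a hypothetical folder) that flags duplicate SOP Instance UIDs in a batch of incoming images:

```python
from collections import defaultdict
from pathlib import Path
import pydicom

# Scan a folder of DICOM files and flag duplicate SOP Instance UIDs,
# a common reason a PACS rejects images. The folder path is hypothetical.
seen = defaultdict(list)
for path in Path("/data/incoming").glob("*.dcm"):
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # headers only, fast
    seen[ds.SOPInstanceUID].append(path.name)

for uid, files in seen.items():
    if len(files) > 1:
        print(f"Duplicate SOP Instance UID {uid}: {files}")
```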

I can go on and on listing reasons and situations that are not covered by a vendor-specific PACS training program; but they are what you are taught in Vendor Neutral PACS Administrator (VNPA) training. That is why many PACS administrators search for “neutral” training providers that do teach the fundamentals.

The generic or neutral training is also a great track for healthcare imaging professionals who would like to get into this field, or who want to cross over from a related career such as healthcare IT, or from a clinical role such as radiological technologist.

The PACS fundamentals training covers subjects such as DICOM and HL7 basics and troubleshooting, and also covers new developments such as Vendor Neutral Archives (VNA), how to implement enterprise image archiving, what to look for when you get the new breast tomosynthesis modality or IV-OCT in cardiology, and the characteristics of the new encounter-based specialties such as surgery, endoscopy and in the future digital pathology.

As an additional bonus, you can even consider getting certified as a PACS administrator, with basic, advanced and DICOM certifications to choose from.
So, even though you might have had vendor-specific PACS administrator training, you might want to consider Vendor Neutral PACS Administrator training as well. It teaches you the fundamentals that will empower you to mediate between vendors who are finger-pointing and blaming “the other” as the culprit, to perform basic troubleshooting yourself without having to wait for your vendor to show up, and to be prepared for new developments in PACS and modality technology.

Thursday, September 21, 2017

PACS and Cyber Security.

There is a lot of anxiety around cybersecurity, especially after the recent ransomware incidents, which basically shut down several hospitals in the UK and affected several institutions in the US. The question is whether we should be concerned about potential cybersecurity breaches in our PACS, and how to prevent, diagnose and react to them.

At the recent HIMSS security forum in Boston, a distinguished panel rated the security performance and readiness of healthcare IT systems at around 4 on a scale of 1 to 10. That is certainly very troublesome. Combined with the fact that breaches in healthcare systems are by far the most frequent, as they are potentially more rewarding for hackers than, for example, credit card information, it means that this industry still has a lot of catching up to do.

The problem is also that the vulnerabilities are increasing as the Internet of Things (IoT) expands exponentially, with as many as 10 million devices being added every day, estimated to reach 20 billion by 2020. Included in the IoT are medical imaging devices, which may put PACS in the high-risk category, as downtime could mean no access to images, which could directly impact patient care. However, there are even higher-risk devices that have proven to be potential targets for intrusions, such as IV pumps that administer drugs, implantable pacemakers, personal insulin pumps, etc., where an attack can be immediately fatal to a patient. One can compare this threat with that posed to the controls of a self-driving car, whereby a hacker could turn the steering wheel into oncoming traffic, which can be as dangerous as increasing the drip rate of a morphine infusion pump.

Now, getting back to PACS: if a hacker gains access to a patient imaging database, there are typically no Social Security numbers, addresses, credit cards or other potentially lucrative personal information stored in the PACS. A more likely scenario is that the PACS is used as a “backdoor” into the EMR or hospital information system, either to shut that down as a ransomware threat or to get to the more extensive patient records in other systems. The prevailing opinion is that ransomware is probably the most likely scenario, as it gives immediate rewards (pay $xxx or else….) instead of having to sell the patient records on the black market.

So, how can vendors and institutions prepare? First of all, no system can be made totally foolproof, just as no lock can be strong enough to protect against every type of attack. If someone is really motivated and wants to spend the time, there is always going to be a way to break in. The good news is that apparently a typical hacker is willing to spend, on average, a mere 150 hours on one attempt; after that, they will move on to find another target that may be easier to break into.

This could be different if the attacker represents a nation-state that wants to access the records of military personnel served by a DOD hospital; they have all the time in the world, which is why the VA, DOD and other military healthcare institutions have a much stricter set of cybersecurity rules. And the threat is real: according to the recent HIMSS security survey, more than 50 percent of the respondents reported that they had been the subject of a known cyber-attack over the past 12 months. The emphasis is on “known,” as it typically takes more than 200 days to detect an intrusion.

The key to preparation for every healthcare IT system is “basic hygiene,” analogous to hand-washing to prevent infections. Cybersecurity “hygiene” starts with updating your operating systems and implementing patches as they come out. Just as an illustration, the WannaCry ransomware attack, which affected about a quarter of a million computers in over 150 countries, exploited a flaw in the Microsoft OS for which a fix had been distributed two months prior to the attack. Basic “cyber hygiene” also includes password updates, multi-factor authentication, closing down unused ports, segmenting your network, disabling flash drives, using virus scanners and firewalls, etc. Also, make sure you have a backup and/or duplicated system so that as soon as your system goes down you can still operate.
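
As a small illustration of the “closing down unused ports” item, here is a minimal sketch that checks a handful of well-known ports on a server you administer; the host address is hypothetical, and you should of course only scan systems you are authorized to test:

```python
import socket

# Hypothetical PACS server on your own network segment.
HOST = "192.168.1.20"
# FTP, telnet, HTTP, DICOM, SMB (the WannaCry vector), RDP, DICOM (alt.)
PORTS = (21, 23, 80, 104, 445, 3389, 11112)

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        is_open = s.connect_ex((HOST, port)) == 0  # 0 means a connection succeeded
    if is_open:
        print(f"port {port} is open - is it actually needed?")
```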

A comprehensive cybersecurity program has to be in place, including allocated resources. As an example, Intermountain Healthcare has an IT staff of 600 people to support its 22 hospitals and 180 clinics, with 70 of those people (12%) dedicated to cybersecurity. This is an exception; the average share of the IT budget allocated to cybersecurity is only about 6-8%.

There are lots of resources to get started. The best known and most used is the NIST security framework; there is also a very extensive certification that is becoming more popular, called HITRUST. At a minimum, one can start by looking at the so-called MDS2 (Manufacturer Disclosure Statement for Medical Device Security) form developed by NEMA and HIMSS. As a vendor, one should look at these resources, and as an end user you might want to request the MDS2 and ask about HITRUST certification. Several vendors already support these.

In conclusion, PACS is probably not the number one target for a cyber-attack, but it could be an easy backdoor into other systems, which can be used to access patient and personal information that is valuable to hackers and/or, even worse, be held for ransom. Basic cybersecurity hygiene is critical, and using the NIST and/or HITRUST frameworks can be very beneficial.


Saturday, June 17, 2017

SIIM 2017 Top Ten Observations.

The 2017 SIIM (Society for Imaging Informatics in Medicine) meeting was held in Pittsburgh, PA on June 1-3.
View back to the city from Allegheny
The meeting was well attended, both by users and by an increasing number of exhibitors. This meeting is mostly attended by PACS professionals, typically PACS administrators, in addition to several “geeky” radiologists who have a special interest in medical informatics. Pittsburgh, despite being somewhat “out of the way,” was not a bad choice to hold a conference; downtown was quite nice and readily accessible, actually better than I expected. Here are my top ten takeaways from the meeting:

1.     AI (Artificial Intelligence) is still a very popular topic. The title of the keynote speech by Dr. Dreyer from Mass General says it all: “Harnessing Artificial Intelligence: Medical Imaging’s Next Frontier.” AI also goes by the name of “deep learning,” reflecting the fact that it uses large databases of medical information to determine trends, predictions and precision medicine approaches, and to provide decision support for physicians. Another term people use is “machine learning,” and I would argue that CAD (Computer Aided Diagnosis) is a form of AI as well.

One of the major draws of this new topic is that some professionals are arguing that we won’t need radiologists anymore in the next 5-10 years, as they are going to be replaced with machines. In my opinion, much of this is hype, but I believe that in two areas there will be a potentially significant impact on the future of radiology. First of all, for radiography screening, AI could help to rule out “normal.” Imagine that for breast screening or TB screening of chest images, one could eliminate the reading of many of them, as they would appear normal to a computer, freeing the physician to concentrate on the “possible positives” instead.

Second, there were several new startup companies that showed some kind of sophisticated processing that can assist a radiologist with diagnosis, for very specific niche applications. There are a couple of issues with the latter. A radiologist might have to perform some extra steps and/or analyses, which could impact performance and throughput; as such, the application will have to provide a significant clinical advantage. Also, licensing additional software is a cost that might or might not be reimbursed. In conclusion, AI’s initial impact will be small, and despite the major investments (GE investing $100m in analytics), I don’t think it will mean the end of the radiology profession in the near future. A quote from Dr. Dreyer summed it up: “it will not be about Man vs. AI but rather about a Man with AI vs. a Man without AI.”

2.     Cyber warfare is getting real. The recent WannaCry incident shut down 16 hospitals in the UK, creating chaos as practitioners had to go back to paper. As we are now living in the IoT (Internet of Things) era, we should be worried about ransomware and hacking: infusion pumps, pacemakers and other devices can be accessed and their characteristics and operating parameters modified. It is interesting that HIPAA regulations already covered many of the security measures that could prevent and/or manage these incidents, but in the past most institutions focused mostly on patient privacy. Of course, patient privacy is a major issue, but it might be prudent for institutions to shift some of the emphasis to network security, as breaches there could be potentially more damaging. Imagine the impact of one patient’s privacy being compromised vs. the impact of infusion pumps going berserk, or a complete hospital shutdown.

3.     Facilitating the management of images created by the “ologies” is still very challenging. Enterprise imaging, typically done using an enterprise archive such as a VNA as the imaging repository, is still in its infancy. The joint HIMSS/SIIM working group has done a great job outlining all of the needed components and has defined somewhat of an architecture, but there are still several issues to be resolved. When talking with the VNA vendors, the top issue that seems to come up universally is that the workflow of non-traditional imaging is poorly defined and does not lend itself very well to being managed electronically. For example, imagine a practitioner making an ultrasound during anesthesia, or an ER physician taking a picture of an injury with his or her smartphone. How do we match up these images with the patient record in such a way that they can be managed? Most radiology imaging is order-driven, which means that a worklist entry is available from a DICOM Modality Worklist provider; however, most of the “ologies” are encounter-driven. There is typically no order, so hunting for the patient demographics from a source of truth can be challenging. There are several options: one could query a patient registration system using HL7, with a patient RFID or wristband as a key; or, if FHIR takes off, one could use the FHIR resource as a source; or one could use admission transactions (ADT) instead; or do a direct interface to a proprietary database. There are probably another handful of options, which is exactly the problem, as there is no single standard that people are following. The good news is that IHE is working on the encounter-based workflow, so we are eagerly awaiting their results.
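
As a sketch of one of these options, here is what a FHIR-based lookup might look like: resolving a wristband MRN to patient demographics so an encounter-based image can be labeled. The endpoint, identifier values and version are hypothetical:

```python
import requests

# Hypothetical FHIR endpoint; the MRN would come from the wristband barcode.
base = "https://fhir.example.org/r3"
mrn = "MRN12345"

resp = requests.get(
    f"{base}/Patient",
    params={"identifier": mrn},
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
bundle = resp.json()  # a FHIR searchset Bundle

if bundle.get("total", 0) == 1:
    patient = bundle["entry"][0]["resource"]
    print(patient["name"][0]["family"], patient.get("birthDate"))
else:
    print("No unique match - image needs manual reconciliation")
```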

4.     Patient engagement is still a challenge. There is no good definition of patient engagement in my opinion, and different vendors are implementing only piecemeal solutions. Here is what HIMSS has to say about this topic:
Patient engagement is the activity of providers and patients working together to improve health. A patient’s greater engagement in healthcare contributes to improved health outcomes, and information technologies can support engagement. Patients want to be engaged in their healthcare decision-making process, and those who are engaged as decision-makers in their care tend to be healthier and have better outcomes.
 
Many think of patient engagement as being equivalent to having a patient portal. The top reasons patients want to use a portal are making appointments, renewing prescriptions and paying their bills. However, none of these is a true clinical interaction. Face-to-face communication using, for example, Skype or another video service, or simply an email exchange dealing with clinical questions, is very important. One of the issues is that the population group that is the first to use these portals is also the group that already takes responsibility for their own health.
The challenge is to reach the non-communicative, passive group of patients and keep a check on their blood pressures, glucose levels, pacemaker records, etc. Also, portals are not always effective unless they can be accessed using a smartphone. This assumes, of course, that people have a phone, which one of the participants in the discussion solved by providing free phones for the homeless, so that texts can be sent for medication reminders and for checking up on them. Different approaches are also needed: as a point in fact, Australia had made massive investments in patient portals, but because patients were opted out by default, only 5 percent of them were using the portals.
One of the vendors showed a slick implementation whereby the images of a radiology procedure were sent to a personal health record in the cloud, and from there could easily be forwarded to any physician authorized by the patient. This is a major improvement and could impact the CD exchange nightmare we are currently experiencing. I personally take my laptop, with my images loaded on it, to my specialists, as I have had several issues in the past with specialists having no CD reader on their computers or lacking a decent DICOM viewer. There are still major opportunities for vendors to make a difference here.

Packed rooms for educational sessions
5.     FHIR (Fast Healthcare Interoperability Resources) is getting traction, albeit limited. If you want one good example of hype, it would be the new FHIR standard. It has been touted as the one and only solution for every piece of clinical information and has even made it into several of the federal ONC standard guidelines. Now back to reality. We are on the third release of the Draft Standard for Trial Use (DSTU3); typically there is only one draft before a standard, and it is still not completely done. Its number of options is concerning as well. And then, assuming you have an EMR that has just introduced a FHIR interface (maybe DSTU version 2 or 3) for one or more resources, are you going to upgrade it right away to make use of it? But to be honest, yes, it will very likely be used for some relatively limited applications; some examples are the physician resource used by the HIE here in Texas for finding information about referrals, or, as one of the SIIM presenters showed, a FHIR interface to get reports from an EMR to a PACS viewing station. But there are still many questions to be addressed before we get what David Clunie calls “universal access to mythical distributed FHIR resources.”
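
Here is a minimal sketch of that report-retrieval use case: a viewing station pulling a patient's radiology reports from an EMR's FHIR API over plain HTTPS. The endpoint and parameter values are assumptions, not a specific vendor's interface:

```python
import requests

# Hypothetical EMR FHIR endpoint and patient reference.
base = "https://emr.example.org/fhir"
resp = requests.get(
    f"{base}/DiagnosticReport",
    params={"patient": "Patient/123", "category": "RAD"},
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)

# Walk the searchset Bundle and show date plus conclusion for each report.
for entry in resp.json().get("entry", []):
    report = entry["resource"]
    print(report.get("effectiveDateTime"), report.get("conclusion"))
```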

6.     The boundary between documents and images remains blurry. When PACS was limited to radiology images, and document management systems were limited to scanned documents that were digitized, life was easy and there was a relatively clear division between images and documents. However, this boundary has become increasingly blurry. Users of PACS started to scan documents such as orders and patient release forms into the PACS, archiving them as encapsulated DICOM objects, either as bitmaps (aka “Secondary Captures”) or as encapsulated PDFs. Some modalities, such as in ophthalmology, started to create native PDFs; bone densitometry (“DEXA”) scanners were also showing thumbnail pictures of the radiographs with a graph of the measurements in a PDF format. Then we got the requirement to store native PNGs, TIFFs, JPEGs and even MPEG videos in the PACS as well. At the same time, some of the document management systems were starting to store JPEGs, as well as ECG waveforms that were scanned in. By the way, there has been a major push for waveform vendors to create DICOM output for their ECGs, which means they would now be managed by a cardiology PACS. And managing diagnostic reports is an issue by itself: some store them in the EMR, some in the RIS, some in the PACS and some in the document management system. That the boundary is not well defined is not so much of an issue; what matters is that each institution decides where the information resides and creates a universal document and image index and/or resource so that viewers can access the information in a seamless manner.
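
As a concrete example of the encapsulated-PDF route, here is a minimal pydicom sketch that wraps a (hypothetical) scanned consent form as a DICOM Encapsulated PDF object so it can be archived next to the images; a production version would carry many more attributes:

```python
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

# Hypothetical input file.
pdf_bytes = open("consent_form.pdf", "rb").read()
if len(pdf_bytes) % 2:
    pdf_bytes += b"\x00"  # DICOM requires even-length values

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.104.1"  # Encapsulated PDF Storage
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = meta
ds.SOPClassUID = meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.PatientName, ds.PatientID = "DOE^JANE", "MRN12345"  # invented demographics
ds.Modality = "DOC"
ds.MIMETypeOfEncapsulatedDocument = "application/pdf"
ds.EncapsulatedDocument = pdf_bytes
ds.is_little_endian, ds.is_implicit_VR = True, False
ds.save_as("consent_form.dcm", write_like_original=False)
```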

7.     The DICOMWeb momentum is growing. DICOMWeb is the DICOM equivalent of FHIR and includes what most people know as WADO, i.e., Web Access to DICOM Objects, but there is more to it, as it also allows for images to be uploaded (STOW) or queried (QIDO), and it even provides a worklist service allowing images to be labelled with the correct patient demographics before sending them off to their destination. There are three versions of DICOMWeb; each one builds on the previous one with regard to functionality and uses a more advanced technology, keeping it current with state-of-the-art web services. One should realize that the core of DICOM, i.e., its pixel encoding and data formats, is not changed, and we still deal with “DICOM headers,” but the protocol, i.e., the mechanism to address a source and destination as well as the commands to exchange information, has become much simpler. As a matter of fact, as the SIIM hackathon showed, it is relatively easy to write a simple application using the DICOMWeb resources. As with FHIR, DICOMWeb is still somewhat immature, and IHE is still trying to catch up. Note that the XDS-I profile is based on the second DICOMWeb iteration, which is based on SOAP (XML-encapsulated) messaging that has recently been retired by the DICOM standards committee. The profile dealing with the final version of WADO, called MHD-I, is still very new. There is a pretty good adoption rate though, and many PACS are implementing WADO, which, unlike FHIR, can be done by a simple proxy implementation on top of an existing traditional DICOM interface.
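
To show how lightweight DICOMWeb is compared to the traditional protocol, here is a minimal QIDO-RS query sketch; the endpoint is hypothetical, but the query style and the DICOM JSON response keyed by tag follow the standard:

```python
import requests

# Hypothetical DICOMweb endpoint: find a patient's CT studies via QIDO-RS.
base = "https://pacs.example.org/dicomweb"
resp = requests.get(
    f"{base}/studies",
    params={"PatientID": "MRN12345", "ModalitiesInStudy": "CT"},
    headers={"Accept": "application/dicom+json"},
    timeout=10,
)

for study in resp.json():
    # QIDO returns DICOM JSON keyed by tag; 0020000D is Study Instance UID.
    print(study["0020000D"]["Value"][0])
```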

The radworkflow space
8.     Ergonomics is critical for radiology. I can feel it in my arm when I am typing or using a mouse for an extended time. Imagine doing that day in and day out while staring at a screen in the half-dark; no wonder radiology practitioners have issues with their arms, necks, and eyes. Dr. Mukai, a practicing radiologist who started to rethink his workspace after having back surgery, is challenging the status quo with what he calls the radworkflow space, i.e., don’t think about a work space but rather a flow space (see link to his video). He built his own space addressing the following requirements:
a.     You need a curved area when looking at multiple monitors, with a table and chair that can rotate, making sure you always have a perpendicular view. Not only does this reduce the viewing-angle distortion of the monitors, it is also easy on your neck muscles.
b.    Everything should be voice activated and by the way, all audio in and out should be integrated such as your voice activation, dictation software and phone.
c.     Two steps are too many and two seconds for retrieval is too much. It is amazing to think that image retrievals in the 1990s, using a dedicated fiber to the big monitors of the first PACS used by the Army, were as fast as or possibly faster than what is state-of-the-art today. Moore’s law of faster, better, quicker and more computing power apparently does not apply to PACS.
d.    Multiple keyboards are a no-no, even when controlling three different applications on 6 imaging monitors (one set for the PACS, one set for the 3-D software, and one set for outside studies).
Hopefully, vendors are taking notes and will start implementing some of these recommendations, it is long overdue.

Camera mounted at the X-ray source
9.     Adding a picture to the exam to assist in patient identification. As we know, there are still way too many errors made in healthcare delivery that could potentially be prevented. Any tool that allows a practitioner to double-check patient identity in an easy manner is recommended. A company exhibiting at SIIM had a simple solution: a small camera, which can be mounted at the x-ray source, takes a picture of the patient and makes it part of the study as a DICOM Secondary Capture image. I noticed two potential issues that need to be addressed. First, does it work with an MRI, i.e., what is the impact of a strong magnetic field on its operation? Second, now that we can identify the patient better, how do we de-identify the study if needed? We would need to delete that image from the study prior to sharing it for the purpose of clinical trials or teaching files, or when sharing it through any public communication channel.
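
For the second issue, a de-identification pass could simply drop the Secondary Capture photo before the study leaves the institution. Here is a minimal pydicom sketch with hypothetical paths (a real pass would also scrub the demographic headers):

```python
from pathlib import Path
import pydicom

SC_IMAGE_STORAGE = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture SOP Class

# Hypothetical source and destination folders for one study.
src, dst = Path("/data/study"), Path("/data/study_deid")
dst.mkdir(exist_ok=True)

for path in src.glob("*.dcm"):
    ds = pydicom.dcmread(path)
    if ds.SOPClassUID == SC_IMAGE_STORAGE:
        continue  # drop the identifying photo; headers still need scrubbing
    ds.save_as(dst / path.name)
```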

Nice dashboard from Cincinnati Children's
10.  Dashboards assist in department awareness. I am all in favor of dashboards, both clinical and operational, as they typically allow one to see graphically what is going on. I liked the poster shown by Cincinnati Children’s: a display placed in a prominent space in the department that shows its operational performance, such as the number of unread procedures, turnaround time, a list of doctors who are on call, and also a news and weather link. They pulled this data from their PACS/RIS system using some simple database queries. This is a good example of how to provide feedback to the staff.
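
The queries behind such a dashboard can indeed be simple. Here is a minimal sketch against an invented reporting table; the schema and column names are assumptions, not Cincinnati Children's actual database:

```python
import sqlite3

# Hypothetical reporting database with a "studies" table holding
# study_date, modality, completed_at and read_at columns.
con = sqlite3.connect("ris_reports.db")

unread = con.execute(
    "SELECT COUNT(*) FROM studies WHERE read_at IS NULL"
).fetchone()[0]

avg_tat_hours = con.execute(
    "SELECT AVG((julianday(read_at) - julianday(completed_at)) * 24) "
    "FROM studies WHERE read_at IS NOT NULL"
).fetchone()[0]

print(f"Unread studies: {unread}, average turnaround: {avg_tat_hours:.1f} h")
```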


As mentioned earlier, I thought that SIIM 2017 was a pretty good meeting, not only for networking with fellow professionals, but also for learning what’s new, seeing a couple of innovative small start-up companies, especially in the AI domain, and, last but not least, enjoying a bit of Pittsburgh, which pleasantly surprised me. Next year will be in the DC area again, actually National Harbor, MD, which despite its close location to Washington will not be a match for this year’s venue, but regardless, I’ll be looking forward to it.

Wednesday, June 14, 2017

Top 10 lessons learned when installing digital imaging in developing countries.

Patient at Zinga Children's hospital, close to Dar-es-Salaam, Tanzania, recipient of a Rotary International grant for imaging equipment
Installing a digital medical imaging department in a developing country is challenging, which is probably an understatement. The unique environment and the lack of resources, money and training pose barriers to creating a sustainable system.

As anyone who has worked in these countries will attest, sustainability is key, as witnessed by the numerous empty, sometimes half-finished buildings and the non-working equipment, idle due to a lack of consumables or spare parts, or simply for want of the correct power, A/C or infrastructure environment.

I learned quite a bit when deploying these systems as a volunteer, especially through gracious grants by Rotary International and other non-profits, which allowed me to travel and support these systems in the field. Some of these lessons learned seem obvious, but I had to re-learn that what is obvious in the developed world is not necessarily the case in the emerging and developing countries of the world.

So, here are my top 10 lessons learned in the process:

1.       You need a “super user” at the deployment site with a minimum set of technical skills. Let’s take, as an example, a typical digital system for a small hospital or large clinic, which has one or two ultrasounds, a digital dental system and a digital X-ray, using either Direct or Computed Radiography (DR or CR). These modalities require a network to connect them to a server, a diagnostic monitor and a physician viewer. Imagine that the images don’t show up at the view station: someone needs to be able to check the network connection and run some simple diagnostics to make sure that the application software is running. In addition to being able to do basic troubleshooting on-site, that person needs to function as the single point of contact for a vendor trying to support the system and be the local ears and eyes for support.
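
One of the simplest checks such a super user can run when images stop arriving is a DICOM C-ECHO against the archive. Here is a minimal sketch using the pynetdicom library, with hypothetical addresses and AE titles:

```python
from pynetdicom import AE

# Hypothetical application entity titles and archive address.
ae = AE(ae_title="SUPPORT")
ae.add_requested_context("1.2.840.10008.1.1")  # Verification SOP Class

assoc = ae.associate("192.168.1.10", 104, ae_title="ARCHIVE")
if assoc.is_established:
    status = assoc.send_c_echo()
    print("C-ECHO status:", f"0x{status.Status:04X}" if status else "no response")
    assoc.release()
else:
    print("Association failed - check cabling, IP, port, and AE titles")
```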

2.       Talking about a “single point of contact,” I learned that it is essential to have a project manager on-site: one person who arranges for equipment to be there, knows what the configuration looks like, checks that the infrastructure is ready, does the follow-up, etc. It is unusual for the local dealer to do all of this. There also might be construction needed to make a room suitable for taking X-rays (shielding etc.), A/C to be installed to prevent the computers from overheating, network cables to be pulled, etc.; there has to be a main coordinator to do this.

3.       You also need a clinical coordinator on-site. This person takes responsibility for X-ray radiation safety (which is a big concern) and also does the QA checks, looking for dose creep (over-exposing patients) and doing reject analysis (what is the repeat rate for exams and why are they repeated). With regard to radiation safety, I have yet to see a radiation badge in a developing country, while wearing one is common practice in the developed world for any healthcare practitioner who could be exposed to X-ray radiation. As a matter of fact, I used to carry one with me all the time when I was on the vendor side and visiting radiology departments on a regular basis; I would get calls from the radiation safety officer in my company when I forgot that I had left the badge in my luggage going through the airport security X-ray scanners. There is little radiation safety infrastructure available in developing countries, and the use of protective gloves, lead aprons and other protective devices is not always strictly enforced; this is definitely an area where improvements can be made.

4.       Reporting back to the donors is critical. There are basically three kinds of reports, which are preferably shared on a monthly basis; as a matter of fact, this is a requirement for most projects funded by Rotary International grants: 1) operational reports, which include information such as the number of exams performed by modality (x-ray, dental, ultrasound), age, gender, presenting diagnosis, exam type, etc.; 2) clinical reports, which include quality measures such as exposure index, kV, mAs, etc.; and 3) outcomes reports, which include demographics, trends, diagnoses, etc.
The operational reporting will indicate potential operational issues; for example, if the number of exams shows a sudden drop, there could be an equipment reliability issue. The clinical reporting will show whether the clinic meets good practices. The outcomes reporting is not only the hardest to quantify but also the most important, as it will prove to potential donors, investors and the local government the societal and population health impact of the technology. This information is critical to justify future grant awards.
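
The operational report does not require fancy tooling; a simple script over an exam log will do. Here is a minimal sketch assuming a hypothetical CSV log with date and modality columns:

```python
import csv
from collections import Counter

# Count exams per modality per month from a hypothetical exam log
# (CSV with at least "date" and "modality" columns).
counts = Counter()
with open("exam_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        month = row["date"][:7]  # e.g. "2017-06" from "2017-06-14"
        counts[(month, row["modality"])] += 1

for (month, modality), n in sorted(counts.items()):
    print(f"{month} {modality}: {n} exams")
```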

5.       Power backup and stabilizers are essential. Power outages are a way of life; every day there can be a four-hour or longer power outage. Therefore, having backup batteries and/or generators, in addition to a local UPS for each computer for short-term outages, is a requirement. One thing we overlooked is that even when there is power from the grid, the variation can be quite large; for example, a nominal 220V can fluctuate between 100 and 500 volts. Needless to say, most electronic equipment will not withstand such high spikes, so we had to go back in and install a stabilizer at one site after we had a burnout; it is now part of the standard package for new installs.

6.       Staging and standardization are a must. When I tried to install dental software on a PC on-site in Tanzania, it required me to enter a password. After getting back to a spot where I could email the supplier, I found that the magic word “Administrator” allowed me to start up the software, however not without the loss of a day’s work, as the time difference between the US and East Africa is 9 hours. After that, it took me only 5 minutes to discover the next obstacle, “device not recognized,” which did not allow the dental bite-wings to be used for capturing the X-rays. This caused another day’s delay, as it took another night to get an answer to that question. This shows that installing software on-site in the middle of nowhere is not very efficient unless you have at least two weeks’ time, which is often a luxury. And this was just a simple application; imagine a more complex medical imaging (PACS) system requiring quite a bit of configuration and set-up, it would take weeks.

There are a few requirements to prevent these issues:

1) Virtualize as much as you can, i.e., use a pre-built software VM (virtual machine) that can be “dropped in” on-site. The other advantage of a virtual machine is that it is easy to restore to its original condition, or to any saved in-between condition. It is interesting that the “virtualization trend,” which is common in the western IT world in order to save on computers, servers, and most importantly power and cooling capacity, is advantageous in these countries as well, but more for ease of installation and maintenance.

2) Stage as much as you can, but do it locally. If you preload the software on a computer in the US and ship it to, let’s say, Kenya, first you will be charged an import duty that can easily be 40%, and you also might be sending the latest and greatest server hardware that nobody knows how to support locally. Therefore, the solution is to source your hardware locally, providing local support and spare parts, stage it at a central local location that has internet access to monitor the software installation, and then ship it to the remote site.

3) Use standard “images,” which goes back to the “cookie-cutter” approach, i.e., have a single standardized software solution for maybe three different sizes of facilities (small, mid-size and large) so that the variation is minimal.

7.       Use a dedicated network. This goes back to the early days of medical imaging in the western world. I remember that when we would connect a CT to the hospital network to send the images to the PACS archive, it would kill the network because of its high bandwidth demands. It is quite a different story right now; hospital IT departments have been catching up and have been configuring routers into VLANs with fiber and/or gigabit-speed connections to facilitate the imaging modalities. But we are back to square one in the developing world: networks, if available, are unreliable and might be open to the internet and/or to computers that are allowed to use flash drives (the number one virus source), so connecting these new devices to them would be asking for trouble. Therefore, when planning a medical imaging system, plan to pull your own cables and use dedicated routers and switches. If you use high-quality programmable and managed devices, they could become the core of the future hospital network, expanding beyond the imaging department.

8.       Have an internet connection. The bad news is that there is typically no reliable or affordable internet connection; the good news is that the phone system leapfrogged the cable infrastructure, so you should plan for a 3G-compatible hotspot that a support expert can use to connect and take a look at the system in case there are any issues.

9.       Training is critical. Imagine buying a car for your 16-year-old daughter, just giving her the keys and telling her that she’ll be on her own. No one would do that, but now imagine deploying a relatively complicated system in the middle of nowhere, one which will allow people to make life-and-death decisions, without any proper training. I am not talking about clinical training on how to take an X-ray or do an ultrasound, but training on how to support the systems that are taking the images, communicating, archiving and displaying them. You need a person who takes the weekly backups to make sure that if there is a disk crash the information can be recovered, who will do the database queries to get the report statistics, who does the troubleshooting in case an image has been lost or misidentified, who is the main contact for the support people at the vendor, and so on. On-the-job training will not be sufficient. The good news is that it is relatively easy to create training videos and upload them to YouTube (or better, send them on a CD, as internet access might not always be available).

10.   Do not compromise on clinical requirements. I have seen darkroom processors being replaced with a CR and a commercial (i.e., non-medical) grade monitor to look at the images in a bright environment. This is very poor medical practice. No, you don’t need two medical-grade 3-megapixel monitors at the cost of several thousands of dollars; clinical trials have shown that a 2-megapixel monitor has the same clinical efficacy as a 3 MP one, it just requires the user to use the zoom and pan tools a little bit more, which is acceptable in these countries. The key is to use a medical-grade monitor that is calibrated such that each individual grayscale value can be distinguished from the next. If this is not the case, there is no question that valuable clinical information will be lost. Also, the so-called luminance ratio (the ratio between the darkest and brightest values) does not have to be as high as long as the viewing environment is dark enough. So, as a rule of thumb, use an affordable medical-grade monitor and put it in a dark room (paint the windows and walls, hang curtains); don’t skimp on these monitors.


In conclusion, none of these lessons learned are new; we learned most of them 20 years ago. The problem is that they might be forgotten or taken for granted, which is at least what I did when venturing out to these developing countries. The good news is that we can apply most of what we have learned, be successful in providing imaging to the remaining two-thirds of the world that does not yet have access to basic imaging capabilities, and thereby still make a major difference.

Monday, May 1, 2017

Digital Pathology: the next frontier for digital imaging; top ten things you should know about.

Typical pathology workstation (see note)
As the first digital pathology system has finally passed FDA muster and is ready to be sold and used in the USA, it is time for healthcare institutions to prepare for this new application. Before jumping head first into this new technology, it is prudent to familiarize yourself with its challenges and learn from others, notably in Europe, who have been doing this for 5+ years. Here is a list of the top ten things you should be aware of.

1.       The business case for digital pathology is not obvious. Unlike the experience in radiology, where film was replaced by digital detectors and we could argue that the elimination of film, processors, file rooms and personnel would at least pay for some of the investment in digital radiology, digital pathology does not hold the promise of the same amount of savings. Lab technicians will still need to prepare the slides, and, as a matter of fact, additional equipment is needed to digitize the slides so they can be viewed electronically.
The good news is that pathology contributes very little to the overall cost of healthcare (0.2%), and therefore, even though the investment in scanners, viewers and archive storage is significant, its impact on the bottom line is small. Of course, there are lots of “soft” savings, such as never losing slides, being able to conference and get second opinions without having to send slides around, much less preparation time for tumor boards, much faster turnaround through telepathology, and the potential for Computer Aided Diagnosis. So, going digital makes all the sense in the world, but it might just be a little bit hard to convince your CFO.

2.       Most institutions are “kind of” ready to take the jump from an architecture perspective. Many hospitals are strategizing how to capture all of their pathology imaging, in addition to radiology and cardiology, in a central enterprise archiving system (aka Vendor Neutral Archive), and they might already have made small steps towards that by incorporating some of the other “ologies.” However, pathology is definitely going to be challenging, as the file sizes are huge. A sub-sampled, compressed digital slide can easily top 1.5 GB, so you should be ready to multiply your digital storage requirements by a factor of 10. As a case in point, UMC Utrecht, which has been doing this for 7 years, is approaching 1 petabyte of storage. So, even if you have an enterprise image management, archive and exchange platform in place, it will definitely need an adjustment.
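
A quick back-of-the-envelope calculation shows why: with assumed (not measured) volumes for a mid-size lab, whole-slide images reach the petabyte range within a typical retention period:

```python
# Back-of-the-envelope storage sizing; all volumes below are assumptions,
# adjust them to your own lab's numbers.
slides_per_year = 200_000   # hypothetical mid-size pathology lab
gb_per_slide = 1.5          # compressed, sub-sampled whole-slide image
years_retained = 7          # hypothetical retention period

total_tb = slides_per_year * gb_per_slide * years_retained / 1024
print(f"~{total_tb:,.0f} TB (~{total_tb / 1024:.1f} PB) of storage needed")
```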

3.       Pathology viewers are different from those of other “ologies.” Pathologists look at specimens in three dimensions, unlike radiologists who, in many cases, look at a 2-D plane (e.g., when looking at a chest radiograph). One could argue that looking at a set of CT or MRI slices is “kind of 3-D,” but it is still different from simulating a look at a slide under a microscope. The pathologist requires a 3-D mouse, which is readily available, to navigate the images. The requirements for the monitors are different from other imaging specialties as well: a large, good-quality color monitor will suffice for displaying the images, which is actually much less expensive (by a factor of 10) than the medical-grade monitors needed for radiology.

4.       Standard image formats are still in their infancy. This is something to be very aware of; most pathology management systems are closed systems, with the archive, viewer and workflow manager from the same vendor, and with little incentive to use the existing DICOM pathology standard for encoding the medical images. Dealing with proprietary formats not only locks you in to the same vendor, possibly making migration of the data to another vendor costly and lengthy, but also jeopardizes the whole idea of a single enterprise imaging archiving, management and exchange platform. Hopefully, user pressure will change this so that the vendors will begin to embrace the well-defined standards that the DICOM and IHE communities have been working on for several years.

5.       Digital pathology will accelerate access to specialists. I remember, from several years back, visiting a remote area in Alaska when it switched to digital radiology and all the images were sent to Anchorage to be diagnosed. Prior to that, a radiologist would fly in two days a week, weather permitting, to read the images; if you needed a diagnosis over the weekend, you were out of luck. The same scenario applies to having a pathologist at those locations: as of now, the samples are sent, weather permitting, to a central location to be read. In some locations there is a surplus of pathologists; in others there is a shortage or even a total lack of these medical professionals. Digital pathology will level the playing field from a patient access perspective. Without having to physically ship the slides and/or specimens, it will significantly decrease the report turnaround time and impact patient care positively.

Typical Slide scanner (see note)
6.       Digital pathology is the next frontier. Here is some more good news: vendors are spending hundreds of millions of dollars on developing this new technology. Digital scanners that can load stacks of slides and scan them while matching them with the correct patient using barcodes are available. Workflow management software has improved. Last but not least, automatic detection and counting of certain structures in the images, instead of doing this manually, is a big improvement towards characterizing patterns, and therefore diagnoses can be made more accurately.

7.       Don’t expect to become 100% digital. Some applications still require a microscope. The experience at UMC Utrecht in the Netherlands is that you may achieve a 95% conversion to digital, but there are still some outliers that require a microscope because of the nature of the specimen. However, this is very manageable and only a relatively small subset.

8.       Digital pathology has ergonomic advantages. Imagine having to bend over a microscope most of the day; you can imagine that doing that day in and day out for many years can strain your neck and back. Sitting in a comfortable chair, or having a stand-up desk, is definitely better, although one still needs to be careful to pick the right mouse to avoid carpal tunnel syndrome.

There is a lot of opportunity for automated counting and detection (see note)
9.       Viewing capabilities are an order of magnitude better. This is obvious for professionals who read medical images as a radiologist or cardiologist, but for pathologists, who were bound to a single view through a microscope and now can have multiple images next to each other and annotate them electronically, it is a completely new world.

10.   Research and education get a major boost. Imagine the difference when teaching a group of pathology students: instead of everyone supposedly looking at similar tissue through their own microscopes, they can all access the same image on their computer monitors. One can build a database of teaching files and easily share them electronically. All of this seems obvious to anyone who is involved with medical imaging in other specialties, but for pathology this is a major step.

In conclusion, digital pathology is finally here in the USA. However, there are some hurdles, starting with convincing the people who hold the purse strings that it is a good investment, then adjusting the architecture and workflows to accommodate the huge image sizes, and making sure that these systems support open standards so you are not locked into a specific vendor. There are definitely major advantages, and the benefits may soon become so evident that it will only be a matter of time before everyone jumps on the digital pathology bandwagon. It is strongly recommended that you learn from others, notably in Europe, who have been implementing this technology for several years.

Note: Illustrations courtesy of Prof. Paul van Diest, UMC Utrecht.