
Thursday, April 3, 2014

When eliminating film is not necessarily the best solution.

During my recent humanitarian trip to Nicaragua, sponsored by Rotary International, we built a library and feeding center. While there I also visited the local children’s hospital in the city of Chinandega. It was good to see the impact on patient care we had made on a previous trip by bringing in equipment, such as oximeters and other life-saving devices, to help care for these very tiny premature babies.
I typically also visit the radiology department to find out if I can learn anything. They had just received a nicely refurbished CR system with an X-ray table, and with the room newly painted and air conditioning installed, it looked really nice.

Feeding 900 children, one at a time
However, I noticed on the table next to the CR computer an appointment book listing all of the patients that were imaged. It seems it is difficult to get people to let go of their old practices. Even though there was a mini-PACS and patient management module available, for them, there is nothing better than paper. The exams were identified as “exam1,” “exam2” etc. and the appointment book served as an index.

If patients are deemed to be in critical condition, they travel by ambulance to the Nicaraguan capital, Managua. In the past, they would send the film with the patients. Now, even though a film printer was provided with the system, it sits unused: its power conditioner was apparently needed somewhere else, so the printer is no longer operational. The primary image exchange from this system was intended to be paper from a small desktop printer; however, after the print cartridge ran out, this too fell into disuse. While burning the images onto a CD is an option, there is apparently no budget for supplies such as print cartridges and CDs.

So, the bottom line is that a year ago, before the CR system was installed, critical patients would travel with their analogue film to the main hospital. Today, the analogue processor is gone, and there is no means of getting the digital images out of the PC to go with the patient. I resisted the urge to run to the local supply store and buy a stack of CDs and/or a printer cartridge; that would only have helped for as long as the supplies lasted. The hospital has to learn how to implement digital technology in a sustainable manner.

This is only one example of the trial and error that takes place in aiding developing countries. That is why it is critical to travel and find out the local needs instead of simply throwing technology at these regions. This applies not only to medical devices but to other areas as well. This year I visited a library that was built by another non-governmental organization next to a school we had started several years ago. The library was very well equipped, with books, a computer, a big-screen TV with a DVD player, and even a copy machine. Too bad there was no electricity to power any of these new gadgets, nor has the education ministry provided anyone to staff the library.


I estimate that about 50 percent of all relief money is spent without doing the necessary homework and follow-up, and is thereby wasted. This is by no means meant to discourage participating in or contributing to these causes; just make sure you do your homework and pick the right projects and organizations. We have been very fortunate to be able to sponsor the construction of several small clinics and classrooms in this area with a high success rate for sustainability. It takes work, visits and local follow-up to make it a success.

Monday, December 31, 2012

2013, looking back and ahead.

1 million people at Times Square

Well, what a year it was. We kept our healthcare reform on track, EMR implementations are happening at the speed of light, PACS is maturing, and the RSNA annual meeting was so-so with regard to new innovations and attendance.

I am personally involved with an EMR implementation in my hometown at the clinic where I serve on the board, and therefore am experiencing the challenges first-hand. Compared with PACS, an EMR is much more diverse and the integration problems are on a much larger scale, but each individual interface is less complex. I tell everyone who is new to this field: if you can learn DICOM, HL7 is a breeze.

It will be an exciting year ahead. OTech has introduced several new training classes, including advanced hands-on troubleshooting and IHE, as well as EMR administration certification. The EMR administration training is based on the recently announced new certifications by PARCA. We have already run a couple of pilots of these classes, and they were very well received; this will be a great asset to professionals working in the PACS field as well as to new entrants.

Newtown, CT
On a personal note, I have been very much touched by the Connecticut tragedy. We lived in that region, literally 20 minutes from the Newtown site, for 5 years when our girls were in elementary school. We consider that time one of the best eras of our life. I remember the cold Halloween walks, where we would need stops with hot chocolate to warm up, the many camping trips with the girls, and the garage filled with Girl Scout cookies, as Johanna, my spouse, was the ultimate Girl Scout mom. We made many friends, and our kids loved their teachers and classmates. Now, I don't own any guns; I had my share of using them during my army time. These days I concentrate on collecting stamps and woodworking as hobbies, which are potentially less harmful. But I understand that some people love guns and like to collect and use them. I just hope that there is enough momentum for people to consider restricting assault weapons to those who should be using them professionally, and not allowing them to be sold at the department store or sporting goods store around the corner.

Anyway, I hope and expect that we will learn from the past, and I am looking forward to yet another great year. Happy New Year!

Sunday, January 1, 2012

2012, It’s All About Image Enabling

As we enter the new year, one might wonder what this upcoming year will bring in the healthcare imaging and IT field. After the foundation of solid and reliable standards such as DICOM and HL7 was laid in the 1990s, the first decade of the 21st century was dominated by digital imaging and PACS installations. There are still a couple of departments in a hospital holding on to analogue technologies or non-DICOM encapsulation of their data, such as pathology and endoscopy, which generates MPEG clips, but they will gradually convert over the next few years. Most US-based healthcare institutions are already on their second or even third generation of PACS installations. Migration has become a major issue, as changing vendors brings unexpected costs and the risk of losing essential information such as image annotations, key images, and other presentation state information. This is one of the major reasons that people are considering Vendor Neutral Archives (VNAs) or archiving images in the "cloud". However, these VNA and cloud solutions have created quite some confusion, as there is no agreed definition of what a true VNA storage solution would encompass. 

One of the major components in my definition of a true VNA is the capability to provide image access from a stand-alone viewer or through an Electronic Medical Record in a non-proprietary manner. Each vendor has its own plug-in and/or web access that allows access to its own images, but the challenge is to provide this capability across different vendors and different systems. IHE has defined a way to do this using the so-called XDS-I profile, which is based on a DICOM-ized version of a web protocol called WADO, or Web Access to DICOM Objects. However, the problem with defining standards is that it always includes a certain amount of crystal ball gazing: one does not quite know whether a standard will take off and be widely implemented, as more pragmatic and readily available solutions might become the de-facto standard instead. Time will tell whether WADO becomes the widely accepted standard, but it is clear that image enabling has to happen, through whatever means. This is where we as healthcare imaging professionals will have to spend our time and energy next. Most of the problems of distributing and managing images within an enterprise have been successfully solved; it is now all about ubiquitous availability of imaging and corresponding access. This is also the direction in which I feel our profession will grow: supporting the infrastructure, products and services to facilitate this. The only thing missing is to make this a "meaningful use" requirement so that it gets more attention from the people distributing resources and funds. Time will tell. 
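For the curious, a WADO-URI retrieval is just an HTTP GET with a handful of query parameters; the parameter names (requestType, studyUID, seriesUID, objectUID, contentType) come from the WADO specification. The sketch below builds such a URL in Python; the server address and UIDs are invented for illustration.

```python
from urllib.parse import urlencode

def wado_url(base, study_uid, series_uid, object_uid,
             content_type="application/dicom"):
    """Build a WADO-URI request for a single DICOM object.

    The query parameter names follow the WADO-URI specification;
    the base URL and UIDs passed in below are made up.
    """
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,
    }
    return base + "?" + urlencode(params)

# Hypothetical server and UIDs, for illustration only:
url = wado_url("https://pacs.example.org/wado",
               "1.2.840.1.1", "1.2.840.1.2", "1.2.840.1.3")
```

Any viewer or EMR plug-in that can issue an HTTP GET can then fetch the object, which is exactly the vendor-neutral property one would want from a VNA front end.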

Tuesday, November 1, 2011

What About the End-user?

I recently came across a couple of examples showing how healthcare providers still do a poor job of providing products, services and even physical facilities that truly meet the needs of end-users. As a case in point, when I asked my chiropractor why he still requests film for the MRIs he has done for his patients instead of getting the images on a CD, he told me that the viewing software embedded on those CDs is so hard to use that it sometimes takes him up to 30 minutes to figure out how to line up two series on his monitor. Taking a set of films out of an envelope and placing them on a viewing box is much faster. The second example came when I talked with a local primary care physician who is considering scaling down her practice because the meaningful-use requirements oblige her to implement an electronic health record (EHR) to continue to get full reimbursements from Medicare and Medicaid. The fact that she can get between $40k and $50k in grants to implement an EHR does not counterbalance the additional work needed to enter all of the information into the system. She told me that she has looked at several EHR products and none of them allows her to enter the information effectively and efficiently. The last example came when I talked with a nurse who is about to move to a brand-new wing in her hospital, which was built without any input from the nurses who will be working there. This brand-new facility, which was most likely designed to meet all building standards, captures none of the workflow improvements that could have been made. 

The US government is trying to improve the efficiency and quality of healthcare, but the industry is lacking products, services and facilities that focus on what the user really needs. This appears to be less the case with the acquisition of devices such as new ultrasounds or CTs, but it is common with software, larger systems and infrastructure. I am not sure what the solution is, other than encouraging end-users to keep pressing the industry to focus on their needs. 

Saturday, October 1, 2011

Eliot Siegel Q&A

This is a transcript of the Q and A session of the September vDHIMS ePosium presentation by Eliot Siegel about advanced visualization workstations. If you are interested, you can listen to the full one-hour presentation; simply register at https://otechimg.com/vdhims/?action=register and enjoy. 

Q: There is a difference in the quantitative output of advanced visualization workstations among the different vendors; do you see potential standardization and/or certification by an organization such as ECRI?

A: That is a great question, and the issue is of tremendous concern to several people, including myself, with regard to quality control. We tend to focus on the esthetics of how the images look, but when we make quantitative measurements, either manually or using the software, the measurements vary considerably. 

I propose to do a couple of things, and we have been talking with some vendors about them. The first thing would be to have standard scans of phantom data, for example creating a phantom for lung nodules or carotid stenosis. Another option is to work with NIST, which has created standard objects that have been measured very precisely that we can scan. Yet another option is to create a mathematical model, so we would not have to use the scanner to create a data set, and there are interesting data sets that are well known and which can be submitted to the vendors. 

The problem is that it is hard to reproduce the human anatomy with phantoms; therefore one might use a de-identified data set, with patient approval, share it, and use it to create a semi-standard. It would be great if one could go to RSNA or another meeting, walk up to a vendor, and look at a standard data set for carotid or cardiac imaging. So I think it is a great idea, and as a customer and a person who is interested in quality improvement, I would very much like to pursue it. 

Q: Do you keep the thin axial CT slices and what would you recommend for a typical hospital? 

A: I work in multiple clinical settings. At the University of Maryland, we keep them for only three to six months unless the study is designated as a research study or needs to be kept for other purposes. At the VA, we keep all of our thin-slice data indefinitely. My recommendation would be for everyone to keep the thin slices indefinitely. However, I think that if you look across the country, only a minority of institutions keep the thin slices. 

When we talk with the legal folks about what data to retain, the answer they give us is that you should retain the data that you used for making your original clinical diagnosis. Other people and I do image interpretation from the thin slices, so the logical conclusion would seem to be that if we use that data for making the day-to-day diagnosis, we ought to keep it, because my decision was partly predicated upon what I saw in an oblique or reconstructed image that was synthesized from the original data. I don't really have a record of what I saw unless I am able to save the thin slices. Therefore my philosophy is to save them, especially with the cost of storage declining. One compromise for institutions that have cost issues would be to compress the thin sections in a number of different ways: you could store the thick sections uncompressed and then use, for example, JPEG compression for the thin sections. My expectation is that in the near future everybody will start saving the thin sections. 
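To see why that compromise is attractive, a quick back-of-envelope calculation helps. The sketch below assumes typical 512x512, 16-bit CT slices; the slice counts and the 10:1 lossy JPEG ratio are purely illustrative assumptions, not recommendations from the presentation.

```python
def study_size_mb(n_slices, rows=512, cols=512, bytes_per_pixel=2,
                  compression_ratio=1.0):
    """Approximate pixel-data size of a CT study in megabytes.

    512x512, 16-bit slices are typical for CT; everything else
    passed in below is an invented example value.
    """
    raw = n_slices * rows * cols * bytes_per_pixel
    return raw / compression_ratio / (1024 * 1024)

thick = study_size_mb(60)                        # e.g. 5 mm recons, uncompressed
thin = study_size_mb(600, compression_ratio=10)  # e.g. 0.5 mm slices, ~10:1 JPEG
```

With these made-up numbers, both come out to 30 MB: compressing the tenfold-larger thin-slice set at 10:1 brings its footprint down to roughly that of the uncompressed thick set, which is the essence of the compromise.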

Monday, August 1, 2011

Trouble With Transitions Anyone?

I am always looking for new intellectual and physical challenges, which is why I entered my first-ever mini-triathlon last year. After having done two, I am about to enter another race this weekend in my hometown to see whether I can improve my ranking this time. I find that the hardest part is not any one of the three legs, i.e. swimming, biking or running, but rather the transitions. Biking at my maximum performance for about an hour seems to program my body in such a way that changing to running becomes almost impossible, at least for the first mile or so. It is difficult to put one foot in front of the other. 

Intellectual transitioning is also hard to do, but often a requirement. Professions where these transitions may involve life-threatening or emergency situations typically require a lot of training; examples are pilots who suddenly need to react when an engine fails or another serious condition occurs. 

Healthcare IT or PACS system administrators face similar requirements to be ready for stressful transitions. You might be in the middle of upgrading a device, when you get a call to update the demographics of a procedure because a technologist entered the information incorrectly. 

Some of the tasks you are performing require a lot of concentration because of the potential impact if you make an error. Imagine the impact if you made an error while making a backup, and that backup was needed because the original information was lost due to a major disk malfunction. If you make a mistake updating a study, it could result in the information or image being assigned to the wrong patient. 

Unfortunately, humans make mistakes, especially in a multi-tasking environment where they have to transition often from one domain and/or activity to another. I would argue that most system failures can be attributed to human error rather than hardware failure. One local hospital told me that their last significant PACS downtime occurred when a service engineer from the vendor remotely restored an incorrect database backup, which corrupted the original and caused a four-hour PACS downtime. Needless to say, it pays to monitor anything that happens with the system, even if it is done by your vendor. 

In conclusion, be aware of "transitions" and focus on the activity at hand, especially if you are dealing with issues that will impact the lives of others in a potentially significant manner. 

Friday, July 1, 2011

VHR Lessons Learned for PHR/EHR Implementations

It seems that every time we vacation with our dogs, we end up at a veterinarian, as they contract some disease or injury. We therefore get to know different veterinarians, in this case somewhere in Colorado, and this one happened to use a Veterinary Health Record (VHR). This recent visit taught me some lessons that we might apply as we begin to roll out Personal and Electronic Health Records (PHR/EHR). I was initially impressed with the nice data entry screens, with graphics to identify the information needed; however, I found that the process was not as easy and smooth as expected.

In general, when registering a patient, there is an issue with unique identification: is this person the same patient for whom a record already exists in the system? And if the system is connected to another patient domain, what patient identifier should be used for the query? The veterinary world has it relatively easy, as our pets increasingly get RFID chips, which are about the size of a grain of rice and implanted under the skin. The purpose of this chip was initially to identify lost pets, but it is also a great tool for medical record identification. Farmers and ranchers have used RFID tags on animals' ears for years to identify individual animals among large herds. The DICOM standard extensions for veterinary applications have actually added a special data attribute to include this information with images. 

Unfortunately, there is no US national registry; each manufacturer, distributor, or provider keeps the information. That is why our pets are not "chipped," as we tend to use different providers as we travel with them. 

I don't think it is realistic to expect that human patients would be willing to be "chipped"; however, even if in theory this could happen, it would still require a national registry to prevent duplicates and ensure that each person is uniquely identified. In addition, there are security and privacy concerns that prevent a universal patient ID from being issued and/or used in the US (unlike in many other countries); therefore we need to implement the rather sophisticated patient registries defined by IHE (Integrating the Healthcare Enterprise), which allow local ID registration that can be reconciled across multiple ID registries. One would suspect that there are no such security and privacy concerns with pets, so perhaps one day we will see a unified pet registry in place. 
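To illustrate what such reconciliation involves, here is a toy sketch along the lines of the IHE PIX (Patient Identifier Cross-referencing) idea: each site keeps its own local ID, and a cross-reference manager links the IDs that belong to the same patient. The class, the domain names and the identifiers are all invented for illustration; a real PIX manager speaks HL7, not Python dicts.

```python
class PatientIdCrossReference:
    """Toy model of a PIX-style cross-reference manager.

    Each care site (an "assigning authority") issues its own local
    Patient ID; the manager links (domain, local-id) pairs that
    refer to the same person.
    """
    def __init__(self):
        self._links = {}   # (domain, local_id) -> internal person key
        self._next_key = 0

    def link(self, *id_pairs):
        """Declare that these (domain, local_id) pairs are one patient."""
        key = self._next_key
        self._next_key += 1
        for pair in id_pairs:
            self._links[pair] = key

    def resolve(self, source_pair, target_domain):
        """Return the patient's local ID in another domain, if linked."""
        key = self._links.get(source_pair)
        if key is None:
            return None
        for (domain, local_id), k in self._links.items():
            if k == key and domain == target_domain:
                return local_id
        return None

# Invented identifiers: the same person, known differently at two sites.
mpi = PatientIdCrossReference()
mpi.link(("CLINIC_A", "12345"), ("HOSPITAL_B", "MRN-777"))
```

A query such as `mpi.resolve(("CLINIC_A", "12345"), "HOSPITAL_B")` then returns the hospital's ID for that patient, which is exactly the lookup an image-sharing gateway needs before it can file an outside study correctly.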

Another lesson learned has to do with the data entry for our pet in the electronic record. My guess is that the time it took to enter the information about the primary complaints and observations was more than was actually spent with the "patient." Even though the technician was a very efficient typist, she had to use many different screens and do a lot of free-text data entry. When I see demos of EHR systems by vendor representatives at tradeshows such as HIMSS, it appears to go very fast and efficiently; in practice, it is a different story. 

As I watched the data entry for our dog, it occurred to me that it would be really nice to have speech recognition technology or at least templates, macros or other time-saving methods. As a matter of fact, I estimate that this visit took twice as long as with our home veterinarian, who merely scribbles her notes in the patient jacket. Of course, that paper information is not available to other vets, but there is definitely a time trade-off. 

With regard to entering the diagnosis, another issue emerged. While there were no doubt hundreds, if not more, potential diagnoses preprogrammed into the system, the diagnosis for our dog was apparently not foreseen by the system developers. Not that it seemed to me to be an obscure disease; it just did not fit into any of the many available categories. After trying many different searches, the vet gave up; there is apparently no "free text" entry in this particular system. She commented that the system was definitely developed by engineers who had not taken into account the true requirements of healthcare providers. 

I understand the developers' predicament: if we want to improve our healthcare system for humans, we need to be able to categorize diagnoses, and thereby measure and potentially influence the efficiency of healthcare delivery. However, it might not always be possible to give a black-and-white definition of a diagnosis that fits an existing code system or text. The danger, of course, is that by allowing free data entry, physicians may misuse it and skip the standard diagnosis codes even for easy cases. However, I would argue that if entering the preprogrammed codes is easier than entering additional text, physicians will not misuse the system. 

In conclusion, this experience taught me several lessons with regard to patient identification, ease of use for data entry, and use of preprogrammed templates that I hope that some of the developers of EHR systems will take as valuable input. 

Wednesday, June 1, 2011

Dose Issues Not Only For CT

There is a lot of activity around radiation dose reduction, especially for CT exams. This is partly due to incidents, widely covered in the press, in which people were overdosed due to operator errors and negligence. Another important factor is the increase in CT exams, especially in the ER. It used to be that trauma cases would get a couple of X-rays to look for potential fractures and/or internal damage; however, most ERs now have a resident CT, and a body scan is pretty much standard procedure. 

After numerous studies raised safety concerns about the amount of radiation exposure from all these CT scans, vendors are finally taking notice and implementing techniques to reduce radiation exposure. One step being taken is to start registering the dose administered to the patient. This sometimes requires dosimeters in the X-ray chain, as well as reporting mechanisms. The reporting is still very much a work in progress. For some modalities, such as digital mammography, there is already relatively reliable information in the image header, which could be extracted by the PACS and stored. Some systems use the DICOM Modality Performed Procedure Step (MPPS) information, which can also (optionally) contain the dose; some cardiology applications use this approach, whereby the cardiology information system records the information. The drawback of the MPPS method is that it is by design tied to the images that are created, and for fluoroscopy exams, for example, there might be few or no images taken. If one depended on the dose information in the MPPS for those types of exams, the exposure would be severely underreported. 
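A small sketch makes the underreporting concrete. Assume, hypothetically, that dose is tallied only per stored image: a fluoroscopy exam with minutes of beam-on time but only a couple of saved spot images then reports only a small fraction of what was actually delivered. All numbers below are invented for illustration.

```python
def dose_from_images(images):
    """Sum dose over stored images only — roughly what a report
    sees when dose is tallied per created image."""
    return sum(img.get("dose_mGy", 0.0) for img in images)

# Invented example: a fluoroscopy exam with several minutes of
# beam-on time, but only two saved spot images.
stored_images = [{"dose_mGy": 0.8}, {"dose_mGy": 0.7}]
fluoro_beam_on_dose_mGy = 42.0   # delivered, but never attached to an image

reported = dose_from_images(stored_images)   # 1.5 mGy
actual = reported + fluoro_beam_on_dose_mGy  # 43.5 mGy
```

In this made-up case the per-image tally captures under 4 percent of the true exposure, which is why a dose record that accounts for the whole irradiation event, not just the stored images, is needed.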

This is especially true for CT, where the dose information is often archived as a separate screen capture with no digital representation, which means that data extraction has to depend on so-called screen-scraping or optical character recognition to get the actual numbers. The best way of reporting the dose information would be to use the dose structured report. As a matter of fact, the Integrating the Healthcare Enterprise (IHE) initiative has defined a special profile for this, called the Radiation Exposure Monitoring (REM) profile. It was demonstrated at the recent Radiological Society of North America meeting; however, there is still a lack of recording and reporting systems, which is making implementation very slow. 

CT dose reporting is getting most of the attention; however, in my opinion, the overexposure and unnecessary exams from standard X-rays, such as DR or CR, are underestimated. As an example, my little six-year-old grandson has issues with allergies and congestion, especially during the flu season. He has already had several chest X-rays over the course of his first six years, as pediatric physicians like to play it safe and order an X-ray "just in case." There are also no guidelines on how far to reduce the technique factors while still maintaining enough image quality to make a diagnosis. The Image Gently campaign has developed online teaching materials, but to my knowledge, no guidelines have been published yet. In addition, when images are taken, there is often a lack of shielding, an issue reported in the article "X-rays and Unshielded Infants" on Feb. 27, 2011 in the New York Times. 

It might seem strange to hear a message of X-ray reduction from someone like myself who works in this industry; however, one should realize that 80 percent of the world's population has no access to X-ray imaging at all. Rather than over-utilizing these systems for the privileged 20 percent, it might be better to expand access to those who have none. This requires the development of low-cost digital systems that are very durable and easy to use. I believe this can be done, if some of the major vendors would just make it a priority. 

In conclusion, dose registration is still challenging, and implementation of the IHE REM profile should be a major push. However, registration is just the first step; further development of dose reduction techniques and guidelines by professional organizations is needed as well. 

Sunday, May 1, 2011

Enterprise Information Management and Archiving Hot Topics

During the recent vDHIMS ePosium on the subject of the evolving digital healthcare enterprise, attendees had an opportunity to interact with the distinguished faculty and ask their questions in a Q and A session. Here are some of the notable questions that were asked and the respective responses: 

Data migration is still a major issue, and Steve Horii from the Hospital of the University of Pennsylvania (HUP) can attest to that, having gone through the experience several times. One of the issues he noted was the potential loss of annotations when migrating the data. These annotations are also referred to as overlays in the DICOM standard, and there are several options for storing them. The first option is to "burn in" the data, which means that the pixels themselves are replaced. This is seen a lot with ultrasound and creates potential issues in case the information happens to be incorrect and needs to be modified; some users put "XXX-es" over the text, but if that annotation is not preserved during the migration, there could be a major issue. Another option is to save this information in a database record in a proprietary manner, which is what Steve had to deal with in his migration. The proper way of storing overlays is to create a DICOM standard object, the so-called "Presentation State"; however, this requires the migration software to be able to interpret the database of the source PACS in order to convert the annotations to presentation states. 

There are other reasons for needing to interpret the proprietary component of the input data to be migrated. For example, if the archived images were stored on non-rewritable media such as an optical disk, changes to the patient demographics, or the deletion of certain images or even complete studies after the fact, are not reflected in the image archive but only in the database. This is further proof that data migration involves not only the transfer of the images but also a lot of knowledge about the input and output database structures. 

Another question asked by the ePosium audience was what to do with modality disks, as many CT, MR and even some ultrasound units archived their images on magneto-optical disks (MODs) long before PACS was installed. These studies might occasionally need to be retrieved. The HUP solution was to have the vendor create a special data input station with a single disk reader. Remember that the CT or MRI might long since have been retired and replaced with a newer modality, which means the "old" MOD or DVD readers could have been retired with the old units, and with them the capability to read this media. 

Another insightful series of presentations came from Kevin McEnery, about the EHR and the in-house developed viewer built on Service Oriented Architecture (SOA) principles at the MD Anderson cancer center in Houston. One of the participants asked about the IT staff at this institution: an impressive 200 people strong. Major reasons for this institution to develop its own viewer and infrastructure are the very different workflow for radiation therapy, the need for clinical trial support, and the requirements for submitting treatment data to regulatory agencies. Even among "typical" institutions there are already significant differences in workflow, making it hard to match EHR systems to them, let alone for very specialized institutions such as a cancer center. 

One of the issues noted was the requirement to have a certified EHR to meet the new meaningful use requirements, so that the institution can apply for incentives under the HITECH section of the ARRA. As Dr. McEnery noted, it is possible to apply for a "modular" certification and re-use the certification of "core" functionality, only certifying the additional modules. This will be a big help for many institutions as they customize their EHR implementations. 

If you are interested in the complete presentations of Drs. Horii and McEnery, you can find them archived as part of the symposium, together with the other presentations and copies of the handouts from this three-day event. It is even possible to earn continuing education credits by taking a simple quiz after each presentation, so you can keep up with your certification requirements; see www.otechimg.com/vdhims for more details. 

Friday, April 1, 2011

VPN vs HIE

Many hospitals are connecting their outpatient facilities and clinics, as well as their high-volume users of image and related information databases, for exchange and access on a routine basis. Most use a secure Internet connection with a VPN to make sure that patient privacy and security requirements are met. It gets a little more complicated when information has to be exchanged between organizations that are not part of the same delivery network, as patient information, especially the Patient ID, often differs between sites. Even more troublesome, the Accession Number, which is often used as a database key, may duplicate a number already in use. 

Some organizations have used tricks, such as adding an automatic prefix to the Accession Number that can be stripped as needed, to prevent issues with data integrity. However, most organizations have a semi-automated procedure for importing images from the outside. 
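The prefix trick is simple enough to sketch in a few lines of Python; the `OSH-` ("outside hospital") marker and the helper names below are made up for illustration, and a real implementation would of course apply them inside the import gateway rather than as free-standing functions.

```python
SITE_PREFIX = "OSH-"   # hypothetical marker for imported "outside" studies

def tag_accession(accession: str) -> str:
    """Prefix an imported accession number so it cannot collide
    with a locally issued one (idempotent)."""
    if accession.startswith(SITE_PREFIX):
        return accession
    return SITE_PREFIX + accession

def strip_accession(accession: str) -> str:
    """Recover the original accession number, e.g. when returning
    results to the sending site."""
    if accession.startswith(SITE_PREFIX):
        return accession[len(SITE_PREFIX):]
    return accession
```

Because the marker is a fixed, reversible string, the local database keys stay unique while the original number can always be recovered for communication back to the originating site.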

The good news is that the process of importing images through a VPN is virtually identical to importing images from off-line or exchange media such as CDs. Institutions have learned that it is very dangerous to import a patient's images without checking the patient demographics to make sure there are no data integrity issues, so they typically create a new dummy order with a new accession number as well. In the case of images transferred automatically, most PACS systems leave the images in an unverified or unspecified queue and allow a technologist or PACS administrator to check them and update any information as needed. 
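The reconciliation step above can be sketched as follows. The field names (patient_id, patient_name, and so on) and the "IMP" accession prefix are illustrative stand-ins, not a specific PACS API:

```python
# Hedged sketch of import reconciliation: map the study to the local MRN,
# assign a new dummy accession number, and hold the study as "unverified"
# if any demographics disagree with the local record.
import itertools

_accession_counter = itertools.count(1000)  # toy accession generator

def reconcile_import(study: dict, local_patient: dict) -> dict:
    """Match imported demographics against the local record; flag mismatches."""
    mismatches = [f for f in ("patient_name", "birth_date")
                  if study.get(f) != local_patient.get(f)]
    result = dict(study)
    result["patient_id"] = local_patient["patient_id"]            # local MRN
    result["accession_number"] = f"IMP{next(_accession_counter)}"  # dummy order
    result["status"] = "unverified" if mismatches else "verified"
    result["mismatched_fields"] = mismatches
    return result
```

A technologist or PACS administrator would then review any study whose status is "unverified" before it is released for reading.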

It might be expected that these ad-hoc processes will be streamlined and automated in the US as soon as Health Information Exchanges, or HIEs, come on-line. These HIEs are currently being formed, mostly funded by federal grants, and will provide the infrastructure by which patient information as well as images can be exchanged among all participants. They will also provide the link with the National Health Information Network, or NHIN, so that information can be exchanged across all states. Critical to the implementation of these HIEs are the Regional Extension Centers, or RECs, which will provide technical consulting and vendor selection support to help physicians acquire and interface their Electronic Health Record software. 

The impact of the HIE rollout will be huge as, theoretically, every physician connected to the network will have access to any patient's information, including images. It is almost guaranteed that the information exchange that currently takes place on a semi-ad-hoc basis among regular high-volume users over dedicated VPNs will multiply as these gateways become the standard for information access. These HIEs will facilitate the automated exchange and reconciliation of patient IDs using standard protocols defined by IHE. 

There are still hurdles to overcome. Some states are moving very fast with their HIE implementations, and some are lagging behind. For example, at the recent Texas Health Information Technology Forum in Austin, Tony Gilman, CEO of the Texas Health Services Authority, showed how implementation in Texas is already well underway. State-level services are expected to take place in 2012, and the transition to sustainability will happen in 2013. 

Sustainability is still a risk factor for all of these HIEs, as they face long-term funding issues. Almost all HIEs are initially funded by federal grants; Texas, for example, received $28.8 million. But after this money is spent, who is going to support the organization and infrastructure? I am sure that this will take some negotiation among all stakeholders. 

In conclusion, image and information exchange is increasingly shifting away from importing CDs toward exchanging files over the Internet using secure VPN connections. These semi-ad-hoc connections will increase and, over the next few years, be replaced by more automated information exchanges using HIEs, whereby patient identifiers and other demographics are automatically synchronized using the IHE profiles. 

Saturday, January 1, 2011

Technique, Technology Aid in Driving Down Dose

As Low As Reasonably Achievable, or ALARA, has always been the mantra of technologists and radiologists when it comes to the amount of radiation to which a patient is exposed during a diagnostic imaging exam. However, due to new technology, human error, or miscommunication there have been a clutch of high-profile incidents of patient overexposure in the past few years that have received widespread press attention. These cases have served to invigorate the commitment of imaging professionals to keep the dose as low as it can go, especially for the pediatric patient.

Dose management was a hot topic at the 2010 RSNA, with several presentations offering attendees the tools and techniques to deliver low-dose exams. Physicians shared their efforts to reduce the dose to a level that provides images that are clinically acceptable, while coping with added noise and reduced resolution. 

Manufacturers of digital radiography equipment, which has suffered from a tendency of technologists to increase dose (the "dose creep" phenomenon), have implemented processing technologies to improve image quality, along with logging tools that allow administrators to catch dose creep in their facilities and coach against it. 

High-volume, multi-slice CT has also come under scrutiny, with clinicians re-examining their protocols in light of these new, powerful technologies. Proper collimation and using techniques appropriate for different areas of the body—such as varying dose from head to chest to abdomen to pelvis during a whole-body scan—are proving helpful in achieving ALARA. In addition, new solutions for dose recording and registration are rapidly being implemented in the new devices. 

The most important factors for dose reduction are the definition of policies and procedure, technique charts, training and education, as well as awareness. Healthcare IT can assist here, because an electronic health record could be used to record radiation dose events. This would allow it to be programmed to alert a physician before they order another CT scan or conventional X-ray. 

Dose reporting is critical to allow benchmarking against yet-to-be-defined national standards and good practices. The ACR is setting up several trial centers that will perform dose recording in order to establish a database from which standards can be created. The IHE has defined the Radiation Exposure Monitoring profile, which specifies use cases and DICOM transactions to implement dose recording and was demonstrated in its booth at the 2010 RSNA. 

The IHE profile uses a Dose Structured Report, which is defined by the DICOM standard and has a template for both X-ray and CT radiation dose. These templates contain information about the observer/system, date/time, and start and end of the event; the number of events that are recorded; and dose information. It also includes the method used to calculate and record the radiation event. This information can be sent to a PACS, RIS, or dose registry station, depending on the capabilities and architecture of the radiology imaging system. Dose reports can be sent, after anonymization, to an external registry. 
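The de-identification step before registry submission can be illustrated with a small sketch. The field names below are simplified stand-ins for the Dose SR template attributes, not actual DICOM tag keywords:

```python
# Hedged sketch: strip patient identifiers from a dose record before it
# is submitted to an external registry, leaving the dose data intact.
IDENTIFYING_FIELDS = {"patient_id", "patient_name", "birth_date"}

def anonymize_dose_report(report: dict) -> dict:
    """Return a copy of the dose record with patient identifiers removed."""
    return {k: v for k, v in report.items() if k not in IDENTIFYING_FIELDS}
```

Working on a copy means the fully identified record can still be stored locally in the PACS or RIS while the anonymized copy goes to the registry.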

There are other short-term solutions to record the dose, none of which are optimal, but might provide an interim approach. One could, for example, capture a screen with the dose information, which can later be processed by an optical character recognition software package to extract the information. 

Other solutions could be the use of the DICOM MPPS dose information, which might not be as detailed and exact as the Structured Report (SR), but is better than nothing if the device does not yet support SR. Another solution, which is used by digital mammography, is to get this information from the DICOM image header. 
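The header-based fallback can be sketched as below. This uses a plain dict as a stand-in for a parsed DICOM header; the keywords shown (EntranceDoseInmGy, OrganDose, ExposureInuAs) do exist in DICOM, but which of them are actually populated varies widely by modality and vendor:

```python
# Hedged sketch: when a modality does not support Dose SR, collect
# whichever dose-related attributes happen to be present in the image
# header. This is an incomplete record (not every exposure yields an
# image), which is exactly the limitation discussed above.
def dose_from_header(header: dict) -> dict:
    """Collect the dose-related attributes the header happens to carry."""
    candidates = ("EntranceDoseInmGy", "OrganDose", "ExposureInuAs")
    return {k: header[k] for k in candidates if k in header}
```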

Both state and federal governments are taking dose management seriously—via legislation (notably in California), hearings, and FDA rule making. The MITA organization, representing imaging vendors, is also actively involved by defining guidelines and standards for dose reporting and registration. 

In conclusion, there is no question that, because of regulatory and consumer pressure, X-ray devices will be required to be modified, upgraded, and equipped with the capability to exchange dose information in a standard manner. Institutions will also have to store radiation events and dose details, most likely in the RIS, but possibly also in a PACS or a separate application. Eventually, these events will become part of an electronic or personal health record. Imaging facilities are strongly urged to use standards for the implementation of dose management, in particular the IHE Radiation Exposure Monitoring profile. 

Wednesday, December 1, 2010

Meaningful Use or Mis-Use for Radiology?

If you scan all the marketing materials, press releases, and announcements of healthcare device and IT companies, the term Meaningful Use (MU) would be one of the most used (followed closely by cloud storage, vendor neutral archive, and dose reduction). For obvious reasons, healthcare providers are rushing to implement Electronic Health Records (EHR) that comply with MU requirements in order to receive a share of the multibillion-dollar subsidy for these products from the American Recovery and Reinvestment Act. If they don't have a system in place, procedure reimbursement will be cut—so there's a strong incentive to implement an EHR.

As would be expected, creative marketing for these products has kicked into high gear, stretching the meaning of MU. Radiology, in particular, is difficult to define in terms of MU, at least in the first pass at the recent Phase 1 Rule on Medicare and Medicaid EHR incentive payments. 

What is the impact of MU on radiology? As a start, the word radiology appears only once in the entire Phase 1 Rule description, which doesn't give diagnostic imaging much to go on. The emphasis is on medication interactions, errors, and public health. Even the Phase 1 Rule requirement for Computerized Physician Order Entry (CPOE) does not include radiology tests. 

What is puzzling is that radiology has led the way in the development and implementation of IT in healthcare. Digital imaging, PACS, and voice recognition-based reporting have been mainstream technologies in radiology for the past decade. As film has been phased out, so have errors due to missing, lost, or misplaced images. Turnaround times for reports have been slashed from a day or longer to less than an hour at many facilities. And teleradiology has brought the expertise of specialty radiologists to remote areas and allowed all-day, every-day coverage at any facility. 

This is not to say that there is not still work to be done. The integration among radiology, cardiology, and the other 'ologies is still in its infancy. Most radiology images are distributed via somewhat cumbersome APIs in EMRs. Making images available to anyone, anywhere, at any time is still a challenge. 

IHE has addressed integration by offering a set of standards that makes this possible, and its 2010 Connectathon has shown that vendor implementation is feasible. Unfortunately, vendors typically don’t choose to voluntarily implement standards; they tend to be biased toward proprietary solutions that lock users into their architecture and products. And this is where the Phase 1 Rule for MU could have drawn a line in the sand by mentioning some of these efforts. 

The Medical Imaging and Technology Alliance (MITA), a lobbying organization for diagnostic imaging, attempted to bring this to the attention of Dr. Blumenthal, the head of the Office of the National Coordinator for Health Information Technology (ONC), earlier this year--without any noticeable impact on the Phase 1 Rule. 

So, where and how does the MU regulation address radiology? First, it shows up in the requirement to use a CPOE; however, for radiology this is probably not to be expected until the Phase 2 Rule. Second, drug-to-drug and drug-allergy checks are required, which impacts the modalities that use contrast agents. 

DICOM has defined a capability for substance administration at the modality that allows queries to verify the supplies to be used. There is an impact on the recording of certain demographic information, such as race and ethnicity, which might affect the data that is sent to remote readers. Problem lists and diagnoses also have to be encoded. 

Also, patients will have to be provided with a copy of their health information, which is already a fairly regular practice, as many patients currently request a copy of their images on portable media. Eventually, this data will be uploaded automatically to their Personal Health Record (PHR). There are also privacy and security requirements to protect electronic health information, which radiology already addresses via the widespread implementation of audit trails in PACS. 

It will be interesting to see what the Phase 2 and 3 MU rulings will say; hopefully radiology will be addressed in greater detail--in particular the requirement for interconnectivity standards, which are needed to make EHRs that include imaging a success. 

Monday, November 1, 2010

Implementing Dose Recording

As low as reasonably achievable (ALARA) is a radiation dose guideline that radiologists and technologists endeavor to achieve daily. In the majority of radiologic exams, this goal is met. In addition, ongoing diagnostic imaging research, more effective post-processing algorithms, and breakthroughs in modality manufacturing are all endeavoring to lower the radiation dose delivered to the patient. 

When dose delivery goes wrong, someone gets hurt. Generally, a dose accident can be traced to a combination of human and mechanical/technological issues. A case in point is the recent publicity surrounding the hundreds of California patients overexposed due to the selection of incorrect CT imaging protocols. 

Pediatric radiologists are particularly keen on using the lowest possible diagnostic dose for their imaging protocols. The Image Gently Alliance, which advocates for greater radiation safety in pediatric imaging, has received enthusiastic support from every radiological professional society. 

In addition, there have been hearings in the U.S. Congress about the issue of dose reduction, as well as new legislation adopted by California requiring that patient radiation dose be recorded—all in an effort to minimize patient overexposure. 

The medical imaging industry in the U.S. was caught off-guard by the dose recording requirement. However, most of Europe has already established standard dose recording terminology and methodology, so it would not be that difficult to adopt these applications. And new extensions to the DICOM standard—in the form of Structured Reports for dose recording—have recently been added. This means that there are no practical excuses for manufacturers to not implement patient dose recording in a standard manner. 

One should note that there are alternative solutions to the DICOM Structured Report for recording dose information; however, the other options are either incomplete or do not allow for the information to be stored electronically. For example, the practice of displaying the information only on a user's screen is unacceptable beyond the immediate moment—the only mechanism to access this information electronically is by saving the screen and applying some type of optical character recognition (OCR) application in order to access the data. 

Alternate solutions to record dose by using information in the DICOM header are also incomplete as they do not record the complete exam (not every X-ray exposure results in a saved image). The same argument (an incomplete record of total exposure) applies to the use of the optional recording of dose in DICOM's Modality Performed Procedure Step (MPPS). 

Implementing new software additions, such as adding Dose Structured Reports to existing devices, is not a trivial task. There is at least a one-year lead time, which includes proper design, documentation, implementation, verification and testing, and a well-organized roll-out and upgrade of the installed base. In addition to these challenges to modifying deployed X-ray modalities, there is also an infrastructure question as to where and how dose information should be recorded and stored. Options are the EMR, HIS, RIS, PACS, or a separate, dedicated dose recording device or software application. Also, should patient dose recording be done in a one-off, discrete manner on a per-exam basis? Or should dose also be recorded and presented to the ordering and examining healthcare professionals on a cumulative basis? 
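The per-exam versus cumulative question can be made concrete with a small sketch. This is an illustrative toy, not a registry design; a real system would use the standardized dose quantities (such as CTDIvol or DLP) rather than the hypothetical single "dose_mgy" value used here:

```python
# Minimal sketch of cumulative dose tracking per patient: each exam is
# recorded as a discrete event, and a cumulative total can be presented
# to the ordering physician on demand.
from collections import defaultdict

class DoseLedger:
    """Accumulate per-exam dose events and report a cumulative total."""

    def __init__(self):
        self._events = defaultdict(list)  # patient_id -> [(exam, dose)]

    def record(self, patient_id: str, exam: str, dose_mgy: float) -> None:
        self._events[patient_id].append((exam, dose_mgy))

    def cumulative(self, patient_id: str) -> float:
        return sum(dose for _, dose in self._events[patient_id])
```

Keeping the discrete events (rather than only a running total) preserves the per-exam detail needed for benchmarking and registry submission.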

There is no question that dose reporting is going to be a universal requirement for X-ray modalities. The imaging industry will have to gear up quickly to implement the new DICOM additions, and also come up with solutions to record and report this vital patient safety information. 

Friday, October 1, 2010

Is 'Meaningful Use' Making Imaging Meaningless?

The U.S. government is set to dole out billions of dollars to promote healthcare IT solutions that meet its Meaningful Use (MU) criteria. The Phase 1 requirements were published this past July and the Phase 2 requirements are expected to be available for public comment near the end of the year. 

Most professionals working in medical imaging informatics, especially in radiology and cardiology, were very surprised by the absence of PACS from the MU requirements. Also conspicuous by their absence were any mentions of DICOM and HL7, as well as their corresponding IHE profile definitions. Interestingly, radiology is mentioned only in the context of orders; results (not the images, merely the reports) are mentioned in Phase 2. 

This is very puzzling to me and many of my fellow imaging informatics professionals. The implementation of digital image communication, archiving, and display over the past 30 years has been a major, but seemingly silent, revolution. Our efforts have provided health professionals in remote areas of the world with instant access to experts, at practically any time of the day or night. When it comes to an electronic health record, it's obvious that having images available to any physician at any location will eliminate duplicate procedures—increasing patient safety AND significantly decreasing the cost of providing care—and allow the physician to make a fully informed decision about the patient's course of treatment. 

The Medical Imaging and Technology Alliance (MITA), the medical imaging market's lobbying organization in Washington, D.C., has written letters to the Office of the National Coordinator for Health Information Technology (ONCHIT) about these issues—apparently to no avail (yet). 

So, why the silent treatment from the federal government? Not including the more than three decades of work in medical imaging informatics in the MU criteria is foolish and wasteful—harming both patient safety and our healthcare economy. On a pragmatic level, by not including PACS in the MU criteria, federal stimulus funds will be unavailable to implement these systems and related technologies. This means that the small clinics still using film and processor technology will be left on the digital imaging roadside. 

One would be surprised how many of these facilities there still are. I recently visited one such clinic, which performed only about 10 exams each day. They rely on an old processor and then have to digitize the films so they can be read at a hospital about 45 miles away. Imagine how much more efficient and economical a digital system would be—if only they had the initial capital needed to acquire one. 

I hope that our professional organizations (such as RSNA, HIMSS, SIIM, ACR, ACC, and others) make a concerted and sustained effort to ensure that medical imaging informatics takes its rightful place in the MU requirements as Phase 2 is about to be defined. I also call on all of my fellow professionals to urge these organizations to lobby for our rightful place at the healthcare informatics table. 

Sunday, August 1, 2010

Cardiology PACS: IT A-Fib

Although the first blush of cardiology PACS as a new health IT product has passed, these systems still lag about five years behind their cousins in the radiology market—not only in the number of installations, but also in technology and integration capabilities. From an IT perspective, cardiology is more challenging than radiology, as there are many more pieces of information to integrate. From a technology perspective, however, there is no reason vendors cannot provide totally integrated solutions. Unfortunately, many current cardiology PACS offerings lack this necessary capability. 

There are fundamental and significant differences in the IT workflow of cardiology and radiology with regard to PACS. First, the Cardiovascular Information System (CVIS) plays a more important role than a RIS. Not only does it take care of the interface with a HIS for patient demographics, it also provides ordering and scheduling information as well as a robust supply inventory component. In many cases, an interface with a surgery information system is needed, as rooms can be used for both diagnostics and treatment. 

The reporting component is also very different. Speech recognition is not as prevalent (yet), and the use of macros, templates, and structured text is much more important. Ultrasound has special requirements defined by the Intersocietal Commission for the Accreditation of Echocardiography Laboratories (ICAEL). Most importantly, an automated interface to ultrasound modalities that create DICOM Structured Reports (which provide all the measurement information needed to populate the appropriate fields) is critical. 

With regard to the workflow, yes, there is a "scheduled workflow" profile defined for cardiology by IHE. However, in practice, it seems that there is much more workflow variety among cardiology practitioners than in radiology, which may be because PACS is not as pervasive in that market. 

When it comes to the modalities employed by clinicians to deliver patient care, radiology PACS is fairly well integrated. This is not to say that the work is complete. There are still some issues around making information available to the radiologists such as the requisition for the “reason for study” or “admitting diagnosis.” In addition, technologist notes, ER discrepancy reporting, and critical results reporting might be better integrated, but overall, it kind of works. However, in the case of cardiology, a PACS has to integrate not only with the DICOM modalities for images--including cine loops--but also with the hemodynamic, physiological, and EKG data. 

This is a major challenge for IT administrators, as some of this data may be in proprietary or semi-proprietary formats (such as a customized PDF or XML file). A cardiology PACS should be able to take this information and manage it. Some cardiology systems clearly fall short, as they are designed only to store true DICOM data and, at best, might have a Web interface or plug-in to other data sources without really managing them. Not many cardiology PACS support a true level-5 vendor neutral archive (VNA), which would allow all of this information to be managed. 

So, if you are considering the replacement of your cardiology PACS, or are in the market for your first system, don’t use the same requirements as you did for purchasing your radiology PACS. These are two very different products. Pay special attention to the back-end, make sure the system provides true multimedia connectivity, ensure that the CVIS provides the functionality your cardiologists need, and that the system meets the workflow requirements of your institution. 

Tuesday, June 1, 2010

PACS Data Migration: How Clean Is Your Data?

In case you missed our May Webcast on data migration, there is still an opportunity to watch the video and look at the slides; just go to https://www.healthimaginghub.com/webcast.html. 

There were some interesting questions posed during the live Q and A portion of this event. The first one was from a listener who asked whether he could use his existing Centera archive when migrating data. 

The answer was an unequivocal "It depends." The reality is that it depends on the data storage format of the archive. Basically, the question boils down to whether or not an institution can re-use its existing physical storage device when migrating data. Many customers have invested a significant amount of money in these systems, and many of them come with their own intelligence for the PACS interface as well as provisions for data mirroring and backup. 

Most newer archive systems are configured as a Storage Area Network (SAN) or a Network Attached Storage (NAS) device. Many institutions are moving toward a NAS, especially for new installations or as part of an upgrade. Although both configurations are transparent to the application storing the information, a NAS may, at best, deliver better performance and be more reliable. 

With regard to data migration: assuming that the information is stored in a “native” DICOM format and that the new PACS is able to understand this, then yes, in principle, it is feasible. 

In my career I have seen a couple of cases where this worked, but this has been an exception. When this works, the new database points to the old image data. In order to determine whether this is possible, one needs to do an analysis of the sending and receiving data. Note (and this is important) that a DICOM conformance statement is not necessarily sufficient to determine this feasibility, as it only specifies the interface and not the internal data format. 

Another question from our Webcast asked how one could determine the level of “proprietariness” of an institution's archived data. Again, that is difficult to determine from paper specifications; however, it is an important piece of information for any institution to know. 

The answer would determine whether, in the best case, an image archive can be migrated by simply changing pointers, or, in the worst case, the image data has to be converted. For example, a proprietary compression format would have to be converted to a DICOM standard format. In other instances, image tags will have to be “morphed,” added, or deleted to accommodate a new PACS. A rather good measure could be obtained by doing a small “fake” or “test” migration with one day of generated data and analyzing what was able to be migrated seamlessly. 
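The kind of analysis a test migration performs can be sketched as below. This is a simplified illustration: tags are represented as plain keys with expected Python types standing in for the target system's value representations, not actual DICOM VR handling:

```python
# Hedged sketch of test-migration analysis: compare the tags present in
# the source archive's headers against what the new PACS expects, and
# classify each as migratable as-is, needing "morphing," or missing.
def analyze_headers(source_tags: dict, target_schema: dict) -> dict:
    """Classify source header tags against the target PACS schema."""
    result = {"ok": [], "morph": [], "missing": []}
    for tag, expected_type in target_schema.items():
        if tag not in source_tags:
            result["missing"].append(tag)
        elif isinstance(source_tags[tag], expected_type):
            result["ok"].append(tag)
        else:
            result["morph"].append(tag)  # present, but needs conversion
    return result
```

Running this over a day's worth of studies gives a rough measure of how "proprietary" the archive really is before committing to a full migration.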

One should note that, as each institution has its own data environment, every data migration is a unique event for that facility. The image headers of data acquired by two identical modalities with the same software version (say a Philips MRI), archived on the same PACS (say GE), with patient demographics generated by the same HIS/RIS (say McKesson), can still differ, because institutions use different data dictionaries, protocols, codes, and so on. 

In general, one can make broad statements and have certain expectations about the data, but there will always be exceptions and deviations. 

As such, I would strongly recommend that each institution learn the characteristics of its archived image data. It is always better to be prepared for future migrations than uncomfortably surprised when it comes time to perform this process. Of course, having a Vendor Neutral Archive (or a Storage Service Provider) could eliminate many of these issues.