I was in the departure lounge of an international airport waiting for a delayed flight when I heard a familiar name over the intercom. It sounded vaguely as if it could have been my last name, but not quite as I was used to hearing it pronounced. Just to make sure, I went to the desk and asked whether they had called my name, and they told me that they indeed had done so several times. It turned out they wanted to notify me that they had put me on another flight because I would have missed my connection otherwise.
The incident serves as a reminder of the challenge names pose for healthcare imaging and IT systems. How we identify people obviously plays a pivotal role in the integrity of the information these systems manage. First of all, names might be entered incorrectly. A common error is that a data-entry person types both the last and first name into a single field that should contain only one of them. Another common problem is a name change due to a change in marital status. At worst, a previous record, result or image might not be available; at best, it has to be merged. The HL7 standard actually provides a special "merge" or "update" transaction to take care of this.
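For readers curious what such a merge looks like on the wire, here is a minimal sketch in Python of parsing an HL7 v2 ADT^A40 ("merge patient") message to find the surviving and prior identifiers. The message content, and any field usage beyond PID-3 and MRG-1, are illustrative assumptions rather than details from any particular interface.

```python
# Minimal sketch of handling an HL7 v2 ADT^A40 (merge patient) message.
# The message content is illustrative; real feeds carry many more fields.
MESSAGE = "\r".join([
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|201112010830||ADT^A40|MSG0001|P|2.3",
    "PID|1||123456^^^HOSP||DOE^JANE^^^^||19700101|F",
    "MRG|654321^^^HOSP||||||DOE^JANE^^^^",   # prior (duplicate) record
])

def parse_merge(message: str) -> dict:
    """Return the surviving and prior patient IDs from an A40 merge."""
    segments = {line.split("|")[0]: line.split("|") for line in message.split("\r")}
    return {
        "surviving_id": segments["PID"][3].split("^")[0],  # PID-3
        "prior_id": segments["MRG"][1].split("^")[0],      # MRG-1
    }

print(parse_merge(MESSAGE))  # {'surviving_id': '123456', 'prior_id': '654321'}
```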
Other name issues arise from truncation, not because the connectivity standard supports too few characters, but because an input device has its own limits. I have seen a simple data-entry device, a small handheld, that accepted a maximum of only 16 characters. Many foreign names are longer than that. Some cultures also use double last names; Hispanic people, for example, typically carry both their mother's and father's names. In my native country, the Netherlands, it is common for a woman to maintain both her own and her husband's name. As a matter of fact, my spouse has been stopped at several international security posts because of the difference between the name in her European passport and the way it is commonly entered on airline tickets.
An additional complication with several European names is the presence of "prefixes," such as "de la" in French, "von" in German, and "van de" or "van 't" in Dutch, with several variations. Many US-developed healthcare IT systems do not know how to handle these, or require special configuration options to deal with them. A recent post on a user group highlighted this issue with a popular CT system that could not match names with these prefixes in its worklist.
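As an illustration only, here is a minimal sketch of how a matching layer could strip such prefixes before comparing a worklist entry against a local record. The prefix list and matching rules are simplified assumptions, not a description of how any particular CT system works.

```python
# Sketch: normalize European name prefixes ("van de", "von", "de la", ...)
# so that "van de Jong" and "Jong, van de" can still be matched.
# The prefix list is illustrative, not exhaustive.
PREFIXES = ("van 't", "van de", "van der", "van den", "van", "von",
            "de la", "de", "la", "'t")

def normalize_last_name(last_name: str) -> str:
    """Lowercase, strip known prefixes and punctuation for fuzzy matching."""
    name = last_name.lower().strip()
    for prefix in PREFIXES:
        if name.startswith(prefix + " "):          # "van de jong"
            name = name[len(prefix):].strip()
        if name.endswith(", " + prefix):           # "jong, van de"
            name = name[: -len(prefix) - 2].strip()
    return name.replace(" ", "").replace("'", "")

assert normalize_last_name("van de Jong") == normalize_last_name("Jong, van de")
```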
A problem also occurs when people have names in a language and corresponding character set that does not have an exact mapping into English. This is the case with several Asian languages as well as Middle Eastern languages such as Arabic and Hebrew. The name Abraham can be spelled Abrahim, and there are many spellings of Mohammed. With medical care becoming increasingly global, it is not uncommon for a patient to be screened initially in a clinic in Dubai, for example, and then treated in a US institution. The international name issue becomes easier to manage if the software processing the patient information supports multiple international character sets, which is relatively easy to check in the interface specification. In some cases the name is the only differentiator, as can happen with identical twins who share the same sex, birth date, and address.
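One pragmatic safeguard, sketched below using Python's standard difflib, is to flag near-identical transliterations for human review instead of silently treating them as different patients. The 0.8 cutoff is an arbitrary illustration, not a validated threshold.

```python
# Sketch: flag transliterated name variants ("Mohammed" vs "Mohamed",
# "Abraham" vs "Abrahim") as possible matches for human review.
from difflib import SequenceMatcher

def likely_same_name(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

for pair in [("Mohammed", "Mohamed"), ("Abraham", "Abrahim"), ("Smith", "Jones")]:
    print(pair, likely_same_name(*pair))
# ('Mohammed', 'Mohamed') True / ('Abraham', 'Abrahim') True / ('Smith', 'Jones') False
```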
In conclusion, names are tricky and can complicate the identification and matching of the right patient records. Especially in countries that lack a universal patient identifier, such as the US, patient identification remains an ongoing challenge.
Thursday, December 1, 2011
RSNA 2011, What's New?
As I strolled the giant exhibition halls of McCormick Place at this year's RSNA 2011, it felt less busy. There didn't seem to be any "buzz"; there were no new gadgets or applications generating the interest and excitement of years past. When I asked others about this, there was no clear consensus as to why, but a number of people suggested that the requirement to implement "meaningful use" of Electronic Medical Records has absorbed a lot of energy and investment. It will be interesting to find out at the upcoming HIMSS meeting this spring in Las Vegas whether this is true.
I also found the scientific exhibits disappointing, which is unfortunate because this is where you can get a peek into what is happening in the research laboratories. The most fascinating demonstration this year was the use of hand gestures, captured through standard gaming technology, to operate a radiology workstation.
Another interesting development is the use of kiosks that allow patients to enter their personal information into their electronic medical record directly, instead of filling out pages of forms, consents, HIPAA statements and other paperwork, which seems to be necessary every time one makes a physician appointment.
Decision support for order entry, which checks the appropriateness of a specific order against a specific complaint, also seems to be taking off. Exchanging images in an "ad hoc" manner through subscription services is also growing rapidly to address the need for images to be shared within an institution and between physicians; images are typically uploaded to a "cloud" server and then shared with those physicians who are authorized to view them. Display technology is improving rapidly as well, with better off-axis viewing and the capability to follow fast changes, such as displaying the slices generated by the new digital breast tomosynthesis mammography systems in CINE mode. One vendor was able to increase the light output of its monitor to such a degree that it became a virtual light box, allowing side-by-side comparison of a film on one display with the digital image on the other.
Of course, the term "meaningful use" was plastered on almost every booth, promising to help conference attendees tap into the US federal incentive programs. And finally, dose reporting seemed to be at the top of every list, especially for the CT vendors. There was a very well done demonstration by the IHE organization on how to report dose according to the new standards using DICOM Structured Reporting. As with many new standard extensions, however, the industry seems to lag a few years behind, so in many cases one still needs to rely on burned-in text captured as a DICOM image, which requires screen scraping to get the dose data into an electronic format.
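For the technically inclined: consuming such a dose SR is quite doable with open-source tools. The sketch below, which assumes the pydicom library and a hypothetical RDSR file name, simply walks the SR content tree and prints every numeric measurement it finds; a real implementation would match the specific coded concepts (CTDIvol, Dose Length Product, etc.) defined in the DICOM dose templates.

```python
# Sketch: walk a DICOM Radiation Dose Structured Report (RDSR) and print
# every numeric measurement with its coded concept name. Assumes pydicom
# is installed and that 'dose_sr.dcm' (a placeholder name) is an SR file.
from pydicom import dcmread

def walk_sr(items, depth=0):
    for item in items:
        name = ""
        if "ConceptNameCodeSequence" in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        if "MeasuredValueSequence" in item:          # NUM content items
            mv = item.MeasuredValueSequence[0]
            units = mv.MeasurementUnitsCodeSequence[0].CodeValue
            print("  " * depth + f"{name}: {mv.NumericValue} {units}")
        if "ContentSequence" in item:                # recurse into containers
            walk_sr(item.ContentSequence, depth + 1)

ds = dcmread("dose_sr.dcm")
walk_sr(ds.ContentSequence)   # assumes the file has an SR content tree
```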
So, in conclusion, I found RSNA 2011 to be a subdued conference: quiet, not many new things, just another year in Chicago. The good news was that the weather was perfect; I have memories of previous years when I could not get home because of snow and ice, but this time the weather cooperated very well. This allowed many to visit the company receptions and dinners in the evening and stroll back to the hotel leisurely without having to take a taxi.
Tuesday, November 1, 2011
Tips from a Road Warrior (17): Checklists are Critical!
I have been burned several times upon returning from a flight, either by not being able to find where I parked my car, or by finding it with a dead battery. The airport I fly out of the most, DFW in Dallas-Fort Worth, is very spread out and has five different terminals, and every time I fly out of, let's say, terminal C, I seem to return to any terminal but C. I have come back at the height of summer, hauling two suitcases in more than 100-degree heat through the parking garage trying to find my car; once I ended up getting a taxi to drive me around to find it.
On another occasion, I returned to a car with a dead battery because I had left my interior light on. In this particular case, I was at a remote parking lot at JFK airport in New York City trying to call AAA roadside assistance, and I could not convey my car's location to the person on the phone. Fortunately, I ran into someone with jumper cables who could get me started. Having learned my lessons, I now have a "system": I always write down my parking space, and before I leave the car at the garage I always check it for any lights that might drain the battery.
Having a well-defined system for documenting status and locations really helps, not only when traveling but also when dealing with complicated systems such as healthcare IT systems, especially when multiple people are involved. I find that the institutions with very well-defined checklists seem to have the least downtime and fewest problems.
One of the hospitals I deal with has three shifts for their PACS administrators so they can provide 24/7 support on-site. There are always two or three people available, and at night typically one person. Needless to say, this is a very large institution, and it has a very high degree of PACS availability; they have not had any considerable downtime in more than a year. In addition to having a robust and mature PACS, I find that the main reason is the way the PACS team monitors the system. They have a detailed checklist that helps them regularly monitor all critical processes. Some items, such as the RIS feeds, are checked hourly, while others, such as database and archive queues and error files, are checked every four hours or as needed.
Some of the checking can be done by active monitoring software, which will page or e-mail an administrator when a process is going down. Performance and intermittent issues, however, are hard to detect automatically and require that someone keep a finger on the pulse on a regular basis.
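To illustrate how the routine part of such a checklist can be automated, here is a sketch of a small checklist runner that executes each check on its own interval and e-mails the on-call administrator on failure. The check bodies, SMTP host and addresses are all placeholders for site-specific logic.

```python
# Sketch of a checklist runner: each check runs on its own interval and
# failures are e-mailed to the on-call administrator.
import smtplib
import time
from email.message import EmailMessage

def check_ris_feed() -> bool:
    return True   # placeholder: e.g. verify HL7 messages arrived recently

def check_archive_queue() -> bool:
    return True   # placeholder: e.g. verify queue depth is below a limit

CHECKS = [  # (name, check function, interval in seconds)
    ("RIS feed", check_ris_feed, 3600),                # hourly
    ("Archive queue", check_archive_queue, 4 * 3600),  # every four hours
]

def alert(name: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"PACS check failed: {name}"
    msg["From"] = "pacs@hospital.example"      # placeholder addresses
    msg["To"] = "oncall@hospital.example"
    msg.set_content(f"The scheduled check '{name}' failed; please investigate.")
    with smtplib.SMTP("smtp.hospital.example") as server:
        server.send_message(msg)

last_run = {name: 0.0 for name, _, _ in CHECKS}
while True:
    for name, func, interval in CHECKS:
        if time.time() - last_run[name] >= interval:
            last_run[name] = time.time()
            if not func():
                alert(name)
    time.sleep(60)
```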
Checking a system on a regular basis and making sure that all errors and problems are addressed as soon as they occur pays off not only in the short term but also in the long term, especially when the data has to be migrated. When changing vendors, it is not uncommon that all of the information in the database and archive has to be migrated to the new vendor's platform, and at that time information that was mismatched, unidentified or incorrectly identified will rise to the surface. Upon migration, it becomes obvious how well the system was managed during its lifetime.
I learned my lesson when traveling by carefully documenting where I left off so I can get back on the road upon my return. For me, documenting is a necessity when dealing with multiple airports, parking garages, hotels, and facilities I visit. The same applies when managing complex systems: the better the documentation, the better the hand-off and ultimate quality of the information that is managed.
What About the End-user?
I recently came across a few examples showing how the healthcare industry still does a poor job of providing products, services and even physical facilities that truly meet the needs of end users. As a case in point, when I asked my chiropractor why he still requests film for the MRIs he orders for his patients instead of getting the images on a CD, he told me that the viewing software embedded on those CDs is so hard to use that it sometimes takes him up to 30 minutes to figure out how to line up two series on his monitor. Taking a set of films out of an envelope and placing them on a viewing box is much faster. The second example came when I talked with a local primary care physician who is considering scaling down her practice because the meaningful-use rules require her to implement an electronic health record (EHR) to continue receiving full reimbursements from Medicare and Medicaid. The fact that she can get between $40,000 and $50,000 in grants to implement an EHR does not counterbalance the additional work needed to enter all of the information into the system. She told me that she has looked at several EHR products and none of them allows her to enter the information effectively and efficiently. The last example came from a nurse who is about to move into a brand-new wing of her hospital, which was built without any input from the nurses who will be working there. This brand-new facility, which was most likely designed to meet all building standards, realizes none of the workflow improvements that could have been made.
The US government is trying to improve the efficiency and quality of healthcare, but the industry lacks products, services and facilities that focus on what the user really needs. This appears to be less the case with the acquisition of devices such as new ultrasound or CT systems, but it is common with software, larger systems and infrastructure. I am not sure what the solution is, except to encourage end users to keep pressing the industry to focus on their needs.
Saturday, October 1, 2011
Tips From a Road Warrior (16): WiFi in the Sky?
Some airline carriers are starting to offer WiFi in the sky, allowing one to check email and browse the Internet. Certain sites and applications are blocked; for example, you cannot use Skype or any other Internet calling service to make calls from the sky (yet). So, I was very excited when I boarded a flight from the East Coast to find the service being promoted with a free introductory offer. However, it turned out to be a very frustrating experience, as pretty much everyone on the plane had the same idea. Even though this was a small plane (an MD-80), the WiFi capacity was obviously far too limited. The router kept dropping web connections, and on several occasions the login was rejected due to too many users.
WiFi is also taking off in healthcare institutions. Wireless portable x-ray systems are becoming very popular, as they allow you to query a worklist at the unit, preview the image just taken, and send it wirelessly to its destination, which can be the PACS, a QA station, or a radiologist directly. This has helped make "instant radiology" a reality. There will undoubtedly be a big push from physicians to allow order entry over the wireless network as well.
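For those wondering what the worklist query underneath looks like, here is a minimal sketch using the open-source pynetdicom library (a recent 1.x release); the host name, port and AE titles are placeholders for your own configuration.

```python
# Sketch: query a DICOM Modality Worklist from a portable unit with
# pynetdicom. Host, port and AE titles are placeholders.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind

ae = AE(ae_title="PORTABLE1")
ae.add_requested_context(ModalityWorklistInformationFind)

query = Dataset()
query.PatientName = ""                      # ask the SCP to return the name
sps = Dataset()
sps.Modality = "DX"                         # digital radiography worklist
sps.ScheduledStationAETitle = "PORTABLE1"
query.ScheduledProcedureStepSequence = [sps]

assoc = ae.associate("ris.hospital.example", 104)
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, ModalityWorklistInformationFind):
        if status and status.Status in (0xFF00, 0xFF01):  # pending = a match
            print(identifier.PatientName)
    assoc.release()
```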
As you may recall, CPOE (computerized physician order entry) is a requirement for meaningful-use implementation of electronic health records. Ordering medications and diagnostic procedures from the bedside using a wireless tablet or smartphone should be feasible. The ability to show results on a tablet is also becoming reality, as the first tablets for this use were recently approved by the FDA.
This all shows great potential; however, before you get too excited, you might want to take it slowly. For example, I recently saw a portable unit parked in a hospital corridor, and upon asking why it was not being used, I was told that they had had so many issues with the wireless connection that they basically did not use that particular manufacturer's units anymore. As you may know, hospitals contain many physical barriers to wireless signals: steel fire walls and the lead-lined walls surrounding x-ray rooms interfere with electromagnetic signals, including WiFi.
The lesson learned here is that, while wireless networking is definitely worth the investment, you need to test the devices thoroughly before introducing them. Make the purchase and final payment of new wireless devices dependent on proper functioning in your environment. Last but not least, if you use standard web transmission technology, make sure the information is encrypted, as others can easily listen in. If you take these precautions, WiFi will definitely enhance your healthcare practice. In the meantime, I hope they fix the bugs in the airplane WiFi so I can check my email in the sky, instead of having to catch up after my trips.
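Coming back to the encryption point: DICOM connections themselves can be wrapped in TLS. The sketch below assumes a recent pynetdicom 1.x (which accepts TLS settings through its tls_args parameter) and placeholder certificate paths.

```python
# Sketch: a DICOM association wrapped in TLS so orders and images are
# encrypted over the air. Certificate paths and host are placeholders.
import ssl
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="/etc/pki/hospital-ca.pem")
context.load_cert_chain("/etc/pki/portable1.crt", "/etc/pki/portable1.key")

ae = AE(ae_title="PORTABLE1")
ae.add_requested_context(ModalityWorklistInformationFind)
assoc = ae.associate("ris.hospital.example", 2762,   # registered DICOM-TLS port
                     tls_args=(context, None))
if assoc.is_established:
    print("Secure association established")
    assoc.release()
```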
Eliot Siegel Q&A
This is a transcript of the Q&A session of the September vDHIMS e-symposium presentation by Eliot Siegel about advanced visualization workstations. If you are interested in the full one-hour presentation, simply register at https://otechimg.com/vdhims/?action=register and enjoy.
Q: There is a difference in the quantitative output of the advanced visualization workstations from different vendors. Do you see potential for standardization and/or certification by an organization such as ECRI?
A: That is a great question, and it is of tremendous concern to several people, including myself, with regard to quality control. We seem to focus on the aesthetics of how the images look, but when we make quantitative measurements, either manually or using the software, the measurements vary considerably.
I propose to do a couple of things, and we have been talking with some vendors about them. The first would be to have standard scans of phantom data, for example creating a phantom for lung nodules or carotid stenosis. Another option is to work with NIST, which has created standard objects that have been measured very precisely and that we can scan. Yet another option is to create a mathematical model, so we would not have to use a scanner to create a data set; there are interesting data sets that are well known and can be submitted to the vendors.
The problem is that it is hard to reproduce human anatomy with phantoms; therefore one might use a de-identified data set, with patient approval, share it, and use it to create a semi-standard. It would be great if one could go to RSNA or another meeting, walk up to a vendor, and look at a standard data set for carotid or cardiac imaging, etc. So I think it is a great idea and, as a customer and a person interested in quality improvement, I would very much like to pursue that.
Q: Do you keep the thin axial CT slices, and what would you recommend for a typical hospital?
A: I work in multiple clinical settings. At the University of Maryland, we keep them for only three to six months unless the study is designated as a research study or needs to be kept for other purposes. At the VA, we keep all of our thin-slice data indefinitely. My recommendation would be for everyone to keep the thin slices indefinitely. However, I think that if you look across the country, only a minority of institutions keeps them.
When we talk with the legal folks about what data to retain, the answer they give us is that you should retain the data you used for making your original clinical diagnosis. I and other people are doing image interpretation from the thin slices, so the logical conclusion would seem to be that if we use the data for making the day-to-day diagnosis, we ought to keep that information, because my decision was partly predicated upon what I saw in an oblique or reconstructed image that was synthesized from the original data. I don't really have a record of what I saw unless I am able to save the thin slices. Therefore my philosophy is to save them, especially with the cost of storage declining. One compromise for institutions that have cost issues would be to compress the thin sections in a number of different ways; you could store the thick sections uncompressed and then use, for example, JPEG compression for the thin sections. My expectation is that in the near future everybody will start saving the thin sections.
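As an illustration of the compromise described in this answer, the following sketch losslessly recompresses a thin-slice image with the open-source pydicom library. It assumes pydicom 2.2 or later, whose built-in encoder handles RLE Lossless (JPEG-family syntaxes would additionally need a plugin such as pylibjpeg or gdcm), and the file names are placeholders.

```python
# Sketch: losslessly recompress a thin-slice CT image with pydicom.
# Assumes pydicom 2.2+; file names are placeholders.
from pydicom import dcmread
from pydicom.uid import RLELossless

ds = dcmread("thin_slice.dcm")
print("Before:", ds.file_meta.TransferSyntaxUID)
ds.compress(RLELossless)                   # re-encodes PixelData in place
ds.save_as("thin_slice_rle.dcm", write_like_original=False)
print("After:", ds.file_meta.TransferSyntaxUID)
```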
Thursday, September 1, 2011
Tips From A Road Warrior (15): Who Cares About Meals These Days?
I was on board for the first leg of a long intercontinental flight when I noticed that we were already 10 minutes past the departure time. As I had been upgraded to business class (one of the very few perks I get for having flown more than 2 million miles on my airline of no-choice), I had noticed ground personnel come and go twice to recount the number of meal trays in the front galley. After another five minutes, the captain came on to announce that a couple of meals were missing, which would cause a take-off delay. After another 10 minutes, yet another supervisor came into the cabin to do another recount. By then the passengers were getting restless; some started to suggest leaving and were willing to give up their meals as long as we would depart. I was getting nervous, as I needed to make a connection as well. After a total of 40 minutes, someone brought the missing meals and we finally left the gate.
I had a similar experience a year or so earlier with a different airline, and that time the delay was only five minutes: the captain asked who preferred a hamburger, rushed out to the McDonald's in the terminal, brought back three "value meals," and we took off without any noticeable delay.
I have found similar stalemates while working with healthcare imaging and IT professionals, both on the vendor support side and within internal departments. I am sure that many of us have experienced the traditional “finger-pointing” that takes place when a problem appears and the different parties involved focus on the blame instead of working together to solve the problem.
At one particular site, a new digital x-ray system was installed that produced images with a circular mask that was supposed to be black. Unfortunately, the images displayed on the PACS workstation showed up with a white mask around the image, which obviously is a major distraction for the radiologist trying to read it for a diagnosis; the bright surround decreases the radiologist's sensitivity to differences in the dark areas, posing a patient-safety issue. The imaging vendor blamed the PACS vendor, and the PACS vendor blamed the modality. It did not help the modality vendor's case that the PACS had been installed for several years without any similar issues. It took getting both parties together with a consultant to look at the details of the DICOM header information to see that the specified mask information was being ignored by the PACS workstation.
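A useful first move in this kind of dispute is simply dumping the header attributes that govern how background pixels should render, on both the modality and the PACS side. The sketch below uses the open-source pydicom library; since the exact element at fault in this anecdote was never published, Pixel Padding Value is shown only as a typical suspect.

```python
# Sketch: dump the header attributes that govern how background/mask
# pixels should render. 'xray_image.dcm' is a placeholder file name.
from pydicom import dcmread

ds = dcmread("xray_image.dcm")
for keyword in ("PhotometricInterpretation", "PixelPaddingValue",
                "PresentationLUTShape", "WindowCenter", "WindowWidth"):
    print(f"{keyword}: {ds.get(keyword, '<absent>')}")
```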
It is also not uncommon for different departments to disagree about who is responsible for certain, often mundane tasks, and the result is that the work falls between the cracks. At a hospital located in the Arizona desert, dust and sand were a chronic problem: the dust filters of the computer fans frequently clogged, causing overheating and failure. There was disagreement between the biomed department and the IT department as to who should clean and vacuum these filters on a regular basis. In one extreme case, there was even a disagreement about who was supposed to clean the CR cassettes, an activity that is in most cases done weekly, or sometimes even daily, by the technologists on the evening or night shift.
Many of these situations can be resolved by a shared sense of responsibility. In the case of the airline, the crew should have focused on the primary mission, which is transporting people to where they want to go on time, rather than on ensuring that all passengers are fed. In the case of healthcare IT, the focus needs to be on maintaining systems in support of patient care. A clear definition of roles and policies about who does what is a big help, but in many cases there are gray zones that need to be driven by the mission. I, for one, prefer to deal with institutions that show this commitment; unfortunately, one is often strapped into the airline seat, or lying in a hospital bed, before finding out whether the people are committed to the mission.
What Is Happening At AHRA These Days?
I enjoy attending the annual meetings of the professional organization for radiology administrators, the American Healthcare Radiology Administrators (AHRA), for several reasons, and I especially enjoyed the most recent one, which took place in Dallas. First of all, they have a good education program with informative sessions, and the exhibition area is somewhat intimate, allowing easier access to vendors than at some of the major meetings. Several major vendors were missing from the exhibition, however, which may be another sign of the economy, as companies both concentrate on the major meetings only and shift resources and dollars to virtual symposia online. The AHRA keynote speakers are always a good choice and cause one to pause and think about our jobs and commitment to our mission and passion. In this case, Rich Bluni, author of the book Inspired Nurse, accomplished just that by sharing stories from his time in the ER and ICU in a very comic and sometimes emotional manner. I promptly bought two copies of his book after the session to give to friends.
The AHRA organization itself is at a crossroads, as its membership, which is close to 4,000, is declining. I am not sure whether this is due to the economic times, i.e., people cutting back on membership fees to save money, or whether it has more to do with the organization no longer appealing to its constituency. In any case, there is a major drive to increase membership, and time will tell whether they are on the right path.
With regard to the meeting itself, I personally found current topics missing from the speakers' roster. No one spoke about such timely subjects as legislative activity around dose reporting or, especially, meaningful-use implementation of electronic health records. I also found technical topics underrepresented, which matters because many of these professionals face complex decisions regarding the acquisition of high-tech modalities.
Walking around the exhibition floor, I did not find a lot of new products and/or services, except for new software for women's health, especially osteoporosis. Traditionally, osteoporosis is diagnosed with DEXA scans, x-rays used to measure bone density, presented as a graph with corresponding measurements. New algorithms are becoming available that allow this to be done using CT scans, which are claimed to be more accurate and relevant. In addition, software has been introduced that assesses the spine for potential degeneration, which can help determine whether early treatment is needed to prevent fractures. I would expect that, with the aging population in the US and the accompanying risk of bone loss among women, these applications will become mainstream and be implemented widely over the next decade.
In a nutshell, this meeting was enjoyable; I learned a few new things and was inspired, especially after listening to the keynote speaker. We will see whether the organization can pull off a successful membership drive and is able to make the meeting more attractive to its constituency next year.
Monday, August 1, 2011
Tips from a Road Warrior (14): Wireless Works
Most airlines are starting to offer wireless on their flights, which is convenient for checking email, especially during long flights. The fees differ and can be steep, often more than $10. And before you think you can also make calls using Skype: they have found a way to block that. I am sure the airlines will want to offer cell service for a separate fee in the future (I am not looking forward to that!).
On one of my recent flights, there was a promotion for free access, so I tried it out. You can imagine what happened: as everyone on the plane wanted to take advantage of the offer, they quickly exceeded the available bandwidth. Needless to say, it was frustrating to find that getting a connection was only randomly successful, and the connection dropped very frequently. It worked, but only intermittently.
When using wireless in a life-critical situation, such as transmitting an image from a portable x-ray unit or ultrasound device in the ER, one had better make sure the devices can connect flawlessly and stay connected reliably. The feedback I have gotten from several users, however, is that the reliability of wireless applications is still spotty at best. There are several reasons for this. One is the physical design of the environment, which may not be very "wireless-friendly": steel doors, physical firewalls, cable runs, high-voltage generators, and other electrical devices all interact with and interfere with radiofrequency signals. Another problem is "dead spots," where reception is very poor. It sometimes takes trial and error to place the wireless transmitters and receivers so that there is consistent reception throughout the facility.
The third reason has to do with interoperability of the communication equipment. The wireless routers in place might not work well with the wireless hardware used in specific devices. Again, this is hard to predict and often requires trial and error, which can be costly. I walked into a department not too long ago where the administrator told me how they had gone to wireless DR portables for all of the ER and ICU portables, which was very successful in providing better turnaround times and service to the physicians. I pointed to a device in the hallway and asked whether it too was one of his units; he told me that that particular brand worked so unreliably it was basically unusable in the new wireless environment, unlike the four other units from brand "B," which were very reliable with the new system.
Another concern is obviously security and privacy. There is a well-known story of a vendor who intercepted all of the hospital's orders while sitting in the cafeteria. You can imagine the uproar if images of a celebrity who had just been admitted to the ER with a fractured bone were intercepted. Unfortunately, the use of encryption and VPNs over wireless networks adds another layer of technical complication and raises the potential for interoperability issues. Yet another wireless issue is emerging with the growing use of tablet computers such as the iPad, and similarly there are connectivity issues with COWs (computers on wheels), the carts nurses wheel from room to room within their departments: the sensors needed to measure vitals can interfere with the cart's connection to the wireless network. That problem, however, may be short-lived. I can already measure my heart rate with my smartphone, and there was recently a publication about a nanotechnology sensor applied to the skin that allows a smartphone to measure glucose levels and other vitals directly, without drawing blood.
In conclusion, wireless is here to stay despite intermittent reliability, so there is plenty to watch out for. I, for one, would never purchase any wireless device without thoroughly testing it and getting a money-back guarantee if it does not meet certain reliability and throughput requirements. The technology can only be expected to get better, but its usage can also be expected to grow exponentially, which will keep us busy for a while figuring out how to make it all work.
Trouble With Transitions Anyone?
I am always looking for new intellectual and physical challenges, which is why I entered my first-ever mini-triathlon last year. After having done two, I am about to enter another race this weekend in my hometown to see whether I can improve my ranking. I find that the hardest part is not any one of the three legs, i.e. swimming, biking or running, but rather the transitions. Biking at maximum effort for about an hour seems to program my body in such a way that changing to running becomes almost impossible, at least for the first mile or so; it is difficult to put one leg in front of the other.
Intellectual transitioning is also hard, but often required. Professions where these transitions may involve life-threatening or emergency situations typically require a lot of training; think of pilots, who suddenly need to react when an engine fails or another serious condition occurs.
Healthcare IT or PACS system administrators face a similar requirement to be ready for stressful transitions. You might be in the middle of upgrading a device when you get a call to update the demographics of a procedure because a technologist entered the information incorrectly.
Some of the tasks you perform require a lot of concentration because of the potential impact of an error. Imagine making a mistake while creating a backup, and that backup later being needed because the original information was lost in a major disk malfunction. If you make a mistake updating a study, the information or images could be assigned to the wrong patient.
Unfortunately, humans make mistakes, especially in a multi-tasking environment where they must often transition from one domain and/or activity to another. I would argue that most errors can be attributed to human error rather than hardware failure. One local hospital told me that their last significant PACS downtime was caused by a vendor service engineer who remotely restored from an incorrect database backup, which corrupted the original and caused four hours of PACS downtime. Needless to say, it pays to monitor anything that happens to the system, even when it is done by your vendor.
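One simple habit that catches many such mistakes is verifying a backup against a recorded checksum before it is ever used for a restore. Below is a sketch using Python's standard hashlib; the file paths are placeholders.

```python
# Sketch: verify a backup file against a previously recorded SHA-256
# checksum before trusting it for a restore. Paths are placeholders.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB at a time
            digest.update(chunk)
    return digest.hexdigest()

recorded = open("/backups/db_dump.sha256").read().split()[0]
if sha256_of("/backups/db_dump.bak") != recorded:
    raise SystemExit("Checksum mismatch: do NOT restore from this backup")
print("Backup verified, safe to restore")
```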
In conclusion, be aware of "transitions" and focus on the activity at hand, especially if you are dealing with issues that will impact the lives of others in a potentially significant manner.
Friday, July 1, 2011
Tips from a Road Warrior (13): Plan Your Retirement
The airlines have a very nice way of retiring their pilots. I once witnessed the last flight of a senior pilot, which was an interesting experience. First of all, they treated it as a celebration: his spouse was included, given a special seat in first or business class, and presented with a big bouquet of flowers upon boarding. Second, upon landing the plane got a "wet" greeting from the airport fire brigade as it taxied through a spray from the water hoses, which for passengers looks much like the spray when a plane is being de-iced. The co-pilot told us ahead of time what to expect, to keep people from becoming anxious when they saw the fire engines roll up beside the plane.
The retirement of a healthcare IT system should also be treated as a time for celebration, while recognizing the need to alert users to the potential struggles such a transition involves, especially if the new system comes from a different vendor. Transitioning means in many cases migrating data, in almost all cases a vast retraining effort, and often a new support staff. Such change typically triggers apprehension, but when people know what to expect, the anxiety level is reduced.
In many cases new systems impact productivity as well. People who have implemented an EMR have told me it took them three months to get back to the productivity level they had before implementing the system. Many make the mistake of switching without adjusting the workload, i.e. they expect to treat the same number of patients as they did prior to the switch. Prudent IT managers ease the transition by either cutting production in half for the first few days or doubling the support resources.
Planning for the retirement of a healthcare IT system should actually start when a new one is being purchased. Agreements about support during the retirement process should be negotiated into the contract, with clear expectations about the amount of support needed, which will help both the service provider and the user. Frustrations arise on both sides when expectations are not spelled out: service providers can be surprised when they are converted to a month-by-month support agreement, and users can be frustrated to learn that continuing support during the transition phase wasn't included in the budget.
It is also important to have a disposal plan for the physical hardware. Everyone has probably seen obsolete computers stacked up in a basement closet. A well-known study by an MIT professor, whose students bought computers from various used sources, found literally thousands of credit card numbers, pharmacy records and other private information on these systems. The US federal HIPAA regulation specifies that old hardware be disposed of in a manner that preserves the privacy and security of the information stored on it. In my city, there is a "computer crusher" that takes the hard drives and physically destroys them to prevent any possibility of data recovery later on.
One should also make sure that a retention policy is defined, i.e. when information and devices should be retired or eliminated, and that enforcing that policy, including actually purging data, is possible from a technical perspective. Many systems build in redundancy to make sure information never gets lost, and are not designed to purge information, especially if it is part of a database or archive. Many systems, especially archive systems, delete a record by setting a flag in the database, which does not erase the actual data in the archive. Consequently, if the information on the old system is migrated to the new one, deleted information often reappears.
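The "reappearing data" effect is easy to picture in miniature. Here is a hedged sketch, in Python with an in-memory SQLite database and a made-up schema, of the difference between a migration that blindly copies the archive and one that honors the deletion flags:

    # Sketch of the "soft delete" problem: a deletion flag in the database
    # does not erase the archived data. Schema and values are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE study (uid TEXT, patient TEXT, deleted INTEGER)")
    conn.execute("INSERT INTO study VALUES ('1.2.3.1', 'DOE^JOHN', 0)")
    conn.execute("INSERT INTO study VALUES ('1.2.3.2', 'DOE^JANE', 1)")  # flagged as deleted

    # A naive migration copies everything found on the archive,
    # so the "deleted" study reappears on the new system:
    print(conn.execute("SELECT uid FROM study").fetchall())

    # A correct migration honors the deletion flag:
    print(conn.execute("SELECT uid FROM study WHERE deleted = 0").fetchall())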
In conclusion, one should plan for the retirement of a system when it is being purchased to ensure it can be done gracefully, and one should consider doing it like the airlines do with their pilots, i.e. make it a celebration to welcome the new and let go of the old.
VHR Lessons Learned for PHR/EHR Implementations
It seems that every time we vacation with our dogs, we end up at a veterinarian because one of them picks up some disease or injury. In any case, we get to know different veterinarians, this time somewhere in Colorado. We needed a vet and found one that uses a Veterinary Health Record (VHR). This recent visit taught me some lessons that we might apply as we begin to roll out Personal and Electronic Health Records (PHR/EHR). First of all, I was initially impressed with the nice data entry screen with graphics to identify the information needed; however, it turned out not to be as easy and smooth as expected.
In general, when registering a patient, there is an issue with unique identification, i.e. is this person the same patient for whom a record already exists in the system? And if the system is connected to another patient domain, what patient identifier should be used for queries? The veterinary world has it relatively easy, as our pets increasingly get RFID chips, which are about the size of a grain of rice and implanted under the skin. The purpose of this chip was initially to identify lost pets, but it is also a great tool for medical records identification. Farmers and ranchers have used RFID tags on animal ears for years to identify individual animals among large herds. The DICOM standard extensions for veterinary applications have actually added a special data attribute to include this information with images.
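For the curious: my understanding is that the DICOM Patient Identification module covers this with a Type of Patient ID attribute (0010,0022), whose defined values include TEXT, RFID and BARCODE, alongside veterinary attributes such as Patient Species Description. A hedged sketch using the open-source pydicom toolkit, with a made-up chip number and patient name:

    # Sketch with pydicom: recording an RFID chip number as the patient ID.
    # The 15-digit microchip number shown is a made-up example.
    from pydicom.dataset import Dataset

    ds = Dataset()
    ds.PatientName = "Rover^Canine"
    ds.PatientID = "985121004312345"         # ISO microchip number (example)
    ds.TypeOfPatientID = "RFID"              # (0010,0022): TEXT, RFID or BARCODE
    ds.PatientSpeciesDescription = "Canine"  # (0010,2201), veterinary extension
    print(ds)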
Unfortunately, there is no US national registry; each manufacturer, distributor, or provider keeps the information. That is why our pets are not "chipped," as we tend to use different providers as we travel with them.
I don't think it is realistic to expect that human patients would be willing to be "chipped." Even if in theory this could happen, it would still require a national registry to prevent duplicates and ensure that each person is uniquely identified. In addition, there are security and privacy concerns that prevent a universal patient ID from being issued and/or used in the US (unlike many other countries). We therefore need to implement the rather sophisticated patient registries defined by IHE (Integrating the Healthcare Enterprise), which allow local ID registration that can be reconciled across multiple ID domains. One would suspect, however, that there are no such security and privacy concerns with pets, so hopefully there might come a day when we see a unified pet registry in place.
Another lesson learned has to do with the data entry for our pet in the electronic record. My guess is that the time it took to enter the information about the primary complaints and observations was more than was actually spent with the "patient." Even though the technician was a very efficient typist, she had to use many different screens and do a lot of free-text data entry. When I see demos of EHR systems by vendor representatives at trade shows such as HIMSS, it appears to go very fast and efficiently; in practice, however, it is a different story.
As I watched the data entry for our dog, it occurred to me that it would be really nice to have speech recognition technology or at least templates, macros or other time-saving methods. As a matter of fact, I estimate that this visit took twice as long as with our home veterinarian, who merely scribbles her notes in the patient jacket. Of course, that paper information is not available to other vets, but there is definitely a time trade-off.
With regard to entering the diagnosis, another issue emerged. While there were no doubt hundreds, if not more, potential diagnoses preprogrammed into the system, the diagnosis for our dog was apparently not foreseen by the system developers. It did not seem to me to be an obscure disease; it just did not fit into any of the many available categories. After trying many different searches, the vet gave up; there is apparently no "free text" entry in this particular system. She commented that the system was definitely developed by engineers who had not taken into account the true requirements of healthcare providers.
I understand the developers' predicament: if we want to improve our human healthcare system, we need to be able to categorize diagnoses so we can measure, and potentially influence, the efficiency of healthcare delivery. However, it might not always be possible to fit a given diagnosis into a black-and-white definition from an existing code system. The danger, of course, is that if free-text entry is allowed, physicians may misuse it and skip the standard diagnosis codes even for easy cases. I would argue, however, that if entering the preprogrammed codes is easier than typing additional text, physicians will not misuse the system.
In conclusion, this experience taught me several lessons with regard to patient identification, ease of use for data entry, and the use of preprogrammed templates, which I hope some of the developers of EHR systems will take as valuable input.
Wednesday, June 1, 2011
Tips from a Road Warrior (12): Check Your Connections
The majority of my travel stories deal with my travels to India, as this country, despite its high-tech image, still has so many infrastructure problems that I typically characterize it as operating in "controlled chaos." Yes, there is public transportation, there are taxis, and there are airports, but they seem to have been built for a fraction of the people that use them today. For example, there is typically only a single door to enter the airport, causing a queue stretching literally around the block. Unless you know a back door, or wave a few hundred rupees, there is no way you will make it to the front without at least a one- or two-hour wait. As an illustration, I had to connect from Bangalore back to Dallas through New Delhi. Unfortunately, that meant changing from the local to the international airport. Upon arriving at the local airport, there were no signs directing where to go or what to do, and after I asked several people for a clue, they pointed me to an obscure ticket counter to purchase a bus ticket. The next bus was leaving in an hour, and looking outside at the queue already lined up, there was no way we were going to fit into that bus. Talking with a fellow American in the same predicament, we decided to take a cab, which is an adventure by itself, especially with two suitcases each to squeeze into these tiny cars. Upon arriving at the international terminal, we decided to bribe our way through by giving an important-looking person with an airport badge a good tip, and we made it just in time.
The lesson learned is that any connection depends on the parties involved. Connecting from Tokyo Narita to Haneda or from London Heathrow to Gatwick is a breeze; not so in India. The same can be said about connecting devices, whether to a RIS, a PACS, or even a workstation. Just as staying with one airline does not guarantee a smooth connection, purchasing a modality from the same vendor as your RIS or PACS does not guarantee that the connection will be simple, straightforward, and without issues. Remember that most large companies have separate development centers for their different products. The RIS might be developed on the US East Coast, the modality in France, and the PACS by a division on the West Coast. Their DICOM conformance statements do not always follow the same template or have the same level of detail, and each seems to have its own conventions and ideas about configuration and installation. As a matter of fact, I would argue that some of the mid-sized PACS companies, who do not sell any modalities, are more open and easier to connect, as they have every incentive to make this as painless as possible.
It is important to do your homework and test the interfaces extensively prior to connecting devices. This applies not only to the interface of a modality with the RIS and PACS, but also to the reporting system, web-based viewers, MPR and future EHR applications. Most calls for assistance are no longer about the basic connections but about issues such as information missing from the report template for cardiology ultrasound exams, where the reporting system does not quite interpret the structured report template generated by the ultrasound unit. There are also issues with archiving and displaying new modality objects, such as those generated by the latest generation of MR and CT scanners as well as 3-D breast imaging devices.
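Basic connectivity itself is easy to smoke-test before go-live. As a minimal, hedged sketch using the open-source pynetdicom toolkit (host, port and AE titles are placeholders for your own configuration), a DICOM C-ECHO verifies that the two sides can associate at all:

    # Minimal DICOM connectivity test: a C-ECHO (Verification) request.
    # Host, port and AE titles below are hypothetical placeholders.
    from pynetdicom import AE

    ae = AE(ae_title="TEST_SCU")
    ae.add_requested_context("1.2.840.10008.1.1")  # Verification SOP Class

    assoc = ae.associate("pacs.hospital.example", 104, ae_title="PACS_SCP")
    if assoc.is_established:
        status = assoc.send_c_echo()
        print("C-ECHO status: 0x{0:04X}".format(status.Status))  # 0x0000 = success
        assoc.release()
    else:
        print("Association rejected or aborted - check AE titles, host and port")

Of course, a successful echo only proves the lowest layer; exchanging representative sample images, worklist entries and structured reports is still required to catch the kinds of issues described above.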
As images are distributed more widely through electronic health records, and possibly personal health records as well, the connectivity issues are only going to grow, if for no other reason than rising volumes. Image distribution used to be rather well controlled within one or more departments; it is already much more widespread, so you can imagine what it will be like when patient images are accessible by any physician a patient authorizes: the sky is the limit.
In conclusion, anytime there is a connection or interface, one should be aware of potential issues and prepare accordingly. This can be done by studying the interface specifications, and testing sample images and transactions in advance. Don't assume that it is easier if you use the same vendor on each side, just as using the same airline doesn’t guarantee easy connecting flights.
Dose Issues Not Only For CT
There is a lot of activity around radiation dose reduction, especially for CT exams. This is partly due to the incidents, widely covered in the press, in which people were overdosed due to operator errors and negligence. Another important factor is the increase in CT exams, especially in the ER. It used to be that trauma cases would get a couple of X-rays to look for potential fractures and/or internal damage; however, most ERs now have a resident CT, and a body scan is pretty much standard procedure.
After numerous studies raised safety concerns about the amount of radiation exposure from all these CT scans, vendors are finally taking notice and implementing techniques to reduce radiation exposure. One step being taken is to start registering the dose administered to the patient. This sometimes requires dosimeters in the X-ray chain, as well as reporting mechanisms. The reporting is still very much a work in progress. For some modalities, such as digital mammography, there is already relatively reliable information in the image header, which could be extracted by the PACS and stored. Some systems use the DICOM Modality Performed Procedure Step (MPPS) information, as it can also (optionally) contain the dose; some cardiology applications use this, whereby the cardiology information system records the information. There are drawbacks to the MPPS method, as it is by design dependent on the images that are created; for fluoroscopy exams, for example, there might be only a few images taken, or none at all. If one depended on the dose information in the MPPS for those types of exams, the exposure would be severely underreported.
This is especially true for CT, where there is often a separate screen capture archived with the dose information; because there is no structured digital representation, data extraction has to rely on so-called screen scraping or optical character recognition to get at the actual values. The best way of reporting the dose information is the dose structured report. As a matter of fact, the Integrating the Healthcare Enterprise (IHE) initiative has defined a special profile for this, called Radiation Exposure Monitoring (REM). It was demonstrated at the recent Radiological Society of North America meeting; however, there is still a lack of recording and reporting systems, which is making implementation very slow.
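Once a modality does produce a dose structured report, the values become machine-readable rather than screen-scraped. As a hedged sketch (Python with the open-source pydicom toolkit; "rdsr.dcm" is a placeholder file name), one can recursively walk the SR content tree and print any measured values, such as CTDIvol or DLP:

    # Sketch: extract numeric dose values from a Radiation Dose Structured
    # Report by walking its content tree. "rdsr.dcm" is a placeholder.
    from pydicom import dcmread

    def walk(items, depth=0):
        # Print each coded concept name and, where present, its measured value.
        for item in items:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
            if "MeasuredValueSequence" in item:
                mv = item.MeasuredValueSequence[0]
                units = mv.MeasurementUnitsCodeSequence[0].CodeValue
                print("  " * depth, name, "=", mv.NumericValue, units)
            if "ContentSequence" in item:   # descend into nested containers
                walk(item.ContentSequence, depth + 1)

    ds = dcmread("rdsr.dcm")
    walk(ds.ContentSequence)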
CT dose reporting is getting most of the attention; however, in my opinion, the overexposure caused by unnecessary exams using standard X-ray systems such as DR or CR is underestimated. As an example, my little six-year-old grandson has issues with allergies and congestion, especially during the flu season. He has already had several chest X-rays over the course of his first six years, as pediatric physicians like to play it safe and order an X-ray "just in case." There are also no guidelines on how much to reduce the technique factors while maintaining image quality sufficient to make a diagnosis. The Image Gently campaign has developed online teaching materials, but to my knowledge, no guidelines have been published yet. In addition, when images are taken, there is often a lack of shielding, an issue reported in the article "X-rays and Unshielded Infants" on Feb. 27, 2011 in the New York Times.
It might seem strange to hear a call for X-ray reduction from someone like me who works in this industry; however, one should realize that 80 percent of the world's population has no access to X-rays at all. Rather than over-utilizing these systems for the privileged 20 percent, it might be better to expand access to those who have none. This requires the development of low-cost digital systems that are very durable and easy to use. I believe this can be done, if some of the major vendors would just make it a priority.
In conclusion, dose registration is still challenging, and implementation of the IHE REM profile should be a major push. Registration, however, is just the first step; further development of dose reduction techniques, and guidelines from professional organizations, are needed as well.
Sunday, May 1, 2011
Tips from a Road Warrior (11): Have a Back-up for Your Back-up Plan
The recent actions in Pakistan brought back memories of the 9/11 events, when I was literally a mile away from the Pentagon. I was on the 11th floor at a DICOM working group meeting with a clear view of most of the city and could see exactly what was going on. I saw firsthand the confusion, smoke, and semi-panic in Washington DC that morning. The first reaction of course was to call home, as I knew my family would be trying to find out if I was safe. Of course, cell phones did not work as the circuits were overloaded. It took me about an hour to realize that a landline might work, so I went back to my hotel and called my family from there.
The second challenge was to get back to Dallas from DC before the weekend. I thought, no problem, we have trains, don't we? I could not get through to Amtrak and decided to walk to the main station. Well, I found out that there were trains, if I was willing to be on the stand-by list for up to three weeks from that date. That left me no other option than to rent a car. There were plenty available, although I had to negotiate a one-way rental penalty of $1,000 or so when I turned in the car in Dallas. Lesson learned: have a back-up for your back-up; in my case, renting a car when the train back-up fell through. That is why I keep a back-up of my laptop computer and also keep a copy off-site somewhere in a cloud, just in case.
Many institutions keep multiple copies of their images available at a modality for the first days to weeks. Some PACS architectures have gateways that store images for a certain period of time. Many also archive a copy off-site at a Vendor Neutral Archive, and then also have a tape back-up that they put into an off-site vault. Determining what to duplicate where, when, and for how long should be based on a risk analysis. This analysis should include the unimaginable.
This is especially important for components that directly impact critical care, such as the ER. Most ERs by now have multiple CR systems, and if the volume does not justify large units, a single plate reader should be sufficient as a back-up. You might even keep a laser printer, and make sure there is sufficient film to go with it. If the printer is connected to a network, keep a direct patch cable around that allows the CR to connect directly to the printer in case there is a major network failure. Another solution is to have a CD burner in the ER that allows images to be stored on exchange media and carried over the "sneakernet" to the radiologist, who can review them temporarily on his or her workstation.
One should use common sense, however, and not go overboard, which is where the risk analysis comes in. I have seen institutions where the images are still archived on removable disks at the CT or MRI, just because it makes the technologists "feel safe" or because it "has always been done that way." If over a period of, let's say, one year, no one ever asked for these to be retrieved, one might rethink this practice and possibly eliminate it from the workflow. I also have seen an institution where five copies were available: one at a gateway, one at a local server, one at the archive, one at the web server, and one at an electronic medical record server that was functioning as a Vendor Neutral Archive. This, too, could use some analysis.
In conclusion, make sure you have back-ups and redundancy so you are never "stuck," and also make sure there is a back-up for your back-up in case the first back-up fails as well; however, don't go overboard. This is what I learned when traveling, and this is what you should consider when doing a risk analysis, so you won't be caught scrambling for a solution.
Enterprise Information Management and Archiving Hot Topics
During the recent vDHIMS ePosium on the subject of the evolving digital healthcare enterprise, attendees had an opportunity to interact with the distinguished faculty in a Q&A session. Here are some of the notable questions that were asked and the respective responses.
Data migration is still a major issue, and Steve Horii from the Hospital of the University of Pennsylvania (HUP) can attest to that, having gone through the experience several times. One of the issues he noted was the potential loss of annotations when migrating the data. These annotations are also referred to as overlays in the DICOM standard. There are several options for storing this information. The first option is to "burn in" the data, which actually means that the pixels are replaced. This is seen a lot with ultrasound and creates potential problems in case the information happens to be incorrect and needs to be modified; some users put "XXX-es" over the text, but if that annotation is not preserved during the migration, there could be a major issue. Another option is to save this information in a database record in a proprietary manner, which is what Steve had to deal with in his migration. The proper way of storing overlays is by creating a DICOM standard object, the so-called "Presentation State"; however, this requires the migration software to be able to interpret the database of the source PACS in order to convert the annotations into presentation states.
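When scoping a migration, it helps to know up front which images carry overlay planes at all. A hedged sketch with the open-source pydicom toolkit ("image.dcm" is a placeholder file name): DICOM overlay data lives in the repeating groups 6000-601E, element 3000, which can be checked directly:

    # Sketch: detect DICOM overlay planes (groups 6000-601E, element 3000)
    # so annotations are not silently lost during a migration.
    from pydicom import dcmread

    ds = dcmread("image.dcm")  # placeholder file name
    overlay_groups = [g for g in range(0x6000, 0x6020, 2) if (g, 0x3000) in ds]
    if overlay_groups:
        print("Overlay plane(s) in group(s):",
              ", ".join("0x{0:04X}".format(g) for g in overlay_groups))
    else:
        print("No overlay planes; annotations, if any, live elsewhere "
              "(burned into the pixels, presentation states, or the PACS database)")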
There are other reasons for being able to interpret the proprietary component of the input data to be migrated. For example, if the archived images were stored on non-rewritable media such as optical disks, changes to the patient demographics, or the deletion of certain images or even complete studies after the fact, are not reflected in the image archive but only in the database. This is further proof that data migration involves not only the transfer of the images but also a lot of knowledge about the input and output database structures.
Another question from the ePosium audience was what to do with old modality disks, as many CT, MR and even some ultrasound units archived their images on magneto-optical disks (MOD) long before a PACS was installed, and those studies might occasionally need to be retrieved. The HUP solution was to have the vendor create a special data input station with a single disk reader. Remember that the CT or MRI might long since have been retired and replaced with a newer modality, which means the "old" MOD or DVD readers could have been retired with the old units, and with them the capability to read this media.
Another insightful series of presentations came from Kevin McEnery about the EHR and the in-house developed viewer, built using Service Oriented Architecture (SOA) principles, at the MD Anderson Cancer Center in Houston. One of the participants asked about the development staff at this institution; the answer was an impressive 200 people. Major reasons for this institution to develop its own viewer and infrastructure are the very different workflow for radiation therapy, the need for clinical trial support, and the requirements for submitting treatment data to regulatory agencies. Even among "typical" institutions there are already significant differences in workflow, making it hard for EHR systems to match them, let alone for highly specialized institutions such as a cancer center.
Another issue noted was the requirement to have a certified EHR to meet the new Meaningful Use requirements so that the institution can apply for incentives under the HITECH section of the ARRA. As Dr. McEnery noted, it is possible to apply for a "modular" certification and reuse the certification of the "core" functionality, certifying only the additional modules, which will be a big help for many institutions as they customize their EHR and EMR implementations.
If you are interested in the complete text of the presentations by Drs. Horii and McEnery, you can find them archived as part of the symposium, in addition to the other presentations and copies of the handouts from this three-day event. It is even possible to earn continuing education credits by taking a simple quiz after each presentation, so you can keep up with your certification requirements; see www.otechimg.com/vdhims for more details.
Friday, April 1, 2011
Tips from a Road Warrior (10): Keep Your Eyes Wide Open
One of my never-ending fears is that I will forget something when going through the security checkpoints in airports. I try to have a system for collecting my things after passing through the screening, i.e. first I retrieve my computer and other small items, and then my bag and shoes, so I have less chance of leaving something behind. Apparently, even this is not a foolproof solution for everyone: I once heard an airport announcement for someone at gate five to come pick up his shoes.
When I am fit and rested when leaving for a trip, there is less chance of forgetting something. However, if I am suffering from severe jet lag, which makes me feel like I am sleepwalking, and I am transferring somewhere overseas, e.g. from London Heathrow to Gatwick airport, I am surely more prone to forget something. I have actually forgotten my laptop only once, which is not a bad record given the many trips I make, and fortunately I found out in time. I was connecting through Munich after teaching a class and was half asleep because of the time difference, relaxing in the airport lounge, ready to go to the gate. Fortunately something clicked when I picked up my computer bag and found it to be very light. I rushed back to security and, yes, they had stored my computer securely; after identifying myself I was able to retrieve it and hurry to the gate.
When dealing with medical information such as patient demographics and images, one cannot afford to suffer from jet lag, sleep deprivation or not being 100 percent fit. I know that it is hard if you get a call in the middle of the night to fix a study that was unidentified, rejected by the PACS, or not posted on a work list for whatever reason, but you have a responsibility to the patient to make sure you don't forget anything. The best way to do this is to have a fixed rule or process in place that is easy to follow and to check the fix when you are fresh and bright in the morning.
When fixing things, also make sure that you use the right tools and know what you are doing. I often hear of professionals using tools that they found for free on the Internet, which might not be validated or tested for use in a clinical environment. Now, there are many great free and open-source utilities, many of which I use myself, but when you use one of them, ask around and/or do some testing before relying on it. Let me give you some examples of what I have seen in the field.
There are devices that create duplicate unique identifiers (UIDs). By definition a UID is supposed to be unique worldwide; it is used to uniquely identify studies, series, images, frames of reference, etc., and to index the database for storing and retrieving. Duplicate UIDs used for different entities therefore undermine the UID system and create major problems. If a device has a poor UID generator, one might need to fix this by replacing the UID. One solution I have heard of is that some administrators take the UID and add a ".1" to the end of the string. This is very dangerous, as the result is not necessarily unique. The proper solution is to use a tool that creates a new UID using its own registered "root."
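As an illustration, here is a hedged sketch of the latter approach using the open-source pydicom toolkit; the root shown is an obviously fake placeholder, and in practice you would use a root registered to your own organization:

    # Sketch: generate a replacement UID under a proper root instead of
    # appending ".1" to an existing UID.
    from pydicom.uid import generate_uid

    # With no arguments, pydicom prefixes the UID with its own registered root:
    print(generate_uid())

    # In production, pass your organization's registered root instead:
    my_root = "1.2.3.4."  # fake placeholder - register your own root!
    print(generate_uid(prefix=my_root))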
Another example is when one changes the image header with a utility that does not recalculate and update the number of bytes in a particular group. The problem is that early versions of DICOM had a so-called "group length" attribute in the header, which indicated how many bytes there are in a particular group, such as the patient information group. These attributes have long been retired and are rarely created anymore; however, some applications still create and/or use this information, or simply check the value and, if it is incorrect, reject the image or report an error.
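One defensive fix, sketched below with pydicom (file names are placeholders), is to strip the retired group length elements from the main data set after editing a header, so stale byte counts can no longer contradict the edited content. Note that pydicom keeps the file meta group separate, so the still-required (0002,0000) element is untouched:

    # Sketch: remove retired group length elements (gggg,0000) from the
    # main data set after a header edit. File names are placeholders.
    from pydicom import dcmread

    ds = dcmread("image.dcm")
    for tag in [t for t in ds.keys() if t.element == 0x0000]:
        del ds[tag]  # drop the stale group length element
    ds.save_as("image_fixed.dcm")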
Another example is the correction of incorrect overlay information, which is not uncommon for ultrasound images. If a technologist forgets to change the patient information when scanning a new patient, the incorrect patient information may be "burned in" as an overlay in the image data. A similar, but even more severe, patient safety issue occurs when the Left/Right marker is on the wrong side of an X-ray. In both cases, the pixel data has to be changed, assuming that a retake and/or recapture of the image is not possible. Many administrators simply use an overlay utility and put "XXXX" over the incorrect text or markers. This might appear correct on the PACS viewing station, but when the image is displayed on a different vendor's workstation, a teleradiology system, or a web-based viewer, these overlays might disappear, leaving an incorrectly identified image. The same issue arises when migrating the images to another vendor's PACS, as overlays are often not migrated. One should use a tool that actually eliminates and replaces the pixel values.
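A hedged sketch of such a pixel-level fix with pydicom follows; the file name and rectangle coordinates are made up, and this simple form applies only to uncompressed grayscale images (decoding the pixel data requires NumPy to be installed):

    # Sketch: blank out burned-in text by overwriting the actual pixel
    # values, not by layering an overlay on top of them.
    from pydicom import dcmread

    ds = dcmread("ultrasound.dcm")   # placeholder file name
    pixels = ds.pixel_array.copy()   # decode the pixel data to a NumPy array

    # Zero out the region with the incorrect burned-in annotation
    # (rows 0-40, columns 0-300 in this made-up example):
    pixels[0:40, 0:300] = 0

    ds.PixelData = pixels.tobytes()  # write the modified pixels back
    ds.save_as("ultrasound_fixed.dcm")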
In conclusion, it is important to be alert and keep your eyes open and use the right tools when dealing with patient and image information.