"Vendor neutral archiving" is the current buzz-term from the 2009 RSNA meeting in Chicago. It is a requirement if an institution would like to have a single enterprise solution for all of its images. In most cases, additional sources of images after radiology are cardiology, surgery, endoscopy, as well as other specialties such as dentistry, and dermatology. Using an individual image archive for each department that generates medical images makes no sense.
A single data repository for all medical images in an institution is driven, in part, by IT professionals who see this solution as achieving economies of scale for both physical infrastructure and administrative support. In addition, the increasing implementation of EMR technology all but demands a single source for image data.
However, early experiences with trying to disconnect and isolate the archive component from the PACS have not always been easy or painless. Unfortunately, the IHE has not yet delivered a bulletproof profile, as the image manager and image archive do not yet have a standardized protocol and/or interface. Users have found that vendor-specific, proprietary information still resides in their archives. Archive performance has also been reported to be an issue.
What should a system administrator do to prepare for transitioning their department to a shared, enterprise-wide archive?
First, it is prudent to maintain the first archiving tier from the same PACS vendor. Images are still retrieved from this primary server, and retrievals for the first few days and/or weeks are served by this first-tier archive, while a copy is sent to the vendor-neutral enterprise archive for distribution as part of the EMR. After the roll-off date to the second (enterprise) tier, such as 1-3 months after implementation, image retrievals and/or pre-fetching are served from the enterprise archive.
Second, a well-defined interface specification, performance requirements, and comprehensive acceptance testing should be part of the enterprise archive purchasing process. References from other sites that have implemented an enterprise archive with your flavor of PACS are important, too.
One test I recommend conducting is to run a "compare" script, which demonstrates that images are archived and retrieved in identical fashion from both the PACS archive and the enterprise archive. Note that synchronization between the tier 1 and tier 2 archives can be tricky, particularly when changes, such as modifying patient demographics, are made at one archive level or the other. Make sure that the DICOM standards that deal with Presentation States and Key Images are supported.
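To make the idea concrete, here is a minimal sketch of such a compare script in Python, assuming studies have been exported from both archives to local folders and that the pydicom library is available; the folder names are hypothetical. A fuller version would also compare header elements, including the Presentation State and Key Object Selection objects mentioned above.

```python
# A minimal compare-script sketch: verify that both archives return
# byte-identical pixel data for every SOP instance.
import hashlib
from pathlib import Path

import pydicom

def index_by_sop_uid(folder):
    """Map SOPInstanceUID -> MD5 digest of the pixel data for each file."""
    index = {}
    for path in Path(folder).rglob("*.dcm"):
        ds = pydicom.dcmread(path)
        if "PixelData" in ds:  # skip non-image objects
            index[ds.SOPInstanceUID] = hashlib.md5(ds.PixelData).hexdigest()
    return index

tier1 = index_by_sop_uid("export/pacs_archive")        # hypothetical paths
tier2 = index_by_sop_uid("export/enterprise_archive")

for uid, digest in tier1.items():
    if uid not in tier2:
        print(f"missing from enterprise archive: {uid}")
    elif tier2[uid] != digest:
        print(f"pixel data mismatch: {uid}")
```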
Tuesday, December 1, 2009
Healthcare Information Technology Management, Part 4 of 4
Part four of a four-part series on healthcare information technology. The information found in this article can be used as part of a preparation program for the American Board of Imaging Informatics (ABII) Imaging Informatics Professional certification program, which awards the Certified Imaging Informatics Professional (CIIP) designation.
For further information on an extensive set of topics of interest to prospective CIIP candidates, please go to https://www.otpedia.com.
The backbone of Healthcare Information Technology (HIT) is comprised of physical and logical networks, servers, databases, and archives. These elements form the engine that drives the HIT enterprise. The design and deployment of these HIT components requires a thorough knowledge of how each interacts with the others, the capabilities of each, the standards affecting their implementation, and a planned upgrade path for when current technology is eclipsed by future innovation.
Network Hardware and Software Implementation and Maintenance
Hardware
A network is comprised of a variety of hardware devices.
These include repeaters or hubs, switches, routers, gateways, and servers. In addition, depending on the speed of the desired network connection, different types of cabling may be used, such as unshielded twisted pair (UTP), shielded twisted pair (STP), coaxial, or fiber optic. In some instances, wireless LANs (WLANs) may also be deployed.
Software
As a healthcare IT professional you will be working with a number of operating systems. Microsoft Windows is the major operating system you will face, but some systems also use UNIX and Linux. You may also be called upon to use DOS as a means of troubleshooting network problems. Typical deployments include:
- UNIX: mostly for the back-end (servers)
- Windows: some back-end, mostly for workstations
- Linux: some view stations, some RIS systems
An operating system (OS) runs the hardware and controls interfaces and peripherals. In a PACS, an OS is the system software responsible for the direct control and management of hardware and basic computer system operations, as well as running application software such as image processing programs and Web browsers. In general, the OS is the first layer of software loaded into computer memory when the computer starts up.
Utility programs perform tasks that maintain a computer's health, whether hardware or data. Common utility functions include (see the sketch after the list):
- File Management
- Disk Management
- Memory Management
- Backup
- Data Recovery
- Data Compression
- Anti-virus
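As a small illustration of the disk-management category, the following Python sketch checks free space on a volume and warns below a threshold; the mount point and the 10% threshold are assumptions to adapt for your own system.

```python
# Hypothetical disk-management check: warn when the image cache volume
# runs low on free space.
import shutil

total, used, free = shutil.disk_usage("/")  # substitute the PACS cache mount point
percent_free = free / total * 100
print(f"{free // 2**30} GiB free ({percent_free:.1f}%)")
if percent_free < 10:
    print("WARNING: less than 10% free -- archive or purge cached studies")
```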
Server Architecture
There are basically four types of server architecture (mainframe, file sharing, client/server, and application service provider, or ASP), with variations and mixed environments common in healthcare IT.
Mainframe architectures host all data at a central host computer. Users interact with it through a terminal that captures keystrokes and sends that information to the mainframe.
In a file-sharing architecture, a server downloads files from a shared location to a desktop, where the requested task (both logic and data) is then run.
Client/server architecture replaces the file server with a database server. Employing a relational database management system (RDBMS), the server answers user queries directly. The client/server architecture reduces network traffic by returning a query response rather than transferring an entire file.
In an ASP-based architecture, the application is provided to users over a network and is accessed by users through a Web browser or client software provided by the ASP vendor. Medical billing, PACS, and archive services are some of the commercial offerings available with this model.
IT Replacement Schedule Development
Planning for Obsolescence
No matter what computer technology or software a healthcare IT administrator installs, there will soon be something faster or "better." Obsolescence is a fact of life in technology. Planning for it is the responsibility of the healthcare IT administrator.
Hardware is usually the slowest component to become obsolete; however, be aware that new software applications, such as 3D modeling, can accelerate hardware obsolescence. Generally, features such as extra memory and external drives can be added to extend the usable life of hardware.
Software obsolescence depends on when vendors decide to add new features and upgrades, and when users demand to have them. Note that vendors eventually stop supporting older software versions, which forces many of their customers to upgrade to the latest iteration.
Technology Lifetime
Recognizing that technology has a definitive lifetime is an important consideration when selecting a particular technology or format for deployment in a healthcare IT enterprise.
For example, consider music playback technology. Wax cylinders gave way to 78 RPM disks, which were supplanted by 45 RPM, then 33-1/3 RPM records. 8-track cassette tapes were rendered obsolete by smaller-format cassette tapes, then records and tapes disappeared from the market with the advent of CDs. Of late, CD technology for music distribution and playback has been threatened by the explosive growth of MP3.
The lesson is that there will continually be new developments in technology. Savvy healthcare IT administrators need to plan for the technologies of tomorrow by ensuring that their current deployments support accepted standards, so that their data can be successfully migrated to new platforms.
Moore's Law
Moore’s Law is an observation made in 1965 by Intel co-founder Gordon Moore that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented.
Moore predicted that this trend would continue for the foreseeable future. In recent years, the pace has slowed, but data density has doubled approximately every 18 months, and this is the current definition of Moore’s Law.
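A quick back-of-the-envelope calculation shows what that doubling rate implies over a typical five-year equipment lifetime:

```python
# Density doubling every 18 months compounds to roughly a tenfold
# increase over a 60-month (five-year) replacement cycle.
months = 60
doublings = months / 18
print(f"growth over {months} months: {2 ** doublings:.1f}x")  # -> 10.1x
```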
Sunday, November 1, 2009
Do You PHR Yet?
Each spring, I travel with a group of Rotarians to Nicaragua to help build healthcare clinics in remote areas. Before the trip, I always make an elaborate spreadsheet containing the medical information of each member of our team (allergies, immunizations, chronic conditions, medications, etc.) in case something happens and any of us need medical attention.
Although useful, the spreadsheet model obviously isn't the most efficient tool available for compiling and storing a medical record. Recently, I’ve been intrigued by personal health record (PHR) applications that are available online. The functional model of a PHR, defined by the HL7 organization, allows for the easy query of immunization records and access to medical information, subject to the proper authorization. The capability to access the health record of any member of the Nicaragua team from an Internet-capable device is certainly better than carrying around a spreadsheet file.
On a personal level, a recent trip to my local clinic convinced me of the benefits of maintaining a PHR. It took a receptionist ten minutes to type in my demographic information and medical history, even though I had been seen at that location less than two years ago. It turns out the facility had deployed a new health record system and had been unable to migrate any historic data. A simple URL link to my PHR with the proper authorization could have eliminated this problem.
These are two simple examples of how a PHR can increase efficiency; on a more pragmatic level, this technology has the potential to minimize the possibility of adverse drug interactions as well as prevent other common medical errors.
PHRs are available today from many providers. Of particular note, IT industry heavyweights Google and Microsoft are attempting to carve out a share of this market. HIS and EMR vendors are also trying to stake a claim in this space; however, many of these PHR systems are "tethered" to a facility or vendor application.
For example, a hospital here in Dallas encourages its patients to use a PHR it provides. Unfortunately, the PHR is a vendor-based system that is, for all practical information transfer/access purposes, useless outside that institution.
Other than the obvious benefit to be gained from having a PHR, why are these applications of interest to medical imaging IT professionals? I’m fairly certain that it won’t be too long before patients begin arriving with a URL link to their prior imaging studies, which are stored in their PHR. And yes, there will undoubtedly be interoperability issues with a PACS.
Healthcare Information Technology Management, Part 3 of 4
Part three of a four-part series on healthcare information technology. The information found in this article can be used as part of a preparation program for the American Board of Imaging Informatics (ABII) Imaging Informatics Professional certification program, which awards the Certified Imaging Informatics Professional (CIIP) designation.
The backbone of Healthcare Information Technology (HIT) is comprised of physical and logical networks, servers, databases, and archives. These elements form the engine that drives the HIT enterprise. The design and deployment of these HIT components requires a thorough knowledge of how each interacts with the others, the capabilities of each, the standards affecting their implementation, and a planned upgrade path for when current technology is eclipsed by future innovation.
Structured Query Language (SQL)
SQL is used to communicate with a relational database, which is a structure in which the data consists of a collection of tables related to one another through common values.
This is fundamentally different from a hierarchical database: in a relational database there is no hierarchy among tables, and any table can be accessed directly or potentially linked with any other table; there are no hard-coded, predefined paths among the data.
The two most prominent characteristics of a relational database are:
- Data is stored in tables; and
- There are relationships between tables.
A row (also known as a record) represents a collection of information about a separate item (for example a patient).
Certain fields may be designated as keys, which means that searches for specific values of that field will use indexing to speed them up.
A relationship is a logical link between two tables.
Where fields in two different tables take values from the same set, a join operation can be performed to select related records in the two tables by matching values in those fields.
Often, but not always, the fields will have the same name in both tables. For example, an "orders" table might contain (patient_id, image_code) pairs and a "patient" table might contain (patient_id, service_date, physician_code) records, so to review all of a patient's records you would join the two tables on their patient_id fields.
This can be extended to joining multiple tables on multiple fields. Because these relationships are only specified at retrieval time, relational databases are classed as dynamic database management systems.
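A minimal sketch of that join, using Python's built-in sqlite3 module; the table and column names mirror the hypothetical example above, and the inserted rows are invented sample data.

```python
# The orders/patient join described above, run against an in-memory
# SQLite database.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patient (patient_id TEXT, service_date TEXT, physician_code TEXT);
    CREATE TABLE orders  (patient_id TEXT, image_code TEXT);
    INSERT INTO patient VALUES ('P001', '2009-11-01', 'DR42');
    INSERT INTO orders  VALUES ('P001', 'CT-CHEST'), ('P001', 'MR-BRAIN');
""")

# The relationship is specified at retrieval time by matching values
# in the shared patient_id field.
cur = con.execute("""
    SELECT p.patient_id, p.service_date, o.image_code
    FROM patient AS p JOIN orders AS o ON p.patient_id = o.patient_id
""")
for row in cur:  # the cursor steps through the result set one record at a time
    print(row)
```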
Although most database systems use SQL, most of them also have their own additional proprietary extensions that are usually only used on their system.
Performance Indicators
Typically, a diagnostic imaging IT administrator acts as a PACS database manager—which includes the responsibilities of controlling read/write access, specifying report generation, and analyzing usage.
Common reports that can (and should) be generated as system and department performance indicators in radiology include:
- Utilization
- Uptime
- Capacity
- Exceptions
- Duplicates
- Lost studies
- Unread studies
All relational database management systems (RDBMS) share the following characteristics:
- Data model -- An RDBMS stores data in a database consisting of one or more tables of rows and columns. The rows correspond to records (tuples); the columns correspond to attributes (fields in the record). Each column has a data type (for example, date).
- Query language -- A view is a subset of a database that is the result of the evaluation of a query. The types of queries supported run the gamut from simple single-table queries to very complicated multi-table queries involving joins, nesting, set union/differences, and others.
- Computational model -- All processing is based on values in fields of records. Records do not have unique identifiers, and there are no provisions for references from one record to another. The presentation of data as tables is independent of the way the data is physically stored on disk. Examining the result of a query is done under the control of a cursor that allows the user to step through the result set one record at a time; the same is true for updates.
The term "ORD" is sometimes used to describe external software products running over traditional DBMSs to provide similar features; these systems are more correctly referred to as object relational mapping systems.
Whereas RDBMS or SQL-DBMS products focus on the efficient management of data drawn from a limited set of data types (defined by the relevant language standards), an ORDBMS allows software developers to integrate their own types, and the methods that apply to them, into the DBMS.
Another advantage to the object-relational model is that the database can make use of the relationships between data to easily collect related records.
In an object database, data is stored as objects.
Each object has a unique identifier. Data can be interpreted only using the methods specified by its class. The relationship between similar objects is preserved (inheritance) as are references between objects.
Different database vendors use two basic methods to store objects (a loose sketch of the first follows the list):
- Each object has a unique ID and is defined as a subclass of a base class, using inheritance to determine attributes.
- Virtual memory mapping is used for object storage and management, with the memory location serving as the unique identifier.
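A loose Python analogy for the first method, illustrative only: a base class hands every object a unique identifier, and subclasses inherit that machinery.

```python
# Illustrative sketch: a base class assigns each object a unique ID,
# and subclasses inherit it, mirroring the first storage method.
import uuid

class PersistentObject:
    def __init__(self):
        self.oid = uuid.uuid4()  # unique object identifier

class ImageSeries(PersistentObject):  # inherits the ID machinery
    def __init__(self, modality):
        super().__init__()
        self.modality = modality

    def describe(self):  # data is interpreted via the class's methods
        return f"{self.oid}: {self.modality} series"

print(ImageSeries("CT").describe())
```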
Dashboards
Dashboards are performance analysis tools that aggregate a collection of network metrics and present them to the healthcare IT administrator through a single interface.
For diagnostic imaging IT administrators, Paul Nagy, Ph.D., has developed and made freely available an open-source PACS dashboard, PacsPulse, for analyzing the performance of DICOM archiving traffic.
In addition to administrator dashboards, the concept has been extended by some vendors to the user work environment with dashboards available for radiologists and technologists in some PACS products.
Thursday, October 1, 2009
A Day in the Life of a PACS Administrator Goes Viral!
The latest YouTube video about a day in the life of a PACS administrator has gone viral, with more than 2,000 viewings to date.
OTech is challenging the PACS community with a competition; we're looking for similar videos that show the humorous side of PACS administration. We all know that it can be a challenge dealing with "some" technologists and physicians, but laughing about the common issues we all share always helps ease the strain of the daily grind.
I’m looking forward to seeing your next "episode"!
How to Get the Most Out of Your PACS Using IHE
The following is a brief synopsis of our Webcast featuring John Evers and Herman Oosterwijk that was broadcast on Sept. 17, 2009. Click here if you'd like to view the complete presentation.
The majority of PACS systems are running at a sub-optimal level due to a lack of re-engineering for IHE profile implementations. Proper IHE implementations can eliminate unnecessary steps and can greatly increase the data integrity and efficiency of your PACS.
A poll held during our Webcast found that only 4% of our audience used the DICOM Modality Performed Procedure Step (MPPS) for all modalities. MPPS communicates exam status, the number of images, and procedure changes from a modality to the PACS and RIS. We also discovered that only 28% used MPPS for some modalities, while 65% had not yet implemented this feature.
This seems to demonstrate that nearly two-thirds of PACS implementations still do not use the IHE scheduled workflow capabilities to their fullest extent. Based on our poll, it’s a safe assumption that there are quite a few practices using unnecessary steps and actions, as well as inefficient workflow scenarios, to manage their PACS.
Apparently, full integration of RIS and PACS still has a ways to go.
At a time when many users are switching PACS vendors, or considering moving to a new system, issues with poor integration might not be due to a lack of functionality on your current deployment, but rather a lack of understanding of the full potential of your RIS/PACS integration. I have visited institutions where the department had paid for integration using the IHE workflow profile capabilities, but never switched it on; worse, they did not even know they paid for this functionality.
How can you ensure that your RIS/PACS are fully integrated and optimized? First, map the current workflow, and identify the potential bottlenecks. Second, look at the workflow as described by the IHE Use Case scenarios. These scenarios not only detail standard department operations, but also show how to deal with unscheduled exams, patient updates, procedure updates, multiple orders for the same procedure (such as a chest-abdomen-pelvis CT) and other "exception" cases.
The next step is to make an inventory of the current IHE support and capabilities of all your modalities and the PACS and RIS, which can be verified by the IHE integration statements of these devices. It is very likely you’ll find that you have some devices that do not have MPPS, or even Storage Commitment, configured. Your final step is to make a plan to upgrade the devices that are lacking these features, and then roll out the workflow changes. I know this is quite a bit of work, but the results of having a much more efficient operation will be worth it.
Tuesday, September 1, 2009
Is Your PACS Running on Empty?
Based on my discussions with other PACS professionals, as well as conversations I've had with PACS administrators in our training classes, I’ve come to the conclusion that most PACS run at about 20%-30% of their optimum efficiency, speed, and workflow effectiveness.
Many PACS users are looking to replace their systems, but they might end up with the same inefficiencies and issues as with the PACS they replaced if they don’t make the effort to utilize the new system to its fullest.
What is missing?
There are two major items that many users overlook. First is the absence of a re-engineered workflow. A department with needless duplication, unnecessary paperwork, and avoidable manual processes will only become more chaotic with a PACS implementation.
The second missing item is a proper deployment of the Integrating the Healthcare Enterprise (IHE) profiles. The IHE defined the scheduled workflow profile more than 10 years ago. It clearly defines how the proper use of Modality Performed Procedure Steps can eliminate many manual workflow actions and deliver better data integrity.
However, when I poll PACS administrators in our training classes, less than 10% have implemented it. The sad part is that it does not seem to be getting better; it appears to be getting worse. As more small institutions and practices implement PACS, the level of IHE awareness seems to be proportional to the size of the institution.
I recommend that anyone administering a PACS familiarize themselves with the profiles on the IHE Web site, https://www.ihe.net/profiles/index.cfm. If you’d like more information about them, please take a look at our free OTPedia resource, https://www.otpedia.com/index.cfm. Last but not least, watch our upcoming interactive Webcast on this topic. I am looking forward to your feedback on why you believe IHE implementation is still lagging.
Healthcare Information Technology Image Management, Part 2 of 5
Part two of a five-part series on healthcare information technology. The information found in this article can be used as part of a preparation program for the American Board of Imaging Informatics (ABII) Imaging Informatics Professional certification program, which awards the Certified Imaging Informatics Professional (CIIP) designation.
The backbone of Healthcare Information Technology (HIT) is comprised of physical and logical networks, servers, databases, and archives. These elements form the engine that drives the HIT enterprise. The design and deployment of these HIT components requires a thorough knowledge of how each interacts with the others, the capabilities of each, the standards affecting their implementation, and a planned upgrade path for when current technology is eclipsed by future innovation.
Network Protocols
The International Organization for Standardization (ISO) created the Open Systems Interconnection (OSI) Model in 1978, and it has defined the framework on which most modern protocols are based.
The OSI Model separates network functionality into seven distinct layers; at each layer, a distinct task or set of tasks is performed. The seven layers, from lowest to highest, are:
- Physical layer
- Data Link layer
- Network layer
- Transport layer
- Session layer
- Presentation layer
- Application layer
For each layer, the model defines:
- Input information received at the specified layer
- Output information that will be delivered to the next layer
Once messages are delivered to other devices, the function performed at a given layer on the sending device is processed by the same layer on the receiving device; the same layers in different devices are considered "peers".
The Transport Layer of the OSI Model ensures that all data arrives at the proper destination across the network intact and error free. It protects end-to-end data integrity, checks for errors, ensures data is in sequence, identifies processes within a host, and segments data for applications (ports).
TCP - Transmission Control Protocol is the most popular transport layer protocol, and provides for error detection and re-transmission, sequencing of data, and other features.
UDP - User Datagram Protocol is another transport layer protocol that does not provide error detection or sequencing, but has less overhead than TCP. Port numbers or sockets are used at this layer to segment data for the applications.
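In code, the choice between the two transport protocols is a single constant. A sketch using Python's standard socket module; the ports mentioned in the comment are the conventional DICOM listener ports, not part of this example's execution.

```python
# SOCK_STREAM selects TCP (connection-oriented, sequenced, retransmits);
# SOCK_DGRAM selects UDP (connectionless, lower overhead, no guarantees).
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Port numbers segment data for applications; a DICOM listener, for
# example, conventionally accepts TCP associations on port 104 or 11112.
print(tcp.type, udp.type)
tcp.close()
udp.close()
```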
Transmission Control Protocol/Internet Protocol (TCP/IP) is the suite of communications protocols used to connect hosts on the Internet. TCP/IP uses several protocols, the two main ones being TCP and IP. TCP/IP is built into the UNIX operating system and is used by the Internet, making it the de facto standard for transmitting data over networks. Even network operating systems that have their own protocols, such as Novell's NetWare, also support TCP/IP.
The TCP/IP protocol suite is actually composed of several protocols, including IP, which handles the movement of data between host computers; TCP, which manages the movement of data between applications; UDP, which also manages the movement of data between applications but is less complex and reliable than TCP; and Internet Control Message Protocol (ICMP), which transmits error messages and network traffic statistics.
The Ethernet protocol is by far the most widely used network transmission protocol. Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection). This is a system where each computer listens to the cable before sending anything through the network. If the network is clear, the computer will transmit. If some other node is already transmitting on the cable, the computer will wait and try again when the line is clear.
Sometimes, two computers attempt to transmit at the same instant. When this happens a collision occurs. Each computer then backs off and waits a random amount of time before attempting to retransmit. With this access method, it is normal to have collisions. However, the delay caused by collisions and retransmitting is very small and does not normally affect the speed of transmission on the network.
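The "wait a random amount of time" step is binary exponential backoff. A toy simulation follows; the slot time shown is the classic 10 Mbps Ethernet value, and the cap of 10 doublings follows the Ethernet standard.

```python
# Toy CSMA/CD backoff: after the n-th collision a station waits a random
# number of slot times in [0, 2**n - 1].
import random

SLOT_TIME_US = 51.2  # 512 bit times at 10 Mbps, in microseconds

def backoff_delay(collision_count):
    k = min(collision_count, 10)  # doubling is capped after 10 collisions
    return random.randint(0, 2**k - 1) * SLOT_TIME_US

for n in range(1, 5):
    print(f"collision {n}: wait {backoff_delay(n):.1f} us")
```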
The Ethernet protocol allows for linear bus, star, or tree topologies. Data can be transmitted over wireless access points, twisted pair, coaxial, or fiber optic cable at speeds from 10 Mbps up to 1000 Mbps.
Fault Tolerance and Load Balancing
Load balancing is dividing the amount of work that a computer has to do between two or more computers so that more work gets done in the same amount of time and, in general, all users get served faster. Load balancing can be implemented with hardware, software, or a combination of both.
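The simplest software implementation is round-robin distribution, sketched here with hypothetical server names:

```python
# Round-robin load balancing in miniature: requests are dealt to a
# pool of servers in turn.
import itertools

pool = itertools.cycle(["app-server-1", "app-server-2", "app-server-3"])
for request_id in range(6):
    print(f"request {request_id} -> {next(pool)}")
```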
Fault tolerance is the capability of a system to cope with internal hardware problems (e.g., a disk drive failure) and still continue to operate with minimal impact, such as by bringing a backup system online. There are many levels of fault tolerance, the lowest being the capability to continue operation in the event of a power failure. Many fault-tolerant computer systems mirror all operations -- that is, every operation is performed on two or more duplicate systems, so if one fails the other can take over. Fault tolerance requires redundant hardware and modifications to the operating system.
In the healthcare IT environment, it is critical that systems are extremely fault tolerant. For example, Microsoft Windows NT Server includes fault tolerance for a failed disk drive by disk mirroring (RAID 1) or disk striping with parity (RAID 5).
In addition, clustering provides fault tolerance for individual computers.
Network Hardware and Components
A network is comprised of a variety of hardware devices.
These include repeaters or hubs that are used to extend the physical length of an Ethernet network. These are seldom used today, and have been mostly replaced by switches in a healthcare IT network.
A switch connects similar networks at layer 2 (the data link layer) of the OSI Model.
A router is a layer 3 (network layer) device in the OSI Model; it makes path decisions and connects different networks based on network addressing.
A gateway is a machine that connects two or more networks using different network protocols and performs the necessary protocol conversions.
Network Configuration
There are three main network configurations, or topologies, that are used:
- Bus Network
- Ring Network
- Star Network
A bus network connects all of its devices along a single linear cable. A ring network is a bus network that has been joined at both ends; data in a ring network travels in only one direction.
To eliminate data collisions, an improvement was made to the ring network by adding a token, which became known as a Token Ring network. A token is an electronic impulse that runs around the ring, and a device can only send a request or receive data when it holds the token. This eliminates collisions, since two requests can no longer be entered at the same time.
A star network is a network where each network device is connected to a central computer, called a server. The server holds all the software, and the other devices, called nodes, request the software from it. It is possible to have a star network spawn another star network. Star networks are the most commonly employed topologies.
Network Metrics
Basic network performance metrics describe the performance of the network as seen by a user and are divided into four broad categories: availability, loss and errors, delay, and bandwidth.
Availability metrics assess how robust a network is; for example, the amount of time it runs without a problem that impacts service availability (also known as “uptime”).
Loss and error metrics indicate network congestion problems, transmission errors, or equipment malfunctioning.
Delay metrics are specific to network congestion problems and can also be used to measure the effect of routing changes.
Bandwidth metrics measure the amount of data that can be transferred across a network in a set amount of time.
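One way to sample a delay metric from a script is to time a TCP connection to a known service; the host and port below are placeholders for a server on your own network.

```python
# Measure one delay metric: the round-trip time of a TCP connect.
import socket
import time

def connect_rtt(host, port, timeout=2.0):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000  # milliseconds

print(f"RTT: {connect_rtt('www.example.com', 80):.1f} ms")  # placeholder host
```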
Saturday, August 1, 2009
The Ultimate Data Protection: The Vendor-Neutral Archive
Many institutions are about to upgrade or replace their PACS, and in many cases this will be done with a product from a different vendor. Many of these institutions are in for a rude awakening--the images and related information in their archives will have to be migrated to meet their new vendor's data formats. This process can be expensive; even in the best-case scenario, it can cost as much as half a million dollars and take months to accomplish.
This sad truth of data "portability" has created a burgeoning data-migration market of companies who are expert in the various image encodings and formats. Some image-related information, such as overlays, measurements, key images, and notes, may even have been stored in a proprietary manner in the image header or database, and will be lost when migrating the data to the new archive. The worst-case scenarios will cost more and take longer.
It is amazing that many vendors still “mess” with the data integrity of the information trusted to their archive. During our training class in Asia, I had one user tell me that his nuclear medicine images could not be processed after they were archived by the PACS and subsequently retrieved back at the modalities. Another user shared their frustration that a vendor’s workstation could not reformat CT images--also after they were archived and retrieved.
These types of problems are forcing users to come up with their own solutions to mitigate the issues caused by proprietary archive formats. Many believe that the best answer is the “vendor-neutral” archive.
The most common implementation is to use an archive solution, often from a different vendor, that interfaces with the PACS front-end using open standards. Initial attempts are somewhat encouraging; however, many have found that there is much more happening between a PACS archive and front-end (database/image manager and workflow manager) than they were aware of. Also, PACS vendors are somewhat hesitant to let go of that part of their system. But many institutions, especially those that want to share an archive between multiple institutions with different PACS products, are pushing for this type of solution.
I believe that the vendor-neutral archive is the right way to go. I expect that there will be some resistance from established PACS vendors to let others enter their turf, and there will be integration issues. However, PACS archives must be able to be easily replaced and be able to be upgraded separately from other PACS components. As such, the clear trend is toward commoditization of this PACS domain. Users must have management and control of their data, and they should not be expected to jump through data and dollar hoops to move this data each time a new PACS vendor is selected.
DICOM Structured Reporting, Part 2 of 2
Part two of a two-part series on DICOM structured reporting. The information found in this article can be used as part of a preparation program for the American Board of Imaging Informatics (ABII) Imaging Informatics Professional Certification Program, which awards the Certified Imaging Informatics Professional (CIIP) designation.
Perhaps the biggest issue for DICOM structured reporting is support—from the creation of the report at the modality to its display on the PACS workstation. I've gotten quite a few calls from users with concerns that their DICOM structured reports cannot be communicated across their PACS.
The first step in addressing this issue is to verify that the PACS can accept these different DICOM objects. A DICOM object, as you might know, is identified by its SOP class, so what we need to look for is support for the relevant SOP classes in the PACS archive.
The next issue is support at the diagnostic workstations. When a workstation retrieves a structured report, say from an ultrasound modality, it must support the report's SOP class so that the report can be properly displayed.
Last, but not least, the reporting software must also support DICOM structured reports. A voice recognition system has to be able to support structured report information so it can automatically populate the appropriate sections of the diagnostic report.
The DICOM conformance statement is what the system administrator will use to ascertain whether the PACS vendor provides DICOM structured report support. The conformance statement typically has a table on the first or second page listing the SOP classes supported by the PACS. Once you've determined whether the PACS supports DICOM structured reports, you will need to check the DICOM conformance statements for the diagnostic workstation and your reporting application as well.
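Beyond reading the conformance statement, you can ask the archive directly which presentation contexts it will accept. The sketch below, assuming the pynetdicom library and placeholder host, port, and AE titles, proposes the three structured report storage SOP classes (described further below) and prints the ones the PACS accepts.

```python
# Minimal sketch: ask a PACS which structured report SOP classes it
# accepts during association negotiation. Host, port, and AE titles
# are placeholders; requires pynetdicom.
from pynetdicom import AE
from pynetdicom.sop_class import (
    BasicTextSRStorage,
    EnhancedSRStorage,
    ComprehensiveSRStorage,
)

ae = AE(ae_title="TEST_SCU")
for sop_class in (BasicTextSRStorage, EnhancedSRStorage, ComprehensiveSRStorage):
    ae.add_requested_context(sop_class)

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS_AE")
if assoc.is_established:
    for cx in assoc.accepted_contexts:
        print("accepted:", cx.abstract_syntax)
    assoc.release()
else:
    print("association rejected; check AE configuration")
```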
The next component to look for is the content of the structured report. Some institutions have specific requirements for what needs to be in the structured report, and depending on those requirements, certain information must be present in the image header.
So what do we need to look for? We need to look at what is in the header and what information the structured report contains. First, check the SOP class of the report. Second, check the contents of the templates used in the structured report.
There are three SOP classes for text-oriented structured reports: Basic Text SR, Enhanced SR, and Comprehensive SR. Most modalities support the Comprehensive SR SOP class because it places the fewest restrictions on the information a report can contain. With regard to SOP classes, you need to make sure that every device supports exactly the same structured report type.
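If you are unsure which SR SOP class a given device produces, you can read it straight out of an object's header. The following sketch, assuming pydicom and a placeholder file name, maps the SOP Class UID of a report to its name.

```python
# Minimal sketch: identify which structured report SOP class a file
# uses by its SOP Class UID. Requires pydicom; file path is a placeholder.
import pydicom

SR_SOP_CLASSES = {
    "1.2.840.10008.5.1.4.1.1.88.11": "Basic Text SR",
    "1.2.840.10008.5.1.4.1.1.88.22": "Enhanced SR",
    "1.2.840.10008.5.1.4.1.1.88.33": "Comprehensive SR",
}

ds = pydicom.dcmread("report.dcm")
kind = SR_SOP_CLASSES.get(str(ds.SOPClassUID), "not a text SR object")
print(f"{ds.SOPClassUID}: {kind}")
```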
When you look at the conformance statement for a DICOM structured report, after you make sure that the SOP classes match, the next thing to look for is the template identification. Templates are very important because they can be modified. A template defines the structure and contents of the report, much as a DICOM header defines the attributes of an image.
You need to check these templates because some of the information you require in your practice might not be there. The DICOM conformance statement will tell you exactly which fields are filled in.
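One practical way to see exactly what a device fills in is to dump the report's content tree. The sketch below, assuming pydicom and a placeholder file, walks the nested content items and prints each item's type, concept name, and any text value (other value types are omitted for brevity).

```python
# Minimal sketch: dump the content tree of a structured report to see
# which items a device actually populates. Requires pydicom.
import pydicom

def walk(items, depth=0):
    for item in items:
        name = (item.ConceptNameCodeSequence[0].CodeMeaning
                if "ConceptNameCodeSequence" in item else "(unnamed)")
        text = getattr(item, "TextValue", "")  # only TEXT items carry this
        print("  " * depth + f"{item.ValueType}: {name} {text}")
        if "ContentSequence" in item:  # recurse into nested containers
            walk(item.ContentSequence, depth + 1)

ds = pydicom.dcmread("report.dcm")
walk(ds.ContentSequence)
```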
DICOM structured reports are not rocket science. They are just like any other DICOM object and can be handled much like DICOM images. Although they are currently used mostly by ultrasound systems, computer-aided detection (CAD) applications for digital mammography (and other modalities, such as CT) are using them, too.
The biggest challenge to more widespread adoption of DICOM structured reporting is the support of PACS products. Unfortunately, PACS archives and workstations are lagging in this regard, as are voice recognition systems.
In summary, you need to do two things: look at the DICOM conformance statements for support of these SOP classes, and make sure that the template information in the structured reports is appropriate and meets the requirements of your institution. In many cases the templates are configurable, so you can go back to your vendor and have them configured accordingly.
Structured reports represent a major improvement in capturing data from certain modalities; they can help radiologists be more efficient and effective, reduce errors, and improve the quality of patient care. You just need to make sure that you have properly prepared your imaging information infrastructure.