Tuesday, September 1, 2009

Is Your PACS Running on Empty?

Based on my discussions with other PACS professionals, as well as conversations I've had with PACS administrators in our training classes, I’ve come to the conclusion that most PACS run at about 20%-30% of their optimum efficiency, speed, and workflow effectiveness. 

Many PACS users are looking to replace their systems, but they might end up with the same inefficiencies and issues as with the PACS they replaced if they don't make the effort to utilize the new system to its fullest. 

What is missing? 

There are two major items that many users overlook. First is the absence of a re-engineered workflow. A department with needless duplication, unnecessary paperwork, and avoidable manual processes will only become more chaotic with a PACS implementation. 

The second missing item is a proper deployment of the Integrating the Healthcare Enterprise (IHE) profiles. IHE defined the Scheduled Workflow profile more than 10 years ago. It clearly defines how the proper use of Modality Performed Procedure Step (MPPS) messages can eliminate many manual workflow actions and deliver better data integrity. 

However, when I poll PACS administrators in our training classes, fewer than 10% have implemented it. The sad part is that the situation appears to be getting worse, not better: as more small institutions and practices implement PACS, it is becoming clear that IHE awareness tends to be proportional to the size of the institution. 

I recommend that anyone administering a PACS familiarize themselves with the profiles on the IHE Web site, https://www.ihe.net/profiles/index.cfm. If you'd like more information about them, please take a look at our free OTPedia resource, https://www.otpedia.com/index.cfm. Last but not least, watch our upcoming interactive Webcast on this topic. I am looking forward to your feedback on why you believe IHE implementation is still lagging. 

Healthcare Information Technology Image Management, Part 2 of 5

Part two of a five-part series on healthcare information technology. The information found in this article can be used as part of a preparation program for the American Board of Imaging Informatics (ABII) Imaging Informatics Professional certification program, which awards the Certified Imaging Informatics Professional (CIIP) designation.
The backbone of Healthcare Information Technology (HIT) is composed of physical and logical networks, servers, databases, and archives. These elements comprise the engine that drives the HIT enterprise. The design and deployment of these HIT components require a thorough knowledge of the interaction of one with the other, the capabilities of each, the standards affecting their implementation, and a planned upgrade path for when current technology is eclipsed by future innovation. 

Network Protocols
The International Organization for Standardization (ISO) began developing the Open Systems Interconnection (OSI) Model in the late 1970s, and it has since defined the framework on which most modern protocols are based. 

The OSI Model separates network functionality into seven distinct layers; at each layer, a distinct task or set of tasks is performed. The layers, from lowest to highest, are:
  1. Physical layer
  2. Data Link layer
  3. Network layer
  4. Transport layer
  5. Session layer
  6. Presentation layer
  7. Application layer
Each layer uses the layer immediately below it and provides a service to the layer above. In some implementations a layer may itself be composed of sub-layers. Although each vendor is free to design its implementation any way it chooses, all must agree on the following:
  • Input information received at the specified layer
  • Output information that will be delivered to the next layer
  • The function performed at a given layer on the sending device is processed by the same layer on the receiving device once messages are delivered; the same layer in different devices is considered a "peer"

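The peer-layer idea above can be sketched as a simple encapsulation round trip: each layer on the sender wraps the payload with its own header, and the peer layer on the receiver strips that same header in reverse order. This is a toy illustration; the layer names are the OSI layers, but the "headers" are just labels.

```python
# Hypothetical sketch of OSI-style encapsulation. Each sending layer
# wraps the payload with its own header; the peer layer on the
# receiver strips that same header, in reverse order.

def encapsulate(payload, layers):
    """Wrap payload with one header per layer, Application innermost."""
    for layer in layers:                      # application -> physical
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame, layers):
    """Strip headers lowest layer first: each layer consumes its peer's header."""
    for layer in reversed(layers):            # physical first off the wire
        header = f"[{layer}]"
        assert frame.startswith(header), f"{layer} expected its peer's header"
        frame = frame[len(header):]
    return frame

layers = ["Application", "Presentation", "Session",
          "Transport", "Network", "DataLink", "Physical"]

frame = encapsulate("image data", layers)
print(frame)                          # Physical header is outermost
print(decapsulate(frame, layers))     # original payload recovered
```

The round trip only works when both sides agree on the same layer stack, which is exactly the point of the peer-layer agreement described above.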
Data Transmission Protocols
The Transport Layer of the OSI Model ensures that all data arrives at the proper destination across the network intact and error free. It protects end-to-end data integrity, checks for errors, ensures data is in sequence, identifies processes within a host, and segments data for applications (ports). 

TCP - Transmission Control Protocol is the most popular transport layer protocol, and provides for error detection and re-transmission, sequencing of data, and other features. 

UDP - User Datagram Protocol is another transport layer protocol that does not provide error detection or sequencing, but has less overhead than TCP. Port numbers or sockets are used at this layer to segment data for the applications. 
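The contrast between the two transport protocols can be seen in a minimal loopback sketch (ports are chosen by the OS and the payloads are arbitrary example strings): TCP requires a connection and delivers a reliable byte stream, while UDP just fires off datagrams.

```python
import socket
import threading

# Minimal loopback demo of the two transport-layer protocols described
# above. TCP (SOCK_STREAM) is connection-oriented, sequenced, and
# error-checked; UDP (SOCK_DGRAM) is connectionless with less overhead.

def tcp_echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))        # echo the bytes back

# --- TCP ---
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))               # port 0 = let the OS pick
tcp_srv.listen(1)
threading.Thread(target=tcp_echo_once, args=(tcp_srv,)).start()

tcp_cli = socket.create_connection(tcp_srv.getsockname())
tcp_cli.sendall(b"MPPS complete")
tcp_reply = tcp_cli.recv(1024)
tcp_cli.close()
tcp_srv.close()

# --- UDP ---
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))

udp_cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_cli.sendto(b"status ping", udp_srv.getsockname())
udp_reply, _ = udp_srv.recvfrom(1024)
udp_cli.close()
udp_srv.close()

print(tcp_reply, udp_reply)   # b'MPPS complete' b'status ping'
```

On a loopback interface both arrive, but only the TCP path would detect and retransmit lost data on a real network; the UDP datagram would simply be gone.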

Transmission Control Protocol/Internet Protocol (TCP/IP) is the suite of communications protocols used to connect hosts on the Internet. TCP/IP uses several protocols, the two main ones being TCP and IP. TCP/IP is built into the UNIX operating system and is used by the Internet, making it the de facto standard for transmitting data over networks. Even network operating systems that have their own protocols, such as Novell's NetWare, also support TCP/IP. 

The TCP/IP protocol suite is actually composed of several protocols, including:
  • IP, which handles the movement of data between host computers
  • TCP, which manages the movement of data between applications
  • UDP, which also manages the movement of data between applications, but is less complex and reliable than TCP
  • Internet Control Message Protocol (ICMP), which transmits error messages and network traffic statistics

The Ethernet protocol is by far the most widely used network transmission protocol. Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection). This is a system where each computer listens to the cable before sending anything through the network. If the network is clear, the computer will transmit. If some other node is already transmitting on the cable, the computer will wait and try again when the line is clear. 

Sometimes, two computers attempt to transmit at the same instant. When this happens a collision occurs. Each computer then backs off and waits a random amount of time before attempting to retransmit. With this access method, it is normal to have collisions. However, the delay caused by collisions and retransmitting is very small and does not normally affect the speed of transmission on the network. 
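The "back off and wait a random amount of time" behavior is binary exponential backoff: after the n-th consecutive collision a station waits a random number of slot times between 0 and 2**n - 1. The toy simulation below (a deliberately simplified two-station slot model, not real Ethernet timing) shows why the average delay stays small.

```python
import random

# Toy simulation of CSMA/CD binary exponential backoff. Two stations
# collide whenever they pick the same backoff slot; the slot range
# doubles after every collision, so repeated collisions become rare.

def backoff_slots(attempt, rng):
    """Random backoff after `attempt` consecutive collisions (range capped at 2**10)."""
    return rng.randrange(2 ** min(attempt, 10))

def transmit(rng, max_attempts=16):
    """Return the attempt on which a frame gets through against one rival station."""
    for attempt in range(1, max_attempts + 1):
        mine = backoff_slots(attempt, rng)
        rival = backoff_slots(attempt, rng)
        if mine != rival:                 # different slots: no collision
            return attempt
    raise RuntimeError("excessive collisions, frame dropped")

rng = random.Random(42)                   # fixed seed for repeatability
attempts = [transmit(rng) for _ in range(1000)]
print(sum(attempts) / len(attempts))      # average attempts stays small
```

Even though the first retry has a 50% chance of colliding again, the doubling slot range means most frames get through within two or three attempts, which is why collisions "do not normally affect the speed of transmission."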

The Ethernet protocol allows for linear bus, star, or tree topologies. Data can be transmitted over wireless access points, twisted pair, coaxial, or fiber optic cable at speeds from 10 Mbps up to 1000 Mbps. 

Fault Tolerance and Load Balancing 
Load balancing is dividing the amount of work that a computer has to do between two or more computers so that more work gets done in the same amount of time and, in general, all users get served faster. Load balancing can be implemented with hardware, software, or a combination of both. 
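A software implementation can be as simple as rotating requests across a pool of servers. The sketch below shows plain round-robin balancing; the server names are hypothetical, and real balancers add health checks and weighting on top of this idea.

```python
import itertools

# Minimal round-robin load balancer sketch: requests are spread evenly
# across a pool of servers, so total work is divided among them.

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        """Pick the next server in rotation (the request payload itself
        is not inspected in plain round-robin)."""
        return next(self._cycle)

lb = RoundRobinBalancer(["pacs-app-1", "pacs-app-2", "pacs-app-3"])
assignments = [lb.route(f"study-{i}") for i in range(6)]
print(assignments)    # each server receives two of the six requests
```

Round-robin is the simplest policy; least-connections or response-time-weighted policies serve users faster when requests vary widely in cost.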

Fault tolerance is the capability of a system to cope with internal hardware problems (e.g., a disk drive failure) and still continue to operate with minimal impact, such as by bringing a backup system online. There are many levels of fault tolerance, the lowest being the capability to continue operation in the event of a power failure. Many fault-tolerant computer systems mirror all operations; that is, every operation is performed on two or more duplicate systems, so if one fails the other can take over. Fault tolerance requires redundant hardware and modifications to the operating system. 

In the healthcare IT environment, it is critical that systems are extremely fault tolerant. For example, Microsoft Windows NT Server includes fault tolerance for a failed disk drive by disk mirroring (RAID 1) or disk striping with parity (RAID 5). 
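The parity mechanism behind RAID 5 is worth seeing concretely: because XOR is its own inverse, a parity block computed across the data blocks lets any single lost block be rebuilt from the survivors. The block contents below are made-up sample bytes.

```python
from functools import reduce

# RAID 5 parity sketch: the parity block is the byte-wise XOR of the
# data blocks. If any one block is lost, XOR-ing the surviving blocks
# with the parity reproduces it exactly.

def parity(blocks):
    """Byte-wise XOR across equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"disk0 ..", b"disk1 ..", b"disk2 .."]   # equal-sized stripes
p = parity(data)

# Simulate losing disk 1, then rebuild it from the survivors + parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])   # True
```

This is why RAID 5 tolerates exactly one failed drive per stripe: a second simultaneous failure leaves the XOR equation with two unknowns.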

In addition, clustering provides fault tolerance for individual computers. 

Network Hardware and Components 
A network comprises a variety of hardware devices. 

These include repeaters and hubs, which are used to extend the physical length of an Ethernet network. They are seldom used today, having been largely replaced by switches in healthcare IT networks. 

A switch connects similar networks at layer 2 (the data link layer) of the OSI Model. 

A router is a layer 3 device (the network layer of the OSI Model); it makes path decisions and connects different networks based on network addressing. 

A gateway is a machine that connects two or more networks using different network protocols and performs the necessary protocol conversions. 

Network Configuration 
There are three main network configurations, or topologies, that are used:
  1. Bus Network
  2. Ring Network
  3. Star Network
A bus network is a line of computers connected together by a cable. The cable is called the bus. The bus must be terminated at both ends. 

A ring network is a bus network whose two ends have been joined. Data in a ring network travels in only one direction. 

To eliminate data collisions, an improvement was made to the ring network by adding a token, which became known as a Token Ring network. A token is an electronic impulse that circulates around the ring. A device can send a request or receive data only when it holds the token. This eliminates collisions, since two devices can no longer transmit at the same time. 
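The token-passing rule can be captured in a toy simulation (the station names and frames below are invented for illustration): only the current token holder may transmit, so no two stations ever send at once.

```python
from collections import deque

# Toy token-ring sketch: a single token circulates; only the station
# holding it may transmit one queued frame, then passes the token on.

def run_ring(stations, frames_to_send, rounds):
    """Pass the token around the ring; record who transmits what."""
    sent = []
    queues = {s: deque(frames_to_send.get(s, [])) for s in stations}
    token = 0                                   # index of token holder
    for _ in range(rounds):
        holder = stations[token]
        if queues[holder]:                      # only the holder transmits
            sent.append((holder, queues[holder].popleft()))
        token = (token + 1) % len(stations)     # pass the token on
    return sent

log = run_ring(["A", "B", "C"],
               {"A": ["frame1"], "C": ["frame2", "frame3"]},
               rounds=6)
print(log)   # [('A', 'frame1'), ('C', 'frame2'), ('C', 'frame3')]
```

Note the trade-off: collisions are impossible, but a station with data must wait for the token to come around, even when the ring is otherwise idle.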

A star network is a network in which each device is connected to a central computer, called a server. The server holds all the software, and the other devices, called nodes, request the software from it. It is possible for a star network to spawn another star network. Star networks are the most commonly employed topology. 

Network Metrics
Basic network performance metrics describe the performance of the network as seen by a user and are divided into four broad categories: availability, loss and errors, delay, and bandwidth. 

Availability metrics assess how robust a network is; for example, the amount of time it runs without a problem that impacts service availability (also known as “uptime”). 

Loss and error metrics indicate network congestion problems, transmission errors, or equipment malfunctioning. 

Delay metrics are specific to network congestion problems and can also be used to measure the effect of routing changes. 

Bandwidth metrics measure the amount of data that can be transferred across a network in a set amount of time.
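The four metric categories above each reduce to a simple calculation. The sketch below runs through one example of each using made-up sample numbers (one year of uptime, a packet counter, a handful of delay samples, and a file transfer).

```python
# One worked example per metric category; all input numbers are
# illustrative, not measurements from a real network.

# Availability: uptime as a percentage of the measurement period.
uptime_hours, period_hours = 8754.0, 8760.0        # one year
availability = 100 * uptime_hours / period_hours

# Loss: lost packets as a percentage of packets sent.
packets_sent, packets_lost = 100_000, 120
loss_rate = 100 * packets_lost / packets_sent

# Delay: average of round-trip delay samples, in milliseconds.
delays_ms = [2.1, 2.4, 1.9, 3.0]
avg_delay = sum(delays_ms) / len(delays_ms)

# Bandwidth: data moved per unit time, converted to megabits/second.
bytes_moved, seconds = 500_000_000, 40.0
bandwidth_mbps = bytes_moved * 8 / seconds / 1e6

print(f"availability {availability:.2f}%  loss {loss_rate:.2f}%  "
      f"delay {avg_delay:.2f} ms  bandwidth {bandwidth_mbps:.0f} Mbps")
```

Note the unit conversions: availability and loss are ratios expressed as percentages, while bandwidth converts bytes to bits (×8) before scaling to megabits.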