Tuesday, June 5, 2018

SIIM18 meeting: Is AI a reality yet?


The Artificial Intelligence (AI) hype from the RSNA meeting in Chicago definitely spilled over to the SIIM meeting held at National Harbor, just outside Washington, DC, May 31-June 2, 2018. Several startup companies were showing new algorithms applied to medical images, and there were quite a few presentations on the topic.
Here are my top observations on this subject:

·        AI is nothing new - As Dr. Eliot Siegel from the VA in Baltimore said at one of the sessions: “I use AI all day, when I use my worklist, when I do image processing, or when I apply certain calculations; I have been doing that for several years before the term AI was coined.”

·        The scope of AI is continuously changing - As the anonymous Wikipedia contributors point out in their definition of AI, what was considered AI technology several years ago, e.g. optical character recognition, is now considered routine; in other words, “AI is anything that hasn’t been done yet.”

·        Even the FDA has realized that CAD (a form of AI) is becoming a mainstream, mature technology. The FDA has proposed reclassifying what it calls radiological medical image analyzers from class III to class II devices. The list includes CAD products for mammography for breast cancer, ultrasound for breast lesions, radiographic imaging for lung nodules, and dental caries detection on radiographs.

·        AI can determine which studies are critical - With a certain level of confidence, AI algorithms can distinguish between studies that very likely have no finding and those that require immediate attention, and sort them accordingly (a small sketch of such a triage sort follows this list). Note that this requires the AI software to be tightly integrated with the worklist that drives which studies are presented to the radiologist for review, which could be challenging.

The "AI" domain name has become
popular among these early
implementers
·        There are many different AI algorithms, and none of them are all-inclusive (yet) - If you were to take all of the different AI implementations, you might end up with ten or more different software plug-ins for your PACS, each one looking for a different type of image and disease. Even for a single body part, an AI application does not cover every finding; for example, one vendor’s chest analysis listed the 7 most common findings, but it did not include the detection of bone fractures.

·        What about incidental findings? - The keynote speech at SIIM was given by e-Patient Dave, who made a very compelling case for participatory medicine, i.e. partnering with your physician, made possible by sharing information and using web resources. His story started with an incidental finding of a tumor in his lung, which happened to show up on a shoulder X-ray. If that image had been interpreted by an AI that was only looking for fractures, his cancer would have been missed, and he would not have been here today.

·        There is no CPT code for AI - This raises the question of how to pay for AI, especially for in-patients, for whom additional procedures such as processing by an AI algorithm are an extra cost. Any additional investment and/or work needs to have a positive return on investment; this would of course be different if AI can improve efficiency, accuracy, or any other measurable component of the ROI.
[Image: Example of a Presentation State displayed on an image]

·        Consistent presentation of AI results is a challenge - AI results are typically presented as an overlay on the image and/or as key images indicating in which slices of a CT, MR or ultrasound study certain findings are located. These overlays are created either in the form of a DICOM Presentation State (preferably color) or, if there is no support for that, as additional screen saves with the annotations “burned” into the pixel data; both appear as separate series in the study and are stored on the PACS. A couple of AI vendors noted poor support for color presentation states among PACS vendors, as several viewers apparently changed the display color when rendering them on the PACS workstation (a small sketch of reading such an object follows this list).

·        Few vendors display the accuracy - It is critical for a physician to see the accuracy or confidence level of an AI finding. However, as noted in one of the user groups, accuracy is more than just sensitivity and specificity, and there is no standard for reporting it, i.e. how would one compare a given number between two different vendors? (A small example of what goes into these numbers follows this list.)
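
To make the triage observation concrete, here is a minimal Python sketch of sorting a reading worklist by an AI-assigned likelihood of a finding. The study fields, threshold and scores are made up for illustration; no particular PACS or AI vendor interface is implied.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WorklistItem:
    accession: str              # hypothetical identifier, for illustration only
    received: datetime
    finding_probability: float  # AI output: 0.0 = very likely normal, 1.0 = likely positive

def prioritize(worklist, flag_threshold=0.9):
    """Sort studies so AI-flagged exams are read first, oldest first within each group."""
    return sorted(
        worklist,
        key=lambda item: (item.finding_probability < flag_threshold,  # flagged studies first
                          item.received),                             # then oldest first
    )

# A flagged chest exam jumps ahead of earlier, likely-normal studies.
items = [
    WorklistItem("A001", datetime(2018, 6, 1, 8, 0), 0.02),
    WorklistItem("A002", datetime(2018, 6, 1, 8, 5), 0.97),
    WorklistItem("A003", datetime(2018, 6, 1, 8, 10), 0.10),
]
for item in prioritize(items):
    print(item.accession, item.finding_probability)
```

The sorting itself is trivial; as noted above, the hard part is the tight integration with the worklist that actually drives the radiologist's reading queue.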
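For the presentation-state issue, the pydicom sketch below shows where the AI annotations live inside a (Grayscale or Color) Softcopy Presentation State object; this is the content that gets lost or recolored when a PACS viewer does not fully honor these objects. The file name is hypothetical.

```python
import pydicom

# Hypothetical file name; any Grayscale/Color Softcopy Presentation State object works.
pr = pydicom.dcmread("ai_result_pr.dcm")

print("SOP Class:", pr.SOPClassUID)  # identifies the type of presentation state
for annotation in pr.get("GraphicAnnotationSequence", []):
    layer = annotation.get("GraphicLayer", "")
    for graphic in annotation.get("GraphicObjectSequence", []):
        # Each graphic object carries the outline the AI drew (polyline, circle, ...)
        print(layer, graphic.GraphicType, graphic.GraphicData)
    for text in annotation.get("TextObjectSequence", []):
        print(layer, "TEXT:", text.UnformattedTextValue)
```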
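And as a reminder of why a single headline number is hard to compare between vendors, the short sketch below derives accuracy, sensitivity, specificity and positive predictive value from the same (made-up) confusion-matrix counts; in a low-prevalence screening population a product can look excellent on accuracy while most of its flagged studies are false alarms.

```python
def summarize(tp, fp, tn, fn):
    """Derive common performance measures from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # how many true findings are caught
        "specificity": tn / (tn + fp),  # how many normals are correctly cleared
        "ppv":         tp / (tp + fp),  # chance that a flagged study is a true finding
    }

# Made-up numbers for a low-prevalence screening population (10 true findings in 1000 exams).
print(summarize(tp=9, fp=50, tn=940, fn=1))
```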

The definition of AI is still being debated; some prefer to call it Augmented or Assisted Intelligence. Others argue that it is nothing new, and indeed, in practice the definition seems to be shifting towards “anything new.” Implementations are still piecemeal, covering relatively small niche applications.

As with self-driving cars, or even autopilots in planes, we are far from relying on machines to perform diagnoses with a measurable and reliable accuracy. In the meantime, AI could provide some (limited) support for routine tasks. An example is TB or mammography screening, where an AI algorithm could determine with 99.999% accuracy that there is no finding. The question is what to do with the remaining 0.001%, and with incidental findings, which could become more of an ethical than a technical issue.