Koen Eppenhof. Photo | Bart van Overbeeke

Home Stretch | Artificial intelligence brings greater precision to operations

Operations based on an MRI or CT scan are made trickier by the fact that people can never lie completely still. Doctoral candidate Koen Eppenhof has shown that an algorithm based on deep learning can be used to correct for the inevitable movements.

To administer radiation or to operate as accurately as possible, the doctor first draws the area to be treated onto a scan (MRI or CT). This area - the site of a tumor, for example - is then localized on the operating table using a new scan. That is no simple matter: the patient's position is never exactly the same in the two scans, and on top of that comes the inevitable movement and deformation of the organs due to breathing. An entire specialism, medical image registration, has arisen to deal with these difficulties, and it forms one aspect of the work of the Medical Image Analysis group at the Department of Biomedical Engineering.
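At its core, registration has to produce a displacement for every pixel: where did this bit of anatomy end up in the other scan? Below is a minimal two-dimensional sketch of what is done with such a result, assuming NumPy and SciPy; real scans are three-dimensional volumes, and this `warp` helper is illustrative rather than the group's actual code. Resampling the new scan through a displacement field brings it into line with the old one.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, displacement):
    """Resample `moving` onto the fixed image's pixel grid.

    moving:       2-D array, the newly acquired scan.
    displacement: (2, H, W) array; displacement[d, y, x] tells pixel (y, x)
                  of the fixed image where to look in `moving`, along axis d.
    """
    h, w = moving.shape
    grid = np.mgrid[0:h, 0:w].astype(float)  # identity coordinates
    coords = grid + displacement             # per-pixel sampling positions
    return map_coordinates(moving, coords, order=1, mode="nearest")
```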

According to PhD candidate Koen Eppenhof at Medical Image Analysis, doctors already have smart software that enables them to match the person in the scanner with the image made and carefully analyzed at an earlier date. “However, it takes a computer several minutes to run the calculation, whereas ideally you'd like to be able to match the two scans in real time.”

When Eppenhof started his doctoral study a little under five years ago, deep learning - a form of artificial intelligence capable of completing this task far more quickly - was just rearing its head. According to the doctoral candidate, the technology seems to have fulfilled its promise. “Initially, at conferences I was one of the few people working with deep learning, whereas now almost everyone in medical image analysis is using it.”

Gaming computer

The challenge lies in coupling every pixel in the original image with the corresponding pixel in the new scan, Eppenhof explains. To do this, he ‘trained’ what is called a deep neural network, which runs on Graphics Processing Units (GPUs) - comparable to the processors in gaming computers. “Our group keeps a cluster of these GPUs in a cooled room on the High Tech Campus, and we can log into them.”
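In outline, such a network takes the two scans and returns a displacement vector per pixel. The toy PyTorch model below shows only that input-to-output mapping; the article does not describe Eppenhof's actual architecture, which works on far larger, three-dimensional volumes, so every name and layer choice here is a hypothetical stand-in.

```python
import torch
import torch.nn as nn

class RegistrationNet(nn.Module):
    """Toy CNN: two stacked grayscale images in, a (dy, dx) field out."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=3, padding=1),  # 2 channels: dy, dx
        )

    def forward(self, fixed, moving):
        # fixed, moving: (batch, 1, H, W) scans; output: (batch, 2, H, W)
        return self.net(torch.cat([fixed, moving], dim=1))

model = RegistrationNet()
field = model(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))
print(field.shape)  # torch.Size([1, 2, 128, 128]): one displacement per pixel
```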

Running on those GPUs, this kind of neural network teaches itself, as it were, how to perform its task by referring to thousands of examples. But there is a shortage of training material. Take lung scans: there are simply too few sets of ‘registered’ images of lungs in various stages of breathing in and out. So Eppenhof decided to manipulate an existing image in countless different ways and to feed the results to the neural network. “Next, I set the trained network loose on a set of a couple of dozen real CT scans, registered by multiple experts based on hundreds of recognized anatomical landmarks, such as the sites where blood vessels split or cross.”
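A sketch of that simulation trick, under the same NumPy/SciPy assumptions as the earlier snippet: smooth random noise into a plausible deformation field, warp a real image with it, and the result is a training pair whose correct answer is known exactly. The parameter values are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_training_pair(image, sigma=8.0, amplitude=4.0, seed=None):
    """Deform `image` with a random smooth field; return (fixed, moving, truth)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Gaussian-smoothed noise gives gentle, organ-like deformations.
    field = np.stack(
        [gaussian_filter(rng.standard_normal((h, w)), sigma) for _ in range(2)]
    )
    field *= amplitude / (np.abs(field).max() + 1e-8)  # cap displacement (pixels)
    coords = np.mgrid[0:h, 0:w].astype(float) + field
    warped = map_coordinates(image, coords, order=1, mode="nearest")
    return image, warped, field  # `field` is exactly what the network must predict
```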

Prostate cancer

It turned out that Eppenhof's trained network performed almost as well as the individual experts. “So this shows that you can train deep neural networks using simulated data rather than real medical images. It actually works amazingly well, and I think that is the most important result of my research.” His neural network also proved itself capable of analyzing the images in less than a second - no mean improvement on the minutes currently needed by the calculation methods used in hospitals.
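Comparing a network against experts on such landmarks boils down to what the field calls the target registration error: carry the annotated points through the predicted displacement field and measure how far they land from the expert positions. A hypothetical two-dimensional helper, assuming NumPy:

```python
import numpy as np

def mean_registration_error(landmarks_fixed, landmarks_moving, field, spacing_mm):
    """landmarks_*: (N, 2) pixel coords; field: (2, H, W); spacing_mm: (2,)."""
    y = landmarks_fixed[:, 0].astype(int)
    x = landmarks_fixed[:, 1].astype(int)
    predicted = landmarks_fixed + field[:, y, x].T        # network's guess
    diff_mm = (predicted - landmarks_moving) * np.asarray(spacing_mm)
    return np.linalg.norm(diff_mm, axis=1).mean()         # mean distance in mm
```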

That speed makes his work of interest to UMC Utrecht, where prostate cancer patients currently receive radiation in an MRI scanner. The scanner helps doctors to establish the exact location of the prostate immediately before treatment is given. “In fact, the prostate also moves slowly during radiation; it is pushed aside as the bladder fills with urine. In principle, my method is fast enough to track this movement.”

Whether his version of deep learning will find its way into hospitals any time soon is debatable. It is still unclear how exactly this neural network works - a problem many AI applications struggle with. It is a black box, and this hampers its assessment by the authorities responsible for safety, Eppenhof explains. “In any event, techniques of this kind will never be allowed to operate fully automatically. There must always be a person watching to make sure the computer doesn't botch the whole thing.”
