Multi-modal image registration is highly challenging when ultrasound is involved:
Typically, ultrasound images are 2D and arbitrarily oriented in space. There is no standard voxel grid as in CT or MRI; an external optical, electromagnetic, or robotic tracking system is needed to accurately measure the ultrasound transducer’s position in space at any time; and a complex ultrasound reconstruction step (“compounding”) is required before 3D ultrasound can be compared against anything else at all. Often, there are large regions between US frames where no information is available.
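To make the compounding step concrete, here is a minimal sketch of the simplest variant: tracked 2D frames are scattered into a 3D voxel grid via their pose matrices, and overlapping contributions are averaged. All names and parameters are illustrative; a production pipeline would add interpolation, hole filling, and deformation handling.

```python
import numpy as np

def compound_frames(frames, poses, spacing, volume_shape):
    """Nearest-neighbor compounding of tracked 2D frames into a 3D volume.

    frames:       list of (H, W) intensity arrays
    poses:        list of 4x4 frame-to-world matrices (from the tracker)
    spacing:      voxel size in world units (isotropic here, for simplicity)
    volume_shape: (Z, Y, X) shape of the output grid
    """
    vol = np.zeros(volume_shape, dtype=np.float32)
    counts = np.zeros(volume_shape, dtype=np.float32)
    for img, T in zip(frames, poses):
        h, w = img.shape
        # Pixel centers in the frame's local plane (z = 0), homogeneous coords.
        ys, xs = np.mgrid[0:h, 0:w]
        pts = np.stack([xs.ravel(), ys.ravel(),
                        np.zeros(h * w), np.ones(h * w)])
        world = T @ pts                          # transform into world space
        idx = np.round(world[:3] / spacing).astype(int)
        bounds = np.array(volume_shape)[[2, 1, 0], None]  # (X, Y, Z) limits
        ok = np.all((idx >= 0) & (idx < bounds), axis=0)
        zi, yi, xi = idx[2, ok], idx[1, ok], idx[0, ok]
        np.add.at(vol, (zi, yi, xi), img.ravel()[ok])
        np.add.at(counts, (zi, yi, xi), 1.0)
    filled = counts > 0                          # voxels never hit stay empty
    vol[filled] /= counts[filled]
    return vol, filled
```

The `filled` mask makes the "no information between frames" problem explicit: any voxel not intersected by a frame stays empty and must be treated as unknown, not as zero intensity.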
The point-spread function of ultrasound transducers is highly anisotropic. In-plane, especially around the focal spot, a good effective resolution can be achieved, but out of plane, objects several millimeters away from the imaging plane can still generate echoes.
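This anisotropy can be illustrated with a simple Gaussian PSF model where the elevational width far exceeds the in-plane widths. The specific sigma values below are illustrative round numbers, not measured transducer characteristics.

```python
import numpy as np

def anisotropic_psf(sigma_lateral=0.3, sigma_axial=0.2, sigma_elev=2.0,
                    extent=5.0, step=0.1):
    """Sample a Gaussian approximation of an ultrasound point-spread function.

    In-plane resolution (axial/lateral, sub-millimeter near the focus) is far
    better than elevational resolution: a sigma_elev of a few millimeters
    means reflectors well outside the nominal imaging plane still contribute
    echo. All sigmas and the sampling extent are in millimeters.
    """
    n = int(round(2 * extent / step)) + 1
    ax = np.linspace(-extent, extent, n)
    # Axis order: lateral, axial, elevational (out of plane).
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    psf = np.exp(-0.5 * ((x / sigma_lateral) ** 2
                         + (y / sigma_axial) ** 2
                         + (z / sigma_elev) ** 2))
    return ax, psf
```

With these values, a reflector 3 mm out of plane still responds at roughly a third of the peak, while a reflector 1 mm off-center in-plane is already negligible.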
Ultrasound imaging suffers from many kinds of artifacts, including shadowing and reverberation. Ultrasound cannot penetrate beyond an air gap (e.g., the lungs), and imaging anything behind a bone surface ranges from very hard (requiring dedicated transducers) to impossible.
For the acquisition, the probe needs to be pressed against the skin to achieve good acoustic coupling, which sometimes creates large deformations. Image registration algorithms need to account for this, either by undoing the deformation on the ultrasound frames to produce a deformation-free 3D ultrasound reconstruction, or by applying the same deformation to the other, tomographic image modality.
At ImFusion, the ultrasound group is dedicated to overcoming these challenges in a product-grade way, and enabling medical devices that utilize 3D ultrasound for diagnostics, interventional guidance, and even treatment.
A common issue is the initialization of the registration in the first place: the 3D ultrasound scan is arbitrarily oriented in space, without any reference to a typical global coordinate system. Since intensity-based registration algorithms have a limited capture range, a close initialization is important. We have developed two independent approaches to achieve this global initialization.
On the one hand, we have trained a network to regress the absolute orientation of each ultrasound frame, along with a confidence for each prediction. By selecting the most confident frames and performing a segmentation-based center-of-gravity alignment of a given organ, the registration can be initialized.
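A minimal sketch of how such an initialization could be assembled, assuming per-frame orientation predictions, confidences, and segmentation centroids are already available. The function name, the top-k selection, and the chordal-mean rotation averaging are all hypothetical choices for illustration, not the published method.

```python
import numpy as np

def init_registration(frame_rotations, confidences, us_centroid, ct_centroid, k=5):
    """Hypothetical rigid initialization from per-frame orientation predictions.

    frame_rotations: (N, 3, 3) per-frame absolute orientation predictions
    confidences:     (N,) per-frame confidence scores from the network
    us_centroid / ct_centroid: organ centers of gravity from segmentations, (3,)
    Returns a 4x4 rigid transform mapping ultrasound to CT/MRI coordinates.
    """
    # Keep only the k most confident orientation predictions.
    top = np.argsort(confidences)[-k:]
    # Chordal mean of rotations: average the matrices, project back to SO(3).
    M = frame_rotations[top].mean(axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:                 # keep a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    # Translation from aligning the organ centers of gravity.
    t = ct_centroid - R @ us_centroid
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```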
On the other hand, we showed that by learning dense keypoint descriptors it becomes possible to estimate the registration directly from matching pairs and RANSAC.
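The geometric core of this second approach can be sketched as follows: mutual-nearest-neighbor matching of descriptors, then RANSAC over minimal 3-point rigid fits (Kabsch). This is a generic sketch of the standard technique, not the learned-descriptor pipeline itself; all names and thresholds are illustrative.

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Mutual nearest neighbors between descriptor sets (M, D) and (N, D)."""
    d = np.linalg.norm(desc_a[:, None] - desc_b[None], axis=2)
    ab, ba = d.argmin(1), d.argmin(0)
    i = np.arange(len(desc_a))
    keep = ba[ab[i]] == i                # keep matches that agree both ways
    return i[keep], ab[i][keep]

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # enforce a proper rotation
    return R, cq - R @ cp

def ransac_rigid(src, dst, iters=200, thresh=2.0, seed=0):
    """RANSAC over minimal 3-point rigid fits; src/dst are matched (N, 3) points."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = (err < thresh).sum()
        if inliers > best_inliers:
            best_inliers, best = inliers, err < thresh
    # Refit on all inliers of the best hypothesis.
    return kabsch(src[best], dst[best])
```

Because each hypothesis needs only three correspondences, the estimate tolerates a substantial fraction of wrong matches, which is what makes direct global registration from descriptor matching feasible.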
Refer to the following publications for more details:
Global Multi-modal 2D/3D Registration via Local Descriptors Learning. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2022, Springer LNCS, forthcoming.
Orientation Estimation of Abdominal Ultrasound Images with Multi-Hypotheses Networks. In: Medical Imaging with Deep Learning (MIDL) 2022, forthcoming.