
Medical Image Registration Based on Quantitative-Qualitative Measure of Mutual Information

【Author】 栾红霞 (Luan Hongxia)

【Advisor】 戚飞虎 (Qi Feihu)

【Author Information】 Shanghai Jiao Tong University, Computer Application Technology, 2007, Ph.D.

【摘要 (Abstract, translated)】 A fundamental task in medical image analysis is to combine the information contained in images of the same patient acquired at different times or with different imaging devices. To integrate the information provided by multiple images, the images must first be matched to one another, i.e., registered. Medical image registration seeks a spatial transformation (or a series of transformations) for one medical image so that its points coincide, in both spatial position and anatomy, with the corresponding points of another image; after registration, all anatomical points, or at least all diagnostically relevant points, in the two images should be matched. In recent years researchers have proposed many registration methods, among which mutual-information-based methods are the most widely used. At present, all mutual-information-based methods assume, when computing the mutual information of two images, that the pixels in an image are independent and identically distributed. In fact, pixels differ in their importance within an image and in their utility for registration. Salient pixels usually have higher utility and should therefore play a larger role in determining the transformation between two images. For example, because matching white-matter points near the cerebral cortex is more effective than matching white-matter points inside large white-matter regions, the former should contribute more when computing the mutual information of two brain images. To incorporate pixel utility into registration, this thesis first proposes, from the viewpoint of cybernetics, a new information measure: the quantitative-qualitative measure of mutual information (Q-MI). A registration method based on Q-MI is then proposed. To define the Q-MI of two images, saliency values are used to represent each pixel's importance in the image and its utility for registration, and the joint utility of an intensity pair is defined by combining the utilities of the corresponding pixels in the two images. Experimental results show that, compared with registration based on conventional mutual information, Q-MI-based registration greatly improves the success rate (by about 20 percent), demonstrating the robustness of the proposed method. To ensure registration accuracy, the thesis further proposes a hierarchical Q-MI-based registration method, in which pixel utilities are no longer fixed but change as registration proceeds, so that in the final stage all pixels contribute equally: initially each pixel's utility is determined by its saliency value, and as registration progresses the utility gradually moves toward 1. By relying on the pixels or regions with higher utility in the initial stage, robustness is improved; by gradually driving the utilities to 1 in the final stage, the method attains an accuracy similar to that of conventional mutual-information-based registration. In this thesis, the hierarchical Q-MI-based method is applied to rigid registration of 3D clinical data (e.g., MR, CT, and PET). Experimental results show that, compared with the registration function produced by conventional mutual information, the one produced by Q-MI is not only smoother but also has a larger capture range; they also show that the hierarchical strategy not only improves robustness but achieves sub-voxel registration accuracy. In many clinical applications a rigid transformation is insufficient to describe the deformation between images and nonrigid registration is required, so the thesis also studies a general Q-MI-based nonrigid registration method. In addition, to save computation time, the analytic form of the gradient of Q-MI with respect to the transformation parameters is derived, so that the registration algorithm can use gradient-based optimization. The Q-MI-based nonrigid registration method is applied to motion correction of MR breast image sequences. Experimental results show that, compared with the hierarchical rigid registration method, it effectively reduces the image differences caused by breast motion and yields better registration results.

【Abstract】 A fundamental problem in medical image analysis is the integration of information from multiple images of the same subject, acquired with the same or different modalities and possibly at different times. To fuse the information from different images, the images must first be aligned to one another. Medical image registration is the task of finding the geometric relationship between corresponding points in different images; after registration, all anatomical points and other points of interest in the images should be easily related. Various registration methods have been proposed in recent years. Among them, registration by maximization of mutual information (MI) has proved to be a promising strategy and is widely used in medical image registration. However, almost all MI-based registration methods treat the voxels of the images equally when calculating their mutual information. In fact, different voxels have different characteristics and different utilities for image registration. Salient voxels should have higher utility, and hence contribute more to determining the transformation between two images. For example, when measuring the mutual information of two brain images, white matter (WM) voxels near the cortex should contribute more than WM voxels inside large WM regions, since it is more effective to match WM voxels near the cortex than those inside such regions. To incorporate utility information into the registration procedure, we propose a new information measure, named the quantitative-qualitative measure of mutual information (Q-MI), from the viewpoint of cybernetics, and then propose an image registration method based on Q-MI. To define the Q-MI of two images, we use saliency values to represent each voxel's significance in the image and its utility for registration.
Moreover, the joint utility of each intensity pair is calculated by integrating the utilities of the corresponding voxels in the two images. To evaluate the proposed method, we designed a set of quantitative experiments using simulated brain images. Experimental results show that, compared with the MI-based registration method, the Q-MI-based method has a higher success rate (an improvement of more than 20 percent), which indicates the robustness of the proposed method. To ensure that the registration method also has high accuracy, we propose a hierarchical registration strategy based on Q-MI. In this strategy, the utility values of voxels are not fixed; they are updated hierarchically during the registration procedure, with all voxels contributing equally in the final stage. In particular, the initial utility of each voxel is assigned according to its saliency value; as registration progresses, this utility gradually moves toward one. Thus, by mainly focusing on the voxels (or regions) with higher utilities in the initial stage, the robustness of registration is improved; by driving each joint utility to one in the final stage, the sub-voxel accuracy obtained by conventional MI-based registration is retained, because plain MI is used in the final registration stage. In this thesis, the proposed Q-MI has been validated and applied to rigid registration of clinical brain images, such as MR, CT, and PET images. Experimental results demonstrate that the registration function generated by Q-MI is much smoother than that generated by MI and has a larger capture range, owing to the incorporation of the joint utilities of the two images into the Q-MI measure.
Moreover, the experimental results also show that the hierarchical registration strategy not only improves the robustness of the registration method but also gives it sub-voxel accuracy. In many applications, a rigid transformation is not sufficient to describe the deformation between two images, so nonrigid transformations are required. In this thesis, we study a general nonrigid registration method based on the quantitative-qualitative measure of mutual information. In addition, we derive the analytic expression for the gradient of Q-MI with respect to the transformation parameters when partial volume interpolation is used, so that the registration strategy can employ gradient-based optimization. We applied the proposed method to correct the motion between MR breast images. Experimental results show that the proposed method performs well and effectively reduces the differences caused by breast motion.
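The thesis derives an analytic gradient of Q-MI; as a stand-in illustration of the gradient-based optimization this enables, the sketch below runs plain gradient ascent on an arbitrary similarity function of the transformation parameters, using central finite differences in place of the analytic gradient (the function names and the finite-difference substitution are ours).

```python
import numpy as np

def gradient_ascent(similarity, theta0, step=0.5, iters=100, eps=1e-3):
    """Maximize a similarity measure over transformation parameters.

    Illustration only: central finite differences stand in for the
    analytic Q-MI gradient derived in the thesis.
    """
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = eps
            grad[i] = (similarity(theta + d) - similarity(theta - d)) / (2 * eps)
        theta += step * grad               # ascend the similarity surface
    return theta
```

A smoother similarity surface with a larger capture range, the properties reported for Q-MI, lets such a local ascent converge from farther initial misalignments.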
