Automatic pre-treatment validation in radiotherapy by computer aided 3D-2D image alignment

Chen, Xin (2008) Automatic pre-treatment validation in radiotherapy by computer aided 3D-2D image alignment. Doctoral thesis, University of Central Lancashire.

PDF (Thesis document) - Submitted Version (7MB)
Restricted to Repository staff only
Available under License Creative Commons Attribution Non-commercial Share Alike.

Abstract

Most cancer patients require external beam radiotherapy at some stage of their treatment. Delivering a sufficient dose to malignant cells while sparing the surrounding healthy tissue is essential to minimise damage to non-cancerous regions. Accurate patient positioning, correct beam directions and appropriate dose distributions are therefore crucial, and geometric verification of treatment delivery is a vital part of external beam radiotherapy. This thesis presents an extensive study of three-dimensional (3D) to two-dimensional (2D) image registration methods. The author proposes two novel methods to register computerised tomography (CT) volume data acquired for treatment planning with orthogonal planar images acquired at treatment simulation; both methods recover the transformation errors in full six degrees of freedom (three translations and three rotations).
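
The six degrees of freedom can be written as a single 4x4 homogeneous rigid-body transform. The short Python sketch below illustrates one common parameterisation; the Euler-angle convention and function name are illustrative assumptions and are not taken from the thesis.

    # Minimal sketch of a 6-DOF rigid-body transform (tx, ty, tz, rx, ry, rz).
    # The Z-Y-X Euler-angle convention used here is an illustrative assumption.
    import numpy as np

    def rigid_transform(tx, ty, tz, rx, ry, rz):
        """Build a 4x4 homogeneous matrix from 3 translations (mm) and 3 rotations (rad)."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx        # combined rotation
        T[:3, 3] = [tx, ty, tz]         # translation
        return T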
The first proposed method is a novel feature-based method. It is based on automatic or semi-automatic extraction of object contours from the orthogonal X-ray images, which are matched with 3D contours automatically extracted from the planning CT data. The registration is built on the Z-buffer projection algorithm and the iterative closest point (ICP) algorithm. The novelty of the proposed method is that the depth information of the projected features from the 3D model is retained by the Z-buffer, so that the 2D correspondence points found by ICP can be back-projected into 3D space. The 3D-2D registration can then be solved as a 3D-3D registration problem, for which the cost function is easily built and optimised. The proposed method has been evaluated using simulated data as well as phantom data. For the simulated data, the root-mean-square (RMS) registration errors were 0.70 mm ± 0.21 mm for translations and 0.49° ± 0.46° for rotations, with a capture range of up to 18 mm (measured by mean target registration error (mTRE)). For the phantom data, the alignment errors varied from 0.04 mm to 3.3 mm with an average of 1.27 mm for translation, and from 0.02° to 1.64° with an average of 0.82° for rotation. The accuracy compares favourably with other feature-based registration methods, and the computational cost is significantly lower than that of intensity-based registration methods.
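
The core of the back-projection idea can be sketched as follows: the 2D correspondence points found by ICP are lifted back into 3D using the depths retained by the Z-buffer, after which the rigid alignment has a closed-form least-squares solution (the SVD step used inside ICP). The Python sketch below is illustrative only; the pinhole projection model, function names and parameters are assumptions rather than the thesis implementation.

    # Hedged sketch: lift 2D correspondence points back to 3D with Z-buffer depths,
    # then solve the resulting 3D-3D rigid alignment in closed form (SVD, Arun's method).
    # The pinhole camera model and all names here are illustrative assumptions.
    import numpy as np

    def back_project(points_2d, depths, focal_length):
        """Back-project pixel coordinates (u, v), centred on the principal point,
        with known Z-buffer depth into 3D camera space."""
        u, v = points_2d[:, 0], points_2d[:, 1]
        x = u * depths / focal_length
        y = v * depths / focal_length
        return np.column_stack([x, y, depths])

    def rigid_fit(source, target):
        """Least-squares rotation R and translation t mapping source -> target."""
        mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
        H = (source - mu_s).T @ (target - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        return R, t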
Another significant contribution of this research is the proposed hybrid 3D-2D image registration framework. The novel framework is distinguished from other methods by combining the advantages of both feature-based and intensity-based methods. It runs fully automatically and consists of two stages. The first stage is a coarse registration procedure based on the idea of region-based segmentation; it provides a fast, rough alignment that successfully reduces the search range for the subsequent fine registration. For the fine registration stage, an accelerated digitally reconstructed radiograph (DRR) generation method based on iso-region leaping is proposed. By generating a series of region-of-interest (ROI) bone-structure DRRs along the projected anatomical features, the proposed method is computationally efficient, with a registration error of less than 0.5 mm measured by mTRE. The capture range was up to 50 mm for the simulated data tested. For the evaluated phantom data sets from different parts of the body, the proposed method also achieved an acceptable registration accuracy of 2.15 mm ± 0.82 mm (measured by mTRE). While providing comparable registration accuracy, the proposed method was shown to be computationally more efficient than other software-based methods (i.e. the conventional ray casting method and accelerated ray casting based on pre-computation). In addition, an easy-to-use graphical user interface (GUI) was developed, which enables the proposed framework to be further evaluated and compared with current clinical software.
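
For orientation, the sketch below shows conventional ray-casting DRR generation restricted to a rectangular region of interest, i.e. the baseline that the iso-region leaping method accelerates. The parallel-beam geometry, function names and parameters are simplifying assumptions; the iso-region leaping acceleration itself is not reproduced here.

    # Hedged sketch: baseline ray-casting DRR over a rectangular region of interest,
    # using a parallel-beam geometry along the z axis for simplicity. This is only the
    # conventional method the thesis accelerates, not the iso-region leaping algorithm.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def drr_roi(ct_volume, roi, num_samples=256):
        """Integrate CT intensities along parallel rays for pixels inside the ROI.

        ct_volume : 3D array of attenuation values, indexed (x, y, z)
        roi       : (x_min, x_max, y_min, y_max) pixel bounds of the region of interest
        """
        x_min, x_max, y_min, y_max = roi
        xs = np.arange(x_min, x_max)
        ys = np.arange(y_min, y_max)
        zs = np.linspace(0, ct_volume.shape[2] - 1, num_samples)
        X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
        samples = map_coordinates(ct_volume, [X.ravel(), Y.ravel(), Z.ravel()], order=1)
        return samples.reshape(X.shape).sum(axis=2)   # line integral per ROI pixel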

