Abstract
The segmentation of anatomical structures is a crucial first stage of most medical image analysis procedures. A primary example is the segmentation of the left ventricle (LV) from cardiac imagery. Accurate segmentation often requires considerable expert intervention and guidance, which is expensive. Automating the segmentation is therefore desirable, but difficult because of the variability of the LV shape within and across individuals. To cope with this difficulty, the algorithm should be able to interpret the shape of the anatomical structure (i.e., the LV shape) using distinct kinds of information (i.e., different views of the same feature space). These different views give the algorithm a more general capability that improves the robustness of the segmentation accuracy. In this paper, we propose an on-line co-training algorithm using bottom-up and top-down classifiers (each having a different view of the data) to perform the segmentation of the LV. In particular, we consider a setting in which the LV shape can be partitioned into two distinct views and use co-training as a way to boost each of the classifiers, thus providing a principled way to use both views together. We demonstrate the usefulness of the approach on a public database, showing that it compares favorably with other recently proposed methodologies.
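To make the two-view co-training idea concrete, the sketch below shows a minimal, generic co-training loop in Python. It is not the paper's algorithm: the classifier choice (scikit-learn LogisticRegression rather than the bottom-up/top-down classifiers), the confidence threshold, and the function name co_train are illustrative assumptions only.

```python
# Minimal, generic co-training sketch (illustrative; not the paper's
# bottom-up/top-down classifiers). Assumes two feature views of the
# same samples and numeric class labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab,
             rounds=10, conf_threshold=0.9):
    """Boost two view-specific classifiers with each other's
    confident pseudo-labels on unlabeled data."""
    clf1, clf2 = LogisticRegression(), LogisticRegression()
    X1, X2, y = X1_lab.copy(), X2_lab.copy(), y_lab.copy()
    pool = np.arange(len(X1_unlab))            # indices still unlabeled

    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf1.fit(X1, y)                        # view 1
        clf2.fit(X2, y)                        # view 2

        # Each classifier scores the unlabeled pool with its own view.
        p1 = clf1.predict_proba(X1_unlab[pool])
        p2 = clf2.predict_proba(X2_unlab[pool])

        # Keep samples where at least one view is confident enough.
        conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
        mask = conf >= conf_threshold
        picked = pool[mask]
        if len(picked) == 0:
            break

        # The more confident view supplies the pseudo-label; the sample
        # joins the shared training set so the other view learns from it.
        use1 = p1.max(axis=1)[mask] >= p2.max(axis=1)[mask]
        lab1 = clf1.classes_[p1[mask].argmax(axis=1)]
        lab2 = clf2.classes_[p2[mask].argmax(axis=1)]
        pseudo = np.where(use1, lab1, lab2)

        X1 = np.vstack([X1, X1_unlab[picked]])
        X2 = np.vstack([X2, X2_unlab[picked]])
        y = np.concatenate([y, pseudo])
        pool = np.setdiff1d(pool, picked)      # shrink the unlabeled pool

    return clf1, clf2
```

The key design choice the sketch illustrates is that the two classifiers never see each other's features directly; they interact only through the pseudo-labels they contribute, which is what allows the two views to complement one another.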