Correct evaluation and interpretation of a mammogram demand a high level of expertise from the observing radiologist; they depend directly on an adequate visual analysis of the findings and on correlating the radiological characteristics extracted from different mammographic projections. This article presents a scheme for the automatic classification of nodules contained in Regions of Interest (RoIs) extracted from two mammographic projections (Mediolateral Oblique and Craniocaudal) of the same mammary gland, using an ipsilateral information-fusion strategy. Once the specialist radiologist selects a Region of Interest in each of the two projections, these are characterized by multi-resolution and multi-scale decompositions: each RoI is projected onto two different spaces defined by Zernike moments and the Curvelet transform, respectively. This heterogeneous information is then optimally fused by means of a Multiple Kernel Learning strategy trained with support vector machines (SVM). The performance of the proposed strategy in classifying malignant and benign nodules was evaluated against a classification scheme based on analyzing the RoI from a single projection, using a set of 980 RoIs extracted from 490 case studies of the Digital Database for Screening Mammography (DDSM) and 216 RoIs extracted from 108 case studies of the INbreast database. The results report a sensitivity of 98.3% and a specificity of 94.5%, versus a sensitivity of 96.2% and a specificity of 93.1% obtained when multi-resolution characteristics from a single projection were used. These results suggest that the proposed strategy may be useful in clinical scenarios and may contribute to the training of new radiologists as a second reader.
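The fusion step described above can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: it assumes two synthetic feature sets standing in for the Zernike-moment and Curvelet descriptors of each RoI, builds one RBF kernel per descriptor space, and fuses them as a convex combination `K = beta*K_zernike + (1-beta)*K_curvelet` fed to an SVM with a precomputed kernel. Full Multiple Kernel Learning would optimize the kernel weights jointly with the SVM; here the weight `beta` is simply selected by a grid search, which is a common simplified variant. All names, dimensions, and `gamma` values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the two per-RoI descriptor sets:
# X_zernike ~ Zernike-moment features, X_curvelet ~ Curvelet-band statistics.
n = 200
y = rng.integers(0, 2, n)                       # 0 = benign, 1 = malignant (toy labels)
X_zernike = rng.normal(size=(n, 25)) + 0.8 * y[:, None]
X_curvelet = rng.normal(size=(n, 40)) + 0.5 * y[:, None]

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-gamma * d)

def fused_kernel(A_pair, B_pair, beta, gammas=(0.05, 0.05)):
    """Convex combination of the two per-view kernels (fixed-weight fusion)."""
    Kz = rbf(A_pair[0], B_pair[0], gammas[0])
    Kc = rbf(A_pair[1], B_pair[1], gammas[1])
    return beta * Kz + (1.0 - beta) * Kc

tr, te = slice(0, 150), slice(150, None)
train = (X_zernike[tr], X_curvelet[tr])
test = (X_zernike[te], X_curvelet[te])

# Select the fusion weight beta on the held-out split (grid search stand-in for MKL).
best = max(
    (SVC(kernel="precomputed", C=1.0)
        .fit(fused_kernel(train, train, b), y[tr])
        .score(fused_kernel(test, train, b), y[te]), b)
    for b in np.linspace(0.0, 1.0, 11)
)
print(f"best accuracy {best[0]:.2f} at beta={best[1]:.1f}")
```

A convex combination keeps the fused matrix a valid positive semi-definite kernel, which is why `SVC(kernel="precomputed")` can consume it directly; the classifier sees `K(test, train)` at prediction time rather than raw features.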
|Translated title of the contribution||Automatic Mammographic Nodule Classification Based on Multi-View Information Fusion|
|Original language||Spanish (Ecuador)|
|Title of host publication||Automatic classification of mammographic nodules based on multi-view information fusion|
|State||Published - 26 Jun 2019|