
C-1776 - Machine Learning for Improved PET/CT Fusion Images

A. W. Sauter1, K. S. Mader2, G. Sommer1, B. Stieltjes1; 1 Basel/CH, 2 Zürich/CH; Type: Scientific Exhibit
Areas of Interest: Nuclear medicine, Contrast agents, Computer applications
Imaging Techniques: CAD, Image manipulation / Reconstruction, PET-CT
Procedures: Computer Applications-Virtual imaging, Computer Applications-Detection, diagnosis, CAD
Special Focuses: Cancer, Image verification
Please note: This poster is part of the session VoE 23 - German on Saturday, March 4, 10:00 - 11:00.


Aims and objectives: Diagnosing patients using PET/CT images is a time-consuming task that requires frequent switching between modalities. Fusion images are produced by overlaying color-coded PET on CT images [1]. Interpretation of these fusion images can be hampered by large variations in background pixel intensity [...]
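As a rough illustration of the standard fusion described above (a color-coded PET map alpha-blended onto a grayscale CT slice), a minimal Python sketch follows. The array names, colormap, and blending weight are illustrative assumptions, not values taken from the poster.

import numpy as np
import matplotlib.cm as cm

def fuse_pet_ct(ct_slice, pet_slice, alpha=0.4, cmap=cm.hot):
    """Overlay a color-coded PET slice on a grayscale CT slice (both 2-D arrays)."""
    # Normalize both modalities to [0, 1] for display.
    ct = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) + 1e-8)
    pet = (pet_slice - pet_slice.min()) / (np.ptp(pet_slice) + 1e-8)

    ct_rgb = np.stack([ct] * 3, axis=-1)   # grayscale CT replicated to RGB
    pet_rgb = cmap(pet)[..., :3]           # color-coded PET, alpha channel dropped

    # Standard fusion: a fixed alpha blend, independent of tissue properties.
    return (1 - alpha) * ct_rgb + alpha * pet_rgb

Note that the blend weight is constant across the image, which is exactly the limitation the machine-learning component described below aims to address.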

Methods and materials: The creation of a fusion image involves a number of different steps. The standard model is enhanced with a "machine learning"-based component, illustrated in figure 1. For this machine-learning step, we apply a proven approach from pathology and microscopy based on convolutional neural net[...]
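A minimal sketch (PyTorch) of the kind of convolutional component that could sit in such a fusion pipeline is given below: a small fully convolutional network takes a stacked CT + PET slice and predicts a per-pixel map (for example a tissue-property or weighting map) that can then drive the color overlay. The architecture, channel counts, and the way the output is used are assumptions for illustration; the poster text does not specify them.

import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # CT + PET as 2 input channels
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),             # per-pixel output map
            nn.Sigmoid(),                                 # constrain map to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

# Usage sketch: one 256x256 slice with CT and PET stacked along the channel axis.
model = FusionCNN()
slice_ct_pet = torch.randn(1, 2, 256, 256)
weight_map = model(slice_ct_pet)  # shape (1, 1, 256, 256), usable as an overlay weight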

Results: Initial results of the new fusion approach, compared to standard fusion, on two example images from one patient with metastasized NSCLC are shown in the attached figures (2 and 3). Our initial results show more readily interpretable fusion images with integrated tumor tissue properties compared with the s[...]

Conclusion: We show the viability of incorporating machine learning approaches to improve the visualization of fused PET/CT data. In addition, this approach takes a first step toward improving the interpretability of complex neural networks. We plan to extend the training and perform a large-scale reader study to [...]


References:
1. Zaidi H, ed. Quantitative Analysis in Nuclear Medicine Imaging. Boston, MA: Springer US; 2006. doi:10.1007/b107410.
2. Xie M, Jean N, Burke M, Lobell D, Ermon S. Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping.
3. Cheng J-Z, Ni D, Chou Y-H, et al. Computer-Aided Diagno[...]
