The conference is organized as plenary sessions, thematic sessions, and round-table meetings. Plenary reports will be presented on the main areas of the conference.
Michalis Zervakis, Vice-Rector of the Technical University of Crete (TUC)
Traditional versus Deep Learning Approaches in Medical Imaging

Traditional image processing has provided efficient methodologies for information extraction in the medical environment, for tasks including segmentation, tissue classification, registration, 3D reconstruction, and early-stage identification of abnormalities and tumors. The rigorous algorithmic pipelines of such schemes have recently been challenged by the speed offered by deep learning approaches, in particular convolutional neural networks (CNNs). Although deep CNNs have been shown to achieve high accuracy, they operate as black-box functions with multiple layers of nonlinearities, which are difficult for clinicians and radiologists to interpret. A further problem of deep learning is overtraining, which renders the generalization of predictions untrustworthy. Research efforts toward the interpretability of deep learning schemes have begun to emerge, such as justifying the predictions of a deep CNN through generated saliency maps. Furthermore, the residual between the input image and its reconstruction from an internal representation forms another approach to identifying regions of the image that contribute strongly to CNN training. The main issue in deep CNN interpretability is justifying predictions in a semantically meaningful way, so that the network provides a transparent classification procedure. This is of particular importance in medical imaging, where the complexity of tissue content and of the imaging modalities often leads to significant disagreement even among human evaluators.
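The saliency-map idea mentioned above — scoring each pixel by how much the network's output changes when that pixel is perturbed — can be illustrated with a minimal NumPy sketch. The toy linear "network" and the finite-difference gradient here are illustrative assumptions, not the talk's actual method (a real CNN would obtain the same gradient via backpropagation):

```python
import numpy as np

def saliency_map(score_fn, image, eps=1e-4):
    """Numerical-gradient saliency: |d score / d pixel| for every pixel.

    score_fn maps an image array to a scalar class score (a stand-in for
    a trained CNN's output for the predicted class).
    """
    base = float(score_fn(image))
    sal = np.zeros_like(image, dtype=float)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        bumped = image.astype(float).copy()
        bumped[idx] += eps
        sal[idx] = abs(float(score_fn(bumped)) - base) / eps
    return sal

# Toy "network": a fixed linear scorer that only weights the centre pixel,
# so the saliency map should light up exactly there.
w = np.zeros((5, 5))
w[2, 2] = 1.0
score = lambda img: float((w * img).sum())

img = np.random.rand(5, 5)
sal = saliency_map(score, img)
print(np.unravel_index(sal.argmax(), sal.shape))  # (2, 2)
```

Because the scorer is linear, the saliency map recovers the weight mask exactly; for a deep CNN the map is only a local explanation around the given input.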
In this study we attempt to link the traditional medical imaging pipeline with the processing layers of deep neural networks, so as to provide a conceptual interpretation of the training stages and assign a justifiable role to each module of the deep CNN architecture. We link the convolutional layers with wavelet filters, the pooling stages with down-sampling, and the fully connected layers with spatial transforms (such as PCA, LDA, or even space expansion as in SVMs), which provide a more efficient space for clustering, classification, and segmentation. We further present several commercial tools and databases for the efficient design of deep learning approaches.
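The three-way analogy above (convolution ↔ wavelet-style filtering, pooling ↔ down-sampling, fully connected layers ↔ spatial transforms such as PCA) can be sketched in a few lines of NumPy. The Haar-like filter, 2×2 pooling, and two-component PCA below are illustrative choices, not the speaker's concrete mapping:

```python
import numpy as np

def conv2d_valid(img, kern):
    """Plain 2-D valid correlation: the hand-designed analogue of a
    learned convolutional layer."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def max_pool2(x):
    """2x2 max pooling: the down-sampling stage of the analogy."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def pca_project(X, k):
    """Project flattened feature maps onto their top-k principal
    components: standing in for the fully connected layers."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
haar = np.array([[1.0, 1.0], [-1.0, -1.0]])    # wavelet-style edge detector
imgs = rng.random((8, 10, 10))                 # toy batch of 8 "images"
feats = np.stack([max_pool2(conv2d_valid(im, haar)).ravel() for im in imgs])
z = pca_project(feats, k=2)
print(z.shape)  # (8, 2)
```

Each stage is fixed and inspectable here, which is exactly the interpretability the abstract seeks: in a trained CNN the filters and projections play the same structural roles but are learned end to end.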