The conference is organized as plenary sessions, problem-thematic sessions and round-table meetings. Plenary reports will be presented on the main topics of the conference.

Plenary speakers:



Michalis Zervakis – Vice Rector of the Technical University of Crete (TUC)
Traditional versus Deep Learning Approaches in Medical Imaging
Traditional image processing has provided efficient methodologies for information extraction in the medical environment, for tasks including segmentation, tissue classification, registration, 3D reconstruction and early-stage abnormality/tumor identification. The rigorous, algorithmic pipelines of such schemes have recently been challenged by the speed offered by deep learning approaches and, in particular, by convolutional neural networks (CNNs). Although deep CNNs are shown to achieve high accuracy, they operate as black-box functions with multiple layers of nonlinearities, which are difficult for clinicians and radiologist experts to interpret. Another problem of deep learning is overtraining, which renders the generalization of its predictions quite untrustworthy. Research efforts towards the interpretability of deep learning schemes have started to emerge, such as justifying the predictions of a deep CNN through the generated saliency maps. Furthermore, the residual between the input image and its reconstruction from a learned representation forms another approach to identifying significant regions of the image that contribute strongly to CNN training. The main issue in deep CNN interpretability is to justify predictions in a semantically meaningful way, so that the network provides a transparent classification procedure. This is of particular importance in medical imaging, where the complexity of tissue content and of the modalities used often leads to significant disagreement even among human evaluators.
In this study we attempt to link the traditional medical pipeline with the processing layers of deep neural networks, so as to provide a conceptual interpretation of the training stages and assign a justifiable role to each module of the deep CNN. We link the convolutional layers with wavelet filters, the pooling stages with down-sampling, and the fully connected layers with spatial transforms (such as PCA, LDA, or even space expansion as in SVMs), which provide a more efficient space for clustering, classification and segmentation. We further present several commercial tools and databases for the efficient design of deep learning approaches.
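As a rough illustration of this layer-by-layer correspondence (a minimal sketch, not the speaker's actual software; the model name, layer sizes and the gradient-based saliency helper are assumptions chosen for the example), the following PyTorch snippet defines a small CNN and annotates each module with the traditional-pipeline stage it is linked to in the abstract, and computes a simple saliency map of the kind mentioned there.

```python
# Minimal, illustrative sketch (assumed PyTorch model and layer sizes) mapping deep CNN
# modules to the traditional-pipeline analogues discussed in the talk.
import torch
import torch.nn as nn

class InterpretableCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional layers ~ banks of wavelet-like filters extracting local structure.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            # Pooling ~ down-sampling to a coarser resolution level.
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected layers ~ a learned spatial transform (cf. PCA/LDA/SVM-style
        # space expansion) into a space where classes separate more easily.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def saliency_map(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Gradient-based saliency: how strongly each pixel influences the top prediction."""
    image = image.clone().requires_grad_(True)
    score = model(image).max(dim=1).values.sum()
    score.backward()
    return image.grad.abs()

if __name__ == "__main__":
    model = InterpretableCNN()
    scan = torch.randn(1, 1, 64, 64)          # stand-in for a 64x64 single-channel scan
    print(saliency_map(model, scan).shape)    # torch.Size([1, 1, 64, 64])
```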

Dr Naim Dahnoun – Reader in Teaching & Learning in Signal Processing, University of Bristol
2020: DSP Evolution
The transition from analogue to digital signal processing began in the 1980s, when the first digital signal processors (DSPs) were invented, and was followed by a large number of applications that transformed the world. Today, demanding applications such as medical systems, high-end imaging, high-performance computing and core networking, together with growing data traffic and device-to-device communication, impose stringent performance, power and form-factor requirements on processors. These requirements place a heavy load on the processor(s) and the associated software, and have led processor manufacturers to sustain Moore's law by introducing multicore processors. Multicore is re-emerging as the way forward when an application requires both low power consumption and high processing performance. However, many traditional (single-core) programmers lack the knowledge and expertise to take advantage of this new technology, or have been diverted to IoT applications built on microcontrollers that now perform DSP operations, slowing the DSP market. This may change with the emerging machine learning and artificial intelligence technologies. This talk will address the issues that industry and academia need to consider in embracing this technology and will look at new and future processor developments.

Prof. Velko Milutinovic (Serbia)
DataFlow SuperComputing for BigData DeepAnalytics and SignalProcessing
According to Alibaba and Google, as well as the open literature, the DataFlow paradigm, compared to the ControlFlow paradigm, offers: (a) speedups of at least 10x to 100x, and sometimes much more (depending on the algorithmic characteristics of the most essential loops and the spatial/temporal characteristics of the Big Data stream); (b) the potential for better precision (depending on the characteristics of the optimizing compiler and the operating system); (c) power reduction of at least 10x (depending on the clock speed and the internal architecture); and (d) size reduction of well over 10x (depending on the chip implementation and the packaging technology). However, the programming paradigm is different and has to be mastered.
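To give a flavour of the paradigm shift the abstract refers to, the sketch below contrasts a ControlFlow loop with a DataFlow-style description of the same 3-tap moving-average filter. This is a conceptual Python illustration only, not any vendor's actual dataflow toolchain: real dataflow supercomputers compile such graphs to hardware, whereas here generators merely mimic the idea of data streaming through a fixed graph of operations.

```python
# Conceptual sketch: ControlFlow vs DataFlow styles for a 3-tap moving average.

def moving_average_controlflow(samples):
    # ControlFlow: one processor steps through the loop, instruction by instruction.
    out = []
    for i in range(len(samples)):
        x0 = samples[i]
        x1 = samples[i - 1] if i >= 1 else 0.0
        x2 = samples[i - 2] if i >= 2 else 0.0
        out.append((x0 + x1 + x2) / 3.0)
    return out

def delay(stream, initial=0.0):
    # A one-sample delay node (z^-1) in the dataflow graph.
    prev = initial
    for x in stream:
        yield prev
        prev = x

def moving_average_dataflow(samples):
    # DataFlow: the filter is a fixed graph of nodes (two delays, an adder, a scaler);
    # data streams through it, and every node can work on a different sample at once.
    x = list(samples)
    tap0 = iter(x)                  # x[i]
    tap1 = delay(iter(x))           # x[i-1]
    tap2 = delay(delay(iter(x)))    # x[i-2]
    return [(a + b + c) / 3.0 for a, b, c in zip(tap0, tap1, tap2)]

if __name__ == "__main__":
    signal = [1.0, 2.0, 3.0, 4.0, 5.0]
    assert moving_average_controlflow(signal) == moving_average_dataflow(signal)
    print(moving_average_dataflow(signal))
```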
