Keynote speakers

Andrey Alekseevich Galyaev, Corresponding Member of RAS, V.A. Trapeznikov Institute of Control Sciences of the Russian Academy of Sciences, Moscow
Optimization statements in detection tasks
The report will present the optimization statements that arise when formalizing the problem of detecting a deterministic signal in ambient noise, which at first glance is a binary classification problem. Decomposing the problem, however, raises questions about what information on the properties of the signal and noise is actually available and how a solution to the problem should properly be understood.
In general, the detection task is a binary classification for which a decision rule must be formed on the presence or absence of a useful signal in the signal-noise mixture. The features formed from the incoming time sequence of samples can belong to many different classes into which both the useful signal and the noise are divided, and the more classes are offered for decision-making, the more precisely the time-frequency properties of the signal and noise must be known. In binary classification, the distinction between the two hypotheses rests on the discrimination function, which shows the limiting capabilities of any method for detecting a deterministic signal in noise. This can be regarded as an analogue of the uncertainty principle in quantum mechanics or of the sampling theorem in digital signal processing (Kotelnikov's theorem), since even increasing the number of features cannot solve the problem of detecting a signal at a small signal-to-noise ratio. It turns out that the key physical characteristics of the signal-noise mixture that are known or can be measured are the type and length of the time window, the sampling frequency, the signal-to-noise ratio, the statistical properties of the noise, and the number of discrete signal samples.
The remaining characteristics are local or integral information characteristics, which are functions (functionals) of the incoming time series. It is therefore necessary to form optimal rules for synthesizing an information system that solves the detection problem from limited a priori information. It should be borne in mind, however, that such optimal processing is unstable to changes in the statistical properties of the noise and to a priori uncertainty about the properties of the signal, and that many of the mathematical constructions turn out to be unrealizable or realizable only with large errors.
The report is devoted to discussing the properties of the detection problem and developing an approach to improving the quality of its solution. Several successive stages of formalizing and solving the detection problem will be considered, starting from the well-known formulations of the two-hypothesis separation problem and the Neyman-Pearson lemma. The criteria, properties and features of a deterministic signal will then be highlighted; optimal signal processing methods will be applied with the calculation of probabilistic time, frequency and time-frequency distributions; and at the final stage, statistics for model experiments will be constructed.
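As a minimal illustration of the two-hypothesis setting described above (not taken from the talk itself), the Neyman-Pearson test for a known deterministic signal in white Gaussian noise reduces to a matched filter whose threshold is set by the desired false-alarm probability. The sinusoidal template, noise level, and false-alarm target below are all assumptions made for the example:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Assumed setup: a known sinusoidal template in white Gaussian noise.
n = 256
t = np.arange(n)
s = np.sin(2 * np.pi * 0.05 * t)   # deterministic signal (known a priori)
sigma = 2.0                        # noise standard deviation (known a priori)

# Matched-filter statistic T(x) = <x, s>. Under H0 (noise only),
# T ~ N(0, sigma^2 * ||s||^2), so the threshold for a target false-alarm
# probability P_FA is tau = sigma * ||s|| * Q^{-1}(P_FA).
p_fa = 1e-3
tau = sigma * np.linalg.norm(s) * NormalDist().inv_cdf(1 - p_fa)

# Monte-Carlo check: empirical false-alarm rate under H0 and
# detection rate under H1 (signal plus noise).
trials = 20000
noise = rng.normal(0.0, sigma, size=(trials, n))
fa_rate = np.mean(noise @ s > tau)
p_d = np.mean((noise[:2000] + s) @ s > tau)
print(f"empirical P_FA = {fa_rate:.4f}, P_D = {p_d:.3f}")
```

The empirical false-alarm rate stays near the designed level while the detection probability depends only on the deflection ratio ||s||/sigma, which is one way to read the "limiting capabilities" of deterministic-signal detection mentioned in the abstract.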

Vasiliy Stanislavovich Usatyuk, Head of the Information Theory Group at T8 Ltd., Moscow, researcher at South-West State University, candidate of sciences
Graph‑Based Compressed Sensing Meets Deep Neural Network Feature Embeddings
The Sparse Fourier Transform and compressed sensing (CS) have reshaped magnetic resonance imaging—drastically reducing power consumption and computational load—and now enable high‑fidelity reconstruction across biomedical imaging, radar, synthetic‑aperture satellite systems, communications, remote sensing, and quantum tomography. This talk centers on the second Restricted Isometry Property, providing an intuitive signal‑processing interpretation of its guarantee for stable, noise‑robust recovery and surveying recent advances—from accelerated sparse‑FFT algorithms to massive‑MIMO channel‑state‑information compression for 5G Release 18. A modern pipeline that merges graph‑based CS with deep neural sparsifiers is then introduced. Classical machine‑learning tasks (clustering or classification) are recast as learning sparse embeddings on graphs, where deep networks act as data‑driven sparsifiers that precondition signals before measurement, improving matrix conditioning and cutting the number of required samples. Two deployment scenarios illustrate the impact: edge‑AI scenarios, in which ultra‑low‑power devices perform real‑time inference with far fewer measurements, and data‑center scenarios, in which high‑throughput systems leverage graph‑structured sparsity to accelerate large‑batch processing of images or text tokens without accuracy loss. By uniting CS theory, graph signal processing, and deep learning, the approach delivers universally efficient sensing—saving power on the edge while boosting throughput at scale.
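The stable-recovery guarantee the abstract refers to can be seen in miniature with a textbook CS experiment (a sketch, not the speaker's pipeline): a random Gaussian matrix, which satisfies the Restricted Isometry Property with high probability when the number of measurements is O(k log(n/k)), recovers a k-sparse signal from far fewer measurements than its dimension. The recovery algorithm here is plain Orthogonal Matching Pursuit, and all sizes are assumed for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sizes: an n-dim signal with k nonzeros, observed via m << n
# random measurements.
n, m, k = 256, 64, 5

# k-sparse ground truth with entries bounded away from zero
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

# Gaussian measurement matrix: such matrices satisfy the RIP with high
# probability at this measurement count, which guarantees stable recovery.
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
y = A @ x

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        idx.append(j)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative recovery error: {err:.2e}")
```

In the graph-based pipeline the talk describes, a learned sparsifier would play the role that the hand-picked sparse support plays here, preconditioning the signal so that fewer measurements suffice.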
Vasiliy Stanislavovich Usatyuk is the Head of the Information Theory Group at T8 Ltd. in Moscow and a researcher at South-West State University. He works in the fields of information theory, digital signal processing, and machine learning. His research focuses on developing algorithms and CPU, GPU, QPU, FPGA and ASIC oriented architectures for deep neural networks, forward error correction, and advanced solutions for quantum computation, sensing, and communication systems. He has previously held a senior research engineer position at Huawei and led the Parallel Computation and Information Security Laboratory at Bratsk State University. Vasiliy is the author of more than 45 international patents and has contributed to industry standards in post-quantum cryptography and 5G error correction.

Vladimir Andreevich Volokhov, senior researcher at STC-innovations Ltd., Saint Petersburg, associate professor at ITMO University, candidate of sciences
Current Trends in the Development of Voice Biometrics
The report discusses the pipeline of a modern voice biometrics system. Particular attention is paid to the block that computes speaker embeddings, which is based on deep neural network architectures such as CNNs and Transformers. Information is provided on training these networks for the speaker recognition task, together with estimates of the number of parameters and the time complexity, which make it possible to judge the practical applicability of such algorithms. The quality of modern speaker recognition systems on standard benchmarks is assessed, and promising directions for the development of voice biometrics are considered.
Vladimir Andreevich Volokhov is a senior researcher at STC-innovations Ltd., Saint Petersburg, candidate of sciences, and associate professor at ITMO University. He works in the field of digital speech signal processing: speaker and spoken-language recognition, speaker and language diarization. He heads the Speaker Recognition course at ITMO University and participates in international conferences such as SPECOM, Odyssey, INTERSPEECH and ICASSP.