The LOS seminar is the working seminar of the LOS research center.
Unless otherwise indicated, seminars take place on Tuesdays between 14:00 and 15:50. They are held in Hall 214 (“Google”) of the Faculty of Mathematics and Computer Science, University of Bucharest, but can occasionally be held remotely.
To receive announcements about the seminar, please send an email to los@fmi.unibuc.ro.
Horațiu Cheval (LOS)
The Tikhonov-Mann iteration for families of mappings
Abstract: We present recent results from [3], in which we generalize the strongly convergent
Krasnoselskii-Mann-type iteration defined by Boț and Meier in [1] from Hilbert spaces
to the abstract setting of W-hyperbolic spaces, and we compute effective rates of asymptotic
regularity for our generalization. At the same time, this extends results by Leuştean and the
author on the Tikhonov-Mann iteration [2] from single mappings to families of mappings,
providing computational results similar to those obtained for the former.
References:
[1] R.I. Boț, D. Meier. A strongly convergent Krasnosel’skii–Mann-type
algorithm for finding a common fixed point of a countably infinite family
of nonexpansive operators in Hilbert spaces. Journal of Computational and
Applied Mathematics, 395:113589, 2021.
[2] H. Cheval, L. Leuştean. Quadratic rates of asymptotic regularity for the
Tikhonov-Mann iteration. Optimization Methods and Software, 37(6):2225–2240, 2022.
[3] H. Cheval. Rates of asymptotic regularity of the Tikhonov-Mann iteration for families of mappings.
arXiv:2304.11366 [math.OC], 2023.
Andrei Sipoş (LOS)
An example-based proof mining tutorial
Abstract: I will present the workings of proof mining through three case studies: Ulrich Berger's
didactic example for the classical Herbrand theorem, Terence Tao's finite monotone convergence
principle and my own analysis of a convergence theorem on the unit interval due to D. Borwein
and J. Borwein.
Andrei Pătraşcu (LOS)
Discussions on the applications of optimization over metric spaces in sparse representation problems
Abstract: In this seminar we will discuss several papers on optimization algorithms over
various spaces and sparse representations.
Paul Irofti (LOS)
Dictionary Learning with Uniform Sparse Representations for Anomaly Detection
Abstract: Many applications, such as audio and image processing, show that sparse representations are a
powerful and efficient signal modeling technique. Finding a dictionary that simultaneously yields
the sparsest representations of the data and the smallest approximation error is a hard
problem, approached by dictionary learning (DL). We study how DL performs in detecting abnormal samples
in a dataset of signals. We use a particular DL formulation that seeks a uniform sparse
representation model to detect the underlying subspace of the majority of samples in a dataset,
using a K-SVD-type algorithm. Numerical simulations show that the resulting subspace can be used
efficiently to discriminate anomalies from regular data points.
This is joint work with Cristian Rusu and Andrei Pătrașcu.
Dongwon Lee (Pennsylvania State University)
Generative Language Model, Deepfake, and Fake News 2.0: Scenarios and Implications
Abstract: The recent explosive advancements in both generative language models in NLP and
deepfake-enabling methods in Computer Vision have greatly helped trigger a new surge in AI
research and introduced a myriad of novel AI applications. However, at the same time, these
new AI technologies can be used by adversaries for malicious purposes, opening a window of
opportunity for fake news creators and state-sponsored hackers. In this talk, I will present
three plausible scenarios where adversaries could exploit these cutting-edge AI techniques
to their advantage, producing more sophisticated fake news by synthesizing realistic artifacts
or evading detection by state-of-the-art fake news detectors. I will conclude the talk by
discussing the important implications of the new type of fake news (i.e., Fake News 2.0) and
some future research directions.