Organizers: Paul Irofti, Laurențiu Leuștean and Andrei Pătrașcu

The LOS seminar is the working seminar of the LOS research center.

All seminars, except where otherwise indicated, take place on Tuesdays between 14:00 and 15:50. The seminars are held in Hall 214 (“Google”) of the Faculty of Mathematics and Computer Science, University of Bucharest, but may occasionally be held remotely.

To receive announcements about the seminar, please send an email to los@fmi.unibuc.ro.

Tuesday, 11 March, 2025

Ion Necoară (University Politehnica of Bucharest)
Higher-order methods for nonlinear least-squares

Abstract: Nonlinear least-squares minimization involves a collection of nonlinear functions aggregated in a nonsmooth manner, often via a norm. We present a higher-order majorization-minimization algorithm for solving such (nonconvex) problems, where the nonlinear functions are assumed to have smooth higher-order derivatives. Our algorithm replaces each component of the composite model with a higher-order Taylor approximation and adds a proper regularization term, leading to a higher-order Gauss-Newton type method. We present convergence guarantees, including rates, for this algorithm under different assumptions on the problem's data. In special cases where complexity bounds are known for particular (first-order) algorithms, our convergence results recover the existing bounds. Applications to nonlinear least-squares (including phase retrieval), optimal control, and functional constrained minimization are presented. Finally, some open questions related to higher-order optimization are discussed.
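As a point of reference for the scheme described above, here is a minimal Python sketch of its classical first-order instance, regularized Gauss-Newton: at every iteration the nonlinear map is replaced by its first-order Taylor model and a quadratic regularization term is added to the subproblem. This is an illustration, not the higher-order algorithm from the talk; the names gauss_newton, F, J and reg are assumptions made for the sketch.

```python
import numpy as np

def gauss_newton(F, J, x0, reg=1e-3, tol=1e-8, max_iter=100):
    """Regularized Gauss-Newton for min_x ||F(x)||^2 (illustrative sketch).

    First-order instance of a majorization-minimization scheme: at each
    iterate, F is replaced by its first-order Taylor model and a quadratic
    regularization term is added to the subproblem.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)                                  # residual vector F(x)
        Jx = J(x)                                 # Jacobian of F at x
        # Solve the regularized linearized subproblem:
        #   min_d ||r + Jx d||^2 + reg * ||d||^2
        A = Jx.T @ Jx + reg * np.eye(x.size)
        d = np.linalg.solve(A, -Jx.T @ r)
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Toy problem: fit y = exp(a*t) + b by nonlinear least-squares.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t) + 0.3
F = lambda x: np.exp(x[0] * t) + x[1] - y
J = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
print(gauss_newton(F, J, np.array([0.0, 0.0])))   # approx. [0.7, 0.3]
```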

Wednesday, 5 February, 2025

Andrei Pătrașcu (LOS)
Learning Explicitly Conditioned Sparsifying Transforms

Abstract: Over the last decades, sparsifying transforms have become widely used tools for finding structured sparse representations of signals in certain transform domains. Despite the popularity of classical transforms such as the DCT and wavelets, learning optimal transforms that guarantee good representations of the data in the sparse domain has recently been analyzed in a series of papers. Typically, the condition number and the representation ability are complementary key features of learned square transforms that may not be explicitly controlled in a given optimization model. Unlike existing approaches from the literature, we consider a new sparsifying transform model that enforces explicit control over the data representation quality and the condition number of the learned transforms. Numerical experiments show that our model exhibits better numerical behavior than the state of the art.
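To make the setting concrete, below is a minimal Python sketch of a generic transform-learning loop with an explicit condition-number bound; it is one plausible approach, not the exact model from the talk. It alternates hard-thresholding sparse coding with a least-squares transform update, then projects the singular values of the learned transform to keep its condition number below a bound. All names and parameters (learn_transform, s, kappa) are assumptions made for the sketch.

```python
import numpy as np

def learn_transform(Y, s, kappa=5.0, n_iter=50, seed=0):
    """Learn a square transform W with cond(W) <= kappa (illustrative sketch).

    Alternates between:
      1) sparse coding: hard-threshold W @ Y to s nonzeros per column;
      2) transform update: least-squares fit W = X Y^+, followed by a
         projection of the singular values enforcing the explicit
         condition-number bound kappa.
    """
    n, _ = Y.shape
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthogonal start, cond = 1
    for _ in range(n_iter):
        # Sparse coding: keep the s largest-magnitude entries per column.
        X = W @ Y
        small = np.argsort(np.abs(X), axis=0)[:-s, :]  # indices to zero out
        np.put_along_axis(X, small, 0.0, axis=0)
        # Transform update: minimize ||W Y - X||_F^2 over W.
        W = X @ np.linalg.pinv(Y)
        # Explicit conditioning: clip singular values into [smax/kappa, smax].
        U, S, Vt = np.linalg.svd(W)
        S = np.clip(S, S[0] / kappa, S[0])
        W = U @ np.diag(S) @ Vt
    return W

# Toy usage: signals sparse under a hidden orthogonal transform Q.
rng = np.random.default_rng(1)
Q = np.linalg.qr(rng.standard_normal((16, 16)))[0]
X0 = rng.standard_normal((16, 500)) * (rng.random((16, 500)) < 0.2)
Y = Q.T @ X0
W = learn_transform(Y, s=3)
print(np.linalg.cond(W))   # <= kappa by construction
```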