CMX Lunch Seminar
Measure-to-measure operator learning provides a principled framework for learning maps whose inputs and outputs are best viewed as probability measures, empirical distributions, or evolving ensembles, rather than as fixed-dimensional data representations. This perspective is increasingly relevant in applications where information is intrinsically distribution-valued, including Bayesian data assimilation, LiDAR imaging, interacting particle systems, and mean-field control. Transformers and related measure-centric neural operator architectures offer a flexible route to learning such maps while respecting permutation invariance, variable numbers of particles, and the geometry of the space of probability distributions. This talk presents recent approximation theory for such architectures. The results cover both density-function and empirical-measure formulations, with controlled finite-particle quantization errors in the latter case. Probabilistic conditioning serves as a motivating example throughout.
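
To make the permutation and variable-particle-count properties concrete, here is a minimal numpy sketch of a single self-attention layer acting on the particles of an empirical measure. It is an illustrative toy under stated assumptions, not the architecture analyzed in the talk; the names (attention_layer, Wq, Wk, Wv) are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_layer(X, Wq, Wk, Wv):
    """One self-attention layer on an empirical measure.

    X : (N, d) array of N particles in R^d; N may vary across inputs.
    Returns an (N, d) array, i.e. a new empirical measure with N particles.
    Permuting the rows of X permutes the output rows identically, so the
    induced map on (order-forgetting) empirical measures is well defined.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)  # (N, N) weights
    return A @ V

rng = np.random.default_rng(0)
d = 3
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
X = rng.standard_normal((7, d))      # empirical measure with N = 7 particles
perm = rng.permutation(len(X))

Y = attention_layer(X, Wq, Wk, Wv)
Y_perm = attention_layer(X[perm], Wq, Wk, Wv)
assert np.allclose(Y[perm], Y_perm)  # particle-level permutation equivariance
```

Because the output particles are permuted exactly as the input particles, the layer descends to a map on empirical measures that ignores particle ordering and accepts any number of particles N.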
