Mathematics & Machine Learning Seminar
A fundamental challenge in building adaptive and intelligent systems is measuring and sharing knowledge across diverse tasks and agents. In this talk, I will present a line of work that bridges optimal transport theory and reinforcement learning to address this challenge from both theoretical and algorithmic perspectives. I will begin with our work on 2-Wasserstein task embeddings, a model-agnostic and training-free framework for quantifying task similarity. By embedding datasets into a geometry-aware representation, we obtain a fast and reliable measure of how related two tasks are, enabling efficient task transfer. Building on this foundation, I will discuss how task embeddings and modulating masks can facilitate collective and lifelong learning among multiple agents.
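As a rough illustration of the idea, and not the exact embedding construction used in this work, the sketch below approximates each task's dataset by a Gaussian and compares tasks with the closed-form 2-Wasserstein distance between those Gaussians. The function names and the Gaussian simplification are assumptions made for illustration only.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(mu1, cov1, mu2, cov2):
    """Closed-form 2-Wasserstein distance between two Gaussians."""
    mean_term = np.sum((mu1 - mu2) ** 2)
    c1_sqrt = sqrtm(cov1)
    cross = sqrtm(c1_sqrt @ cov2 @ c1_sqrt)          # (C1^{1/2} C2 C1^{1/2})^{1/2}
    cov_term = np.trace(cov1 + cov2 - 2 * np.real(cross))
    return float(np.sqrt(max(mean_term + cov_term, 0.0)))

def task_distance(features_a, features_b):
    """Summarize each task's feature cloud by a Gaussian and compare the
    two tasks with the closed-form W2 distance (illustrative simplification)."""
    mu_a, cov_a = features_a.mean(axis=0), np.cov(features_a, rowvar=False)
    mu_b, cov_b = features_b.mean(axis=0), np.cov(features_b, rowvar=False)
    return gaussian_w2(mu_a, cov_a, mu_b, cov_b)

# Example: two synthetic "tasks" drawn from shifted feature distributions.
rng = np.random.default_rng(0)
task_a = rng.normal(0.0, 1.0, size=(500, 8))
task_b = rng.normal(0.5, 1.2, size=(500, 8))
print(task_distance(task_a, task_b))  # larger value => less related tasks
```

Because the distance is computed directly from dataset statistics, no model training is required, which is what makes such a measure fast enough to guide task transfer decisions.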
Through a series of collaborative studies, we explore distributed and decentralized lifelong reinforcement learning, where agents exchange knowledge via on-demand communication and mask-based parameter isolation. We further introduce the MOSAIC framework, which enables agents to selectively query, share, and integrate policies to accelerate learning across tasks, an important step toward agentic AI systems capable of self-organization and knowledge reuse. Together, these works outline a pathway toward scalable and collaborative intelligence that learns efficiently both within and across agents.
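To make the notion of mask-based parameter isolation concrete, here is a minimal sketch of a shared, frozen layer gated by per-task binary masks, so that each task occupies its own subnetwork and the compact masks are what agents would exchange. The class and method names are hypothetical and this is not the architecture of the cited studies; training the mask scores would additionally require a straight-through estimator, which is omitted here.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """A linear layer with frozen shared weights gated by per-task binary masks,
    illustrating mask-based parameter isolation (simplified sketch)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        # Shared backbone weights: frozen, reused by every task and agent.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1,
                                   requires_grad=False)
        # One real-valued score per weight and per task; thresholding the scores
        # yields the task-specific binary mask that would be learned and shared.
        self.scores = nn.ParameterDict()

    def add_task(self, task_id):
        self.scores[task_id] = nn.Parameter(torch.randn_like(self.weight) * 0.01)

    def forward(self, x, task_id):
        mask = (self.scores[task_id] > 0).float()   # binary modulating mask
        return x @ (self.weight * mask).t()         # only masked-in weights are used

layer = MaskedLinear(4, 3)
layer.add_task("task_0")
out = layer(torch.randn(2, 4), "task_0")
print(out.shape)  # torch.Size([2, 3])
```

Because each task touches only its own mask while the backbone stays frozen, new tasks do not overwrite old ones, and transmitting a mask is far cheaper than transmitting full policy weights, which is what makes selective querying and sharing between agents practical.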
