GALCIT Colloquium
As we deploy robots with data-driven policies in increasingly dynamic and unstructured environments, the need for guarantees on the reliability and safety of these systems continues to grow. In this talk, I will present two perspectives on uncertainty quantification. First, I will describe a conformal prediction-based framework for making in-distribution safety guarantees for a learned perception and planning system. Next, I will present a risk-aware reinforcement learning framework for quadrupedal locomotion that adapts to out-of-distribution scenarios. I will show experimental validation of these methods on ground robots performing navigation tasks inspired by subterranean search-and-rescue applications. Finally, I will discuss future directions and challenges in attaining reliable autonomy under distribution shifts.
