Data Science Seminar: Hoang Tran

Zoom

Black-Box Optimization with a Novel Nonlocal Gradient and Its Applications to Deep Learning

The problem of minimizing multi-modal loss functions with a large number of local optima frequently arises in machine learning and model-calibration problems. Since the local gradient points in the direction of the steepest slope in an infinitesimal neighborhood, an optimizer guided by […]
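
The excerpt does not spell out the nonlocal gradient used in the talk; as a minimal sketch of the general idea, the following estimates the gradient of a Gaussian-smoothed surrogate F_sigma(x) = E[f(x + sigma*u)] by Monte Carlo and feeds it to plain gradient descent. The function names and parameter choices are illustrative assumptions, not the speaker's method.

    import numpy as np

    def smoothed_gradient(f, x, sigma=0.5, n_samples=64, rng=None):
        # Monte Carlo estimate of grad F_sigma(x), F_sigma(x) = E[f(x + sigma*u)]
        # with u ~ N(0, I), via the antithetic form E[(f(x+su) - f(x-su)) u] / (2s).
        rng = np.random.default_rng() if rng is None else rng
        u = rng.standard_normal((n_samples, x.size))
        fp = np.array([f(x + sigma * ui) for ui in u])
        fm = np.array([f(x - sigma * ui) for ui in u])
        return ((fp - fm)[:, None] * u).mean(axis=0) / (2.0 * sigma)

    def minimize(f, x0, lr=0.05, sigma=0.5, steps=300):
        # Descend the smoothed surrogate rather than f itself.
        x = x0.copy()
        for _ in range(steps):
            x -= lr * smoothed_gradient(f, x, sigma=sigma)
        return x

    # Example on a classic multi-modal test function (2-D Rastrigin).
    rastrigin = lambda x: np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
    print(minimize(rastrigin, np.array([3.0, -2.5])))

A larger sigma trades gradient fidelity for a wider view of the landscape, which is the sense in which such a gradient is nonlocal: it averages slope information over a finite neighborhood instead of an infinitesimal one.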

Data Science Seminar: Christos Mavridis

Online Deterministic Annealing: Progressive Learning for Cyber-Physical Systems

The continuously increasing interest in intelligent autonomous systems is accentuating the need for new developments in cyber-physical systems that can learn, adapt, and reason. In this direction, we will formally analyze the properties of learning as a continuous, dynamic, and adaptive process of such systems, in applications where […]

Data Science Seminar: Enrique Mallada

TBD

Model-Free Analysis of Dynamical Systems Using Recurrent Sets

In this talk, we develop model-free methods for the analysis of dynamical systems using data. Our key insight is to replace the notion of invariance, a core concept in Lyapunov theory, with the more relaxed notion of recurrence. A set is τ-recurrent (resp. k-recurrent) if every trajectory that […]
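
The defining sentence is cut off above; under one plausible reading (a set is k-recurrent if every trajectory starting in it re-enters it within k steps), a data-driven check might look like the sketch below. The dynamical system, the candidate set, and the sampling scheme are all assumptions for illustration.

    import numpy as np

    def is_k_recurrent(step, in_set, x0_samples, k):
        # Empirical test of k-recurrence for the map x_{t+1} = step(x_t):
        # every sampled trajectory starting in the set must re-enter it
        # within k steps. A True answer is only evidence, not a proof.
        for x0 in x0_samples:
            if not in_set(x0):
                continue
            x, returned = x0, False
            for _ in range(k):
                x = step(x)
                if in_set(x):
                    returned = True
                    break
            if not returned:
                return False  # this trajectory stays outside for > k steps
        return True

    # Example: a pure rotation by 0.8 rad. The right half-plane is NOT
    # invariant (trajectories leave it), but it is recurrent: every
    # trajectory re-enters it within about half a rotation period.
    A = np.array([[np.cos(0.8), -np.sin(0.8)],
                  [np.sin(0.8),  np.cos(0.8)]])
    rng = np.random.default_rng(1)
    samples = [rng.uniform(-1.0, 1.0, 2) for _ in range(200)]
    print(is_k_recurrent(lambda x: A @ x, lambda x: x[0] >= 0.0,
                         samples, k=5))

The example is chosen to show why recurrence is strictly weaker than invariance: the half-plane fails any invariance test, yet every trajectory keeps coming back to it.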

Data Science Seminar: Inbar Seroussi

Krieger 411, JHU

From Stochastic to Deterministic: SGD Dynamics of Non-Convex Models in High Dimensions

Stochastic gradient descent (SGD) stands as a cornerstone of optimization and modern machine learning. However, understanding why SGD performs so well remains a major challenge. In this talk, I will present a theory for SGD in high dimensions when the number of samples and […]
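
The abstract breaks off before describing the theory, so the following is not the speaker's model. As a generic sketch of the setting, it runs online SGD on a standard non-convex planted problem (phase retrieval) and prints the overlap with the planted signal, the kind of low-dimensional summary statistic whose trajectory becomes deterministic in such high-dimensional limits. The dimension, step size, and warm start are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 500                                   # ambient dimension
    theta_star = rng.standard_normal(d)
    theta_star /= np.linalg.norm(theta_star)  # planted unit-norm signal

    # Warm start with a small positive overlap (assumed here to skip the
    # long search phase of a fully random initialization).
    theta = 0.2 * theta_star + rng.standard_normal(d) / np.sqrt(d)

    lr = 0.5 / d                              # 1/d step-size scaling
    for t in range(60 * d):
        x = rng.standard_normal(d)            # one fresh sample per step (online SGD)
        r = (x @ theta) ** 2 - (x @ theta_star) ** 2  # residual of y = (x . theta*)^2
        theta -= lr * 4.0 * r * (x @ theta) * x       # gradient of r^2 w.r.t. theta
        if t % (10 * d) == 0:
            # The overlap m_t = <theta_t, theta*> is the summary statistic
            # whose path concentrates on deterministic dynamics as d grows.
            print(f"step {t:6d}  overlap m = {theta @ theta_star:+.3f}")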