Title: PolarGrad: A Class of Matrix-Gradient Optimizers from a Unifying Preconditioning Perspective
Speaker: Weijie Su, University of Pennsylvania
Time: 10:00 AM, July 4, 2025
Venue: First Lecture Hall, Mathematics Building
Campus contact: Jiwei Jia, jiajiwei@jlu.edu.cn
Abstract: The ever-growing scale of deep learning models and datasets underscores the critical importance of efficient optimization methods. While preconditioned gradient methods such as Adam and AdamW are the de facto optimizers for training neural networks and large language models, structure-aware preconditioned optimizers like Shampoo and Muon, which utilize the matrix structure of gradients, have demonstrated promising evidence of faster convergence. In this talk, we introduce a unifying framework for analyzing "matrix-aware" preconditioned methods, which not only sheds light on the effectiveness of Muon and related optimizers but also leads to a class of new structure-aware preconditioned methods. A key contribution of this framework is its precise distinction between preconditioning strategies that treat neural network weights as vectors (addressing curvature anisotropy) versus those that consider their matrix structure (addressing gradient anisotropy). This perspective provides new insights into several empirical phenomena in language model pre-training, including Adam's training instabilities, Muon's accelerated convergence, and the necessity of learning rate warmup for Adam. Building upon this framework, we introduce PolarGrad, a new class of preconditioned optimization methods based on the polar decomposition of matrix-valued gradients. As a special instance, PolarGrad includes Muon with updates scaled by the nuclear norm of the gradients. We provide numerical implementations of these methods, leveraging efficient numerical polar decomposition algorithms for enhanced convergence. Our extensive evaluations across diverse matrix optimization problems and language model pre-training tasks demonstrate that PolarGrad outperforms both Adam and Muon.
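To make the polar-decomposition idea concrete, the following is a minimal PyTorch sketch of one PolarGrad-style step, assuming the polar factor is computed from a thin SVD. The function name polargrad_update and the exact placement of the nuclear-norm scaling are illustrative assumptions based on the abstract's description, not the speaker's implementation; the talk refers to more efficient numerical polar decomposition algorithms than a full SVD.

import torch

def polargrad_update(W, G, lr):
    # Hypothetical sketch of a PolarGrad-style step: replace the raw
    # gradient G with the orthogonal factor of its polar decomposition
    # G = U_p * H, and scale the step by the nuclear norm ||G||_*.
    U, S, Vh = torch.linalg.svd(G, full_matrices=False)  # thin SVD of the gradient
    polar_factor = U @ Vh       # orthogonal polar factor U_p of G
    nuclear_norm = S.sum()      # ||G||_* = trace(H) = sum of singular values
    return W - lr * nuclear_norm * polar_factor

If the nuclear-norm factor is dropped, the step reduces to a Muon-style orthogonalized-gradient update, consistent with the abstract's remark that Muon with nuclear-norm scaling arises as a special instance of PolarGrad.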
Speaker Bio: Weijie Su is an Associate Professor at the Wharton School and, by courtesy, in the Departments of Mathematics and Computer Science at the University of Pennsylvania. He is a co-director of the Penn Research in Machine Learning (PRiML) Center. Prior to joining Penn, he received his Ph.D. from Stanford University in 2016 and his bachelor's degree from Peking University in 2011. His research interests span the mathematical foundations of generative AI, privacy-preserving machine learning, optimization, and high-dimensional statistics. He serves as an associate editor of the Journal of Machine Learning Research, Operations Research, and the Journal of the American Statistical Association. He is a Fellow of the IMS. His work has been recognized with several awards, including the Stanford Anderson Dissertation Award, NSF CAREER Award, Sloan Research Fellowship, IMS Peter Hall Prize, SIAM Early Career Prize in Data Science, ASA Noether Early Career Award, and the ICBS Frontiers of Science Award in Mathematics.