Andrew Wagenmaker
I am a postdoctoral researcher in Electrical Engineering and Computer Science at UC Berkeley working with Sergey Levine. Previously, I completed a PhD in Computer Science at the University of Washington, where I was advised by Kevin Jamieson. While in graduate school, I also spent time at Microsoft Research, mentored by Dylan Foster, as well as at the Simons Institute, and my work was supported by an NSF Graduate Research Fellowship. Before that, I completed master's and bachelor's degrees at the University of Michigan, both in Electrical Engineering.

My research centers on developing learning-based algorithms for decision-making in sequential environments, both in theory and practice, and has spanned settings such as multi-armed bandits, reinforcement learning, and continuous control. In particular, much of my work has focused on developing algorithms that go beyond the worst case and provably adapt to the difficulty of, and perform optimally on, each particular problem instance.

Obtaining such instance-optimal guarantees often requires novel algorithmic techniques, especially methods that explore effectively: efficiently collecting information about a given environment in order to learn to accomplish a desired goal. A primary focus of my work has therefore been designing novel algorithmic approaches to exploration in dynamic environments. At present, I am primarily interested in developing effective approaches to real-world decision-making problems.

Mail / Google Scholar

Selected Publications (Show All):

Overcoming the Sim-to-Real Gap: Leveraging Simulation to Learn to Explore for Real-World RL
Andrew Wagenmaker, Kevin Huang, Liyiming Ke, Byron Boots, Kevin Jamieson, and Abhishek Gupta
NeurIPS, 2024

Active Learning of Neural Population Dynamics Using Two-Photon Holographic Optogenetics
Andrew Wagenmaker*, Lu Mi*, Marton Rozsa, Matthew S. Bull, Karel Svoboda, Kayvon Daie†, Matthew D. Golub†, and Kevin Jamieson†
NeurIPS, 2024

Sample Complexity Reduction via Policy Difference Estimation in Tabular Reinforcement Learning
Adhyyan Narang, Andrew Wagenmaker, Lillian Ratliff, and Kevin Jamieson
NeurIPS, 2024 (Spotlight)

Humor in AI: Massive Scale Crowd-Sourced Preferences and Benchmarks for Cartoon Captioning
Jifan Zhang, Lalit Jain, Yang Guo, Jiayi Chen, Kuan Lok Zhou, Siddharth Suresh, Andrew Wagenmaker, Scott Sievert, Timothy Rogers, Kevin Jamieson, Robert Mankoff, and Robert Nowak
NeurIPS, 2024 (Datasets & Benchmarks Track, Spotlight)

Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification
Haolin Liuα, Artin Tajdiniα, Andrew Wagenmakerα, and Chen-Yu Weiα
NeurIPS, 2024

Fair Active Learning in Low-Data Regimes
Romain Camilleri, Andrew Wagenmaker, Jamie Morgenstern, Lalit Jain, and Kevin Jamieson
UAI, 2024

ASID: Active Exploration for System Identification in Robotic Manipulation
Marius Memmel, Andrew Wagenmaker, Chuning Zhu, Patrick Yin, Dieter Fox, and Abhishek Gupta
ICLR, 2024 (Oral)

Optimal Exploration for Model-Based RL in Nonlinear Systems
Andrew Wagenmaker, Guanya Shi, and Kevin Jamieson
NeurIPS, 2023 (Spotlight) [Code]

Instance-Optimality in Interactive Decision Making: Toward a Non-Asymptotic Theory
Andrew Wagenmaker and Dylan Foster
COLT, 2023 [Talk]

Leveraging Offline Data in Online Reinforcement Learning
Andrew Wagenmaker and Aldo Pacchiano
ICML, 2023 [Talk]

Instance-Dependent Near-Optimal Policy Identification in Linear MDPs via Online Experiment Design
Andrew Wagenmaker and Kevin Jamieson
NeurIPS, 2022

Active Learning with Safety Constraints
Romain Camilleri, Andrew Wagenmaker, Jamie Morgenstern, Lalit Jain, and Kevin Jamieson
NeurIPS, 2022

Reward-Free RL is No Harder Than Reward-Aware RL in Linear Markov Decision Processes
Andrew Wagenmaker, Yifang Chen, Max Simchowitz, Simon S. Du, and Kevin Jamieson
ICML, 2022

First-Order Regret in Reinforcement Learning with Linear Function Approximation: A Robust Estimation Approach
Andrew Wagenmaker, Yifang Chen, Max Simchowitz, Simon S. Du, and Kevin Jamieson
ICML, 2022 (Long Talk) [Talk]

Beyond No Regret: Instance-Dependent PAC Reinforcement Learning
Andrew Wagenmaker, Max Simchowitz, and Kevin Jamieson
COLT, 2022 [Talk]

Best Arm Identification with Safety Constraints
Zhenlin Wang, Andrew Wagenmaker, and Kevin Jamieson
AISTATS, 2022

Task-Optimal Exploration in Linear Dynamical Systems
Andrew Wagenmaker, Max Simchowitz, and Kevin Jamieson
ICML, 2021 (Long Talk)

Experimental Design for Regret Minimization in Linear Bandits
Andrew Wagenmaker*, Julian Katz-Samuels*, and Kevin Jamieson
AISTATS, 2021

Active Learning for Identification of Linear Dynamical Systems
Andrew Wagenmaker and Kevin Jamieson
COLT, 2020 [Talk]

Robust Photometric Stereo via Dictionary Learning
Andrew Wagenmaker, Brian Moore, and Raj Rao Nadakuditi
IEEE Transactions on Computational Imaging, 2018

Robust Photometric Stereo Using Learned Image and Gradient Dictionaries
Andrew Wagenmaker, Brian Moore, and Raj Rao Nadakuditi
ICIP, 2017

Robust Surface Reconstruction from Gradients via Adaptive Dictionary Regularization
Andrew Wagenmaker, Brian Moore, and Raj Rao Nadakuditi
ICIP, 2017

A Bisimulation-Like Algorithm for Abstracting Control Systems
Andrew Wagenmaker and Necmiye Ozay
Allerton, 2016

* denotes equal contribution, † denotes equal advising, α denotes alphabetical ordering