The organizing team was assembled to cover the full failure-mode stack for agentic AI: multi-turn training, evaluation, safety, grounding, security, and community-facing workshop execution.

Zihan Wang
Northwestern University
Workshop Operations Owner
Zihan Wang is a PhD student at Northwestern University working on reinforcement learning for reasoning agents in multi-turn stochastic environments. He leads overall workshop operations, including submissions, program assembly, and day-of execution.

Canyu Chen
Northwestern University
Review Owner
Canyu Chen is a PhD student at Northwestern University whose research focuses on foundation-model agents, trustworthiness, and multimodality. He leads the contributed-paper workflow, reviewer coordination, and evaluation quality.

David Acuna
NVIDIA Research
Review Owner
David Acuna is a Senior Research Scientist at NVIDIA Research working on reasoning models, synthetic data, inference-time scaling, agents, and reinforcement learning. He supports reviews, speaker coordination, and external outreach.

Jaehun Jung
NVIDIA Research and University of Washington
Website and Artifact Owner
Jaehun Jung studies how to train and evaluate models with other models under minimal human supervision. He leads evaluation-facing aspects of the workshop, including website artifacts and reproducibility expectations.

Niloofar Mireshghallah
humans& and Carnegie Mellon University
Panel Coordinator and Designer
Niloofar Mireshghallah works at the intersection of privacy, NLP, and the societal implications of machine learning. She leads discussion design, panel structure, and community-engagement planning for the workshop.

Yejin Choi
Stanford University
Speaker and Award Owner
Yejin Choi is a Professor of Computer Science at Stanford and a Senior Fellow at HAI. Her research spans language, reasoning, model behavior, evaluation, and human-centered foundation models and agents.

Dawn Song
University of California, Berkeley
Speaker and Award Owner
Dawn Song is a Professor of Computer Science at UC Berkeley whose work focuses on security, privacy, adversarial robustness, and trustworthy machine learning in realistic deployment settings.

Manling Li
Northwestern University and Amazon Scholar
Speaker and Award Owner
Manling Li is an Assistant Professor at Northwestern University whose research focuses on grounded reasoning and multimodal knowledge for foundation-model agents. She leads coverage of world-facing agent failures and grounding-related themes.

The program committee is being assembled with coverage spanning agents, safety, evaluation, and multimodality.
The current public site reflects the accepted workshop proposal together with organizer-confirmed updates cleared for public release.