ICML 2026 Workshop

Failure Modes in Agentic AI (FMAI): Reproducible Triggers, Trace Diagnostics, and Verified Fixes

Foundation-model agents are now deployed in browsing, scientific analysis, and long-horizon decision-making. FMAI creates a focused venue for turning agent failures into concrete research assets: clear definitions, reproducible triggers, comparable diagnostics, and verified repair strategies.

Requires an OpenReview account.

Focus

Reproducible triggers, trace diagnostics, and verified fixes for agentic failures.

Format

Keynotes, contributed spotlights, posters, and a panel on practical agent failure modes.

Status

OpenReview submissions are now open. Deadline: May 4, 2026, 11:59 PM UTC.

Overview

Why This Workshop

Agent quality is governed by long-horizon interaction. Small stepwise mistakes can compound through tool calls, memory writes, and recovery decisions, shifting the reliability and safety boundaries of the whole system.

FMAI focuses on actionable failure modes in closed-loop agent systems. We treat failures as first-class research objects and push toward four concrete deliverables: operational definitions, reproducible triggers, comparable diagnostics, and verifiable fixes.

We especially welcome work that turns vague failure anecdotes into reusable scientific artifacts that other researchers can reproduce, measure, stress-test, and improve upon.

Current status

OpenReview submissions are now open for FMAI. See the deadline below and submit through the official venue page.

Submission deadline

May 4, 2026, 11:59 PM UTC

The accepted proposal plans a fully in-person workshop with keynote talks, contributed spotlights, posters, and a panel discussion. Livestreaming and fallback remote support may be arranged for exceptional cases.

The workshop date, room, and camera-ready timeline will be announced through the official ICML 2026 workshop schedule.

Topics of Interest

What Fits FMAI

1. Failure Taxonomies and Mechanisms

Operational definitions, triggering preconditions, minimal reproductions, composable failure primitives, and falsifiable mechanistic hypotheses for agent failures.

2. Closed-loop Evaluation and Trace Diagnostics

Long-horizon or open-world evaluation protocols, interpretable process metrics, counterfactual tests, and logging tools that expose failures beyond terminal success.

3. Training and Systems Interventions

Mitigations, recovery strategies, tool and memory interface improvements, reward and budget design, and repair mechanisms with clearly verifiable trade-offs.

  • Submit through the official OpenReview venue page. An OpenReview account is required.
  • Submission deadline: May 4, 2026, 11:59 PM UTC (May 4, 2026, 6:59 PM CDT).
  • High-quality negative results are encouraged when they include controlled failure cases, clear attribution, and transferable lessons for the field.

Invited Speakers

The proposed program covers failure mechanisms, diagnostics, evaluation, security, and practical deployment.

Yoshua Bengio

Université de Montréal, LawZero, and Mila

AI safety, frontier model governance, and deep learning foundations

Yoshua Bengio is a Full Professor of Computer Science at Université de Montréal, Founder of Mila (the Quebec AI Institute), and Co-President of LawZero. A recipient of the 2018 Turing Award alongside Geoffrey Hinton and Yann LeCun, he works on deep learning foundations and, increasingly, on AI safety and the governance of frontier systems.

GFlowNets · International AI Safety Report
James Zou

Stanford University and CZ Biohub

Tool use failures and automated red-teaming

James Zou is an Associate Professor of Biomedical Data Science, Computer Science, and Electrical Engineering at Stanford. His work focuses on making AI more reliable, human-compatible, and statistically rigorous, with major applications in health and biomedicine.

AvaTaR · AutoRedTeamer
Bo Li

University of Illinois Urbana-Champaign and Virtue AI

Agent attack surfaces and tool-chain exploits

Bo Li is the Wexler AI Scholar and an Associate Professor at UIUC, where she works on trustworthy machine learning, AI safety, security, privacy, and robustness. She is also the founder and CEO of Virtue AI.

ShieldAgent · AutoRedTeamer

Logistics

Workshop Logistics

Primary contact

fmaiworkshop@gmail.com

Current sponsors

Sponsors currently listed on the public site are Abaka AI and O2 Lab. The accepted proposal also notes at least $1,000 from Abaka AI toward student support and awards.

Submission status

OpenReview submissions are now open. Deadline: May 4, 2026, 11:59 PM UTC (May 4, 2026, 6:59 PM CDT).

Website

Public workshop site: fmai-workshop.github.io

Contact

Stay in Touch

We are using this site as the working home for the FMAI workshop. For questions about submissions, sponsorship, or program details, reach out directly.