🔒 Trustworthy FMs Workshop

Trust Before Use: Building Foundation Models that You Can Trust

๐Ÿ“ In conjunction with ICCV 2025 โ€” Oct. 19โ€“23, Honolulu, Hawai'i

🔎 Overview

Foundation models are revolutionizing the way we interact with AI, powering everything from search engines to scientific discovery. But as their reach expands, so do the risks. Can we truly trust these systems before putting them to use? 🤔

From AlexNet to LLaVA 🧠, the pace of innovation is staggering. Yet one thing remains constant: the urgent need for trustworthiness.
In the foundation model era, we ask:
๐ŸŒ What does trust mean at scale?
📜 Can classical insights still guide us?

This workshop brings together researchers, engineers, and thought leaders to confront these challenges head-on. We'll explore how to create models that are not just powerful, but robust, fair, interpretable, and accountable.

📣 Call for Papers

We welcome short (4-page) and full-length (8-page) submissions on topics related to trustworthy foundation models and their applications.
Accepted papers will not be published in proceedings, so the work can later be submitted to other venues.


Topics of interest include (but are not limited to):

  • 🧠 Machine unlearning for foundation models
  • 🧭 Bias discovery and mitigation in large-scale foundation models
  • 🛡️ Adversarial attacks and defenses in foundation models
  • 📏 Robustness and generalization in trustworthy foundation models
  • ⚖️ Fairness-aware training and evaluation
  • 🔍 Explainability and interpretability for large/black-box models
  • 🧪 Ethical concerns and responsible deployment
  • 📊 Benchmarks for foundation model trustworthiness
  • 📉 Uncertainty quantification
  • 🔐 Data provenance and security

🗓 Submission Deadline: August 1, 2025

📬 Notification: August 16, 2025

🔗 Submit via: OpenReview

Invited Speakers

Salman Avestimehr

University of Southern California / FedML

Yao Qin

UC Santa Barbara / DeepMind

Sharon Li

University of Wisconsin–Madison

Organizers

Shirley Wu

Stanford

Cai Mu

DeepMind

Kai Zhang

OSU

Senior Organizers

Chenliang Xu

University of Rochester

Jure Leskovec

Stanford

Dan Hendrycks

Center for AI Safety

Jindong Wang

The College of William & Mary

Lingjuan Lyu

Sony AI

Hangfeng He

University of Rochester

Ting Wang

Stony Brook University

Zhiheng Li

Amazon

Registration

Please note that at least one author of each accepted paper is required to register for the workshop and present the work. We also encourage all participants to register for the main ICCV 2025 conference. Registration details are available at https://iccv.thecvf.com/.

Discussion Opportunities

Beyond the Q&A following each talk, the workshop will actively support and foster discussion in two main ways. First, we will host both an onsite discussion room and virtual "poster sessions," where authors of contributed papers can informally present their work and engage in dialogue with attendees. Second, the workshop will feature a panel discussion to facilitate broader conversation around key themes.

Access

With speakers' permission, we plan to record all talks and discussion sessions and make them available on the workshop website. All speakers will deliver their presentations either in person or remotely. We will also publish all accepted papers and extended abstracts on the website, pending authors' consent. Furthermore, the website will be maintained after the workshop to serve as an ongoing repository for existing and newly published work relevant to building trustworthy foundation models.