ICLR 2025 Workshop on
Foundation Models in the Wild
April 27/28, 2025
The workshop will be held in a hybrid format.

News

  • Please follow us on Twitter for the latest news, or join our Slack for workshop questions and active discussions.
  • Call for reviewers: We are actively looking for reviewers to join the workshop's program committee. We encourage all interested researchers to apply, especially those from underrepresented groups. Prior reviewing experience is helpful but not required; interest in and familiarity with the workshop's subject matter is required. If you are interested, please fill out the application form to join us.
  • Call for papers: The workshop submission portal is open on OpenReview: https://openreview.net/group?id=ICLR.cc/2025/Workshop/FM-Wild

About

In the era of AI-driven transformations, foundation models (FMs) have become pivotal in applications ranging from natural language processing to computer vision. With their immense capabilities, these models are reshaping scientific research and broader society, yet they also introduce challenges for in-the-wild deployment. The Workshop on FMs in the Wild addresses the urgent need for these models to be useful when deployed in our societies. The significance of this topic cannot be overstated: the real-world implications of these models affect everything from daily information access to critical decision-making in fields like medicine and finance. Stakeholders, from developers to end users, care deeply about this because the successful integration of FMs into in-the-wild frameworks requires careful consideration of many properties, including adaptivity, reliability, efficiency, and reasoning ability.

Key Problems We Aim to Address

In-the-wild Adaptation
How can we leverage techniques such as Retrieval-Augmented Generation (RAG), In-context Learning (ICL), or Fine-tuning (FT) to adapt FMs for specific domains, such as drug discovery, education, or clinical health?
Reasoning and Planning
How can FMs be enhanced to tackle more complex in-the-wild tasks that require multi-step reasoning or decision-making, such as multi-hop question answering, mathematical problem-solving, theorem proving, code generation, or robot planning scenarios?
Reliability and Responsibility
How can FMs work reliably outside their training distribution? And how can we address issues like hallucination, fairness, ethics, safety, and privacy when these models are deployed in society?
Practical Limitations in Deployment
How can FMs tackle challenges in practical applications, such as system constraints, memory requirements, response time demands, data acquisition barriers, and computational costs for inference-time scaling and long-context input?

Call for Papers

The Workshop on Foundation Models in the Wild @ ICLR 2025 invites submissions from machine learning researchers working on foundation models and their in-the-wild applications. Additionally, we welcome contributions from scholars in the natural sciences (such as physics, chemistry, and biology) and social sciences (including pedagogy and sociology) whose work makes use of foundation models.

Key Dates

  • Paper Deadline: February 3, 2025 (AoE)
  • Notification: March 5, 2025 (AoE)
  • Camera-ready: March 20, 2025 (AoE)

Deadlines are strict and will not be extended under any circumstances. All deadlines follow the Anywhere on Earth (AoE) timezone.

Submission Site

Submit papers through the Foundation Models in the Wild Workshop Submission Portal on OpenReview: https://openreview.net/group?id=ICLR.cc/2025/Workshop/FM-Wild

Scope

We welcome contributions across a broad spectrum of topics, including but not limited to:

  • Innovations in techniques for customizing models to individual user preferences, tasks, or domains
  • Advancements in the reasoning and planning abilities of FMs in complex real-world challenges
  • Theoretical and empirical investigations into the reliability and responsibility of various FMs
  • Strategies for overcoming practical limitations (e.g., memory, time, data) of FMs in broad applications
  • Methods for integrating multiple modalities (e.g., text, images, action) into a unified in-the-wild framework
  • Discussions on FM agents that perform intricate tasks through interaction with the environment
  • In-depth discussions exploring the in-the-wild deployments and applications of FMs
  • Benchmark methodologies for assessing FMs in real-world settings

Submission Guidelines

Format:  All submissions must be a single PDF file. We welcome high-quality original papers in the following two tracks:

  • Tiny papers: between 2 and 4 pages
  • Long papers: between 6 and 9 pages
References and appendices are not included in the page limit, but the main text must be self-contained. Reviewers are not required to read beyond the main text.

Style file:   You must format your submission using the ICLR 2025 LaTeX style file. For your convenience, we have modified the main conference style file to refer to our workshop: iclr_fmwild.sty. Please include the references and supplementary materials in the same PDF. The maximum file size for submissions is 50MB. Submissions that violate the ICLR style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review.

Dual-submission and non-archival policy:  We welcome ongoing and unpublished work. We will also accept papers that are under review at the time of submission, or that have been recently accepted, provided they do not breach any dual-submission or anonymity policies of those venues. The workshop is a non-archival venue and will not have official proceedings. Workshop submissions can be subsequently or concurrently submitted to other venues.

Visibility:   Submissions and reviews will not be public. Only accepted papers will be made public.

Double-blind reviewing:   All submissions must be anonymized and may not contain any identifying information that may violate the double-blind reviewing policy. This policy applies to any supplementary or linked material as well, including code. If you are including links to any external material, it is your responsibility to guarantee anonymous browsing. Please do not include acknowledgements at submission time. If you need to cite one of your own papers, you should do so with adequate anonymization to preserve double-blind reviewing. Any papers found to be violating this policy will be rejected.

Contact:   For any questions, please contact us at fmwild2025@googlegroups.com.


Schedule

This is the tentative schedule of the workshop. All slots are provided in local time.

Morning Session

08:50 - 09:00 Introduction and opening remarks
09:00 - 09:30 Invited Talk 1
09:30 - 10:00 Invited Talk 2
10:00 - 10:15 Contributed Talk 1
10:15 - 11:15 Poster Session 1
11:15 - 11:45 Invited Talk 3
11:45 - 12:15 Invited Talk 4
12:15 - 13:30 Break

Afternoon Session

13:30 - 14:00 Invited Talk 5
14:00 - 14:30 Invited Talk 6
14:30 - 14:45 Contributed Talk 2
14:45 - 15:45 Poster Session 2
15:45 - 16:15 Invited Talk 7
16:15 - 16:30 Contributed Talk 3
16:30 - 17:00 Invited Talk 8
17:00 - 18:00 Panel discussion

Invited Speakers

Anima Anandkumar

California Institute of Technology

Xinyun Chen

Google DeepMind

Chelsea Finn

Stanford University

Tatsunori Hashimoto

Stanford University

Reza Shokri

National University of Singapore / Microsoft

Jie Tang

Tsinghua University

Yuandong Tian

Meta AI Research

René Vidal

University of Pennsylvania

Workshop Organizers

Xinyu Yang

Carnegie Mellon University

Huaxiu Yao

UNC-Chapel Hill

Mohit Bansal

UNC-Chapel Hill

Beidi Chen

Carnegie Mellon University

Junlin Han

University of Oxford / Meta AI

Pavel Izmailov

New York University / Anthropic

Jinqi Luo

University of Pennsylvania

Pang Wei Koh

University of Washington

Weijia Shi

University of Washington

Wenjie Qu

National University of Singapore

Philip Torr

University of Oxford

Zhaoyang Wang

UNC-Chapel Hill

Songlin Yang

Massachusetts Institute of Technology

Luke Zettlemoyer

University of Washington / Meta

Jiaheng Zhang

National University of Singapore