Efficient Systems for Foundation Models
Workshop at the International Conference on Machine Learning (ICML) 2024.
call for papers
We welcome submissions addressing emerging research questions and challenges associated with foundation model training and inference. Notably, we accept submissions concerning the entire spectrum of foundation models, from BERT-sized Transformers to large models with 100B+ parameters.
Beyond "standard" papers focusing on algorithms and evaluations, we encourage contributions that may be more engineering/systems/code-focused than what is commonly accepted at machine learning conferences, and that feature open codebases. Implementation details, performance benchmarks, and other nitty-gritty details are welcome at this workshop.
Accepted works will be presented as posters during the workshop, and selected works will be featured as contributed talks.
Submissions are closed for the 2024 edition of the workshop!
important dates
- Submission deadline: `2024/06/03`;
- Acceptance notification: `2024/07/01`;
- Camera-ready deadline: `2024/07/17`.
relevant topics
- Training and inference:
- Training and inference of large models;
- Training on cloud/heterogeneous infrastructure;
- Training and inference in bandwidth/compute/memory/energy-constrained settings;
- Low-precision training and inference;
- Decentralized/collaborative training and inference.
- Algorithms for improved training and inference efficiency:
- Model architectures for efficient inference and training;
- Large batch training and optimization;
- Compression algorithms for communication-efficient training.
- Systems for foundation models:
- System architectures for inference and training;
- Programming languages and compilers for efficient training and inference;
- Benchmarks for large-model training and inference.
additional details
- Formatting: all submissions must be in PDF format, following the ICML template. Although this will not be enforced as a hard limit, we recommend that submissions be concise: up to four pages of main content, with unlimited additional pages for references and supplementary material. Supporting code may be provided either as a supplementary .zip or as a link to an anonymized repository.
- Reviewing: reviewing will be double-blind, so authors should take care to adequately anonymize their submission, especially regarding supplementary code. Reviewers will be instructed to focus on correctness and relevance to the workshop.
- Non-archival: this workshop is non-archival and will not result in proceedings; workshop submissions can be submitted to other venues.
- Dual submission: we welcome papers that have already been accepted at ICML 2024 but are also relevant to this workshop, or that are under review at other venues (e.g., NeurIPS 2024).
- Contact: reach out to esfomo.workshop@gmail.com.