Notable papers
Here are examples of the kind of work we expect might influence participants in our call for papers.
- Mitchell, Margaret, et al. “Model cards for model reporting.” Proceedings of the conference on fairness, accountability, and transparency. 2019.
- Chiang, Wei-Lin, et al. “Chatbot arena: An open platform for evaluating LLMs by human preference.” Forty-first International Conference on Machine Learning. 2024.
- Yang, Blair, et al. “Report Cards: Qualitative Evaluation of LLMs Using Natural Language Summaries.” arXiv preprint (2024).
- Eyring, Veronika, et al. “Pushing the frontiers in climate modelling and analysis with machine learning.” Nature Climate Change 14.9 (2024): 916-928.
- Łucki, Jakub, et al. “An Adversarial Perspective on Machine Unlearning for AI Safety.” arXiv preprint (2024).
- Bommasani, Rishi, et al. “On the opportunities and risks of foundation models.” arXiv preprint arXiv:2108.07258 (2021).
- Liang, Percy, et al. “Holistic evaluation of language models.” arXiv preprint arXiv:2211.09110 (2022).
- Kapoor, Sayash, et al. “On the societal impact of open foundation models.” arXiv preprint arXiv:2403.07918 (2024).
- Schneider, Ian, et al. “Life-Cycle Emissions of AI Hardware: A Cradle-To-Grave Approach and Generational Trends.” arXiv preprint arXiv:2502.01671 (2025).
- Wu, Carole-Jean, et al. “Sustainable AI: Environmental implications, challenges and opportunities.” Proceedings of Machine Learning and Systems 4 (2022): 795-813.
- Reddi, Vijay Janapa, et al. “MLPerf inference benchmark.” 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2020.
- Wang, Keyu, et al. “Mitigating Downstream Model Risks via Model Provenance.” arXiv preprint arXiv:2410.02230 (2024).
- Rolnick, David, et al. “Tackling climate change with machine learning.” ACM Computing Surveys (CSUR) 55.2 (2022): 1-96.
- Lacoste, Alexandre, et al. “Quantifying the carbon emissions of machine learning.” arXiv preprint arXiv:1910.09700 (2019).
- Henderson, Peter, et al. “Towards the systematic reporting of the energy and carbon footprints of machine learning.” Journal of Machine Learning Research 21.248 (2020): 1-43.
- Brownlee, Alexander EI, et al. “Exploring the accuracy–energy trade-off in machine learning.” 2021 IEEE/ACM International Workshop on Genetic Improvement (GI). IEEE, 2021.
- Oala, Luis, et al. “DMLR: Data-centric Machine Learning Research–Past, Present and Future.” arXiv preprint arXiv:2311.13028 (2023).
- Carlini, Nicolas, et al. “Extracting training data from diffusion models.” 32nd USENIX Security Symposium (USENIX Security 23). 2023.
- Weidinger, Laura, et al. “Taxonomy of risks posed by language models.” Proceedings of the 2022 ACM conference on fairness, accountability, and transparency. 2022.