The Conference on Parsimony and Learning (CPAL) is an annual research conference focused on the parsimonious, low-dimensional structures that prevail in machine learning, signal processing, optimization, and beyond. We are interested in theory, algorithms, applications, hardware and systems, and the scientific foundations of learning with parsimony.
Conference Proceedings
The CPAL 2024 Proceedings are now available on PMLR.
Congratulations to the CPAL 2024 Best Paper Award winners!
Best Paper Award: Yuangang Pan, Yinghua Yao, Ivor Tsang, PC-X: Profound Clustering via Slow Exemplars.
Best Paper Runner-Up Award: Chanwoo Chun, Daniel Lee, Sparse Activations with Correlated Weights in Cortex-Inspired Neural Networks.
Conference Program
The full CPAL 2024 program has been announced! Highlights include:
- Keynotes from leading experts
- Six oral sessions featuring authors of accepted proceedings track papers
- Two poster sessions showcasing papers from the spotlight track
- Three rising stars presentation sessions featuring CPAL Rising Stars awardees
- Open-to-the-public tutorials in two parallel tracks
- A panel discussion on Day 2
- Social and networking events for registered attendees
- Tailored morning wellness sessions, open to all
Keynote Speakers
Information on the speakers’ planned talks appears in the schedule below.
Dan Alistarh
Institute of Science and Technology Austria / Neural Magic
SueYeon Chung
New York University / Flatiron Institute
Kostas Daniilidis
University of Pennsylvania
Maryam Fazel
University of Washington
Tom Goldstein
University of Maryland
Yingbin Liang
Ohio State University
Dimitris Papailiopoulos
University of Wisconsin-Madison
Stefano Soatto
University of California, Los Angeles
Jong Chul Ye
Korea Advanced Institute of Science and Technology (KAIST)
Conference Program (Schedule View)
All times below are in HKT (GMT+8).
Day 1 (Jan 3, Wednesday)
- 10:00 AM–11:00 AM: "Representation and Control of Meanings in Large Language Models and Multimodal Foundation Models"
- Coffee Break
- 11:20 AM–12:20 PM
- Lunch Break
- 1:30 PM–2:30 PM: "Flat Minima and Generalization in Learning: The Case of Low-rank Matrix Recovery"
- 2:30 PM–3:30 PM
- Coffee Break
- 4:00 PM–5:00 PM: "Accurate Model Compression at GPT Scale"
- 5:00 PM–6:30 PM: Reception
Day 2 (Jan 4, Thursday)
- 8:00 AM–8:40 AM
- 9:00 AM–10:00 AM: "In-Context Convergence of Transformers"
- 10:00 AM–11:00 AM
- Coffee Break
- 11:20 AM–12:20 PM
- Lunch Break
- 1:30 PM–2:30 PM: "Parsimony through Equivariance"
- 2:30 PM–3:30 PM
- Coffee Break
- 5:00 PM–6:30 PM
- 7:00 PM–9:00 PM: Banquet
Day 3 (Jan 5, Friday)
- 8:00 AM–8:40 AM
- 9:00 AM–10:00 AM: "Enlarging the Capability of Diffusion Inverse Solvers by Guidance"
- 10:00 AM–11:00 AM
- Coffee Break
- 11:20 AM–12:20 PM
- Lunch Break
- 1:30 PM–2:30 PM: "Teaching arithmetic to small language models"
- 2:30 PM–3:30 PM
- Coffee Break
- 4:00 PM–5:00 PM
- 5:00 PM–6:30 PM
- 7:00 PM–9:00 PM: Tram Tour
Day 4 (Jan 6, Saturday)
- 8:00 AM–8:40 AM
- 9:00 AM–10:00 AM: "Statistical methods for addressing safety and security issues of generative models"
- 10:00 AM–11:00 AM: "Multi-level theory of neural representations: Capacity of neural manifolds in biological and artificial neural networks"
- Coffee Break
- Lunch Break
- Coffee Break