With the rapid growth of large-scale data across domains such as vision, natural language, healthcare, and environmental sciences, scalable learning approaches that require minimal labeled supervision have become crucial.
This workshop focuses on emerging techniques for learning effective data representations from limited or weak labels, leveraging strategies such as self-supervised learning, zero-shot and few-shot learning, active learning, transfer learning, and in-context learning. We are also interested in statistical theory that improves our understanding of deep learning and its ability to learn from limited data. Additional areas of interest include efficient model training and learning paradigms, as well as applications of machine learning in limited-data scenarios.