Recently, learning from limited supervision has drawn tremendous research interest within the machine learning and computer vision communities. The goal is to mitigate the need for full, laborious annotations in a breadth of important application areas. In this talk, I will discuss some recent developments in this direction, highlighting and connecting various popular settings of learning from limited supervision: few-shot learning, semi- and weakly supervised learning, and unsupervised domain adaptation.
I will focus on how to tackle these problems by enforcing various types of regularizers, priors and constraints when training deep neural networks, which makes it possible to leverage unlabeled data and to embed domain-specific knowledge. I will discuss several key technical aspects in the context of learning with limited labels, including constrained optimization, Laplacian and Conditional Random Field (CRF) regularization, and Shannon-entropy and mutual-information losses.
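To give a concrete flavor of such regularizers, the minimal sketch below shows one common instance: a semi-supervised objective combining cross-entropy on a few labeled examples with Shannon-entropy minimization on unlabeled predictions. All names (model, x_labeled, y_labeled, x_unlabeled, lambda_ent) are illustrative assumptions, and the sketch is a generic example rather than the specific formulation presented in the talk.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_labeled, y_labeled, x_unlabeled, lambda_ent=0.1):
    # Supervised term: cross-entropy on the (few) labeled examples.
    logits_l = model(x_labeled)
    ce = F.cross_entropy(logits_l, y_labeled)

    # Unsupervised term: Shannon entropy of the predictions on unlabeled
    # examples. Minimizing it pushes the network toward confident
    # (low-entropy) predictions, a common prior when labels are scarce.
    logits_u = model(x_unlabeled)
    probs_u = F.softmax(logits_u, dim=1)
    log_probs_u = F.log_softmax(logits_u, dim=1)
    entropy = -(probs_u * log_probs_u).sum(dim=1).mean()

    return ce + lambda_ent * entropy
```

The weight lambda_ent balances the two terms; entropy minimization here leverages the unlabeled data without requiring any extra annotation.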
I will emphasize why more attention should be paid to optimization methods, going beyond standard gradient descent. The talk will include various experimental illustrations and applications.
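As one hedged example of what going beyond standard gradient descent can mean in this constrained setting, the sketch below alternates a primal descent step on a Lagrangian with a projected dual ascent step on the multiplier, for a generic objective of the form "minimize loss subject to constraint <= 0". The functions loss_fn and constraint_fn and the step size dual_lr are illustrative assumptions, not the algorithm from the talk.

```python
import torch

def primal_dual_step(loss_fn, constraint_fn, optimizer, lam, dual_lr=0.01):
    # Primal step: gradient descent on the Lagrangian loss + lam * constraint.
    optimizer.zero_grad()
    lagrangian = loss_fn() + lam * constraint_fn()
    lagrangian.backward()
    optimizer.step()

    # Dual step: projected gradient ascent on the multiplier, keeping
    # lam >= 0. Alternating primal descent with dual ascent adapts the
    # penalty strength automatically, unlike plain descent on a fixed,
    # hand-tuned penalty weight.
    with torch.no_grad():
        lam = torch.clamp(lam + dual_lr * constraint_fn(), min=0.0)
    return lam
```

Starting from lam = torch.tensor(0.0), the multiplier grows only while the constraint is violated and shrinks back toward zero once it is satisfied.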