Learning objective-specific active learning strategies with Attentive Neural Processes

Abstract

Pool-based active learning (AL) is a promising technology for increasing the data-efficiency of machine learning models. However, surveys show that the performance of recent AL methods is very sensitive to the choice of dataset and training setting, making them unsuitable for general application. To tackle this problem, we propose a novel Learning Active Learning (LAL) method that exploits symmetry and independence properties of the active learning problem with an Attentive Conditional Neural Process model. Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to objectives besides standard classification accuracy. A prominent real-life example of such objectives appears in imbalanced data settings, where rare classes are typically more important than their standard contribution to the loss or accuracy suggests. We perform an extensive survey of recent AL methods, and show that they underperform in such imbalanced data settings. We then provide experiments with the myopic oracle, which suggest that it provides a strong learning signal, especially in such settings. We experimentally verify that our Neural Process model outperforms a variety of baselines in these settings. Finally, our experiments show that our model exhibits a tendency towards improved stability to changing datasets. However, performance is sensitive to the choice of classifier, and more work is necessary to reduce the performance gap with the myopic oracle and to improve scalability. We present our work as a proof-of-concept for LAL on nonstandard objectives and hope our analysis and modelling considerations inspire future LAL work.
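To illustrate the idea behind the myopic oracle, the following is a minimal sketch (not the paper's implementation): for each candidate pool point, the oracle retrains the classifier with that point's true label added and greedily selects the point that most improves the target objective. Here we assume scikit-learn's `LogisticRegression` as the classifier and balanced accuracy as a stand-in for an imbalanced-data objective; all function and variable names are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

def myopic_oracle_step(X_train, y_train, X_pool, y_pool, X_val, y_val):
    """Return the pool index whose label acquisition best improves the objective.

    The oracle is 'myopic': it looks only one acquisition step ahead,
    retraining the classifier once per candidate point.
    """
    scores = []
    for i in range(len(X_pool)):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(np.vstack([X_train, X_pool[i:i + 1]]),
                np.concatenate([y_train, y_pool[i:i + 1]]))
        # Objective evaluated on held-out data; balanced accuracy upweights
        # the rare class relative to plain accuracy.
        scores.append(balanced_accuracy_score(y_val, clf.predict(X_val)))
    return int(np.argmax(scores))

# Toy imbalanced dataset: ~90% majority class, small labelled seed set.
X, y = make_classification(n_samples=300, weights=[0.9, 0.1], random_state=0)
idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
train_idx = np.concatenate([idx0[:15], idx1[:5]])  # seed set with both classes
rest = np.setdiff1d(np.arange(len(y)), train_idx)
pool_idx, val_idx = rest[:100], rest[100:]

best = myopic_oracle_step(X[train_idx], y[train_idx],
                          X[pool_idx], y[pool_idx],
                          X[val_idx], y[val_idx])
print(best)  # index into the pool of the greedily optimal acquisition
```

Note that this oracle needs the true pool labels, so it cannot be deployed directly; in the paper's setup it instead serves as a supervision signal for training the Neural Process acquisition model. Its per-step cost is one classifier retraining per pool point, which is why scalability remains a concern.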

Publication
Under submission at TMLR
Tim Bakker
PhD researcher in Machine Learning

My research interests include active learning/sensing, reinforcement learning, ML for simulations, and everything Bayesian.