Few-Shot Learning with Global Class Representations

Tiange Luo1*     Aoxue Li1*     Tao Xiang2     Weiran Huang3     Liwei Wang1    
(*: indicates joint first authors)

1 Peking University     2 University of Surrey     3 Huawei Noah’s Ark Lab    
Accepted to ICCV 2019

[Paper] [BibTex] [Code (Github)]

Abstract

In this paper, we propose to tackle the challenging few-shot learning (FSL) problem by learning global class representations using both base and novel class training samples. In each training episode, an episodic class mean computed from a support set is registered with the global representation via a registration module. This produces a registered global class representation for computing the classification loss using a query set. Though following a similar episodic training pipeline to existing meta-learning based approaches, our method differs significantly in that novel class training samples are involved in training from the beginning. To compensate for the scarcity of novel class training samples, an effective sample synthesis strategy is developed to avoid overfitting. Importantly, through joint base-novel class training, our approach extends easily to a more practical yet challenging FSL setting, i.e., generalized FSL, where the label space of test data covers both base and novel classes. Extensive experiments show that our approach is effective in both FSL settings.
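The two core operations described above — computing an episodic class mean from the support set, then registering it against the pool of global class representations — can be sketched as follows. This is a minimal numpy sketch assuming dot-product similarity and a softmax-weighted soft selection for the registration module; the paper's actual module and feature extractor are learned networks and may differ in form.

```python
import numpy as np

def episodic_class_means(support, labels, n_classes):
    # Episodic representation: mean feature vector per class over the support set.
    # support: (N, D) feature matrix, labels: (N,) integer class ids.
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def register(episodic_mean, global_reprs, tau=1.0):
    # Registration module (assumed form): softly select among the global
    # representations by similarity to the episodic mean, returning a
    # "registered" global representation for classifying the query set.
    sims = global_reprs @ episodic_mean          # (C,) similarity scores
    weights = np.exp(sims / tau)
    weights /= weights.sum()                     # softmax attention weights
    return weights @ global_reprs                # (D,) registered representation
```

The registered representation is then used as the class prototype when computing the classification loss on query samples.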

Illustration

Figure 1. The first block shows a base class and a novel class in an embedding space. The base class contains sufficient labeled data while the novel class has only a few labeled samples. The two classes overlap, and we aim to learn a global representation for each class, which is used for recognizing test data. The second block illustrates the two key components of the proposed model. First, we generate new samples (orange crosses) to increase intra-class variance for novel classes. Second, a registration module is proposed to encourage each sample to 'pull' its own global representation toward itself and 'push' the other global representations away; the global representations in turn influence the samples. The last block shows the results after learning global representations jointly from both base and novel class samples. The two classes become more separable, and the global representations are more distinguishable.

Whole Framework

Figure 2. First, we propose a sample synthesis method to synthesize an episodic representation for each class in the support set. Second, the registration module is leveraged to select a global representation for each class according to its episodic representation, and the selected global representations are then used to classify query images. The classification loss and registration loss jointly optimize the global representations, the registration module, and the feature extractor.
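The two losses in Figure 2 can be sketched jointly. This is a minimal numpy sketch under assumed forms: the classification loss is a cross-entropy of query features against the registered class representations, and the registration loss encourages each episodic mean to select its own global representation; the paper's exact similarity measure and loss weighting may differ.

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def episode_losses(query, query_labels, registered, episodic, global_reprs):
    # Classification loss: queries scored against the registered
    # global representations (dot-product similarity assumed).
    logits = query @ registered.T                            # (Q, C)
    probs = softmax(logits)
    cls_loss = -np.log(probs[np.arange(len(query)), query_labels]).mean()
    # Registration loss: each episodic class mean should match
    # its own global representation among all C candidates.
    reg_probs = softmax(episodic @ global_reprs.T)           # (C, C)
    reg_loss = -np.log(np.diag(reg_probs)).mean()
    return cls_loss, reg_loss
```

In training, the sum of the two losses would be backpropagated to update the global representations, the registration module, and the feature extractor together.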