Journal of Xidian University ›› 2021, Vol. 48 ›› Issue (6): 48-56. DOI: 10.19665/j.issn1001-2400.2021.06.007

• Special Issue: Key Technology of Architecture and Software for Intelligent Embedded Systems •

Efficient self-supervised meta-transfer algorithm for few-shot learning

SHI Jiahui, HAO Xiaohui, LI Yanni

  1. Institute of Intelligent Media and Data Engineering, Xidian University, Xi'an 710071, China
  • Received: 2021-06-30  Online: 2021-12-20  Published: 2022-02-24
  • Contact: LI Yanni  E-mail: shijh@stu.xidian.edu.cn; xhhao@stu.xidian.edu.cn; yannili@mail.xidian.edu.cn

Abstract:

A key difficulty in current deep learning is the few-shot problem. Although some effective few-shot algorithms/models have appeared, the features that existing deep models can extract are limited and their generalization ability is low: if the distribution of the data in a new class differs greatly from that of the training dataset, classification results are poor. To address these shortcomings of existing algorithms, the authors propose a residual attention dilated convolutional network as the feature extractor of the network model. The dilated branches enlarge the model's receptive field and extract features at different scales, while image-based residual attention strengthens the model's focus on important features. A self-supervised pre-training algorithm for the network model is also proposed: in the pre-training stage, the image data are rotated by different angles and corresponding labels are created, and a rotation classifier based on image structure information is designed to add supervision signals to the training task, thereby mining the data more thoroughly and enhancing the algorithm's generalization ability. On the benchmark few-shot datasets miniImageNet and Fewshot-CIFAR100, the proposed algorithm is compared with the latest and best few-shot algorithms, with experimental results showing that it achieves state-of-the-art performance.
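For illustration, a minimal PyTorch sketch of a dilated-convolution block with image-based residual attention is given below. The module name, dilation rates, and attention design are assumptions for exposition, not the paper's exact architecture; the sketch only shows the general idea of parallel dilated branches fused under a spatial attention gate with a residual connection.

    # Hypothetical sketch: parallel dilated branches + spatial attention
    # (names and hyperparameters are illustrative, not from the paper).
    import torch
    import torch.nn as nn

    class DilatedAttentionBlock(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # Branches with different dilation rates capture features
            # at different receptive-field sizes; padding=d keeps the
            # spatial resolution unchanged for a 3x3 kernel.
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d)
                for d in (1, 2, 3)
            ])
            self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
            # Image-based spatial attention: a per-pixel gate that
            # emphasizes important features.
            self.attention = nn.Sequential(
                nn.Conv2d(channels, 1, kernel_size=1),
                nn.Sigmoid(),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
            features = self.fuse(multi_scale)
            gated = features * self.attention(features)
            # Residual connection keeps the original features flowing.
            return self.relu(x + gated)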
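Likewise, the rotation-based self-supervised pre-training described above can be sketched as follows. All function and head names are hypothetical; the code shows only the standard pretext task of rotating each image by 0/90/180/270 degrees, using the rotation index as an auxiliary label, and training a rotation classifier alongside the ordinary class head on shared features.

    # Hypothetical sketch of rotation-based self-supervised pre-training.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_rotation_batch(images: torch.Tensor):
        """Rotate each image by 0, 90, 180, 270 degrees; return the
        enlarged batch and the corresponding rotation labels."""
        rotated = torch.cat([torch.rot90(images, k, dims=(2, 3))
                             for k in range(4)], dim=0)
        labels = torch.arange(4, device=images.device)
        labels = labels.repeat_interleave(images.size(0))
        return rotated, labels

    def pretrain_step(encoder: nn.Module,
                      class_head: nn.Module,
                      rot_head: nn.Module,
                      images: torch.Tensor,
                      class_labels: torch.Tensor,
                      optimizer: torch.optim.Optimizer):
        rotated, rot_labels = make_rotation_batch(images)
        feats = encoder(rotated)  # shared features for both heads
        # Supervised loss on the original (unrotated) images only:
        # the first images.size(0) entries correspond to k=0.
        n = images.size(0)
        cls_loss = F.cross_entropy(class_head(feats[:n]), class_labels)
        # Self-supervised loss: predict the rotation applied to each image.
        rot_loss = F.cross_entropy(rot_head(feats), rot_labels)
        loss = cls_loss + rot_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()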

Key words: few-shot, self-supervised, dilated convolution, residual attention

CLC Number: TP183
