电子科技 ›› 2024, Vol. 37 ›› Issue (5): 1-8. doi: 10.16180/j.cnki.issn1007-7820.2024.05.001


Lightweight Capsule Network Fusing Attention and Capsule Pooling

ZHU Zihao, SONG Yan

  1. School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
  • Received: 2022-04-06  Online: 2024-05-15  Published: 2024-05-21
  • About the authors: ZHU Zihao (1997-), male, master's degree candidate. Research interests: computer graphics processing.
    SONG Yan (1979-), female, professor and doctoral supervisor. Research interests: pattern recognition, data analysis, and predictive control.
  • Supported by:
    National Natural Science Foundation of China (62073223); Shanghai Natural Science Foundation (22ZR1443400); Open Project of the National Defense Science and Technology Key Laboratory of Aerospace Flight Dynamics (6142210200304)


Abstract:

To address the inefficient propagation of feature information in capsule networks and the large computational overhead of the routing process, a lightweight capsule network that fuses attention and capsule pooling is proposed. The network has two main advantages. 1) Capsule attention is proposed: attention is applied to the primary capsule layer, which strengthens the focus on important capsules and improves the accuracy with which lower-level capsules predict higher-level capsules. 2) A new capsule pooling is proposed: at each corresponding position across all feature maps of the primary capsule layer, the capsule with the largest weight is selected, so that a small number of important capsules represent the effective feature information while the number of model parameters is reduced. Results on public datasets show that the proposed capsule network achieves an accuracy of 92.60% on CIFAR10 and exhibits good robustness against white-box adversarial attacks on complex datasets. In addition, the proposed capsule network achieves 95.74% accuracy on the AffNIST dataset, demonstrating good robustness to affine transformations. Computational efficiency results show that, compared with the traditional CapsNet, the floating-point operations of the proposed network are reduced by 31.3% and the number of parameters is reduced by 41.9%.
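The abstract gives only a high-level description of the two components. As a rough illustration of the mechanism it describes, the following PyTorch sketch weights each primary capsule with a learned attention score and then, at every spatial position, keeps only the capsule with the largest weight across the feature maps. The class name, the sigmoid scoring layer, and the tensor layout are assumptions made for this example and are not taken from the paper.

import torch
import torch.nn as nn


class CapsuleAttentionPooling(nn.Module):
    """Hypothetical sketch: attention over primary capsules followed by
    max-weight capsule pooling across feature maps (illustrative only)."""

    def __init__(self, capsule_dim: int):
        super().__init__()
        # One scalar attention score per capsule, computed from the capsule vector.
        self.score = nn.Linear(capsule_dim, 1)

    def forward(self, primary_caps: torch.Tensor) -> torch.Tensor:
        # primary_caps: (batch, num_maps, height, width, capsule_dim)
        b, m, h, w, d = primary_caps.shape

        # Capsule attention: emphasise important capsules in the primary capsule layer.
        weights = torch.sigmoid(self.score(primary_caps))             # (b, m, h, w, 1)
        attended = primary_caps * weights

        # Capsule pooling: at each (h, w) position keep the single capsule with the
        # largest weight across the m feature maps, shrinking m*h*w capsules to h*w.
        idx = weights.squeeze(-1).argmax(dim=1)                       # (b, h, w)
        idx = idx[:, None, :, :, None].expand(b, 1, h, w, d)
        pooled = torch.gather(attended, dim=1, index=idx).squeeze(1)  # (b, h, w, d)

        # Flatten spatial positions into a reduced set of capsules for routing.
        return pooled.reshape(b, h * w, d)


if __name__ == "__main__":
    caps = torch.randn(2, 32, 6, 6, 8)                # toy primary-capsule tensor
    layer = CapsuleAttentionPooling(capsule_dim=8)
    print(layer(caps).shape)                          # torch.Size([2, 36, 8])

Selecting a single capsule rather than averaging keeps each retained capsule a complete pose vector, which matches the abstract's claim that a small number of important capsules can carry the effective feature information while reducing the parameters passed to routing.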

Key words: deep learning, image classification, capsule network, capsule pooling, attention mechanism, robustness, adversarial attack, lightweight

CLC number:

  • TP391.4