Chinese Journal of Liquid Crystals and Displays, Vol. 39, Issue 9, 1223 (2024)
Hourglass attention and progressive hybrid Transformer for image classification
Yanfei PENG, Yun CUI, Kun CHEN, Yongxin LI. Hourglass attention and progressive hybrid Transformer for image classification[J]. Chinese Journal of Liquid Crystals and Displays, 2024, 39(9): 1223
Received: Oct. 25, 2023
Accepted: --
Published Online: Nov. 13, 2024
Author Email: Yun CUI (1727015916@qq.com)