Squeeze-and-Excitation Networks

Year: 2018
Authors: Jie Hu, Li Shen, Gang Sun
Journal: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Programming languages: C, CUDA

Convolutional neural networks are built upon the convolution operation, which extracts informative features by fusing spatial and channel-wise information together within local receptive fields. In order to boost the representational power of a network, several recent approaches have shown the benefit of enhancing spatial encoding. In this work, we focus on the channel relationship and propose a novel architectural unit, which we term the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We demonstrate that by stacking these blocks together, we can construct SENet architectures that generalise extremely well across challenging datasets. Crucially, we find that SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at minimal additional computational cost.
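As a rough illustration of the block described in the abstract, the sketch below implements the forward pass of a single SE block on the CPU in plain C: squeeze (global average pooling per channel), excitation (two fully connected layers with ReLU and sigmoid), and channel-wise rescaling. The weight layouts (`w1`, `w2`), the reduction ratio `r`, and the toy inputs in `main` are illustrative assumptions, not the authors' released C/CUDA implementation.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal CPU-side sketch of an SE block forward pass (assumed layouts).
 * x:  input feature map, [C][H*W]
 * w1: excitation weights, [C/r][C]   (squeezed descriptor -> bottleneck)
 * w2: excitation weights, [C][C/r]   (bottleneck -> per-channel gates)
 */
static void se_block_forward(const float *x, float *y,
                             const float *w1, const float *w2,
                             int C, int H, int W, int r)
{
    int hw = H * W;
    int Cr = C / r;                        /* bottleneck width */
    float *z = malloc(C  * sizeof(float)); /* squeezed channel descriptor */
    float *u = malloc(Cr * sizeof(float)); /* bottleneck activations */
    float *s = malloc(C  * sizeof(float)); /* per-channel scales */

    /* Squeeze: global average pooling over each channel's spatial extent. */
    for (int c = 0; c < C; ++c) {
        float sum = 0.0f;
        for (int i = 0; i < hw; ++i)
            sum += x[c * hw + i];
        z[c] = sum / (float)hw;
    }

    /* Excitation: FC -> ReLU -> FC -> sigmoid yields channel-wise gates. */
    for (int j = 0; j < Cr; ++j) {
        float a = 0.0f;
        for (int c = 0; c < C; ++c)
            a += w1[j * C + c] * z[c];
        u[j] = a > 0.0f ? a : 0.0f;        /* ReLU */
    }
    for (int c = 0; c < C; ++c) {
        float a = 0.0f;
        for (int j = 0; j < Cr; ++j)
            a += w2[c * Cr + j] * u[j];
        s[c] = 1.0f / (1.0f + expf(-a));   /* sigmoid */
    }

    /* Scale: recalibrate the input by the learned channel gates. */
    for (int c = 0; c < C; ++c)
        for (int i = 0; i < hw; ++i)
            y[c * hw + i] = x[c * hw + i] * s[c];

    free(z); free(u); free(s);
}

int main(void)
{
    /* Toy sizes and deterministic dummy weights, purely for demonstration. */
    enum { C = 4, H = 2, W = 2, R = 2 };
    float x[C * H * W], y[C * H * W];
    float w1[(C / R) * C], w2[C * (C / R)];

    for (int i = 0; i < C * H * W; ++i)   x[i]  = (float)(i % 7) - 3.0f;
    for (int i = 0; i < (C / R) * C; ++i) w1[i] = 0.1f * (float)(i % 5);
    for (int i = 0; i < C * (C / R); ++i) w2[i] = 0.1f * (float)(i % 3);

    se_block_forward(x, y, w1, w2, C, H, W, R);

    for (int c = 0; c < C; ++c)
        printf("channel %d: x[0] = %6.3f -> y[0] = %6.3f\n",
               c, x[c * H * W], y[c * H * W]);
    return 0;
}
```

In a full SENet, this recalibration would be inserted after the convolutional transformation of each residual or inception block; the extra cost is dominated by the two small fully connected layers, which is why the overhead stays minimal.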
