arxiv:2101.08525

GhostSR: Learning Ghost Features for Efficient Image Super-Resolution

Published on Jan 21, 2021

AI-generated summary

The proposed method uses shift operations to generate redundant (ghost) features in SISR, reducing computational costs while maintaining performance.

Abstract

Modern single image super-resolution (SISR) systems based on convolutional neural networks (CNNs) achieve impressive performance but require huge computational costs. The problem of feature redundancy has been well studied in visual recognition tasks, but is rarely discussed in SISR. Based on the observation that many features in SISR models are also similar to each other, we propose to use the shift operation to generate the redundant features (i.e., ghost features). Compared with depth-wise convolution, which is time-consuming on GPU-like devices, the shift operation can bring practical inference acceleration for CNNs on common hardware. We analyze the benefits of the shift operation for the SISR task and make the shift orientation learnable via the Gumbel-Softmax trick. In addition, a clustering procedure based on pre-trained models is explored to identify the intrinsic filters that generate intrinsic features. The ghost features are then derived by moving these intrinsic features along a specific orientation, and the complete output features are constructed by concatenating the intrinsic and ghost features. Extensive experiments on several benchmark models and datasets demonstrate that both non-compact and lightweight SISR models embedded with the proposed method achieve performance comparable to their baselines, with a large reduction in parameters, FLOPs, and GPU inference latency. For instance, we reduce the parameters of the ×2 EDSR network by 46%, FLOPs by 46%, and GPU inference latency by 42%, with essentially lossless performance.
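The abstract describes the core mechanism: intrinsic features produced by a regular convolution, ghost features obtained by shifting those intrinsic features along a learnable orientation chosen with the Gumbel-Softmax trick, and a final concatenation of the two. The paper's own code is not reproduced here; the following is a minimal, hypothetical PyTorch sketch of that idea. The module name GhostShiftConv, the fixed set of four candidate orientations, the ratio parameter, and the use of torch.roll (which wraps around at the borders rather than zero-padding) are all simplifying assumptions, and the clustering step used to identify intrinsic filters in pre-trained models is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GhostShiftConv(nn.Module):
    """Hypothetical ghost-feature layer: a regular convolution produces the
    "intrinsic" channels, and the remaining "ghost" channels are obtained by
    shifting those intrinsic features along a learnable orientation."""

    def __init__(self, in_channels, out_channels, kernel_size=3, ratio=2, shift=1):
        super().__init__()
        # Only a fraction (1/ratio) of the output channels come from a real conv.
        self.intrinsic_channels = out_channels // ratio
        self.ghost_channels = out_channels - self.intrinsic_channels
        self.conv = nn.Conv2d(in_channels, self.intrinsic_channels,
                              kernel_size, padding=kernel_size // 2)
        # Candidate shift orientations: up, down, left, right (in pixels).
        self.offsets = [(-shift, 0), (shift, 0), (0, -shift), (0, shift)]
        # Learnable logits over the candidate orientations.
        self.logits = nn.Parameter(torch.zeros(len(self.offsets)))

    def forward(self, x):
        intrinsic = self.conv(x)
        # Gumbel-Softmax gives a (near) one-hot, differentiable choice of
        # orientation; hard=True uses the straight-through estimator.
        weights = F.gumbel_softmax(self.logits, tau=1.0, hard=True)
        # torch.roll wraps around at the borders; a faithful shift operation
        # would zero-pad instead -- this is a simplification for brevity.
        shifted = torch.stack(
            [torch.roll(intrinsic, shifts=off, dims=(2, 3)) for off in self.offsets])
        ghost = (weights.view(-1, 1, 1, 1, 1) * shifted).sum(dim=0)
        ghost = ghost[:, : self.ghost_channels]
        # Final output: intrinsic features concatenated with their ghosts.
        return torch.cat([intrinsic, ghost], dim=1)


if __name__ == "__main__":
    layer = GhostShiftConv(64, 64)
    out = layer(torch.randn(1, 64, 48, 48))
    print(out.shape)  # -> torch.Size([1, 64, 48, 48])
```

In this sketch a single orientation is learned per layer; the paper instead learns shift orientations per ghost feature and selects intrinsic filters by clustering a pre-trained model, so this should be read only as an illustration of the shift-based ghost-feature idea.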

