Abstract
Deep learning has revolutionized image super-resolution, yet challenges persist in preserving intricate details and avoiding overly smooth reconstructions. In this work, we introduce a novel architecture, the Residue and Semantic Feature-based Dual Subpixel Generative Adversarial Network (RSF-DSGAN), which emphasizes the critical role of semantic information in addressing these issues. The proposed generator architecture is designed with two sequential stages: the Premier Residual Stage and the Deuxième Residual Stage. These stages are cascaded to form a dual-stage upsampling process, substantially augmenting the model’s capacity for feature learning. A central innovation of our approach is the integration of semantic information directly into the generator. Specifically, feature maps derived from a pre-trained network are fused with the primary feature maps of the first stage, enriching the generator with high-level contextual cues. This semantic infusion enhances the fidelity and sharpness of reconstructed images, particularly in preserving object details and textures. Inter- and intra-residual connections are employed within these stages to maintain high-frequency details and fine textures. Additionally, spectral normalization is introduced in the discriminator to stabilize training. Comprehensive evaluations, including visual comparisons and mean opinion scores, demonstrate that RSF-DSGAN, with its emphasis on semantic information, outperforms current state-of-the-art super-resolution methods.
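The spectral normalization mentioned for the discriminator is a standard stabilization technique: each weight matrix is divided by an estimate of its largest singular value, keeping the discriminator approximately 1-Lipschitz. A minimal NumPy sketch of this idea (the power-iteration estimate is the usual approach; this is an illustration, not the paper's implementation):

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Divide W by an estimate of its largest singular value,
    obtained via power iteration. This is the core operation of
    spectral normalization in GAN discriminators."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated spectral norm of W
    return W / sigma

# After normalization, the largest singular value is close to 1,
# so the layer's Lipschitz constant is bounded.
W = np.random.default_rng(1).normal(size=(64, 128))
W_sn = spectral_normalize(W)
print(round(float(np.linalg.norm(W_sn, 2)), 3))
```

In deep-learning frameworks this is typically applied as a reparameterization of every discriminator layer (e.g. `torch.nn.utils.spectral_norm` in PyTorch), with the power-iteration vectors carried across training steps so a single iteration per step suffices.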
| Original language | English |
|---|---|
| Article number | 104226 |
| Number of pages | 1 |
| Journal | Computer Vision and Image Understanding |
| Volume | 250 |
| Early online date | 11 Nov 2024 |
| DOIs | |
| Publication status | E-pub ahead of print - 11 Nov 2024 |
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
Keywords
- Super-resolution
- Convolutional Neural Networks
- Generative Adversarial Networks
- Residual learning
- Spectral normalization