SABA: Scale-adaptive Attention and Boundary Aware Network for real-time semantic segmentation

Luo, Huilan, Liu, Chunyan and Shark, Lik (2025) SABA: Scale-adaptive Attention and Boundary Aware Network for real-time semantic segmentation. Expert Systems with Applications, 282. ISSN 0957-4174. ORCID: 0000-0002-9156-2003

PDF (AAM) - Accepted Version
Restricted to Repository staff only until 23 April 2027.
Available under License Creative Commons Attribution Non-commercial No Derivatives.

18MB

Official URL: https://doi.org/10.1016/j.eswa.2025.127680

Abstract

Balancing accuracy and speed is crucial for semantic segmentation in autonomous driving. While various mechanisms have been explored to enhance segmentation accuracy in lightweight deep learning networks, adding more mechanisms does not always lead to better performance and often significantly increases processing time. This paper investigates a more effective and efficient integration of three key mechanisms—context, attention, and boundary—to improve real-time semantic segmentation of road scene images. Based on an analysis of recent fully convolutional encoder–decoder networks, we propose a novel Scale-adaptive Attention and Boundary Aware (SABA) segmentation network. SABA enhances context through a new pyramid structure with multi-scale residual learning, refines attention via scale-adaptive spatial relationships, and improves boundary delineation using progressive refinement with a dedicated loss function and learnable weights. Evaluations on the Cityscapes benchmark show that SABA outperforms current real-time semantic segmentation networks, achieving a mean intersection over union (mIoU) of up to 76.7% and improving accuracy for 17 out of 19 object classes. Moreover, it achieves this accuracy at an inference speed of up to 83.4 frames per second, significantly exceeding real-time video frame rates. The code is available at https://github.com/liuchunyan66/SABA.
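The abstract states that boundary delineation is improved with a dedicated loss function and learnable weights, but does not give the weighting scheme. As a purely illustrative sketch (not the authors' method), one common way to combine a segmentation loss and a boundary loss with learnable scalar weights is homoscedastic-uncertainty weighting, where each task carries a learnable log-variance `s_i` and the total loss is `sum_i exp(-s_i) * L_i + s_i`. All names and values below are hypothetical:

```python
import math

def combined_loss(losses, log_vars):
    """Combine per-task losses with learnable scalar weights.

    Illustrative only: this is the common uncertainty-weighting form
    L = sum_i exp(-s_i) * L_i + s_i, where each s_i is a learnable
    log-variance. The SABA paper states it uses learnable weights for
    its boundary loss but does not specify this exact formulation.
    """
    return sum(math.exp(-s) * l + s for l, s in zip(losses, log_vars))

# Hypothetical per-task losses: segmentation cross-entropy and a boundary loss.
seg_loss, boundary_loss = 0.85, 0.30
log_vars = [0.0, 0.0]  # learnable log-variances, initialised to 0

total = combined_loss([seg_loss, boundary_loss], log_vars)
# With both log-variances at 0, the weights are exp(0) = 1, so
# total = 0.85 + 0.30 = 1.15.
```

In a deep-learning framework the `log_vars` would be registered as trainable parameters and optimised jointly with the network, letting training rebalance the segmentation and boundary terms automatically.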
