Khan, Zulfiqar Ahmad, Ullah, Fath U Min ORCID: 0000-0002-1243-9358, Yar, Hikmat, Ullah, Waseem, Khan, Noman, Kim, Min Je and Baik, Sung Wook (2025) Optimized cross-module attention network and medium-scale dataset for effective fire detection. Pattern Recognition, 161, p. 111273. ISSN 0031-3203
Full text not available from this repository.
Official URL: https://doi.org/10.1016/j.patcog.2024.111273
Abstract
For over a decade, the computer vision community has shown keen interest in vision-based fire detection owing to its wide range of applications. Fire detection relies primarily on color features, which have enabled recent deep models to achieve reasonable performance. However, balancing a high fire detection rate against computational complexity on mainstream surveillance setups remains challenging. To establish a better tradeoff between model complexity and detection rate, this article develops an efficient and effective Cross Module Attention Network (CANet) for fire detection. CANet is developed from scratch with squeezing and expansive paths that focus on fire regions and their locations. Next, channel attention and Multi-Scale Feature Selection (MSFS) modules are integrated to identify the most important channels, selectively emphasize the contributions of feature maps, and enhance the discrimination between fire and non-fire objects. Furthermore, CANet is optimized for real-world applications by removing a significant number of parameters. Finally, we introduce a challenging database for fire classification comprising multiple classes and highly similar fire and non-fire object images. CANet improved accuracy by 2.5% on BWF, 2.2% on DQFF, 1.42% on LSFD, 1.8% on DSFD, and 1.14% on FG; additionally, it achieved 3.6 times higher FPS on resource-constrained devices compared to baseline methods.
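The abstract does not detail the internals of the channel attention or MSFS modules, so the following is only a minimal PyTorch sketch of the two ideas it names: squeeze-and-excitation style channel attention that reweights feature channels, and a hypothetical multi-scale feature selection block that fuses parallel convolutions with different receptive fields. The class names, reduction ratio, and kernel sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design):
    global average pooling followed by a small bottleneck MLP that
    produces per-channel weights used to rescale the feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: (B, C)
        w = self.fc(w).view(b, c, 1, 1)  # per-channel weights in (0, 1)
        return x * w                     # emphasize informative channels

class MultiScaleFeatureSelection(nn.Module):
    """Hypothetical MSFS block: parallel convolutions with 1x1, 3x3,
    and 5x5 kernels, fused back to the input width by a 1x1 conv."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)

# Example: apply both modules to a feature map from the squeezing path.
x = torch.randn(1, 64, 56, 56)
x = ChannelAttention(64)(x)
x = MultiScaleFeatureSelection(64)(x)
print(x.shape)  # torch.Size([1, 64, 56, 56])
```

Both modules preserve the input tensor shape, so under these assumptions they could be dropped between encoder (squeezing) and decoder (expansive) stages without altering the surrounding architecture.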