FCN-Transformer Feature Fusion for Polyp Segmentation

Sanderson, Edward (ORCID: 0000-0002-3794-5513) and Matuszewski, Bogdan (ORCID: 0000-0001-7195-2509) (2022) FCN-Transformer Feature Fusion for Polyp Segmentation. Medical Image Understanding and Analysis.

PDF (AAM) - Accepted Version. Restricted to Repository staff only. 2MB
PDF (Version of Record) - Published Version. Available under License Creative Commons Attribution. 3MB

Official URL: https://doi.org/10.1007/978-3-031-12053-4_65

Abstract

Colonoscopy is widely recognised as the gold standard procedure for the early detection of colorectal cancer (CRC). Segmentation is valuable for two significant clinical applications, namely lesion detection and classification, providing means to improve accuracy and robustness. The manual segmentation of polyps in colonoscopy images is time-consuming. As a result, the use of deep learning (DL) for automation of polyp segmentation has become important. However, DL-based solutions can be vulnerable to overfitting and the resulting inability to generalise to images captured by different colonoscopes. Recent transformer-based architectures for semantic segmentation both achieve higher performance and generalise better than alternatives; however, they typically predict a segmentation map of h/4 × w/4 spatial dimensions for an h × w input image. To this end, we propose a new architecture for full-size segmentation which leverages the strengths of a transformer in extracting the most important features for segmentation in a primary branch, while compensating for its limitations in full-size prediction with a secondary fully convolutional branch. The resulting features from both branches are then fused for final prediction of an h × w segmentation map. We demonstrate our method’s state-of-the-art performance with respect to the mDice, mIoU, mPrecision, and mRecall metrics, on both the Kvasir-SEG and CVC-ClinicDB dataset benchmarks. Additionally, we train the model on each of these datasets and evaluate on the other to demonstrate its superior generalisation performance.
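
The following is a minimal PyTorch sketch of the dual-branch idea described in the abstract: a coarse primary branch standing in for the transformer (producing h/4 × w/4 features), a fully convolutional secondary branch operating at full resolution, and a fusion head predicting an h × w segmentation map. All module choices, channel widths, and names here are illustrative assumptions and do not reflect the authors' actual implementation.

    # Illustrative sketch only: dual-branch feature fusion for full-size segmentation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CoarseBranch(nn.Module):
        """Stand-in for the transformer branch; outputs features at h/4 x w/4."""
        def __init__(self, in_ch=3, out_ch=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        def forward(self, x):
            return self.net(x)  # spatial size: h/4 x w/4

    class FullSizeBranch(nn.Module):
        """Fully convolutional branch; keeps full h x w resolution."""
        def __init__(self, in_ch=3, out_ch=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            )
        def forward(self, x):
            return self.net(x)  # spatial size: h x w

    class DualBranchFusion(nn.Module):
        """Upsample coarse features to h x w, concatenate with full-size features,
        and predict a full-size single-channel polyp mask (logits)."""
        def __init__(self):
            super().__init__()
            self.coarse = CoarseBranch()
            self.full = FullSizeBranch()
            self.head = nn.Sequential(
                nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, 1),
            )
        def forward(self, x):
            h, w = x.shape[-2:]
            coarse = F.interpolate(self.coarse(x), size=(h, w),
                                   mode="bilinear", align_corners=False)
            fused = torch.cat([coarse, self.full(x)], dim=1)
            return self.head(fused)  # logits at full h x w resolution

    if __name__ == "__main__":
        model = DualBranchFusion()
        out = model(torch.randn(1, 3, 352, 352))
        print(out.shape)  # torch.Size([1, 1, 352, 352])

In this sketch the fusion is a simple channel-wise concatenation followed by a small convolutional head; the paper's architecture fuses the two branches' features before the final full-size prediction, but the specific fusion modules used there are not reproduced here.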
