Unsupervised domain adaptation to classify medical images using zero-bias convolutional auto-encoders and context-based feature augmentation
Publication Type
Journal Publication
Abstract
The accuracy and robustness of image classification with supervised deep learning depend on the availability of large-scale labelled training data. In medical imaging, such large labelled datasets are scarce, largely because of the complexity of manual annotation. Deep convolutional neural networks (CNNs) with transferable knowledge have been employed to address limited annotated data through: 1) fine-tuning generic knowledge with a relatively small amount of labelled medical imaging data, and 2) learning image representations that are invariant across domains. These approaches, however, still rely on labelled medical image data. Our aim is to use a new hierarchical unsupervised feature extractor to reduce reliance on annotated training data. Our unsupervised approach uses a multi-layer zero-bias convolutional auto-encoder that constrains the transformation of generic features from a pre-trained CNN (for natural images) into non-redundant and locally relevant features for the medical image data. We also propose a context-based feature augmentation scheme to improve the discriminative power of the feature representation. We evaluated our approach on three public medical image datasets and compared it to state-of-the-art supervised CNNs. Our unsupervised approach achieved better accuracy than other conventional unsupervised methods and baseline fine-tuned CNNs.
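The abstract gives no implementation detail, but the core idea it describes, re-encoding features from a pre-trained CNN with a bias-free convolutional auto-encoder trained only on a reconstruction objective, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' code; the class and function names (ZeroBiasConvAE, train_step), the single encoder/decoder pair, and the channel sizes are all assumptions introduced here for illustration.

```python
# Hypothetical sketch (not the published method's code): a zero-bias
# convolutional auto-encoder layer that re-encodes feature maps produced by a
# pre-trained CNN. The "zero-bias" constraint is modelled by bias=False on the
# convolutions; channel sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZeroBiasConvAE(nn.Module):
    def __init__(self, in_channels: int = 512, code_channels: int = 128):
        super().__init__()
        # Encoder: compress pre-trained CNN feature maps into a smaller code,
        # with no additive bias terms.
        self.encoder = nn.Conv2d(in_channels, code_channels,
                                 kernel_size=3, padding=1, bias=False)
        # Decoder: reconstruct the input feature maps from the code.
        self.decoder = nn.Conv2d(code_channels, in_channels,
                                 kernel_size=3, padding=1, bias=False)

    def forward(self, feats: torch.Tensor):
        code = F.relu(self.encoder(feats))
        recon = self.decoder(code)
        return code, recon


def train_step(model: ZeroBiasConvAE, feats: torch.Tensor,
               optimizer: torch.optim.Optimizer) -> float:
    # Unsupervised objective: reconstruction error only, so no labels for the
    # target medical images are required.
    _, recon = model(feats)
    loss = F.mse_loss(recon, feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a multi-layer version of this idea, the learned code of one such auto-encoder would serve as input to the next, building the hierarchical unsupervised feature extractor the abstract refers to.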
Journal
IEEE Transactions on Medical Imaging
Publication Name
N/A
Volume
39
ISBN/ISSN
1558-254X
Edition
N/A
Issue
7
Pages Count
10
Location
N/A
Publisher
IEEE
Publisher Url
N/A
Publisher Location
N/A
Publish Date
N/A
Url
N/A
Date
N/A
EISSN
N/A
DOI
10.1109/TMI.2020.2971258