Link to Pubmed [PMID] – 36044485
Link to DOI – 10.1109/TMI.2022.3203022
The method proposed in this paper is a robust combination of multi-task learning and unsupervised domain adaptation for segmenting amoeboid cells in microscopy images. A highlight of this work is the manner in which the model's hyperparameters are estimated. The drawbacks of ad-hoc parameter tuning are well known, yet the issue remains largely unaddressed in the context of CNN-based segmentation. Using a novel min-max formulation of the segmentation cost function, the proposed method estimates the model's hyperparameters analytically while simultaneously learning the CNN weights during training. This end-to-end framework provides a consolidated mechanism that harnesses multi-task learning to isolate and segment clustered cells in low-contrast brightfield images, and leverages deep domain adaptation to segment fluorescent cells without explicit pixel-level re-annotation of the data. Experimental validation on multi-cellular images strongly suggests the effectiveness of the proposed technique, with quantitative results showing at least a 15% improvement in cell segmentation on brightfield images and a 10% improvement on fluorescence images compared to contemporary supervised segmentation methods.
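To make the joint estimation idea concrete, below is a minimal, hypothetical sketch of training a two-head segmentation CNN whose per-task weights are re-estimated in closed form at every iteration while the network weights follow a gradient step. This is not the paper's exact min-max formulation: the network `TinySegNet`, the two tasks (region mask and boundary map), and the softmax-based analytic update in `closed_form_weights` are illustrative stand-ins chosen only to show the alternating structure of analytic hyperparameter updates interleaved with CNN weight learning.

```python
# Hypothetical sketch (not the paper's exact formulation): a two-task
# segmentation loss whose per-task weights are computed analytically at
# each step, while the CNN weights are learned by gradient descent.
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """Stand-in CNN with a shared encoder and two task heads
    (e.g., a cell-region mask and a cell-boundary map)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head_mask = nn.Conv2d(16, 1, 1)      # task 1: region mask
        self.head_boundary = nn.Conv2d(16, 1, 1)  # task 2: boundary map

    def forward(self, x):
        features = self.encoder(x)
        return self.head_mask(features), self.head_boundary(features)


def closed_form_weights(losses):
    """Illustrative analytic update: maximizing a simplex-constrained
    weighted sum of the task losses with an entropy regularizer yields a
    softmax over the current loss values. This stands in for the paper's
    analytic hyperparameter step and is an assumption, not its formula."""
    with torch.no_grad():
        return torch.softmax(torch.stack(losses), dim=0)


model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Dummy brightfield batch with two pixel-level targets (illustrative data).
x = torch.randn(4, 1, 64, 64)
y_mask = torch.randint(0, 2, (4, 1, 64, 64)).float()
y_boundary = torch.randint(0, 2, (4, 1, 64, 64)).float()

for step in range(100):
    pred_mask, pred_boundary = model(x)
    losses = [bce(pred_mask, y_mask), bce(pred_boundary, y_boundary)]
    lam = closed_form_weights(losses)            # hyperparameters: analytic update
    total = sum(w * l for w, l in zip(lam, losses))
    optimizer.zero_grad()
    total.backward()                             # CNN weights: gradient update
    optimizer.step()
```

The point of the sketch is the alternation: the task-weighting hyperparameters are never hand-tuned or grid-searched, but recomputed from the current losses inside the training loop, so the segmentation network and its hyperparameters are obtained in a single end-to-end run.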