Abstract

OCT fluid segmentation is a crucial task for diagnosis and therapy in ophthalmology. Convolutional neural networks (CNNs) supervised by pixel-wise annotated masks have achieved great success in OCT fluid segmentation. However, obtaining pixel-wise masks from OCT images is time-consuming, expensive, and requires expertise. This paper proposes an Intra- and inter-Slice Contrastive Learning Network (ISCLNet) for OCT fluid segmentation with only point supervision. Our ISCLNet learns visual representations by designing contrastive tasks that exploit the inherent similarity or dissimilarity in unlabeled OCT data. Specifically, we propose an intra-slice contrastive learning strategy to leverage the fluid-background similarity and the retinal layer-background dissimilarity. Moreover, we construct an inter-slice contrastive learning architecture to learn the similarity of adjacent OCT slices within one OCT volume. Finally, an end-to-end model combining the intra- and inter-slice contrastive learning processes learns to segment fluid under point supervision. The experimental results on two public OCT fluid segmentation datasets (i.e., AI Challenger and RETOUCH) demonstrate that the ISCLNet bridges the gap between fully-supervised and weakly-supervised OCT fluid segmentation and outperforms other well-known point-supervised segmentation methods.
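
To make the inter-slice idea concrete, the sketch below shows a generic InfoNCE-style contrastive loss of the kind commonly used to pull embeddings of adjacent slices together while pushing apart embeddings of other slices. This is an illustrative assumption, not the paper's exact ISCLNet loss; the function and variable names (info_nce_loss, embed_a, embed_b, temperature) are hypothetical.

```python
# Illustrative sketch only: a generic InfoNCE-style contrastive loss,
# NOT the exact ISCLNet formulation. All names here are hypothetical.
import torch
import torch.nn.functional as F


def info_nce_loss(embed_a: torch.Tensor, embed_b: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Contrast paired embeddings, e.g. features of adjacent OCT slices.

    embed_a, embed_b: (N, D) tensors; row i of embed_a and row i of embed_b
    form a positive pair, while all other rows serve as negatives.
    """
    a = F.normalize(embed_a, dim=1)        # work in cosine-similarity space
    b = F.normalize(embed_b, dim=1)
    logits = a @ b.t() / temperature       # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy: each slice should match its adjacent slice.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Random features stand in for encoder outputs of adjacent slices.
    feats_slice_i = torch.randn(8, 128)    # features of slice i in a volume
    feats_slice_j = torch.randn(8, 128)    # features of adjacent slice i+1
    print(info_nce_loss(feats_slice_i, feats_slice_j).item())
```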