Masked Autoencoders Pre-training in Multiple Instance Learning for Whole Slide Image Classification

Jianpeng An, Yunhao Bai, Huazhen Chen, Zhongke Gao, Geert Litjens


End-to-end learning with whole-slide digital pathology images is challenging due to their size, which is on the order of gigapixels. In this paper, we propose a novel weakly-supervised learning strategy that combines masked autoencoders (MAE) with multiple instance learning (MIL). We use the output tokens of a self-supervised, pre-trained MAE as instances and design a token selection module to reduce the impact of global average pooling. We evaluate our framework on whole-slide image classification on the Camelyon16 dataset, showing improved performance compared to the state-of-the-art CLAM algorithm.
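A minimal sketch of the pipeline the abstract describes, assuming a frozen, self-supervised MAE encoder whose output patch tokens serve as MIL instances, and a hypothetical attention-based token selection module that pools only the top-k tokens instead of applying global average pooling. The class name, dimensions, and top-k mechanism are illustrative assumptions, not the authors' released code.

# Hypothetical sketch: MAE tokens as MIL instances with top-k token selection.
import torch
import torch.nn as nn

class TokenSelectionMIL(nn.Module):
    """Bag-level classifier over MAE patch tokens (illustrative, not the paper's code)."""

    def __init__(self, token_dim: int = 768, num_classes: int = 2, top_k: int = 64):
        super().__init__()
        self.top_k = top_k
        # Scores each token's relevance to the slide-level label.
        self.scorer = nn.Sequential(
            nn.Linear(token_dim, 128),
            nn.Tanh(),
            nn.Linear(128, 1),
        )
        self.classifier = nn.Linear(token_dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (num_tokens, token_dim) -- one bag of MAE output tokens for a slide.
        scores = self.scorer(tokens).squeeze(-1)            # (num_tokens,)
        k = min(self.top_k, tokens.shape[0])
        top_scores, idx = scores.topk(k)                    # keep the k most salient tokens
        weights = torch.softmax(top_scores, dim=0)          # attention weights over top-k
        bag = (weights.unsqueeze(-1) * tokens[idx]).sum(0)  # weighted pooling -> (token_dim,)
        return self.classifier(bag)                         # slide-level logits

if __name__ == "__main__":
    # Example: one slide yielding 4096 tokens from a frozen MAE encoder (dim 768).
    model = TokenSelectionMIL()
    tokens = torch.randn(4096, 768)  # stand-in for MAE encoder output
    logits = model(tokens)
    print(logits.shape)  # torch.Size([2])

Scoring every token and softmax-weighting only the top-k keeps the bag representation dominated by the most salient patches, which is one plausible way to "reduce the impact of global average pooling" as the abstract puts it.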

Wednesday 6th July
Poster Session 1.1 - onsite 15:20 - 16:20, virtual 11:00 - 12:00 (UTC+2)