Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis
Simon Dahan, Abdulah Fawaz, Logan Zane John Williams, Chunhui Yang, Timothy S. Coalson, Matthew Glasser, A David Edwards, Daniel Rueckert, Emma Claire Robinson
The extension of convolutional neural networks (CNNs) to non-Euclidean geometries has led to multiple frameworks for studying manifolds. Many of these methods show design limitations that result in poor modelling of long-range associations, as the generalisation of convolutions to irregular surfaces is non-trivial. Motivated by the success of attention modelling in computer vision, we translate convolution-free vision transformer approaches to surface data, introducing a domain-agnostic architecture for studying any surface data projected onto a spherical manifold. Here, surface patching is achieved by representing spherical data as a sequence of triangular patches extracted from a subdivided icosphere. A transformer model encodes the sequence of patches via successive multi-head self-attention layers, while preserving the sequence resolution. We validate the performance of the proposed Surface Vision Transformer (SiT) on the task of phenotype regression from cortical surface metrics derived from the Developing Human Connectome Project (dHCP). Experiments show that the SiT generally outperforms surface CNNs, while performing comparably on registered and unregistered data, suggesting that the network can achieve transformation invariance. Analysis of the transformer attention maps further illustrates the learning of long-range associations between distant cortical regions and offers strong potential for characterising subtle patterns of cognitive development.
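As a rough illustration of the pipeline described above, the sketch below is a minimal, hypothetical PyTorch reimplementation (not the authors' released code): each triangular icosphere patch is flattened into a token, linearly projected, prepended with a learnable regression token, and encoded by standard multi-head self-attention layers feeding a linear regression head. All names, patch counts, and dimensions (e.g. 320 patches of 153 vertices x 4 cortical metrics) are illustrative assumptions.

```python
# Minimal SiT-style sketch (illustrative only, not the authors' implementation).
# Cortical metrics sampled on an icosphere are grouped into triangular patches,
# flattened into tokens, and encoded with a standard transformer.
import torch
import torch.nn as nn


class SurfaceViT(nn.Module):
    def __init__(self, num_patches=320, patch_dim=153 * 4, embed_dim=192,
                 depth=12, num_heads=3, num_outputs=1):
        super().__init__()
        # Linear projection of each flattened triangular patch to a token.
        self.patch_embed = nn.Linear(patch_dim, embed_dim)
        # Learnable regression token plus positional embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        # Stack of multi-head self-attention layers; no downsampling, so the
        # token sequence keeps its resolution through the whole encoder.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=4 * embed_dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_outputs)  # phenotype regression

    def forward(self, x):
        # x: (batch, num_patches, patch_dim), each patch a flattened triangle
        # of vertices x channels (here assumed 153 vertices x 4 metrics).
        tokens = self.patch_embed(x)
        cls = self.cls_token.expand(tokens.shape[0], -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])  # read out the regression token


# Usage with random stand-in data for two subjects.
model = SurfaceViT()
patches = torch.randn(2, 320, 153 * 4)
print(model(patches).shape)  # torch.Size([2, 1])
```

Because the encoder never downsamples the token sequence, per-patch attention weights remain available at full sequence resolution, which is what enables the kind of attention-map analysis of long-range cortical associations reported in the abstract.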
Wednesday 6th July
Poster Session 1.2 - onsite 11:00 - 12:00, virtual 15:20 - 16:20 (UTC+2)