Linear evaluation
66 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Bootstrap your own latent: A new approach to self-supervised Learning
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
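As a rough illustration of the objective described above, the snippet below is a minimal PyTorch-style sketch, not the authors' implementation: the toy MLP online/target networks, the 512-dimensional inputs, and the momentum value 0.99 are all illustrative assumptions.

```python
# Minimal sketch of a BYOL-style objective (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 128
online = nn.Sequential(nn.Linear(512, dim), nn.ReLU(), nn.Linear(dim, dim))     # online network
target = nn.Sequential(nn.Linear(512, dim), nn.ReLU(), nn.Linear(dim, dim))     # momentum (EMA) copy
predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))  # online-side predictor
target.load_state_dict(online.state_dict())
for p in target.parameters():
    p.requires_grad = False  # the target network is only updated via the EMA step below

def byol_loss(view_a, view_b):
    """Predict the target projection of one view from the online projection of the other."""
    p = F.normalize(predictor(online(view_a)), dim=-1)
    with torch.no_grad():                       # no gradients flow into the target network
        z = F.normalize(target(view_b), dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()   # equivalent to MSE between unit vectors

# One step on a toy batch standing in for two augmented views of the same images.
x1, x2 = torch.randn(32, 512), torch.randn(32, 512)
loss = byol_loss(x1, x2) + byol_loss(x2, x1)    # symmetrized loss
loss.backward()

# Exponential moving average update of the target network (momentum 0.99 assumed).
with torch.no_grad():
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(0.99).add_(po, alpha=0.01)
```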
Emerging Properties in Self-Supervised Vision Transformers
In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets).

Self-Supervised Learning with Swin Transformers
We are witnessing a modeling shift from CNNs to Transformers in computer vision.
Contrastive Multi-View Representation Learning on Graphs
We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol.
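The linear evaluation protocol referenced here is commonly implemented by freezing the pretrained encoder and fitting only a linear classifier on its features; the classifier's test accuracy is reported as the quality of the representation. A minimal scikit-learn sketch, where the random arrays stand in for features extracted by a frozen encoder:

```python
# Minimal sketch of the linear evaluation protocol: the pretrained encoder is
# frozen and only a linear classifier is fit on its features. The feature
# arrays and classifier settings here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 256))     # features from the frozen encoder
train_labels = rng.integers(0, 10, size=1000)
test_feats = rng.normal(size=(200, 256))
test_labels = rng.integers(0, 10, size=200)

# A linear classifier (multinomial logistic regression) on top of frozen features;
# its held-out accuracy is the "linear evaluation" score of the representation.
clf = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
print("linear eval accuracy:", clf.score(test_feats, test_labels))
```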
Solo-learn: A Library of Self-supervised Methods for Visual Representation Learning
This paper presents solo-learn, a library of self-supervised methods for visual representation learning.
Learning Representations by Maximizing Mutual Information Across Views
Following our proposed approach, we develop a model which learns image representations that significantly outperform prior methods on the tasks we consider.
BYOL works even without batch statistics
Bootstrap Your Own Latent (BYOL) is a self-supervised learning approach for image representation.
Matrix Information Theory for Self-Supervised Learning
Inspired by this framework, we introduce Matrix-SSL, a novel approach that leverages matrix information theory to interpret the maximum entropy encoding loss as matrix uniformity loss.
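The blurb mentions a matrix uniformity loss; the sketch below is not the paper's matrix-information-theoretic objective but a simplified decorrelation-style penalty that pushes the batch feature covariance toward the identity, included only to illustrate the general idea of feature uniformity.

```python
# Illustrative stand-in only: a decorrelation-style uniformity penalty that
# drives the batch feature covariance toward the identity matrix. Matrix-SSL's
# actual loss is derived from matrix information theory and is not reproduced here.
import torch

def covariance_uniformity_penalty(z):
    """Frobenius distance between the standardized feature covariance and identity."""
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + 1e-6)  # standardize each feature dimension
    cov = (z.T @ z) / (z.shape[0] - 1)               # d x d correlation-like matrix
    eye = torch.eye(cov.shape[0], device=cov.device)
    return ((cov - eye) ** 2).sum()

# Toy batch of 256 feature vectors with 64 dimensions (shapes are assumptions).
penalty = covariance_uniformity_penalty(torch.randn(256, 64))
print(float(penalty))
```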
How Useful is Self-Supervised Pretraining for Visual Tasks?
We investigate what factors may play a role in the utility of these pretraining methods for practitioners.