A Hopfield Layer is a module that enables a network to associate two sets of vectors. This general functionality allows for transformer-like self-attention, encoder-decoder attention, time series prediction (possibly with positional encoding), sequence analysis, multiple instance learning, learning with point sets, combining data sources by association, constructing a memory, averaging and pooling operations, and more.
In particular, the Hopfield layer can readily be used as a plug-in replacement for existing layers such as pooling layers (max-pooling or average pooling), permutation equivariant layers, GRU & LSTM layers, and attention layers. The Hopfield layer is based on modern Hopfield networks with continuous states, which have very high storage capacity and converge after a single update.
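The single-step convergence mentioned above comes from the update rule of the underlying modern Hopfield network, ξ_new = X softmax(β Xᵀ ξ), where the columns of X are the stored patterns and ξ is the state (query) vector. A minimal NumPy sketch of this rule, with illustrative names (the function and the β value here are not from the paper's code):

```python
import numpy as np

def hopfield_update(xi, X, beta=1.0):
    """One update of a modern continuous Hopfield network.

    xi: (d,) state/query vector; X: (d, N) matrix of N stored patterns.
    Returns X @ softmax(beta * X.T @ xi), a convex combination of the
    stored patterns that, for large beta, is dominated by the pattern
    most similar to the query.
    """
    scores = beta * (X.T @ xi)      # (N,) similarities to stored patterns
    scores -= scores.max()          # subtract max for numerical stability
    p = np.exp(scores)
    p /= p.sum()                    # softmax over the stored patterns
    return X @ p

# Retrieve a stored pattern from a noisy query.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 5))                  # 5 stored patterns, dim 64
query = X[:, 2] + 0.1 * rng.standard_normal(64)   # corrupted copy of pattern 2
retrieved = hopfield_update(query, X, beta=8.0)
print(np.argmax(X.T @ retrieved))                 # index of the nearest stored pattern
```

Note that for β = 1/√d and queries/patterns produced by learned projections, this update is exactly the softmax attention mechanism of the transformer, which is what makes the layer a drop-in replacement for attention.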
Source: Hopfield Networks is All You Need
| Task | Papers | Share |
|---|---|---|
| Action Unit Detection | 1 | 20.00% |
| Facial Action Unit Detection | 1 | 20.00% |
| Immune Repertoire Classification | 1 | 20.00% |
| Multiple Instance Learning | 1 | 20.00% |
| Retrieval | 1 | 20.00% |