| Training Techniques | Weight Decay, SGD with Momentum |
|---|---|
| Architecture | 1x1 Convolution, Fire Module, Batch Normalization, Convolution, Dropout, Global Average Pooling, Residual Connection, ReLU, Max Pooling, Softmax |
| ID | squeezenet1_0 |

| Training Techniques | Weight Decay, SGD with Momentum |
|---|---|
| Architecture | 1x1 Convolution, Fire Module, Batch Normalization, Convolution, Dropout, Global Average Pooling, Residual Connection, ReLU, Max Pooling, Softmax |
| ID | squeezenet1_1 |
SqueezeNet is a convolutional neural network that employs design strategies to reduce the number of parameters, most notably the Fire module: a "squeeze" layer of 1x1 convolutions that shrinks the channel count feeding a larger "expand" layer of mixed 1x1 and 3x3 convolutions.
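The Fire module itself is compact. Below is a minimal PyTorch sketch following the structure described in the paper; the class and argument names are illustrative, not torchvision's public API:

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Squeeze layer (1x1 convs) feeding parallel 1x1 and 3x3 expand branches."""
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        # Squeeze: 1x1 convolutions cut the channel count seen by the expand layer
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        # Expand: a mix of cheap 1x1 and spatial 3x3 convolutions
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: the first Fire module of SqueezeNet 1.0 (96 -> 16 -> 64+64 channels)
fire2 = Fire(96, 16, 64, 64)
out = fire2(torch.randn(1, 96, 54, 54))  # -> shape (1, 128, 54, 54)
```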
To load a pretrained model:
```python
import torchvision.models as models
squeezenet = models.squeezenet1_0(pretrained=True)
```
Replace the model name with the variant you want to use, e.g. `squeezenet1_0`. You can find the IDs in the model summaries at the top of this page.
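Once loaded, the model can be used for inference in the usual way. Here is a minimal sketch, assuming standard ImageNet preprocessing (the normalization constants below are the usual torchvision defaults) and a hypothetical input file `example.jpg`:

```python
import torch
from PIL import Image
from torchvision import transforms

# Standard ImageNet preprocessing (assumed; matches the usual torchvision defaults)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

squeezenet.eval()  # disable dropout for inference
img = Image.open("example.jpg")  # hypothetical input image
batch = preprocess(img).unsqueeze(0)  # add a batch dimension: (1, 3, 224, 224)
with torch.no_grad():
    logits = squeezenet(batch)
print(logits.argmax(dim=1))  # predicted ImageNet class index
```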
To evaluate the model, use the image classification recipes from the library.
```
python train.py --test-only --model='<model_name>'
```
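For instance, to evaluate the pretrained SqueezeNet 1.0 weights, an invocation along these lines should work; the `--pretrained` and `--data-path` flags are assumptions based on the reference classification script:

```
python train.py --test-only --model='squeezenet1_0' --pretrained --data-path=/path/to/imagenet
```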
To train a new model from scratch, you can follow the torchvision recipe on GitHub.
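A single-node launch might look like the sketch below; treat the flags as placeholders and take the actual hyperparameters for SqueezeNet from the recipe itself:

```
torchrun --nproc_per_node=8 train.py --model squeezenet1_0 --data-path=/path/to/imagenet
```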
```bibtex
@article{DBLP:journals/corr/IandolaMAHDK16,
  author    = {Forrest N. Iandola and
               Matthew W. Moskewicz and
               Khalid Ashraf and
               Song Han and
               William J. Dally and
               Kurt Keutzer},
  title     = {SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and {\textless}1MB model size},
  journal   = {CoRR},
  volume    = {abs/1602.07360},
  year      = {2016},
  url       = {http://arxiv.org/abs/1602.07360},
  archivePrefix = {arXiv},
  eprint    = {1602.07360},
  timestamp = {Fri, 20 Nov 2020 16:16:06 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/IandolaMAHDK16.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| BENCHMARK | MODEL | METRIC NAME | METRIC VALUE | GLOBAL RANK |
|---|---|---|---|---|
| ImageNet | SqueezeNet 1.1 | Top 1 Accuracy | 58.19% | #303 |
| ImageNet | SqueezeNet 1.1 | Top 5 Accuracy | 80.62% | #303 |
| ImageNet | SqueezeNet 1.0 | Top 1 Accuracy | 58.1% | #304 |
| ImageNet | SqueezeNet 1.0 | Top 5 Accuracy | 80.42% | #304 |