| ID | Architecture |
|---|---|
| wide_resnet101_2 | 1x1 Convolution, Wide Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Connection, ReLU, Max Pooling, Softmax |
| wide_resnet50_2 | 1x1 Convolution, Wide Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Connection, ReLU, Max Pooling, Softmax |
Wide Residual Networks are a variant of ResNets that decrease depth and increase the width of the residual networks. This is achieved through the use of wide residual blocks.
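To make the width increase concrete: the `_2` suffix indicates that the channel width of each bottleneck block is doubled relative to the standard ResNet of the same depth. The sketch below (a minimal check, assuming `timm` is installed; the printed counts are computed at runtime, not quoted from this page) compares parameter counts:

```python
import timm

# The widened variant keeps the same depth as resnet50 but doubles
# the bottleneck channel width, substantially increasing parameters
for name in ('resnet50', 'wide_resnet50_2'):
    model = timm.create_model(name, pretrained=False)
    n_params = sum(p.numel() for p in model.parameters())
    print(f'{name}: {n_params / 1e6:.1f}M parameters')
```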
To load a pretrained model:

```python
import timm

# Download pretrained ImageNet weights and switch to inference mode
m = timm.create_model('wide_resnet101_2', pretrained=True)
m.eval()
```
Replace the model name with the variant you want to use, e.g. `wide_resnet101_2`. You can find the IDs in the model summaries at the top of this page.
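Once loaded, the model can be used for inference. The following is a minimal sketch of the usual timm preprocessing flow (`dog.jpg` is a placeholder path; substitute any RGB image):

```python
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

m = timm.create_model('wide_resnet101_2', pretrained=True)
m.eval()

# Build the preprocessing pipeline matching the model's pretraining config
config = resolve_data_config({}, model=m)
transform = create_transform(**config)

img = Image.open('dog.jpg').convert('RGB')
tensor = transform(img).unsqueeze(0)  # add batch dimension

with torch.no_grad():
    out = m(tensor)
probs = torch.nn.functional.softmax(out[0], dim=0)
top5_prob, top5_idx = torch.topk(probs, 5)
print(top5_idx, top5_prob)
```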
You can follow the timm recipe scripts to train a new model from scratch.
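If you only want to fine-tune on a new dataset rather than train from scratch, `timm.create_model` can re-initialize the classifier head for a different number of classes. The snippet below is an illustrative sketch (the 10-class setting, optimizer hyperparameters, and dummy data are assumptions, not part of the timm recipes):

```python
import timm
import torch

# Reuse pretrained backbone weights; replace the classifier for 10 classes
model = timm.create_model('wide_resnet50_2', pretrained=True, num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

# One illustrative training step on random dummy data
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'loss: {loss.item():.4f}')
```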
```bibtex
@article{DBLP:journals/corr/ZagoruykoK16,
  author        = {Sergey Zagoruyko and Nikos Komodakis},
  title         = {Wide Residual Networks},
  journal       = {CoRR},
  volume        = {abs/1605.07146},
  year          = {2016},
  url           = {http://arxiv.org/abs/1605.07146},
  archivePrefix = {arXiv},
  eprint        = {1605.07146},
  timestamp     = {Mon, 13 Aug 2018 16:46:42 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/ZagoruykoK16.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
| Benchmark | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|
| ImageNet | wide_resnet50_2 | Top 1 Accuracy | 81.45% | 60 |
| ImageNet | wide_resnet50_2 | Top 5 Accuracy | 95.52% | 60 |
| ImageNet | wide_resnet101_2 | Top 1 Accuracy | 78.85% | 140 |
| ImageNet | wide_resnet101_2 | Top 5 Accuracy | 94.28% | 140 |