| Training Techniques | RMSProp, Weight Decay, Gradient Clipping, Label Smoothing |
|---|---|
| Architecture | Auxiliary Classifier, 1x1 Convolution, Average Pooling, Batch Normalization, Convolution, Dropout, Dense Connections, Inception-v3 Module, ReLU, Max Pooling, Softmax |
| ID | tf_inception_v3 |
Inception v3 is a convolutional neural network architecture from the Inception family that makes several improvements, including label smoothing, factorized 7x7 convolutions, and an auxiliary classifier that propagates label information lower down the network (with batch normalization applied to the layers in the side head). The key building block is an Inception Module.
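As a brief illustration of the label smoothing technique listed above (introduced in the same paper), the one-hot target is mixed with a uniform distribution over all classes. This is a minimal sketch of that formula, not timm's implementation:

```python
def smooth_labels(true_class, num_classes, epsilon=0.1):
    # Label smoothing as in Szegedy et al.: each class gets epsilon / K
    # probability mass, and the true class keeps the remaining 1 - epsilon.
    uniform = epsilon / num_classes
    target = [uniform] * num_classes
    target[true_class] += 1.0 - epsilon
    return target

# Example: 4 classes, true class 2 -> true class gets 0.925, others 0.025
print(smooth_labels(2, 4))
```

The smoothed target still sums to 1, but the loss no longer pushes the logit of the true class toward infinity, which the paper argues improves generalization.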
To load a pretrained model:
```python
import timm
m = timm.create_model('tf_inception_v3', pretrained=True)
m.eval()
```
Replace the model name with the variant you want to use, e.g. `tf_inception_v3`. You can find the IDs in the model summaries at the top of this page.
You can follow the timm recipe scripts to train a new model from scratch.
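For orientation, the training techniques listed above map onto flags of timm's `train.py` script. The invocation below is a hypothetical sketch; the hyperparameter values are illustrative, not this model's official recipe:

```shell
# Hypothetical sketch of a training run with timm's train.py.
# Flag names follow timm's training script; the values shown here
# are illustrative, not the official tf_inception_v3 recipe.
python train.py /path/to/imagenet \
    --model tf_inception_v3 \
    --opt rmsproptf \
    --weight-decay 4e-5 \
    --clip-grad 1.0 \
    --smoothing 0.1
```

Here `--opt rmsproptf` selects a TF-style RMSProp optimizer, and the last three flags correspond to weight decay, gradient clipping, and label smoothing respectively.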
```bibtex
@article{DBLP:journals/corr/SzegedyVISW15,
  author        = {Christian Szegedy and
                   Vincent Vanhoucke and
                   Sergey Ioffe and
                   Jonathon Shlens and
                   Zbigniew Wojna},
  title         = {Rethinking the Inception Architecture for Computer Vision},
  journal       = {CoRR},
  volume        = {abs/1512.00567},
  year          = {2015},
  url           = {http://arxiv.org/abs/1512.00567},
  archivePrefix = {arXiv},
  eprint        = {1512.00567},
  timestamp     = {Mon, 13 Aug 2018 16:49:07 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/SzegedyVISW15.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
| BENCHMARK | MODEL | METRIC NAME | METRIC VALUE | GLOBAL RANK |
|---|---|---|---|---|
| ImageNet | tf_inception_v3 | Top 1 Accuracy | 77.87% | #170 |
| ImageNet | tf_inception_v3 | Top 5 Accuracy | 93.65% | #170 |