# pytorch / vision

| Model | Top 1 Accuracy (code) | Top 1 Accuracy (paper) | Top 5 Accuracy (code) | Top 5 Accuracy (paper) | Speed | Global Rank |
|---|---|---|---|---|---|---|
| AlexNet (single) | 56.1% | 57.1% | 78.9% | -- | 376.5 | #716 |
| DenseNet-121 | 74.8% | 76.4% | 92.2% | 93.3% | 369.6 | #538 |
| DenseNet-161 | 77.3% | -- | 93.6% | -- | 347.9 | #445 |
| DenseNet-169 | 75.9% | 77.9% | 93.0% | 94.1% | 361.6 | #501 |
| DenseNet-201 | 77.3% | 78.5% | 93.5% | 94.5% | 356.0 | #451 |
| Inception V1 | 70.1% | 69.8% | 89.7% | 89.9% | 369.8 | #621 |
| Inception V3 | 69.9% | 78.8% | 88.9% | 94.4% | 365.2 | #634 |
| MnasNet-A1 | 73.1% | 75.2% | 91.3% | 92.5% | 379.9 | #577 |
| MnasNet-A1 (depth multiplier=0.5) | 67.2% | 68.9% | 87.2% | -- | 377.3 | #659 |
| MobileNetV2 | 71.4% | 72.0% | 90.1% | -- | 375.9 | #610 |
| ResNet-101 | 77.3% | 78.2% | 93.5% | 94.0% | 358.6 | #453 |
| ResNet-152 | 78.2% | 78.6% | 94.0% | 94.3% | 351.1 | #390 |
| ResNet-18 | 69.5% | 72.1% | 89.1% | -- | 359.7 | #632 |
| ResNet-34 A | 73.2% | 75.0% | 91.3% | 92.2% | 336.4 | #577 |
| ResNet-50 | 75.9% | 77.1% | 92.9% | 93.3% | 372.2 | #503 |
| ResNeXt-101 32x8d | 79.1% | -- | 94.4% | -- | 284.5 | #304 |
| ResNeXt-50 32x4d | 77.5% | -- | 93.6% | -- | 366.6 | #439 |
| ShuffleNet V2 (0.5x) | 60.0% | 60.3% | 81.3% | -- | 384.5 | #707 |
| ShuffleNet V2 (1x) | 69.0% | 69.4% | 88.2% | -- | 384.8 | #648 |
| SqueezeNet | 57.8% | 57.5% | 80.3% | 80.3% | 381.1 | #710 |
| SqueezeNet 1.1 | 57.9% | -- | 80.4% | -- | 383.0 | #709 |
| VGG-11 | 68.8% | 70.4% | 88.6% | 89.6% | 367.3 | #639 |
| VGG-11 (batch-norm) | 70.2% | 70.4% | 89.7% | 89.6% | 366.2 | #620 |
| VGG-13 | 69.6% | 71.3% | 89.2% | 90.1% | 356.2 | #624 |
| VGG-13 (batch-norm) | 71.4% | 71.3% | 90.3% | 90.1% | 362.0 | #609 |
| VGG-16 | 71.4% | 74.4% | 90.3% | 91.9% | 356.9 | #608 |
| VGG-16 (batch-norm) | 73.1% | -- | 91.4% | -- | 354.4 | #578 |
| VGG-19 | 72.2% | 74.5% | 90.7% | 92.0% | 349.5 | #594 |
| VGG-19 (batch-norm) | 74.0% | -- | 91.7% | -- | 349.5 | #564 |
| WRN-101-2-bottleneck | 78.6% | -- | 94.1% | -- | 323.4 | #350 |
| WRN-50-2-bottleneck | 78.3% | 78.1% | 94.0% | 94.0% | 354.2 | #376 |
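
The "code" columns hold the accuracy sotabench measures for the released checkpoints, the "paper" columns hold the accuracy reported in each model's paper, and "Speed" is the measured throughput (larger is faster). For reference, the kind of top-1/top-5 evaluation behind the "code" columns can be reproduced with a plain PyTorch loop; the sketch below does this for ResNet-50 and mirrors the preprocessing used in `sotabench.py` further down. The validation-set path is a placeholder, not part of the original script:

```python
import PIL
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

# Same preprocessing as the sotabench.py file below.
transform = transforms.Compose([
    transforms.Resize(256, PIL.Image.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Placeholder path: point it at an ImageNet validation set laid out in
# the usual one-directory-per-class format.
loader = DataLoader(ImageFolder('/data/imagenet/val', transform=transform),
                    batch_size=256, num_workers=4)

model = models.resnet50(pretrained=True).eval().cuda()

top1 = top5 = total = 0
with torch.no_grad():
    for images, targets in loader:
        logits = model(images.cuda())
        pred = logits.topk(5, dim=1).indices            # (N, 5) class ids
        correct = pred.eq(targets.cuda().unsqueeze(1))  # (N, 5) hit mask
        top1 += correct[:, 0].sum().item()              # best guess right
        top5 += correct.any(dim=1).sum().item()         # answer in top 5
        total += targets.size(0)

print(f'Top 1: {top1 / total:.4f}  Top 5: {top5 / total:.4f}')
```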
Full build details are listed at the end of this page, and the badge below can be embedded in the project README:
[![SotaBench](https://img.shields.io/endpoint.svg?url=https://sotabench.com/api/v0/badge/gh/deepparrot/vision)](https://sotabench.com/user/PartyParrot/repos/deepparrot/vision)

## How the Repository is Evaluated

On each push to the repository, sotabench runs the `sotabench.py` file against the ImageNet validation set and publishes the results shown in the table above. The full `sotabench.py` file:

```python
import PIL
import torch
import torchvision
import tqdm

import torchvision.models as models
import torchvision.transforms as transforms

from torchbench.image_classification import ImageNet

# DEEP RESIDUAL LEARNING

# Define the transforms needed to convert ImageNet data to the expected
# model input: bicubic resize to 256, a 224x224 center crop, and the
# standard ImageNet per-channel normalization statistics.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
input_transform = transforms.Compose([
    transforms.Resize(256, PIL.Image.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

ImageNet.benchmark(
    model=models.resnet18(pretrained=True),
    paper_model_name='ResNet-18',
    paper_arxiv_id='1512.03385',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.7212}
)

ImageNet.benchmark(
    model=models.resnet34(pretrained=True),
    paper_model_name='ResNet-34 A',
    paper_arxiv_id='1512.03385',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.7497, 'Top 5 Accuracy': 0.9224}
)

ImageNet.benchmark(
    model=models.resnet50(pretrained=True),
    paper_model_name='ResNet-50',
    paper_arxiv_id='1512.03385',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

ImageNet.benchmark(
    model=models.resnet101(pretrained=True),
    paper_model_name='ResNet-101',
    paper_arxiv_id='1512.03385',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)   

ImageNet.benchmark(
    model=models.resnet152(pretrained=True),
    paper_model_name='ResNet-152',
    paper_arxiv_id='1512.03385',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)   

# ALEXNET

ImageNet.benchmark(
    model=models.alexnet(pretrained=True),
    paper_model_name='AlexNet (single)',
    paper_arxiv_id='1404.5997',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.5714}
)   

# VGG

ImageNet.benchmark(
    model=models.vgg11(pretrained=True),
    paper_model_name='VGG-11',
    paper_arxiv_id='1409.1556',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.704, 'Top 5 Accuracy': 0.896}
)

ImageNet.benchmark(
    model=models.vgg13(pretrained=True),
    paper_model_name='VGG-13',
    paper_arxiv_id='1409.1556',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.713, 'Top 5 Accuracy': 0.901}
)   

ImageNet.benchmark(
    model=models.vgg16(pretrained=True),
    paper_model_name='VGG-16',
    paper_arxiv_id='1409.1556',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)   

ImageNet.benchmark(
    model=models.vgg19(pretrained=True),
    paper_model_name='VGG-19',
    paper_arxiv_id='1409.1556',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)   

ImageNet.benchmark(
    model=models.vgg11_bn(pretrained=True),
    paper_model_name='VGG-11 (batch-norm)',
    paper_arxiv_id='1409.1556',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.704, 'Top 5 Accuracy': 0.896}
)

ImageNet.benchmark(
    model=models.vgg13_bn(pretrained=True),
    paper_model_name='VGG-13 (batch-norm)',
    paper_arxiv_id='1409.1556',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.713, 'Top 5 Accuracy': 0.901}
)   

ImageNet.benchmark(
    model=models.vgg16_bn(pretrained=True),
    paper_model_name='VGG-16 (batch-norm)',
    paper_arxiv_id='1409.1556',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)   

ImageNet.benchmark(
    model=models.vgg19_bn(pretrained=True),
    paper_model_name='VGG-19 (batch-norm)',
    paper_arxiv_id='1409.1556',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)   

# SQUEEZENET

ImageNet.benchmark(
    model=models.squeezenet1_0(pretrained=True),
    paper_model_name='SqueezeNet',
    paper_arxiv_id='1602.07360',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.575, 'Top 5 Accuracy': 0.803}
)

ImageNet.benchmark(
    model=models.squeezenet1_1(pretrained=True),
    paper_model_name='SqueezeNet 1.1',
    paper_arxiv_id='1602.07360',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

# DENSENET

ImageNet.benchmark(
    model=models.densenet121(pretrained=True),
    paper_model_name='DenseNet-121',
    paper_arxiv_id='1608.06993',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

ImageNet.benchmark(
    model=models.densenet161(pretrained=True),
    paper_model_name='DenseNet-161',
    paper_arxiv_id='1608.06993',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

ImageNet.benchmark(
    model=models.densenet169(pretrained=True),
    paper_model_name='DenseNet-169',
    paper_arxiv_id='1608.06993',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

ImageNet.benchmark(
    model=models.densenet201(pretrained=True),
    paper_model_name='DenseNet-201',
    paper_arxiv_id='1608.06993',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

# INCEPTION V3
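# NOTE: torchvision's pretrained Inception V3 was trained for 299x299
# inputs (eval is typically Resize(342) + CenterCrop(299)). Re-using the
# shared 224x224 transform here still runs, but it likely explains the
# code-vs-paper Top 1 gap for this model in the table above.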

ImageNet.benchmark(
    model=models.inception_v3(pretrained=True),
    paper_model_name='Inception V3',
    paper_arxiv_id='1512.00567',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

# INCEPTION V1 (GOOGLENET)

ImageNet.benchmark(
    model=models.googlenet(pretrained=True),
    paper_model_name='Inception V1',
    paper_arxiv_id='1409.4842',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

# SHUFFLENET V2

ImageNet.benchmark(
    model=models.shufflenet_v2_x1_0(pretrained=True),
    paper_model_name='ShuffleNet V2 (1x)',
    paper_arxiv_id='1807.11164',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.694}
)

ImageNet.benchmark(
    model=models.shufflenet_v2_x0_5(pretrained=True),
    paper_model_name='ShuffleNet V2 (0.5x)',
    paper_arxiv_id='1807.11164',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.603}
)

# MOBILENET

ImageNet.benchmark(
    model=models.mobilenet_v2(pretrained=True),
    paper_model_name='MobileNetV2',
    paper_arxiv_id='1801.04381',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.72}
)

# RESNEXT

ImageNet.benchmark(
    model=models.resnext50_32x4d(pretrained=True),
    paper_model_name='ResNeXt-50 32x4d',
    paper_arxiv_id='1611.05431',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

ImageNet.benchmark(
    model=models.resnext101_32x8d(pretrained=True),
    paper_model_name='ResNeXt-101 32x8d',
    paper_arxiv_id='1611.05431',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

# WIDE RESNET

ImageNet.benchmark(
    model=models.wide_resnet50_2(pretrained=True),
    paper_model_name='WRN-50-2-bottleneck',
    paper_arxiv_id='1605.07146',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

ImageNet.benchmark(
    model=models.wide_resnet101_2(pretrained=True),
    paper_model_name='WRN-101-2-bottleneck',
    paper_arxiv_id='1605.07146',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)

# MNASNET

ImageNet.benchmark(
    model=models.mnasnet0_5(pretrained=True),
    paper_model_name='MnasNet-A1 (depth multiplier=0.5)',
    paper_arxiv_id='1807.11626',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.689}
)

ImageNet.benchmark(
    model=models.mnasnet1_0(pretrained=True),
    paper_model_name='MnasNet-A1',
    paper_arxiv_id='1807.11626',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)
```
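
Each `ImageNet.benchmark` call downloads the pretrained torchvision weights, runs the ImageNet validation set through the model with `input_transform`, and records Top 1/Top 5 accuracy alongside throughput. The optional `paper_results` argument pins the numbers claimed in the linked arXiv paper; calls without it appear to have their paper columns matched automatically from `paper_model_name` and `paper_arxiv_id`. As an illustration, pinning ResNet-50's paper numbers (77.1%/93.3% in the table above) might look like the following sketch, which reuses the definitions from the file above and is not part of the original script:

```python
# Hypothetical variant of the ResNet-50 entry with the paper numbers
# pinned explicitly; values taken from the paper columns in the table.
ImageNet.benchmark(
    model=models.resnet50(pretrained=True),
    paper_model_name='ResNet-50',
    paper_arxiv_id='1512.03385',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
    paper_results={'Top 1 Accuracy': 0.771, 'Top 5 Accuracy': 0.933}
)
```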





## Build History

| Run Time | Commit Message | Author | Commit | Date |
|---|---|---|---|---|
| 0h:07m:56s | Fix model name typo | deepparrot | 00c4508 | Oct 03 2019 |
| 0h:22m:41s | Update sotabench.py | deepparrot | 3fb1c84 | Oct 03 2019 |
| 0h:21m:27s | Update sotabench.py | deepparrot | 6914df4 | Oct 03 2019 |
| 0h:16m:36s | Update sotabench.py | deepparrot | edbaa01 | Oct 03 2019 |
| 0h:20m:58s | Update sotabench.py | deepparrot | bbfed0c | Oct 03 2019 |
| 0h:15m:51s | Update sotabench.py | deepparrot | 9419147 | Oct 03 2019 |
| 0h:34m:08s | Update requirements.txt | deepparrot | 8204af0 | Oct 03 2019 |
| 0h:21m:00s | Update sotabench.py | deepparrot | 49b999d | Oct 03 2019 |
| 0h:22m:03s | Update sotabench.py | deepparrot | 0de24b1 | Sep 27 2019 |
| 0h:07m:43s | Update sotabench.py | deepparrot | d2b1c44 | Sep 20 2019 |
| 0h:04m:21s | Update sotabench.py | deepparrot | a173d2c | Sep 20 2019 |
| 0h:04m:24s | Update sotabench.py | deepparrot | b730737 | Sep 20 2019 |
| 0h:04m:46s | Update sotabench.py | deepparrot | ecd2271 | Sep 20 2019 |
| 0h:06m:10s | Add object detection | deepparrot | abdd6bd | Sep 20 2019 |
| 0h:06m:21s | Update sotabench.py | deepparrot | c01c94f | Sep 20 2019 |
| 0h:06m:47s | Update sotabench.py | deepparrot | aa757b6 | Sep 20 2019 |
| 0h:05m:02s | Update sotabench.py | deepparrot | 412a442 | Sep 20 2019 |
| 0h:04m:08s | Update sotabench.py | deepparrot | 85521f2 | Sep 20 2019 |
| 0h:06m:25s | I believe! | deepparrot | 48991db | Sep 20 2019 |
| 0h:06m:14s | Update sotabench.py | deepparrot | 967cdfd | Sep 20 2019 |
| 0h:05m:12s | Update sotabench.py | deepparrot | f344204 | Sep 20 2019 |
| 0h:03m:04s | Update sotabench.py | deepparrot | 94383e8 | Sep 20 2019 |
| 0h:03m:07s | Update sotabench.py | deepparrot | 4c630d6 | Sep 20 2019 |
| 0h:03m:10s | Update requirements.txt | deepparrot | 8c19cf2 | Sep 20 2019 |
| 0h:03m:46s | Update sotabench.py | deepparrot | be0be3a | Sep 20 2019 |
| 0h:03m:59s | Update requirements.txt | deepparrot | d93b444 | Sep 20 2019 |
| 0h:04m:10s | Update requirements.txt | deepparrot | 25a9eae | Sep 20 2019 |
| 0h:03m:25s | Update sotabench.py | deepparrot | 1804315 | Sep 20 2019 |
| 0h:03m:25s | Rename sotabench_transforms to sotabench_transforms.py | deepparrot | 4ff9d07 | Sep 20 2019 |
| unknown | Create sotabench_transforms | deepparrot | d273bf2 | Sep 20 2019 |
| 0h:04m:08s | Update requirements.txt | deepparrot | 51d244c | Sep 20 2019 |
| 0h:04m:08s | Update requirements.txt | deepparrot | 23f282f | Sep 20 2019 |
| 0h:03m:24s | Update requirements.txt | deepparrot | 6329736 | Sep 20 2019 |