pytorch / vision

| MODEL | TOP 1 ACCURACY (CODE) | TOP 1 ACCURACY (PAPER) | TOP 5 ACCURACY (CODE) | TOP 5 ACCURACY (PAPER) | SPEED | GLOBAL RANK |
|---|---|---|---|---|---|---|
| ResNeXt-101-32x8d | 79.0% | 79.3% | 94.4% | 94.5% | 288.2 | #307 |
Badge code:

```markdown
[![SotaBench](https://img.shields.io/endpoint.svg?url=https://sotabench.com/api/v0/badge/gh/rstojnic/vision)](https://sotabench.com/user/rstojnic/repos/rstojnic/vision)
```

How the Repository is Evaluated

The full sotabench.py file:
```python
from torchbench.image_classification import ImageNet
from torchvision.models.resnet import resnext101_32x8d
import torchvision.transforms as transforms
import PIL

# Define the transforms needed to convert ImageNet data to the expected model input
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
input_transform = transforms.Compose([
    transforms.Resize(256, PIL.Image.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

# Run the benchmark
ImageNet.benchmark(
    model=resnext101_32x8d(pretrained=True),
    paper_model_name='ResNeXt-101-32x8d',
    paper_arxiv_id='1611.05431',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)
```
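As context for the `normalize` step above, here is a minimal pure-Python sketch of what `transforms.Normalize` does to each channel of a pixel whose values are already scaled to [0, 1]. The mean/std values are the standard ImageNet statistics from the script; the `normalize_pixel` helper is a hypothetical name for illustration, not part of torchvision.

```python
# Standard ImageNet channel statistics, as used in the script above.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Apply per-channel (x - mean) / std, as transforms.Normalize does."""
    return [(c - m) / s for c, m, s in zip(rgb, mean, std)]

# A pixel equal to the channel means maps to zero in every channel.
print(normalize_pixel([0.485, 0.456, 0.406]))  # -> [0.0, 0.0, 0.0]
```

In the real pipeline this runs after `ToTensor()`, which is what scales the raw 0-255 RGB values into [0, 1] before normalization.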
| COMMIT MESSAGE | AUTHOR | COMMIT | DATE | RUN TIME |
|---|---|---|---|---|
| Update sotabench.py | rstojnic | 9cb36f0 | Apr 20 2020 | 0h:07m:27s |
| Update sotabench.py | rstojnic | 4063dc8 | Apr 20 2020 | 0h:15m:59s |
| Update sotabench.py | rstojnic | 0dce73d | Apr 20 2020 | 0h:13m:04s |
| Update sotabench.py | rstojnic | 8924ec2 | Apr 20 2020 | 0h:13m:55s |
| Update sotabench.py | rstojnic | 865c50c | Apr 20 2020 | 0h:08m:27s |
| | | | | 0h:13m:56s |