pytorch / vision

| MODEL | TOP 1 ACCURACY (CODE / PAPER) | TOP 5 ACCURACY (CODE / PAPER) | GLOBAL RANK |
|---|---|---|---|
| ResNeXt-101-32x8d | 79.1% / 79.3% | 94.4% / 94.5% | #115 |
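sotabench flags a result as ε-reproduced when the accuracy measured from the code matches the paper's reported figure to within a small tolerance. A minimal sketch of that check; the function name and the 0.3-point tolerance are assumptions for illustration, not sotabench internals:

```python
def eps_reproduced(code_acc, paper_acc, eps=0.3):
    """True when the benchmarked accuracy is within `eps`
    percentage points of the paper's reported figure."""
    return abs(code_acc - paper_acc) <= eps

# ResNeXt-101-32x8d figures from the results table (percent)
print(eps_reproduced(79.1, 79.3))  # top-1: True
print(eps_reproduced(94.4, 94.5))  # top-5: True
```

Both gaps here (0.2 and 0.1 points) fall inside the tolerance, which is why the code results count as reproducing the paper.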
Badge code:

[![SotaBench](https://img.shields.io/endpoint.svg?url=https://sotabench.com/api/v0/badge/gh/rstojnic/vision)](https://sotabench.com/user/rstojnic/repos/rstojnic/vision)

How the Repository is Evaluated

The full sotabench.py file:
from torchbench.image_classification import ImageNet
from torchvision.models.resnet import resnext101_32x8d
import torchvision.transforms as transforms
import PIL

# Define the transforms needed to convert ImageNet data to the expected model input
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
input_transform = transforms.Compose([
    transforms.Resize(256, PIL.Image.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

# Run the benchmark
ImageNet.benchmark(
    model=resnext101_32x8d(pretrained=True),
    paper_model_name='ResNeXt-101-32x8d',
    paper_arxiv_id='1611.05431',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)
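The preprocessing above resizes each image so its shorter side is 256 pixels (preserving aspect ratio, as torchvision's `Resize` does for an integer size) and then takes a central 224x224 crop. A minimal sketch of that geometry; the function name and the 500x375 example image are illustrative assumptions:

```python
def resize_then_center_crop(w, h, resize=256, crop=224):
    """Compute the crop box produced by Resize(resize) + CenterCrop(crop).

    Resize scales the shorter side to `resize`, keeping aspect ratio;
    CenterCrop then takes a `crop` x `crop` window from the centre.
    Returns (left, top, right, bottom) in resized-image coordinates.
    """
    if w <= h:
        new_w, new_h = resize, round(h * resize / w)
    else:
        new_w, new_h = round(w * resize / h), resize
    left = (new_w - crop) // 2
    top = (new_h - crop) // 2
    return (left, top, left + crop, top + crop)

# A 500x375 image resizes to 341x256, then the centre 224x224 is cropped
print(resize_then_center_crop(500, 375))  # (58, 16, 282, 240)
```

The model therefore always sees a 224x224 tensor regardless of the original image shape, which is what `resnext101_32x8d` expects.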
Build history

| COMMIT MESSAGE | AUTHOR | COMMIT | DATE | RUN TIME |
|---|---|---|---|---|
| Trim file | rstojnic | 49491db | 4 days ago | 0h:12m:39s |
| Null edit | rstojnic | dbe8069 | 5 days ago | 0h:05m:01s |
| Null edit | rstojnic | 35ca059 | 5 days ago | 0h:04m:08s |
| remove endline | rstojnic | ee530bb | Oct 29 2019 | 0h:10m:15s |
| null edit | rstojnic | a060f1f | Oct 29 2019 | 0h:04m:13s |
| | | | | 0h:12m:02s |