facebookresearch/WSL-Images

| MODEL | TOP 1 ACCURACY (CODE) | TOP 1 ACCURACY (PAPER) | TOP 5 ACCURACY (CODE) | TOP 5 ACCURACY (PAPER) | SPEED | GLOBAL RANK |
|---|---|---|---|---|---|---|
| ResNeXt-101 32x16d | 84.2% | -- | 97.2% | -- | 135.5 | #15 |
| ResNeXt-101 32x32d | 85.1% | 85.1% | 97.4% | 97.5% | 61.6 | #8 |
| ResNeXt-101 32x48d | 85.4% | 85.4% | 97.6% | 97.6% | 32.0 | #5 |
| ResNeXt-101 32x8d | 82.7% | 82.2% | 96.6% | 96.4% | 274.7 | #30 |
[![SotaBench](https://img.shields.io/endpoint.svg?url=https://sotabench.com/api/v0/badge/gh/deepparrot/WSL-Images)](https://sotabench.com/user/PartyParrot/repos/deepparrot/WSL-Images)
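
All four checkpoints in the table are published through torch.hub under the entry points used by the evaluation script below (`resnext101_32x8d_wsl`, `resnext101_32x16d_wsl`, `resnext101_32x32d_wsl`, `resnext101_32x48d_wsl`). As a minimal inference sketch, assuming an arbitrary local image file (the path below is a placeholder), one of the models can be loaded and applied with the same preprocessing the benchmark uses:

```python
# Minimal inference sketch (not part of sotabench.py): load one of the pretrained
# WSL ResNeXt models from torch.hub and classify a single image.
import torch
import torchvision.transforms as transforms
from PIL import Image

model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')
model.eval()

# Same preprocessing as the ImageNet evaluation below
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open('example.jpg').convert('RGB')   # placeholder image path
batch = preprocess(img).unsqueeze(0)             # shape [1, 3, 224, 224]

with torch.no_grad():
    probs = model(batch).softmax(dim=1)          # [1, 1000] ImageNet class probabilities
top5 = probs.topk(5)
print(top5.indices.tolist(), top5.values.tolist())
```

The predicted indices map onto the standard 1000 ImageNet classes.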

How the Repository is Evaluated

The full sotabench.py file:

from torchbench.image_classification import ImageNet
import torchvision.transforms as transforms
import PIL
import torch

# ImageNet normalization and the standard 256-resize / 224-center-crop evaluation
# transform, shared by all four models
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
input_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

# Run the ImageNet evaluation for each WSL ResNeXt variant
ImageNet.benchmark(
    model=torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x48d_wsl'),
    paper_model_name='ResNeXt-101 32x48d',
    paper_arxiv_id='1805.00932',
    paper_pwc_id='exploring-the-limits-of-weakly-supervised',
    input_transform=input_transform,
    batch_size=64,
    num_gpu=1
)

ImageNet.benchmark(
    model=torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x32d_wsl'),
    paper_model_name='ResNeXt-101 32x32d',
    paper_arxiv_id='1805.00932',
    paper_pwc_id='exploring-the-limits-of-weakly-supervised',
    input_transform=input_transform,
    batch_size=64,
    num_gpu=1
)

ImageNet.benchmark(
    model=torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x16d_wsl'),
    paper_model_name='ResNeXt-101 32x16d',
    paper_arxiv_id='1805.00932',
    paper_pwc_id='exploring-the-limits-of-weakly-supervised',
    input_transform=input_transform,
    batch_size=64,
    num_gpu=1
)

# The smallest model fits a larger batch on a single GPU
ImageNet.benchmark(
    model=torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl'),
    paper_model_name='ResNeXt-101 32x8d',
    paper_arxiv_id='1805.00932',
    paper_pwc_id='exploring-the-limits-of-weakly-supervised',
    input_transform=input_transform,
    batch_size=128,
    num_gpu=1
)
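
For reference, the top-1 and top-5 numbers in the table at the top are the standard ImageNet classification metrics: a sample counts as top-k correct when its ground-truth label is among the model's k highest-scoring classes. A small self-contained sketch of that computation (independent of torchbench, using random tensors purely for illustration):

```python
import torch

def topk_accuracy(logits, targets, ks=(1, 5)):
    # logits: [N, num_classes], targets: [N] integer class labels.
    # Returns the fraction of samples whose true label is among the top-k scores.
    max_k = max(ks)
    _, pred = logits.topk(max_k, dim=1)          # [N, max_k] class indices, best first
    correct = pred.eq(targets.unsqueeze(1))      # [N, max_k] boolean hits
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}

# Toy example: random scores for 8 samples over 1000 classes
logits = torch.randn(8, 1000)
targets = torch.randint(0, 1000, (8,))
print(topk_accuracy(logits, targets))
```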

Build History (run times)
0h:58m:16s, 0h:36m:07s, 0h:34m:47s, 0h:12m:56s, 0h:12m:07s, 0h:11m:45s, 0h:09m:09s, 0h:09m:16s, 0h:08m:37s, 0h:10m:35s, 0h:09m:06s, 0h:36m:11s