pytorch / fairseq

MultiNLI

MODEL      MATCHED                       MISMATCHED                    SPEED    GLOBAL RANK
           CODE     PAPER    ε-REPR      CODE     PAPER    ε-REPR
ROBERTa    89.7%    --                   89.9%    --                   37.6     #1
TEST PERPLEXITY

MODEL                            CODE     PAPER    ε-REPR   SPEED     GLOBAL RANK
Transformer (Adaptive inputs)    18.70    18.70             1685.4    #4
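The language-modelling table reports test perplexity, i.e. the exponential of the average per-token negative log-likelihood. A minimal sketch of that computation (the function name and inputs are illustrative, not part of the repository's evaluation code):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    n = len(token_log_probs)
    avg_nll = -sum(token_log_probs) / n
    return math.exp(avg_nll)

# A uniform model assigning probability 1/4 to each of four tokens
# has perplexity 4 (it is "as confused as" a 4-way uniform choice).
print(perplexity([math.log(0.25)] * 4))  # ≈ 4.0
```

A lower perplexity means the model assigns higher probability to the held-out text, which is why 18.70 on the test set places this model at global rank #4 here.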
Badge code:
[![SotaBench](https://img.shields.io/endpoint.svg?url=https://sotabench.com/api/v0/badge/gh/PiotrCzapla/fairseq)](https://sotabench.com/user/piotr.czapla-priv/repos/PiotrCzapla/fairseq)

How the Repository is Evaluated

The full sotabench.py file:
import multinli            # MultiNLI evaluation (RoBERTa)
import language_modelling  # language-modelling evaluation (test perplexity)

if __name__ == '__main__':
    language_modelling.main()
    multinli.main()
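The two imported modules are benchmark scripts defined in this repository; their bodies are not shown on this page. As a purely illustrative sketch of the shape such an entry point takes (the `Evaluator` class, its methods, and the dummy data below are hypothetical, not the actual sotabencheval API the repository uses):

```python
# Hypothetical sketch of a benchmark entry-point module such as multinli.py.
# The real script would load a pretrained model and run it over the
# benchmark's evaluation set; here, a tiny stand-in evaluator and dummy
# predictions illustrate the accumulate-then-report pattern.

class Evaluator:
    """Accumulates predictions and reports a single accuracy figure."""

    def __init__(self, model_name):
        self.model_name = model_name
        self.correct = 0
        self.total = 0

    def add(self, predictions, labels):
        # Count exact label matches in this batch.
        self.correct += sum(p == l for p, l in zip(predictions, labels))
        self.total += len(labels)

    def accuracy(self):
        return self.correct / self.total


def main():
    evaluator = Evaluator("ROBERTa")
    # Dummy batch standing in for model output on the evaluation set:
    # three of the four predictions match their labels.
    evaluator.add(predictions=[0, 1, 2, 1], labels=[0, 1, 2, 0])
    return evaluator.accuracy()


if __name__ == "__main__":
    print(main())
```

Keeping each benchmark in its own module, with `sotabench.py` as a thin driver, lets the build run every benchmark in one pass while the per-benchmark logic stays independent.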
    
BUILD HISTORY

COMMIT MESSAGE                                                      AUTHOR        COMMIT    DATE          RUN TIME(S)
remove deprecated print_stats()                                     PiotrCzapla   2892fd9   Oct 09 2019   0h:16m:46s, 0h:06m:43s, 0h:08m:23s
fix the way dataset_size is calculate so it works for both torc…    PiotrCzapla   db97c8f   Oct 07 2019   0h:12m:47s, 0h:17m:10s
fix dataset collsion                                                PiotrCzapla   e9c65cd   Oct 07 2019   0h:06m:08s
Fix MNLI longer examples at 512 tokens                              PiotrCzapla   0484493   Oct 08 2019   0h:05m:19s
Fix divi by zero error                                              PiotrCzapla   46b45db   Oct 07 2019   0h:31m:35s, 0h:07m:29s
Add multinli evaluation of ROBERTa                                  PiotrCzapla   9974404   Oct 07 2019   0h:06m:11s
Add transformer_lm to sotabench                                     PiotrCzapla   90e422e   Oct 07 2019   0h:05m:48s
Fix setup script pip install -e was failing                         PiotrCzapla   5133b44   Oct 05 2019   0h:05m:39s, 0h:13m:04s
Add transformer_lm to sotabench                                     PiotrCzapla   1315ebc   Oct 05 2019   0h:04m:48s
(two further run times listed without a commit)                                                           0h:04m:26s, 0h:04m:20s