BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova


We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers...
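
As a minimal sketch of the bidirectional conditioning described above, the snippet below predicts a masked token from its full left and right context. It assumes the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint, neither of which is part of this page; it is an illustrative usage example, not the benchmarked setup.

```python
# Illustrative masked-token prediction with a pre-trained BERT
# (assumes: Hugging Face `transformers`, `torch`, and the public
# `bert-base-uncased` checkpoint; not the models listed in the table below).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# BERT conditions on both left and right context in every layer,
# so the [MASK] position is predicted from the whole sentence at once.
text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the highest-scoring vocabulary entry.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # expected: "paris"
```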

Benchmarked Models

RANK  MODEL                                 REPO  CODE RESULT  PAPER RESULT  ε-REPRODUCED  BUILD
1     BERT large (whole word masking, u…)                      86.9%         --
2     BERT large (whole word masking, c…)                      86.3%         --
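
Both benchmarked variants use whole word masking. The sketch below illustrates the idea under stated assumptions: when a word is selected for masking, all of its WordPiece sub-tokens are masked together rather than independently. It relies on the Hugging Face `transformers` tokenizer and is an illustrative example, not the authors' pre-training data pipeline; the function name and masking probability are hypothetical.

```python
# Illustrative whole word masking: mask every sub-token of a chosen word
# as a unit. Assumes the Hugging Face `transformers` tokenizer; the
# selection logic is a simplified stand-in for the real pre-training script.
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def whole_word_mask(text, mask_prob=0.15, seed=0):
    random.seed(seed)
    masked_tokens = []
    for word in text.split():
        pieces = tokenizer.tokenize(word)  # WordPiece sub-tokens of this word
        if random.random() < mask_prob:
            # Whole word masking: replace every sub-token of the word together.
            masked_tokens.extend([tokenizer.mask_token] * len(pieces))
        else:
            masked_tokens.extend(pieces)
    return masked_tokens

print(whole_word_mask("The philharmonic orchestra performed admirably"))
```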