XLNet: Generalized Autoregressive Pretraining for Language Understanding

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le


With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy...
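The corruption scheme the abstract criticizes can be sketched in a few lines. This is a minimal illustration, not the paper's or BERT's actual implementation; the token strings, `mask_prob`, and helper name `corrupt` are assumptions for the example.

```python
# Minimal sketch (illustrative, not BERT's real code) of mask-based
# input corruption and the independence assumption described above.
import random

MASK = "[MASK]"

def corrupt(tokens, mask_prob=0.15, rng=None):
    """Replace a random subset of tokens with [MASK], BERT-style.

    Returns the corrupted sequence and the masked positions.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    k = max(1, int(mask_prob * len(tokens)))
    masked = sorted(rng.sample(range(len(tokens)), k))
    corrupted = [MASK if i in masked else t for i, t in enumerate(tokens)]
    return corrupted, masked

tokens = ["New", "York", "is", "a", "city"]
corrupted, masked = corrupt(tokens, mask_prob=0.4)

# BERT's objective scores each masked position independently:
#     sum_{i in masked} log p(token_i | corrupted input)
# so a dependency between, say, "New" and "York" is never modeled when
# both are masked, and [MASK] itself never appears at fine-tuning time,
# which is the pretrain-finetune discrepancy the abstract refers to.
```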
