Semantics-aware BERT for Language Understanding

Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, Xiang Zhou


The latest work on language representations carefully integrates contextualized features into language model training, which has enabled a series of successes, especially in various machine reading comprehension and natural language inference tasks. However, the existing language representation models, including ELMo, GPT and BERT, exploit only plain context-sensitive features such as character or word embeddings. They rarely consider incorporating structured semantic information, which can provide rich semantics for language representation. To promote natural language understanding, we propose to incorporate explicit contextual semantics from pre-trained semantic role labeling, and introduce an improved language representation model, Semantics-aware BERT (SemBERT), which is capable of explicitly absorbing contextual semantics over a BERT backbone. SemBERT keeps the convenient usability of its BERT precursor in a light fine-tuning way without substantial task-specific modifications. Compared with BERT, SemBERT is as simple in concept but more powerful. It obtains new state-of-the-art or substantially improved results on ten reading comprehension and language inference tasks.
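To make the fusion idea concrete, below is a minimal PyTorch sketch, not the authors' released implementation: each token's semantic role label (SRL tag) from an external pre-trained labeler is embedded and concatenated with BERT's contextual hidden state, then projected for a downstream task head. The tag inventory size, embedding dimensions, and the single-tag-sequence simplification are illustrative assumptions; the paper additionally aggregates multiple predicate-argument structures per sentence.

```python
# A minimal sketch of the SemBERT fusion step (illustrative, not the official code):
# fuse BERT's contextual token embeddings with embeddings of SRL tags produced
# by an external pre-trained semantic role labeler. All dimensions and the tag
# vocabulary size are assumptions for demonstration.
import torch
import torch.nn as nn

class SemanticsAwareFusion(nn.Module):
    def __init__(self, bert_dim=768, num_srl_tags=64, tag_dim=32, out_dim=768):
        super().__init__()
        # Embed each token's SRL tag (e.g. ARG0, ARG1, V, O) as a dense vector.
        self.tag_embedding = nn.Embedding(num_srl_tags, tag_dim)
        # Project the concatenated [BERT; tag] representation back to out_dim.
        self.fuse = nn.Linear(bert_dim + tag_dim, out_dim)

    def forward(self, bert_hidden, srl_tag_ids):
        # bert_hidden:  (batch, seq_len, bert_dim) from a BERT encoder
        # srl_tag_ids:  (batch, seq_len) integer SRL tag ids, one per token
        tag_vecs = self.tag_embedding(srl_tag_ids)
        fused = torch.cat([bert_hidden, tag_vecs], dim=-1)
        return torch.tanh(self.fuse(fused))

# Toy usage with random tensors standing in for real BERT outputs and SRL tags.
batch, seq_len = 2, 16
bert_hidden = torch.randn(batch, seq_len, 768)
srl_tag_ids = torch.randint(0, 64, (batch, seq_len))
fusion = SemanticsAwareFusion()
print(fusion(bert_hidden, srl_tag_ids).shape)  # torch.Size([2, 16, 768])
```

In a full pipeline, `bert_hidden` would come from a fine-tuned BERT encoder and `srl_tag_ids` from a pre-trained semantic role labeler run over the same tokenization, so that the fused representation carries both contextual and explicit semantic signals.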
