QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension

Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le


Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs.
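The idea named in the title, combining local convolution with global self-attention, can be sketched as a toy encoder block. This is a minimal illustration, not the paper's actual architecture (QANet additionally uses depthwise separable convolutions, multi-head attention, layer normalization, and positional encodings); all function and weight names below are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def depthwise_conv1d(x, kernel):
    """Local context: one 1-D filter per channel, 'same' padding.
    x: (seq_len, d_model); kernel: (k, d_model)."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        out[t] = (xp[t:t + k] * kernel).sum(axis=0)
    return out

def self_attention(x, Wq, Wk, Wv):
    """Global context: scaled dot-product attention over all positions."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[1])
    return softmax(scores) @ v

def encoder_block(x, kernel, Wq, Wk, Wv):
    # Convolution captures local structure, attention captures
    # long-range dependencies; residual connections around both.
    x = x + depthwise_conv1d(x, kernel)
    x = x + self_attention(x, Wq, Wk, Wv)
    return x
```

Because neither sub-layer iterates over time steps sequentially, every position is processed in parallel, which is the source of the speedup over RNN encoders claimed in the abstract.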
