Machine Translation on WMT2014 English-French
This benchmark evaluates models on the test set of the WMT 2014 English-French news translation dataset.
Step 1: Evaluate models locally
First, use our public benchmark library to evaluate your model.
sotabench-eval is a framework-agnostic library that implements the WMT 2014 benchmark. See the sotabench-eval docs here.
Once you can run the benchmark locally, you are ready to connect it to our automatic service.
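WMT 2014 is scored with BLEU. As a rough illustration of what the evaluator computes, here is a simplified corpus-level BLEU sketch in pure Python; this is not the sotabench-eval implementation, and real scoring also handles tokenization and detokenization details:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, with counts."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU with uniform n-gram weights and a brevity penalty.

    references, hypotheses: parallel lists of whitespace-tokenized strings,
    one reference per hypothesis (simplified from the multi-reference case).
    """
    clipped = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # total hypothesis n-grams per order
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        r, h = ref.split(), hyp.split()
        ref_len += len(r)
        hyp_len += len(h)
        for n in range(1, max_n + 1):
            ref_counts = ngrams(r, n)
            hyp_counts = ngrams(h, n)
            # Clip each hypothesis n-gram count by its count in the reference.
            clipped[n - 1] += sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:
        # Any order with zero matches makes the geometric mean zero.
        return 0.0
    log_precision = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    brevity = min(1.0, math.exp(1 - ref_len / hyp_len))
    return 100 * brevity * math.exp(log_precision)
```

A perfect translation scores 100 and a translation sharing no words with the reference scores 0; reported WMT numbers come from the full metric over the entire test set.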
Step 2: Login and connect your GitHub Repository
Connect your GitHub repository to start benchmarking it automatically. Once connected, we'll re-benchmark your
master branch on every commit, giving users confidence in the models in your repository and helping you spot bugs early.