nishiwen1214/SuperGLUE-bert4keras

SuperGLUE benchmark code based on bert4keras

Score: 31 / 100 (Emerging)

This project provides benchmark code for the natural language understanding tasks in the SuperGLUE challenge. Given the SuperGLUE datasets and pre-trained BERT weights, it produces experimental results for the benchmark's language understanding problems. It is intended for researchers and developers working on English natural language understanding models.

No commits in the last 6 months.

Use this if you are a researcher or developer who needs a reliable baseline for evaluating your own advanced English natural language understanding models against the SuperGLUE benchmark.

Not ideal if you are looking for a ready-to-use application rather than development benchmarks, or if your focus is on Chinese language tasks.

natural-language-understanding NLP-benchmarking text-comprehension AI-research language-model-evaluation
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 14
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Jun 25, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/nishiwen1214/SuperGLUE-bert4keras"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
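The same endpoint can be called from any HTTP client. A minimal Python sketch using only the standard library; the URL pattern is taken from the curl example above, but the shape of the JSON response is not documented here, so the `fetch_quality` helper simply decodes whatever the API returns:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (pattern from the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report as JSON (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example: the URL for this repository.
print(quality_url("nlp", "nishiwen1214", "SuperGLUE-bert4keras"))
```

No API key is needed for the free tier, so a plain GET request like the one above is sufficient.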