nishiwen1214/GLUE-bert4keras

GLUE benchmark code based on bert4keras

Quality score: 35 / 100 (Emerging)

This project offers clear and easy-to-understand baseline code for evaluating English language understanding models. It helps researchers and natural language processing practitioners test their models on standard datasets like CoLA, SST-2, and QQP. You provide your language model and the GLUE benchmark datasets, and it outputs performance metrics, enabling you to compare your model's effectiveness against established benchmarks.
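To illustrate the kind of metrics such a baseline reports: CoLA is scored with the Matthews correlation coefficient, while tasks like SST-2 use plain accuracy. A minimal, dependency-free sketch of both metrics (the label lists are made-up example data, not taken from the repository):

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of exact label matches (the metric for SST-2, QQP, etc.)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (CoLA's metric)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Made-up gold labels and predictions, for illustration only
gold = [1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 0, 1, 1]
print(f"accuracy: {accuracy(gold, pred):.3f}")  # 0.667
print(f"MCC:      {matthews_corrcoef(gold, pred):.3f}")  # 0.333
```

A benchmarking harness like this one runs a model over each task's dev set and reports the task's designated metric, so results are directly comparable to published GLUE numbers.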

No commits in the last 6 months.

Use this if you are a researcher or NLP engineer who needs to benchmark your English language understanding models using the GLUE datasets and want a straightforward, high-performing reference implementation.

Not ideal if you are looking for a pre-trained model for immediate application rather than a framework for evaluating your own models.

natural-language-processing text-classification language-understanding model-benchmarking sentiment-analysis
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 65
Forks: 6
Language: Python
License: MIT
Last pushed: Jan 30, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/nishiwen1214/GLUE-bert4keras"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
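The same endpoint can be queried from Python using only the standard library. A sketch, assuming the endpoint returns a JSON object; the field names in `total_score` are guesses based on the four sub-scores shown above, not a documented schema:

```python
import json
from urllib.request import urlopen

API_URL = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
           "nishiwen1214/GLUE-bert4keras")

def fetch_quality(url=API_URL, timeout=10):
    """Fetch the quality record; each call counts against the 100/day free quota."""
    with urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def total_score(subscores):
    """Sum the four hypothetical sub-score fields into the /100 headline score."""
    return sum(subscores[k] for k in ("maintenance", "adoption",
                                      "maturity", "community"))

# The sub-scores listed on this page sum to the 35/100 headline score:
example = {"maintenance": 0, "adoption": 8, "maturity": 16, "community": 11}
print(total_score(example))  # 35
```

Parsing the response yourself keeps the integration dependency-free; swap in `requests` if it is already in your stack.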