gulabpatel/NLP_Basics

NLP basics and advanced text preprocessing concepts and techniques

Score: 27 / 100 (Experimental)

When you're evaluating text generated by AI, such as machine translations, document summaries, or image captions, it can be hard to judge how good the output really is. This project helps you assess and compare the performance of different AI models by providing standard evaluation metrics: it compares the AI-generated text against human-written reference text and produces scores such as BLEU, WER, and ROUGE, showing you where improvements are needed.
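For orientation, here is a minimal sketch (not code from this repository) of how these three metrics are typically computed in Python, assuming the widely used nltk, jiwer, and rouge-score packages are installed:

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from jiwer import wer
from rouge_score import rouge_scorer

reference = "the cat sat on the mat"  # human-written reference text
candidate = "the cat is on the mat"   # AI-generated output

# BLEU: n-gram precision of the candidate against the reference.
bleu = sentence_bleu([reference.split()], candidate.split(),
                     smoothing_function=SmoothingFunction().method1)

# WER: word-level edit distance normalized by reference length.
word_error_rate = wer(reference, candidate)

# ROUGE: recall-oriented overlap (unigrams and longest common subsequence).
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)

print(f"BLEU: {bleu:.3f}  WER: {word_error_rate:.3f}  "
      f"ROUGE-L F1: {rouge['rougeL'].fmeasure:.3f}")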

No commits in the last 6 months.

Use this if you need to objectively measure and compare the quality of text generated by different natural language processing or image captioning systems.

Not ideal if you are looking for a tool to develop or train new NLP models, rather than evaluate their outputs.

Machine Translation, Text Summarization, Content Quality, Performance Metrics, Image Captioning
No License, Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 14 / 25

Stars: 9
Forks: 3
Language: Jupyter Notebook
License: None
Last pushed: Mar 29, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/gulabpatel/NLP_Basics"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
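The same data can be fetched from Python; a hedged sketch, assuming only that the endpoint returns JSON (the response schema is not documented in this listing, so no field names are assumed):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/gulabpatel/NLP_Basics"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surfaces rate limiting or server errors
print(resp.json())       # print the raw response as-is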