gulabpatel/NLP_Basics
Basic and advanced NLP text preprocessing concepts and techniques
When you're evaluating the quality of AI-generated text, such as machine translations, document summaries, or image captions, it can be hard to tell how good the output really is. This project helps you assess and compare the performance of different AI models by providing standard evaluation metrics: it compares the AI-generated text against human-written reference text to produce scores such as BLEU, WER, and ROUGE, showing you where improvements are needed.
No commits in the last 6 months.
Use this if you need to objectively measure and compare the quality of text generated by different natural language processing or image captioning systems.
Not ideal if you are looking for a tool to develop or train new NLP models, rather than evaluate their outputs.
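To make the metrics above concrete, here is a minimal from-scratch sketch of one of them, word error rate (WER), computed as the word-level edit distance divided by the reference length. This is an illustration only, not code taken from the repository:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (classic Levenshtein DP).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

# One missing word out of six reference words -> WER of 1/6
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

Production evaluations typically use an established library (e.g. `jiwer` for WER, `nltk` or `sacrebleu` for BLEU) rather than a hand-rolled implementation, but the core computation is the same.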
Stars
9
Forks
3
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Mar 29, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/gulabpatel/NLP_Basics"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
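If you'd rather call the endpoint from code than from curl, a minimal Python sketch follows. The URL pattern is taken from the example above; the response schema is not documented here, so the actual fetch is left commented out:

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_endpoint(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repo, e.g. category 'nlp'."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_endpoint("nlp", "gulabpatel", "NLP_Basics")

# To fetch (requires network; field names in the JSON are assumptions):
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Remember the rate limit: 100 requests/day without a key, 1,000/day with a free key.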
Higher-rated alternatives
natasha/ipymarkup
NER, syntax markup visualizations
neomatrix369/nlp_profiler
A simple NLP library that allows profiling of datasets with one or more text columns. When given a...
thepushkarp/nalcos
Search Git commits in natural language
lyeoni/nlp-tutorial
A list of NLP (Natural Language Processing) tutorials
NirantK/NLP_Quickbook
NLP in Python with Deep Learning