FakeNewsChallenge/fnc-1-baseline

A baseline implementation for FNC-1

Score: 50 / 100 (Established)

This project lets data scientists and researchers working on content analysis evaluate how well their systems identify the relationship between a news headline and an article body. You provide a dataset of headlines and article bodies along with their known relationships ('agree', 'disagree', 'discuss', or 'unrelated'), and the project outputs a score and a confusion matrix showing how accurately your system categorized those relationships.
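The score follows the weighted metric published for FNC-1: getting the related/unrelated split right earns partial credit, and an exact stance match on a related pair earns full credit. A minimal sketch of that scheme, assuming the standard weights (0.25 for the split, a further 0.75 for the exact related stance):

```python
RELATED = {"agree", "disagree", "discuss"}

def fnc_score(gold, predicted):
    """Weighted FNC-1-style score over paired lists of stance labels."""
    score = 0.0
    for g, p in zip(gold, predicted):
        if g == p:
            score += 0.25          # correct label
            if g != "unrelated":
                score += 0.50      # extra credit for the exact related stance
        if g in RELATED and p in RELATED:
            score += 0.25          # correct side of the related/unrelated split
    return score
```

Under these weights, a correct 'unrelated' prediction scores 0.25, a related pair assigned the wrong related stance scores 0.25, and an exact related match scores 1.0.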

139 stars. No commits in the last 6 months.

Use this if you are developing or testing a system for automatically classifying the stance of news articles and need a standard way to evaluate its performance against a known dataset.

Not ideal if you are looking for a complete, production-ready fake news detection application, or for a tool that produces classifications out of the box without integrating your own model.

content-analysis stance-detection news-verification data-science-evaluation information-credibility
Flags: Stale (6 months) · No package · No dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 24 / 25


Stars: 139
Forks: 101
Language: Python
License: Apache-2.0
Last pushed: Apr 03, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/FakeNewsChallenge/fnc-1-baseline"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
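The same endpoint can be queried from Python with the standard library. The URL path layout below is inferred from the curl example above; the shape of the JSON response is not documented here, so inspect it before relying on specific fields:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Build the endpoint path following the pattern in the curl example
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("nlp", "FakeNewsChallenge", "fnc-1-baseline")

# Uncomment to fetch (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```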