KwaiKEG/CogGPT

Unleashing the Power of Cognitive Dynamics on Large Language Models

Quality score: 37 / 100 (Emerging)

This project helps researchers and developers evaluate how well Large Language Models (LLMs) align with human cognitive processes. It takes article or video information flows as input and outputs scores for 'authenticity' (consistency with human ratings) and 'rationality' (quality of reasoning). Anyone working on designing, evaluating, or improving LLMs to mimic human cognition would use this.

No commits in the last 6 months.

Use this if you are developing or studying Large Language Models and want to systematically measure how their 'thinking' evolves and aligns with human thought patterns over time.

Not ideal if you are looking for an off-the-shelf chatbot, a system for general content generation, or a tool for routine data analysis.

AI-research LLM-evaluation cognitive-modeling natural-language-processing human-AI-interaction
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 13 / 25

How are scores calculated?
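The overall score matches the sum of the four category scores above, each out of 25. A minimal sketch of that apparent formula in Python (the dictionary keys and function name are illustrative, not part of the site's actual API):

```python
# Hypothetical reconstruction: the 37/100 overall score equals the
# sum of the four per-category scores shown above (each out of 25).
CATEGORY_SCORES = {
    "maintenance": 0,
    "adoption": 8,
    "maturity": 16,
    "community": 13,
}

def overall_score(scores):
    """Sum the per-category scores, each assumed to range from 0 to 25."""
    return sum(scores.values())

print(overall_score(CATEGORY_SCORES))  # 37, matching the 37/100 shown above
```

This reproduces the 37/100 figure for this repository, though the site may apply additional weighting not visible here.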

Stars: 63
Forks: 8
Language: Python
License:
Last pushed: Sep 24, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/KwaiKEG/CogGPT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
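The endpoint above appears to follow an `owner/repo` pattern. A small helper for building the URL from a script (the function name is ours; the response schema is not assumed here):

```python
def quality_api_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub-style owner/repo pair.

    Base URL taken from the curl example above; other endpoints are
    not assumed to exist.
    """
    base = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"
    return f"{base}/{owner}/{repo}"

# Reproduces the documented endpoint for this repository:
print(quality_api_url("KwaiKEG", "CogGPT"))
```

Pair this with any HTTP client; with the free tier, no API key header is required.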