parameterlab/c-seo-bench
Source code for "C-SEO Bench: Does Conversational SEO Work?" (NeurIPS Datasets & Benchmarks 2025)
This project helps SEO specialists and content creators determine whether their "Conversational SEO" (C-SEO) strategies for web content are actually effective. It takes your modified web documents and simulates how they perform in conversational search engines (such as Perplexity.ai or Google AI Search), measuring whether they rank higher in the results or recommendations. Marketing teams, content strategists, and SEO professionals can use it to refine their content for conversational AI.
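At its core, the evaluation reduces to a before/after comparison: apply a C-SEO rewrite to one document in a pool of candidates, pose the same query to the conversational engine, and check whether that document moves up among the engine's cited or recommended sources. A minimal sketch of that idea, where the engine interface, the response shape, and all function names are illustrative assumptions rather than this repository's actual API:

    # Minimal sketch of the before/after rank comparison behind C-SEO
    # evaluation. `ask_engine`, the response shape, and all names here
    # are illustrative assumptions, not this repository's actual API.

    def rank_of(doc_id, citations):
        """Position of doc_id in the engine's ranked citations (0 = best);
        returns len(citations) if the document is not cited at all."""
        return citations.index(doc_id) if doc_id in citations else len(citations)

    def rank_delta(query, doc_id, original, rewritten, pool, ask_engine):
        """Negative result means the C-SEO rewrite improved the rank."""
        before = ask_engine(query, {**pool, doc_id: original})["citations"]
        after = ask_engine(query, {**pool, doc_id: rewritten})["citations"]
        return rank_of(doc_id, after) - rank_of(doc_id, before)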
No commits in the last 6 months.
Use this if you are a marketing professional or content strategist who wants to rigorously evaluate whether your Conversational SEO tactics genuinely improve the visibility and ranking of your web documents in AI-driven search.
Not ideal if you focus solely on traditional keyword-based SEO for standard search engines, since this tool specifically targets conversational AI search environments.
Stars: 16
Forks: 3
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Sep 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/parameterlab/c-seo-bench"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
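The same endpoint can be queried programmatically. A short Python sketch, assuming a standard Bearer-token header for the optional key and a JSON response body; both are assumptions, so check the API docs for the real auth scheme and response shape:

    import requests

    API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
               "llm-tools/parameterlab/c-seo-bench")

    def fetch_quality(api_key=None):
        # Bearer-style auth header is an assumption; anonymous calls
        # are limited to 100 requests/day per the note above.
        headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
        resp = requests.get(API_URL, headers=headers, timeout=10)
        resp.raise_for_status()  # raise on 4xx/5xx (e.g. rate limit hit)
        return resp.json()       # JSON body is an assumption

    if __name__ == "__main__":
        print(fetch_quality())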
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)