Quality scores for 220,000+ AI repos
curl "https://pt-edge.onrender.com/api/v1/quality?domain=agents&subcategory=autonomous-research-labs&limit=1"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Sample response
{
"data": [{
"full_name": "openai/openai-agents-python",
"name": "openai-agents-python",
"description": "A lightweight, powerful framework for multi-agent workflows",
"stars": 19951, "forks": 3261,
"language": "Python", "license": "MIT",
"subcategory": "agent-framework-patterns",
"pypi_package": "openai-agents",
"downloads_monthly": 21224096,
"quality_score": 98, "quality_tier": "verified",
"maintenance_score": 25, "adoption_score": 25,
"maturity_score": 25, "community_score": 23
}],
"meta": {
"timestamp": "2026-04-11T12:00:00+00:00",
"count": 1,
"query": {"domain": "agents", "limit": 1}
}
}
Quick Start
Pick your language. No signup needed for the first 100 requests/day.
Python
import requests

API = "https://pt-edge.onrender.com/api/v1"

# Quality scores — no auth needed
resp = requests.get(f"{API}/quality", params={"domain": "agents", "quality_tier": "verified", "limit": 5})
for p in resp.json()["data"]:
    print(f"{p['name']:30s} score={p['quality_score']} stars={p['stars']:,.0f}")
JavaScript
const API = "https://pt-edge.onrender.com/api/v1";

const resp = await fetch(`${API}/quality?domain=agents&quality_tier=verified&limit=5`);
const { data } = await resp.json();
data.forEach(p => console.log(`${p.name} — score=${p.quality_score}, stars=${p.stars.toLocaleString()}`));
curl
curl -s "https://pt-edge.onrender.com/api/v1/quality?domain=agents&quality_tier=verified&limit=5" | python3 -m json.tool
With an API key (1,000 requests/day):
Python
# With API key (1,000 requests/day)
headers = {"Authorization": "Bearer pte_YOUR_KEY"}
resp = requests.get(f"{API}/trending", headers=headers, params={"window": "7d", "limit": 10})
What's In The Data
PT-Edge tracks 220,000+ open-source AI repos across 30 domains. Each repo is scored 0–100 as the sum of four components, each worth up to 25 points: maintenance (commit activity, issue response), adoption (downloads, dependents), maturity (age, stability, documentation), and community (stars, forks, contributors). Data flows in daily from GitHub, PyPI, npm, HuggingFace, and Hacker News.
verified (score 80+)
Battle-tested, high adoption, actively maintained
established (60–79)
Solid projects with real usage
emerging (40–59)
Growing adoption, some production use
experimental (0–39)
Early stage, limited adoption
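The tier cutoffs above map directly from a score. A minimal Python sketch, using only the thresholds listed above:

```python
def quality_tier(score: int) -> str:
    """Map a 0-100 quality score to its tier, per the cutoffs above."""
    if score >= 80:
        return "verified"
    if score >= 60:
        return "established"
    if score >= 40:
        return "emerging"
    return "experimental"

print(quality_tier(98))  # the sample repo above: verified
```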
Rate Limits
| Tier | Daily Limit | How to get it |
|---|---|---|
| Anonymous | 100/day | Just call the API |
| Free key | 1,000/day | POST /api/v1/keys |
| Pro | 10,000/day | POST /api/v1/keys with email |
All tiers are free. All tiers get the same data. Resets at midnight UTC.
Get a Key
curl
# 1,000 requests/day — instant, no email
curl -X POST https://pt-edge.onrender.com/api/v1/keys

# 10,000 requests/day — just add your email
curl -X POST https://pt-edge.onrender.com/api/v1/keys \
  -H "Content-Type: application/json" \
  -d '{"email": "you@company.com"}'
Python
import requests

resp = requests.post("https://pt-edge.onrender.com/api/v1/keys")
key = resp.json()["data"]["key"]  # pte_...
print(f"Your key: {key}")
Then pass your key as a Bearer token: Authorization: Bearer pte_...
Core Endpoints
/api/v1/quality
Quality scores for AI projects across all 30 domains. The core endpoint.
| Param | Type | Description |
|---|---|---|
| domain | query, required | mcp, agents, rag, ai-coding, voice-ai, diffusion, vector-db, embeddings, prompt-engineering, ml-frameworks, llm-tools, nlp, transformers, generative-ai, computer-vision, data-engineering, mlops, perception, llm-inference, ai-evals, fine-tuning, document-ai, ai-safety, recommendation-systems, audio-ai, synthetic-data, time-series, multimodal, 3d-ai, scientific-ml |
| subcategory | query, optional | Filter by category within the domain. Use GET /api/v1/quality?domain=agents to discover available subcategories in the response data, or see the reference table below. |
| quality_tier | query, optional | verified, established, emerging, experimental |
| min_score | query, optional | Minimum quality score (0–100) |
| limit | query, optional | Results per page (1–500, default 50) |
curl "https://pt-edge.onrender.com/api/v1/quality?domain=ml-frameworks&quality_tier=verified&limit=5"
/api/v1/trending
Star velocity leaderboard — projects gaining the most GitHub stars.
| Param | Type | Description |
|---|---|---|
| window | query, optional | 7d or 30d (default 7d) |
| domain | query, optional | Filter by domain (same values as /quality) |
| stack_layer | query, optional | model, inference, orchestration, data, eval, interface, infra |
| category | query, optional | Filter by category |
| limit | query, optional | 1–50 (default 20) |
curl "https://pt-edge.onrender.com/api/v1/trending?window=7d&domain=agents&limit=5"
/api/v1/projects/{slug}
Full project detail: GitHub metrics, downloads, tier, lifecycle stage, momentum, hype ratio, releases.
curl "https://pt-edge.onrender.com/api/v1/projects/langchain"
/api/v1/datasets/quality
Bulk quality scores for data pipelines. High limits (2,000/page), publicly cacheable, no auth required.
| Param | Type | Description |
|---|---|---|
| domain | query, required | Same domains as /quality |
| subcategory | query, optional | Filter by subcategory |
| quality_tier | query, optional | verified, established, emerging, experimental |
| limit | query, optional | 1–2000 (default 500) |
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=mcp&quality_tier=verified&limit=100"
More Endpoints
| Method | Path | Description |
|---|---|---|
| GET | /api/v1/quality/{domain}/{repo} | Single repo quality lookup |
| GET | /api/v1/status | Orientation: tables, repo count, domains, freshness |
| GET | /api/v1/tables | Database tables with row counts |
| GET | /api/v1/workflows | Pre-built SQL recipe templates |
| POST | /api/v1/query | Run a read-only SQL query |
| POST | /api/v1/feedback | Submit feedback |
| POST | /api/v1/keys | Create API key (no auth required) |
Recipes
Copy-paste Python examples for common workflows.
Find the best tools in a domain
import requests

API = "https://pt-edge.onrender.com/api/v1"

# Top verified RAG frameworks
resp = requests.get(f"{API}/quality", params={"domain": "rag", "quality_tier": "verified", "limit": 10})
for p in resp.json()["data"]:
    print(f"{p['full_name']:40s} score={p['quality_score']} downloads={p['downloads_monthly']:,.0f}")
Track what's trending
# This week's fastest-growing projects
resp = requests.get(f"{API}/trending", params={"window": "7d", "limit": 10})
for p in resp.json()["data"]:
    print(f"{p['name']:25s} +{p['stars_7d_delta']:,.0f} stars lifecycle={p['lifecycle_stage']}")
Domains & Enums Reference
Domains (30)
mcp, agents, rag, ai-coding, voice-ai, diffusion, vector-db, embeddings, prompt-engineering, ml-frameworks, llm-tools, nlp, transformers, generative-ai, computer-vision, data-engineering, mlops, perception, llm-inference, ai-evals, fine-tuning, document-ai, ai-safety, recommendation-systems, audio-ai, synthetic-data, time-series, multimodal, 3d-ai, scientific-ml
Enum Values
| Field | Values |
|---|---|
| quality_tier | verified, established, emerging, experimental |
| stack_layer | model, inference, orchestration, data, eval, interface, infra |
| lifecycle_stage | emerging, launching, growing, established, fading, dormant |
Response Format
Every response returns the same JSON envelope; data is a list for collection endpoints (as in the sample at the top) and an object for single-resource lookups:
{
"data": { ... },
"meta": {
"timestamp": "2026-04-03T12:00:00+00:00",
"count": 20,
"query": { "domain": "agents", "limit": 20 }
}
}
Every response includes rate limit headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 99
X-RateLimit-Reset: 2026-04-04T00:00:00+00:00
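A client can read these headers to stay under its daily budget. A minimal sketch; the header values below are the documented examples, and in practice the dict would be `resp.headers` from any call to the API:

```python
from datetime import datetime

def requests_left(headers) -> int:
    """Read the documented X-RateLimit-Remaining header (0 when exhausted)."""
    return int(headers.get("X-RateLimit-Remaining", 0))

# Headers as they'd appear on a fresh anonymous response
headers = {
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "99",
    "X-RateLimit-Reset": "2026-04-04T00:00:00+00:00",
}

if requests_left(headers) == 0:
    # The reset timestamp is ISO 8601, so the stdlib parses it directly
    reset = datetime.fromisoformat(headers["X-RateLimit-Reset"])
    print(f"budget exhausted until {reset}")
else:
    print(f"{requests_left(headers)} requests left today")
```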
Errors return structured JSON:
// 429 Rate limit
{ "detail": { "error": { "code": "rate_limit_exceeded", "message": "..." } } }

// 422 Invalid parameter
{ "detail": { "error": { "code": "invalid_domain", "message": "..." } } }
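Client code can branch on the machine-readable code rather than the status line. A minimal sketch against the error shape shown above (the message text here is illustrative):

```python
def error_code(payload: dict) -> str:
    """Pull the machine-readable code out of an error response body."""
    return payload["detail"]["error"]["code"]

body = {"detail": {"error": {"code": "rate_limit_exceeded", "message": "Daily limit reached"}}}

if error_code(body) == "rate_limit_exceeded":
    print("back off until midnight UTC")  # limits reset at midnight UTC
```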