Kushalk0677/Inference-Energy-and-Latency-in-AI-Mediated-Education-Green-Audit
Empirical study of inference energy, latency, and pedagogical quality for FP16 vs NF4 edge SLMs in AI tutoring — introducing the Learning-per-Watt (LpW) metric across GPU and CPU platforms.
Stars: 2
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Mar 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Kushalk0677/Inference-Energy-and-Latency-in-AI-Mediated-Education-Green-Audit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
opentensor/bittensor
Internet-scale Neural Networks
trailofbits/fickling
A Python pickling decompiler and static analyzer
benchopt/benchopt
A framework for reproducible, comparable benchmarks
BiomedSciAI/fuse-med-ml
A python framework accelerating ML based discovery in the medical field by encouraging code...
mosaicml/streaming
A Data Streaming Library for Efficient Neural Network Training