Yangyi-Chen/PaperList-Trustworthy-Applications
A list mostly recording papers on models' trustworthy applications, intended to cover topics such as model evaluation & analysis, security, calibration, backdoor learning, and robustness.
This is a curated collection of academic papers focused on making AI models more reliable and safe in real-world applications. It helps researchers and practitioners quickly find relevant studies on topics like evaluating model performance, ensuring data privacy, and improving model robustness. You'll find a structured list of papers covering various aspects of trustworthy AI.
No commits in the last 6 months.
Use this if you are an AI researcher or practitioner looking for an organized bibliography of papers on model trustworthiness, evaluation, and security.
Not ideal if you are looking for executable code, a software library, or a tool to directly analyze or build AI models.
Stars: 21
Forks: 1
Language: —
License: —
Category:
Last pushed: May 30, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Yangyi-Chen/PaperList-Trustworthy-Applications"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
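The curl command above can also be wrapped in a small Python helper. This is a minimal sketch: the base path and owner/repo URL pattern are taken verbatim from the curl example, while the function name is illustrative and the response schema is not assumed.

```python
# Base path taken from the curl example above; the owner/repo segments
# are appended to it to form the full endpoint URL.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub owner/repo pair.

    Fetching and parsing the response (e.g. with urllib.request or
    requests) is left to the caller, since the response format is not
    documented here.
    """
    return f"{BASE_URL}/{owner}/{repo}"

print(quality_url("Yangyi-Chen", "PaperList-Trustworthy-Applications"))
```

The printed URL matches the one used in the curl example, so the helper can be dropped into any HTTP client of choice.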
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods, implemented via cvxpy and PyTorch.
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...