nesaorg/nesa
Run AI models end-to-end encrypted.
This project allows organizations to use powerful AI models like Llama or Stable Diffusion without ever exposing their sensitive data. You provide your confidential inputs, and the system delivers AI-driven results while ensuring that your cloud provider, or any other external party, cannot see the original data or your queries. It's designed for enterprises, healthcare providers, and financial institutions that handle highly sensitive information.
3,070 stars. No commits in the last 6 months.
Use this if you need to leverage advanced AI models but are legally or ethically restricted from sharing your input data or queries with cloud providers due to privacy concerns.
Not ideal if your AI workload does not involve sensitive or regulated data, or if you prefer managing on-premise hardware for privacy rather than using an API.
Stars
3,070
Forks
239
Language
Python
License
—
Category
ml-frameworks
Last pushed
Feb 10, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nesaorg/nesa"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
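The same endpoint can be called from Python. A minimal sketch using only the standard library; the URL path comes from the curl example above, but the response schema is not documented here, so the fetch is left as a commented-out step:

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the documented endpoint URL, escaping each path segment.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "nesaorg", "nesa")

# To fetch the data over the network (schema of `data` is not shown here):
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

The optional API key mentioned above would raise the limit to 1,000 requests/day; how the key is passed (header or query parameter) is not specified here, so check the service's own docs before wiring it in.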
Higher-rated alternatives
tensorflow/privacy
Library for training machine learning models with privacy for training data
meta-pytorch/opacus
Training PyTorch models with differential privacy
tf-encrypted/tf-encrypted
A Framework for Encrypted Machine Learning in TensorFlow
awslabs/fast-differential-privacy
Fast, memory-efficient, scalable optimization of deep learning with differential privacy
privacytrustlab/ml_privacy_meter
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning...