lorenzorovida/FHE-BERT-Tiny
Source code for the paper "Transformer-based Language Models and Homomorphic Encryption: an intersection with BERT-tiny"
This project lets developers perform sentiment analysis on text while the content stays encrypted. It takes plain text as input, evaluates a neural network homomorphically on the encrypted representation, and outputs a sentiment classification without ever exposing the original text to the server. Data privacy officers, cloud service providers, and anyone who must analyze sensitive text without compromising confidentiality would find this useful.
Use this if you need to classify the sentiment of text, such as customer feedback or private communications, but regulatory or ethical concerns require the data to remain encrypted during analysis.
Not ideal if your primary concern is high-speed sentiment analysis on publicly available or non-sensitive data, as the encryption process adds computational overhead.
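The client/server roles in this encrypted-inference workflow can be sketched with a toy example. This is not real homomorphic encryption and not the repo's scheme; it uses simple additive blinding on a hypothetical linear classifier, purely to show how a server can compute a score without seeing the plaintext features:

```python
# Toy sketch of privacy-preserving inference via additive blinding.
# NOTE: this stands in for real FHE only to illustrate the workflow;
# all weights and features below are made-up example values.
import random

def client_encrypt(features):
    """Client masks each feature with a random value it keeps secret."""
    mask = [random.uniform(-100.0, 100.0) for _ in features]
    blinded = [x + r for x, r in zip(features, mask)]
    return blinded, mask

def server_score(blinded, weights, bias):
    """Server evaluates a linear classifier on the blinded features only."""
    return sum(w * x for w, x in zip(weights, blinded)) + bias

def client_decrypt(blinded_score, mask, weights):
    """Client removes the mask's contribution to recover the true score."""
    return blinded_score - sum(w * r for w, r in zip(weights, mask))

# Hypothetical sentiment weights: positive score => positive sentiment.
weights, bias = [0.8, -1.2, 0.5], 0.1
features = [1.0, 0.0, 2.0]  # e.g. an embedded representation of the text

blinded, mask = client_encrypt(features)
score = client_decrypt(server_score(blinded, weights, bias), mask, weights)
print("positive" if score > 0 else "negative")
```

The real project replaces the linear classifier with a BERT-tiny network and the blinding with a proper FHE scheme, but the division of labor is the same: only the client ever handles unencrypted data.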
Stars: 32
Forks: 12
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Oct 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/lorenzorovida/FHE-BERT-Tiny"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
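The same request can be made from Python. Only the base URL and path come from this page; the helper name, the response schema, and the authorization header used for keyed access are assumptions:

```python
# Minimal client for the quality API shown above (stdlib only).
# The endpoint URL is taken from this page; the JSON shape of the
# response and the auth header name are assumptions.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner, repo):
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch the quality record; a key (hypothetical header) lifts the daily limit."""
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Header name is an assumption, not documented on this page.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage (performs a network request):
# data = fetch_quality("lorenzorovida", "FHE-BERT-Tiny")
```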
Higher-rated alternatives
huggingface/text-generation-inference
Large Language Model Text Generation Inference
OpenMachine-ai/transformer-tricks
A collection of tricks and tools to speed up transformer models
poloclub/transformer-explainer
Transformer Explained Visually: Learn How LLM Transformer Models Work with Interactive Visualization
IBM/TabFormer
Code & Data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP, 2021)
tensorgi/TPA
[NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6)...