shahrukhx01/bert-probe
BERT Probe: A Python package for probing the robustness of attention-based models against character- and word-level adversarial attacks, with recipes for implicit and explicit defenses against character-level attacks.
This package helps machine learning engineers and researchers assess how robust BERT models are to subtle changes in text, like typos or word substitutions, that adversaries might use. It takes a trained BERT model and a dataset, then exposes how the model's predictions change when presented with adversarial examples. The output provides insights into the model's vulnerabilities and suggests strategies to make it more resilient.
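The package's own API is not shown on this page, but the kind of character-level evaluation it performs can be sketched in plain Python: perturb inputs with typo-style noise and measure how often a classifier's prediction flips. The `char_perturb` and `flip_rate` helpers below are illustrative names, not part of bert-probe.

```python
import random

def char_perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Apply simple character-level perturbations (adjacent-letter swaps)
    to mimic typo-style adversarial noise. Illustrative, not bert-probe's API."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        # Swap adjacent characters with probability `rate`, skipping
        # non-letters so word boundaries and punctuation survive.
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def flip_rate(predict, texts, rate: float = 0.1) -> float:
    """Fraction of inputs whose predicted label changes under perturbation.
    `predict` is any callable mapping a string to a label."""
    flips = sum(predict(t) != predict(char_perturb(t, rate)) for t in texts)
    return flips / len(texts)
```

With a real model, `predict` would wrap a fine-tuned BERT classifier (e.g. a Hugging Face `transformers` text-classification pipeline); a high flip rate under small perturbation rates signals the vulnerability this package is designed to expose.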
No commits in the last 6 months.
Use this if you need to evaluate the resilience of your BERT-based text classification models against adversarial attacks and implement defenses to improve their robustness.
Not ideal if you are looking for a general-purpose BERT training framework or a tool to simply fine-tune BERT for standard tasks without a focus on adversarial robustness.
Stars
18
Forks
3
Language
Jupyter Notebook
License
Apache-2.0
Category
Last pushed
Jun 24, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/shahrukhx01/bert-probe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Tongjilibo/bert4torch
An elegant PyTorch implementation of transformers
nyu-mll/jiant
jiant is an NLP toolkit
lonePatient/TorchBlocks
A PyTorch-based toolkit for natural language processing
monologg/JointBERT
PyTorch implementation of JointBERT: "BERT for Joint Intent Classification and Slot Filling"
grammarly/gector
Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite"...