Zhang-Yihao/Adversarial-Representation-Engineering

Official implementation repository for the paper "Towards General Conceptual Model Editing via Adversarial Representation Engineering".

Quality score: 30 / 100 (Emerging)

This project helps AI researchers and developers modify the behavior of large language models (LLMs). It takes an LLM plus instructions for desired behavioral changes (e.g., reducing harmful responses or hallucinations) and outputs a modified model that adheres to those guidelines. It is aimed at people building or deploying LLMs who need to tune model behavior for safety and accuracy.

No commits in the last 6 months.

Use this if you are a researcher or developer working with large language models and need to specifically control their output behavior, such as minimizing harmful content or factual errors.

Not ideal if you are an end-user of an AI application or are looking for a no-code solution to general LLM fine-tuning.

Tags: LLM-alignment, AI-safety, model-editing, hallucination-reduction, AI-ethics
Status: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 8 / 25
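
The overall score is the sum of the four subscores: 0 + 6 + 16 + 8 = 30 out of 100.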

Stars: 19
Forks: 2
Language: Python
License: MIT
Last pushed: Dec 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Zhang-Yihao/Adversarial-Representation-Engineering"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
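
For programmatic access, here is a minimal Python sketch using only the standard library. It assumes the endpoint returns a JSON body; the exact response schema is not documented here, so the script simply pretty-prints whatever comes back.

import json
import urllib.request

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/transformers/"
    "Zhang-Yihao/Adversarial-Representation-Engineering"
)

# Fetch the quality report. No API key is needed for up to 100 requests/day.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumes a JSON response body

# Pretty-print the full payload; adapt once the schema is known.
print(json.dumps(data, indent=2))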