dkopi/Bitune
Implementation of Bitune: Bidirectional Instruction-Tuning
This project lets machine learning researchers reproduce the results of the Bitune paper on improving large language models. It takes a base language model and instruction-tuning datasets as input and produces a fine-tuned model with better performance on downstream tasks. It is aimed primarily at academic researchers and engineers working on language model development; a generic sketch of this workflow appears after the usage notes below.
No commits in the last 6 months.
Use this if you are a researcher aiming to validate or build upon the 'Bitune: Bidirectional Instruction-Tuning' paper's findings using the original implementation.
Not ideal if you are looking for a user-friendly tool to fine-tune a language model without deep technical knowledge of the underlying research and implementation details.
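To make the input/output flow above concrete, here is a minimal, generic supervised fine-tuning sketch using the Hugging Face transformers and datasets libraries. It is not the Bitune method implemented in this repository; the base model "gpt2", the dataset "tatsu-lab/alpaca", and all hyperparameters are placeholder assumptions chosen only to illustrate the base-model-plus-instruction-data in, fine-tuned-model out workflow.

# Generic instruction-tuning sketch, NOT the Bitune procedure itself.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"                                # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder instruction data; its "text" column combines instruction, input, and response.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:500]")
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-sketch", max_steps=20,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("sft-sketch")                   # the fine-tuned model is the output

The repository's own training scripts replace this generic loop with Bitune's bidirectional instruction-tuning procedure; consult the code and paper for the actual method.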
Stars
25
Forks
2
Language
Python
License
—
Category
transformers
Last pushed
Jun 19, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/dkopi/Bitune"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
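The same data can also be fetched programmatically. Below is a minimal Python sketch using the requests library, assuming the endpoint returns JSON; the specific response fields are not documented here, so inspect the payload before relying on particular keys.

# Fetch the quality data for dkopi/Bitune (assumes a JSON response).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/dkopi/Bitune"
response = requests.get(url, timeout=10)
response.raise_for_status()        # fail loudly on rate limits or server errors
data = response.json()
print(data)                        # inspect the returned fields first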
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...