aniquetahir/JORA
JORA: JAX Tensor-Parallel LoRA Library (ACL 2024)
This tool helps machine learning engineers and researchers efficiently fine-tune large language models, specifically Llama-2 and Gemma, for retrieval-based tasks such as Retrieval-Augmented Generation (RAG). It takes an existing large language model and a custom dataset as input and outputs a fine-tuned model optimized for a specific downstream application. It is well suited to workloads with long prompt sequences that need to adapt LLMs without consuming excessive GPU memory.
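The core technique JORA builds on is LoRA: a frozen pretrained weight matrix is adapted by training a small low-rank update instead of the full matrix. The sketch below illustrates that idea in plain JAX; it is a generic LoRA forward pass for illustration only, not JORA's actual API (function and variable names here are hypothetical).

```python
import jax
import jax.numpy as jnp

def lora_linear(x, W, A, B, alpha=16.0):
    """LoRA forward pass: y = x @ W + (alpha / r) * x @ A @ B.

    W is the frozen pretrained weight; only the low-rank
    factors A (d_in x r) and B (r x d_out) are trained.
    """
    r = A.shape[1]
    return x @ W + (alpha / r) * (x @ A) @ B

key = jax.random.PRNGKey(0)
d_in, d_out, r = 64, 64, 8

W = jax.random.normal(key, (d_in, d_out))      # frozen pretrained weight
A = jax.random.normal(key, (d_in, r)) * 0.01   # trainable low-rank factor
B = jnp.zeros((r, d_out))                      # zero-init: adapter starts as a no-op

x = jax.random.normal(key, (2, d_in))
y = lora_linear(x, W, A, B)
print(y.shape)  # (2, 64)
```

Because B is zero-initialized, the adapted layer initially reproduces the frozen model exactly; training then only updates A and B, which is what keeps the memory footprint small. JORA additionally shards these computations across devices with JAX tensor parallelism.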
No commits in the last 6 months.
Use this if you need to fine-tune large language models for RAG or other retrieval-based tasks and are constrained by GPU memory or desire significantly faster training times.
Not ideal if you are looking for a general-purpose LLM training library that doesn't focus on tensor-parallelism or specific memory optimization for retrieval tasks, or if you prefer a PyTorch-only ecosystem.
Stars: 35
Forks: 1
Language: Python
License: —
Category:
Last pushed: Apr 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/aniquetahir/JORA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.