FineTuningLLMs and LLM-Finetuning
The two repositories are complements. dvgodoy/FineTuningLLMs provides the educational material and worked examples for the fine-tuning workflows that ashishpatel26/LLM-Finetuning implements with PEFT (Parameter-Efficient Fine-Tuning), so users often follow the book's guidance while reusing the repository's code patterns.
About FineTuningLLMs
dvgodoy/FineTuningLLMs
Official repository of my book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face"
This hands-on guide helps data scientists and machine learning engineers develop specialized Large Language Models (LLMs) from existing base models. It takes raw text data, applies techniques like quantization and low-rank adaptation, and outputs a custom-tuned LLM ready for specific tasks. This is for professionals who need to adapt powerful AI models to unique datasets or niche applications.
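Quantization, one of the techniques mentioned above, shrinks a model's memory footprint by storing weights in a low-precision integer format. As a rough illustration (not code from the book), here is a minimal NumPy sketch of symmetric absmax int8 quantization, the simplest scheme of the kind that libraries like bitsandbytes build on:

```python
import numpy as np

def quantize_absmax_int8(w: np.ndarray):
    """Symmetric absmax quantization: map floats to int8 via one per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in for an LLM weight matrix
q, scale = quantize_absmax_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and rounding error per weight
# is bounded by half the quantization step (scale / 2)
print(q.dtype, w.nbytes // q.nbytes, float(np.max(np.abs(w - w_hat))))
```

Real LLM quantization is finer-grained (per-channel or per-block scales, 4-bit formats such as NF4), but the round-to-a-grid-and-rescale idea is the same.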
About LLM-Finetuning
ashishpatel26/LLM-Finetuning
LLM Finetuning with peft
This project helps machine learning engineers and researchers adapt large language models (LLMs) like Llama 2, Falcon, or GPT-Neo-X to perform specific tasks using their own custom datasets. You provide an existing LLM and your unique text data, and it outputs a specialized version of that model ready for tasks such as answering domain-specific questions, generating tailored text, or improving chatbot performance. This is for professionals who need to customize powerful AI models without starting from scratch.
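The "without starting from scratch" point is what LoRA-style PEFT delivers: the pretrained weight matrix stays frozen, and only a small low-rank update is trained. The sketch below (a NumPy illustration with made-up sizes, not code from either repository) shows the core arithmetic that `peft` applies inside each adapted layer:

```python
import numpy as np

rng = np.random.default_rng(42)
d, r = 512, 8  # hypothetical hidden size and LoRA rank

W = rng.normal(size=(d, d)).astype(np.float32)              # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d)).astype(np.float32)  # trainable down-projection
B = np.zeros((d, r), dtype=np.float32)  # zero-init, so the adapter starts as a no-op
alpha = 16.0                            # LoRA scaling factor

def forward(x: np.ndarray) -> np.ndarray:
    # base path plus the low-rank update B @ A, scaled by alpha / r
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d)).astype(np.float32)
y_base = x @ W.T
y_adapted = forward(x)  # identical to y_base until B is trained

full_params = W.size           # parameters touched by full fine-tuning
lora_params = A.size + B.size  # parameters the adapter actually trains
print(np.allclose(y_base, y_adapted), full_params // lora_params)
```

With these numbers the adapter trains 32x fewer parameters than full fine-tuning, and the gap widens as the hidden size grows relative to the rank; that is why a single GPU can fine-tune models like Llama 2 this way.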