sinanuozdemir/oreilly-optimizing-llms
Optimizing LLMs with Fine-Tuning and Prompt Engineering
This project provides practical code examples for machine learning engineers and software developers who want to enhance large language models (LLMs). It shows how to tailor LLMs to specific tasks by fine-tuning them on custom datasets, and how to improve output quality through prompt engineering techniques. Users will learn to optimize LLMs such as GPT to produce more precise and relevant results in real-world applications.
Use this if you are a machine learning engineer or software developer looking to improve the performance and precision of large language models for specific applications.
Not ideal if you are new to machine learning or large language models and are looking for a conceptual introduction rather than hands-on optimization techniques.
Stars: 88
Forks: 65
Language: Jupyter Notebook
License: —
Category:
Last pushed: Dec 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sinanuozdemir/oreilly-optimizing-llms"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase