daekeun-ml/genai-ko-LLM
This hands-on lab walks you through serving and fine-tuning large-scale Korean language models on AWS infrastructure, step by step.
This project helps developers and MLOps engineers efficiently deploy and fine-tune large Korean language models on AWS. It provides guides for preparing instruction datasets, debugging fine-tuning locally, and then scaling training up on SageMaker, along with methods for serving the resulting models through various optimized containers for fast, distributed inference.
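As a rough illustration of the scale-up step, here is a minimal SageMaker fine-tuning sketch. This is not code from the repo: the script name, source directory, instance type, base model ID, and hyperparameters are all placeholder assumptions.

# A minimal sketch of the kind of SageMaker fine-tuning job this repo walks through.
# Everything below except the SageMaker SDK calls themselves is an illustrative assumption.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

estimator = HuggingFace(
    entry_point="train.py",         # hypothetical fine-tuning script
    source_dir="./scripts",         # hypothetical directory containing it
    instance_type="ml.g5.2xlarge",  # single-GPU instance; larger models need bigger instances
    instance_count=1,
    role=role,
    transformers_version="4.28",
    pytorch_version="2.0",
    py_version="py310",
    hyperparameters={
        "model_id": "EleutherAI/polyglot-ko-1.3b",  # an example Korean base model, not the repo's choice
        "epochs": 3,
    },
)

# Launch training on an instruction dataset previously uploaded to S3 (placeholder path).
estimator.fit({"train": "s3://your-bucket/korean-instruction-dataset/"})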
No commits in the last 6 months.
Use this if you are an AI/ML developer or MLOps engineer looking to build applications with Korean large language models and need guidance on fine-tuning and deploying them efficiently on AWS infrastructure.
Not ideal if you are looking for a pre-built application that uses Korean LLMs, rather than tools and guidance for developing and deploying such applications yourself.
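The description above also mentions serving through optimized containers. As a hedged sketch of what a basic SageMaker deployment looks like, here is the plain Hugging Face inference container path; the repo itself covers several containers tuned for fast, distributed inference, and the artifact path and instance type below are placeholders.

# A minimal sketch of hosting a fine-tuned model behind a SageMaker endpoint
# using the standard Hugging Face inference container.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

model = HuggingFaceModel(
    model_data="s3://your-bucket/model/model.tar.gz",  # placeholder artifact path
    role=role,
    transformers_version="4.28",
    pytorch_version="2.0",
    py_version="py310",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # placeholder GPU instance
)

# "Hello, please introduce yourself" in Korean; text-generation payloads use the "inputs" key.
print(predictor.predict({"inputs": "안녕하세요, 자기소개를 해주세요."}))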
Stars
26
Forks
8
Language
Jupyter Notebook
License
MIT
Category
Generative AI
Last pushed
Feb 08, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/daekeun-ml/genai-ko-LLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
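If you would rather call the endpoint from Python, here is a minimal sketch using the requests library. The response schema is not documented on this page, so the sketch just prints the raw JSON.

# Fetch this card's quality data from the documented endpoint.
# No API key is needed for up to 100 requests/day.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/generative-ai/daekeun-ml/genai-ko-LLM"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors instead of parsing an error page
print(resp.json())       # schema not documented here, so just dump the payload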
Related tools
keanteng/sesame-csm-elise
Fine-Tuning Sesame CSM With Elise. Enjoy the voice ( ̄︶ ̄)↗
ksm26/Quantization-Fundamentals-with-Hugging-Face
Learn linear quantization techniques using the Quanto library and downcasting methods with the...
just4give/llm-sagemaker-fargate-api
This repository contains two major projects that work together to deploy and serve Large...
simran-padam/FineTuningLlama
Fine-tuning Llama to create a versatile chatbot
jwest33/lora_craft
An open-source web application for fine-tuning large language models using Low-Rank Adaptation...