VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
This project provides access to pre-trained large language models for source code: you supply a snippet of code, and a model completes it with plausible suggestions. It is aimed at software developers, researchers, and anyone working with code who wants help with code generation, completion, or analysis.
1,842 stars. No commits in the last 6 months.
Use this if you need to integrate powerful, pre-trained code generation capabilities into your development workflow or research projects, particularly for tasks like autocompletion or code suggestion.
Not ideal if you are looking for an out-of-the-box, end-user application for code generation without any programming or machine learning setup.
Stars: 1,842
Forks: 265
Language: Python
License: MIT
Category: transformers
Last pushed: Jul 07, 2024
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VHellendoorn/Code-LMs"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
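The curl call above can also be made from Python. This is a minimal sketch assuming the endpoint returns JSON and that the `transformers` path segment is the repository's category; the response fields are not documented here, so the example just prints the raw payload.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict (raises URLError on network failure)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example; pretty-print whatever comes back.
    print(json.dumps(fetch_quality("transformers", "VHellendoorn", "Code-LMs"), indent=2))
```

Unauthenticated calls count against the 100/day limit; with a key, you would typically pass it in a request header, but the exact header name is not documented here.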
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation; lets you steer the topic and attributes of GPT-2 output.
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning.
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.
SmallDoges/small-doge
Doge Family of Small Language Models