SmerkyG/gptcore

Fast modular code to create and train cutting edge LLMs

Score: 38 / 100 (Emerging)

This project helps machine learning researchers and practitioners rapidly experiment with and train large language models (LLMs). You can take state-of-the-art model architectures, customize them with various components, and train them using publicly available datasets streamed directly from the web. The output is a trained LLM and insights into its performance, enabling quick iteration on model design.

No commits in the last 6 months.

Use this if you are an AI researcher or machine learning engineer focused on developing, training, and comparing new or existing large language models quickly and efficiently.

Not ideal if you are looking for a simple, out-of-the-box solution for deploying pre-trained LLMs for direct application rather than for research and development.

Tags: LLM training, AI research, Deep learning experimentation, Model architecture design, Natural Language Processing
Badges: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 68
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: May 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/SmerkyG/gptcore"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.