loke-x/jam-gpt

An experimental reimplementation of LLMs for research and development

Score: 33 / 100 (Emerging)

This project helps AI researchers and developers explore and understand the inner workings of Large Language Models (LLMs). You can input your own datasets to train and fine-tune experimental Generative Pretrained Transformers (GPT) models, gaining insights into their architecture and design. It's designed for individuals building or experimenting with LLM models for research purposes.

No commits in the last 6 months.

Use this if you are an AI researcher or developer looking to experiment with LLM architectures, train custom models with your own data, and understand their underlying principles.

Not ideal if you need a pre-trained, production-ready LLM model for immediate use or if you are not interested in the detailed architecture and training process.

Tags: LLM-research, Generative-AI-development, Deep-learning-experimentation, Transformer-model-design, Custom-model-training

Flags: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 21
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Aug 20, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/loke-x/jam-gpt"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
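As a convenience, the curl call above can be wrapped in a small script. This is a minimal sketch: the endpoint path comes from the example, but the structure of the JSON response (and any field names in it) is an assumption, not documented here.

```python
# Sketch: fetch a repo's quality report from the public API.
# The URL shape mirrors the curl example; the response is assumed
# to be JSON (field names not confirmed by this page).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the report URL for a repo in a given category."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (no API key: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("llm-tools", "loke-x", "jam-gpt")` requests the same URL as the curl command above; pass an API key (if you have one) however the service expects, since the authentication mechanism is not described on this page.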