mspronesti/llm.sycl

llm.c, but in SYCL/Intel oneAPI!

Score: 20 / 100 (Experimental)

This project helps machine learning engineers and researchers accelerate the training of large language models. It takes an existing model implementation, specifically GPT-2, and enables its execution on SYCL-compatible hardware, including Intel, NVIDIA, and AMD GPUs, yielding a more portable and efficient training process.

No commits in the last 6 months.

Use this if you need to train large language models like GPT-2 more efficiently across different GPU architectures, particularly those supported by SYCL/Intel oneAPI.

Not ideal if you are looking for a high-level API for general machine learning tasks or if you are not comfortable with command-line compilation and execution of deep learning kernels.

deep-learning large-language-models gpu-acceleration model-training high-performance-computing
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: C++
License: MIT
Last pushed: Aug 05, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mspronesti/llm.sycl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
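The same endpoint can be called programmatically. A minimal Python sketch using only the standard library; the helper names are mine, and the assumption that the endpoint returns JSON is not confirmed by the API's documentation:

```python
import json
import urllib.request

# Base of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def scorecard_url(ecosystem: str, repo: str) -> str:
    """Build the scorecard URL for a repo, mirroring the curl example
    (e.g. ecosystem 'transformers', repo 'mspronesti/llm.sycl')."""
    return f"{BASE}/{ecosystem}/{repo}"

def fetch_scorecard(ecosystem: str, repo: str) -> dict:
    """Fetch the scorecard; assumes a JSON response body.
    The free tier allows 100 requests/day without an API key."""
    with urllib.request.urlopen(scorecard_url(ecosystem, repo)) as resp:
        return json.load(resp)

print(scorecard_url("transformers", "mspronesti/llm.sycl"))
```

With a free API key the daily limit rises to 1,000 requests; how the key is passed (header or query parameter) is not specified here, so check the service's documentation before relying on it.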