# Uncategorized Transformer Models
This page tracks 25 uncategorized models. One scores above 70 (Verified tier). The highest-rated is Dao-AILab/flash-attention at 86/100 with 23,131 stars. One of the top 10 is actively maintained.
Fetch the full list as JSON (set `limit=25` to return all 25 entries):

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=uncategorized&limit=25"
```

The endpoint is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000/day.
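The same query can be issued programmatically. A minimal Python sketch that builds the request URL from the three filter parameters shown in the curl command; the response schema is not documented on this page, so the actual fetch-and-decode step is left as a comment to be adapted after inspecting a live response:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def quality_url(domain: str, subcategory: str, limit: int = 25) -> str:
    """Build the dataset query URL from its filter parameters."""
    query = urlencode({"domain": domain, "subcategory": subcategory, "limit": limit})
    return f"{BASE}?{query}"

url = quality_url("transformers", "uncategorized")
print(url)

# To actually fetch and decode the JSON payload (field names are not
# documented here, so inspect the response before relying on them):
#   with urlopen(url) as resp:
#       data = json.load(resp)
```

`urlencode` handles percent-escaping, so the helper stays correct even if a future subcategory name contains spaces or non-ASCII characters.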
| # | Model | Description | Score | Tier |
|---|---|---|---|---|
| 1 | Dao-AILab/flash-attention | Fast and memory-efficient exact attention | 86 | Verified |
| 2 | wuwangzhang1216/abliterix | Fully automatic censorship removal for language models. LoRA abliteration +... | | Emerging |
| 3 | lucidrains/deep-cross-attention | Implementation of the DeepCrossAttention proposed by Heddes et al. at Google... | | Emerging |
| 4 | modelscope/mcore-bridge | MCore-Bridge: providing Megatron-Core model definitions for state-of-the-art... | | Emerging |
| 5 | assembly-automation-hub/repo-governance | ⚙️ Reusable GitHub repository governance kit: CI/CD workflows, CodeQL SAST,... | | Emerging |
| 6 | zhongkaifu/TensorSharp | A C# inference engine for running large language models (LLMs) locally using... | | Emerging |
| 7 | hqhq1025/ai-course-notes | 📚 220+ Chinese lecture-note PDFs for public AI/LLM courses \| Stanford CS336·CS224R·CS25·CS231N \| Berkeley... | | Emerging |
| 8 | antonalth/cs2-transformer-agent | Training a Transformer to play Counter-Strike | | Emerging |
| 9 | P-r-e-m-i-u-m/PROXY | Self-hosted OpenAI-compatible reverse proxy with multi-provider load balancing | | Experimental |
| 10 | Shekswess/tiny-think | Reasoning-first post-training for tiny language models (140M) on a single GPU. | | Experimental |
| 11 | Kevo-03/AttentionNet | AttentionNet: an encrypted network traffic classification solution with... | | Experimental |
| 12 | aidendorian/Marcella-60M-SLM | A 66M-parameter decoder-only transformer language model implemented from... | | Experimental |
| 13 | Lucien2468/Ollama-TurboQuant-Integration | TurboQuant: native 3-bit quantization for Ollama, achieving 25-28% better... | | Experimental |
| 14 | jagmarques/nexusquant | Training-free KV cache compression for LLMs. 10-33x compression via E8... | | Experimental |
| 15 | ArturPen/ab-transformers-timeskip-exploit | Python + ADB automation script for the Time Skip exploit in Angry Birds Transformers. | | Experimental |
| 16 | a1exus/koda | Local LLM orchestration: run GGUF models via llama.cpp with one command | | Experimental |
| 17 | JexanJoel/VoiceIQ-Backend | AI engine for VoiceIQ: transcribes Hinglish & Tanglish call recordings via... | | Experimental |
| 18 | RMA-MUN/LangChain-RAG-FastAPI-Service | Microservice-based intelligent dialogue service: Django (user management) + FastAPI (RAG/Agent core) with independently deployed databases, built on LangChain... | | Experimental |
| 19 | Prajwalsrinvas/nimble_LLM_web_scraping_challenge | Web scraping + LLMs | | Experimental |
| 20 | mtecnic/research-test-Qwen3-Coder-Next-REAP-AWQ | Research test: REAP expert pruning + AWQ quantization of the Qwen3-Coder-Next MoE model | | Experimental |
| 21 | yongmmin/hwp-docs-editor | A web-based editor that can open and edit HWP / HWPX files... | | Experimental |
| 22 | mni-ml/transformer | A minimal transformer built with mni-ml/framework | | Experimental |
| 23 | sashvat-bharat/model-accelerator | The fastest, most efficient library for running GGUF models with maximum... | | Experimental |
| 24 | SuryanshSinha-suryanshsinha/medical-slm-from-scratch | Building a 92M-parameter biomedical language model from scratch in PyTorch... | | Experimental |
| 25 | Shoaib-33/Web-Scrapper-using-LLM | A web-scraping tool using an LLM | | Experimental |