thu-nics/C2C

[ICLR'26] The official code implementation for "Cache-to-Cache: Direct Semantic Communication Between Large Language Models"

Quality score: 52 / 100 (Established)

This project helps large language models (LLMs) communicate with each other directly, sharing their 'thoughts' (KV-Caches) rather than just exchanging text. It takes two or more LLMs and allows them to pool their understanding, resulting in more accurate and faster answers. Anyone building or working with advanced LLM applications, especially those requiring precise and efficient multi-model collaboration, would use this.
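The "pooling of understanding" works at the KV-cache level rather than through generated text. As a purely conceptual illustration (the function name, list-based caches, and simple gated blend below are illustrative assumptions, not the actual C2C implementation, which operates on real transformer KV tensors with a learned fuser):

```python
def fuse_kv(receiver_kv, sharer_kv, gate=0.5):
    """Toy stand-in for cache-to-cache fusion.

    receiver_kv / sharer_kv: lists of floats standing in for one layer's
    flattened key (or value) cache; `gate` controls how much of the
    sharer model's semantics is mixed into the receiver's cache.
    """
    return [(1 - gate) * r + gate * s for r, s in zip(receiver_kv, sharer_kv)]

# The receiver keeps generating from the fused cache instead of
# re-reading the sharer's answer as text.
receiver = [0.0, 1.0, 2.0]
sharer = [2.0, 3.0, 4.0]
fused = fuse_kv(receiver, sharer, gate=0.5)
```

The point of the sketch is only the shape of the idea: semantics move between models as cache tensors, skipping the decode-to-text and re-encode steps.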

Use this if you need to combine the intelligence of multiple LLMs to get better, faster, and more nuanced responses than a single model or text-based communication could provide.

Not ideal if you are working with a single LLM and do not need to integrate insights from other models at a deep, semantic level.

Tags: LLM orchestration, AI model collaboration, semantic understanding, multi-model AI, natural language processing
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 15 / 25
Community 17 / 25

Stars: 361
Forks: 41
Language: Python
License: Apache-2.0
Last pushed: Mar 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/thu-nics/C2C"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
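The same endpoint can be called from Python. A minimal sketch, assuming only the path layout shown in the curl example (`registry/owner/repo`); how an API key is actually supplied is an assumption here and should be checked against the API's own documentation:

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry, owner, repo, api_key=None):
    """Build the quality-endpoint URL from the curl example above.

    The `api_key` query parameter is a hypothetical way to pass a key;
    the free tier needs none.
    """
    url = f"{BASE}/{quote(registry)}/{quote(owner)}/{quote(repo)}"
    if api_key:
        url += f"?api_key={quote(api_key)}"
    return url

# Matches the curl example for this repository:
url = quality_url("transformers", "thu-nics", "C2C")
```

Fetching `url` with any HTTP client (e.g. `urllib.request.urlopen`) then returns the data shown on this page.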