thu-nics/C2C
[ICLR'26] The official code implementation for "Cache-to-Cache: Direct Semantic Communication Between Large Language Models"
This project lets large language models (LLMs) communicate with each other directly, sharing their internal representations (KV-caches) rather than exchanging text. It takes two or more LLMs and lets them pool their understanding, yielding more accurate and faster answers. It is aimed at anyone building advanced LLM applications, especially those requiring precise and efficient multi-model collaboration.
Use this if you need to combine the intelligence of multiple LLMs to get better, faster, and more nuanced responses than a single model or text-based communication could provide.
Not ideal if you are working with a single LLM and do not need to integrate insights from other models at a deep, semantic level.
Stars: 361
Forks: 41
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 05, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/thu-nics/C2C"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
WindyLab/ConsensusLLM-code
Source code of our paper "Multi-Agent Consensus Seeking via Large Language Models".
Traffic-Alpha/iLLM-TSC
This repository contains the code for the paper “iLLM-TSC: Integration reinforcement learning and...
Korde-AI/Multi-User-LLM-Agent
Official code for the paper: "Multi-User Large Language Model Agents"
NyanCyanide/PokeLLM-Battle
Turn-based Pokemon battle environment where Large Language Models (LLMs) control Pokémon and...