d-wwei/openclaw-nim-skill

OpenClaw skill for offloading heavy tasks to NVIDIA NIM models and saving context tokens

Score: 25 / 100 (Experimental)

This tool helps AI agents such as OpenClaw handle very long documents or complex reasoning tasks without exhausting their context window (the agent's working 'memory', measured in tokens). It offloads the agent's heavy workload to powerful NVIDIA-hosted models such as GLM-5 or Kimi-k2.5 and returns a concise result, freeing the agent to focus on the main conversation. It is aimed at anyone managing AI agents that frequently process large amounts of text.

Use this if your AI assistant (like OpenClaw) frequently needs to summarize long documents, provide detailed explanations, or perform deep analysis, and you want to reduce the 'cost' of these complex operations by saving context tokens.

Not ideal if your AI agent primarily handles short, simple queries or if you are not using an AI agent that benefits from external token-saving mechanisms.
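The offload pattern described above can be sketched in Python. This is a minimal illustration, not this skill's actual code: the endpoint URL follows NVIDIA's OpenAI-compatible NIM API, while the model id, prompt, and environment variable name are assumptions.

```python
# Hedged sketch of the offload pattern this skill describes: send a heavy
# document to a NIM-hosted model and keep only the short result locally.
# The base URL, model id, prompt, and env var are illustrative assumptions.
import json
import os
import urllib.request

NIM_BASE = "https://integrate.api.nvidia.com/v1"  # assumed OpenAI-compatible endpoint


def build_offload_request(document: str,
                          model: str = "meta/llama-3.1-8b-instruct") -> dict:
    """Build a chat-completion payload asking the remote model to condense
    the document, so the local agent spends few context tokens on it."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the following document in under 150 words."},
            {"role": "user", "content": document},
        ],
        "max_tokens": 300,
    }


def offload_summary(document: str) -> str:
    """POST the payload to the (assumed) NIM endpoint and return the summary."""
    req = urllib.request.Request(
        f"{NIM_BASE}/chat/completions",
        data=json.dumps(build_offload_request(document)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Only the short summary string comes back to the calling agent; the full document never enters its context window.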

Keywords: AI agent management, large language model operations, context window optimization, text summarization, deep reasoning
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 3 / 25
Community 7 / 25


Stars: 10
Forks: 1
Language: Python
License: none
Last pushed: Feb 23, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/d-wwei/openclaw-nim-skill"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
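The same record can be fetched from Python instead of curl. The host and path are taken verbatim from the example above; the helper names are ours:

```python
# Hedged sketch: fetch the quality record shown above in Python.
# The endpoint path is copied from the curl example; helper names are ours.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score URL for a given owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record as a parsed JSON dict (needs network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("d-wwei", "openclaw-nim-skill"))
```

Within the free tier this needs no API key, subject to the 100-requests/day limit noted above.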