wangyuxinwhy/lmclient

A Python client designed specifically for making large-scale requests to the OpenAI API

Score: 29 / 100 (Experimental)

This is a Python client for interacting with large language model (LLM) providers such as OpenAI, Azure, and Baidu. It helps developers send many requests efficiently, making it suitable for tasks such as generating large datasets or performing bulk translations. The tool takes a list of prompts as input and returns the corresponding LLM responses, managing the request flow to avoid overloading the service. It is aimed at software developers building applications that need to use LLMs at scale.

No commits in the last 6 months.

Use this if you are a developer building an application that needs to send a high volume of requests to an LLM provider and require features like rate limiting, concurrency control, or disk caching.

Not ideal if you are a non-developer looking for a no-code solution or if you only need to send a few ad-hoc requests to an LLM.
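The pattern described above (fanning out many prompts while capping in-flight requests) can be sketched generically with asyncio. This is not lmclient's actual API, just an illustration of the concurrency-control technique; `complete` is a hypothetical stand-in for a real LLM call.

```python
import asyncio

async def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM request; assumed I/O-bound.
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"

async def bulk_complete(prompts: list[str], max_concurrency: int = 4) -> list[str]:
    # A semaphore caps how many requests are in flight at once,
    # preventing the provider from being overloaded.
    sem = asyncio.Semaphore(max_concurrency)

    async def guarded(prompt: str) -> str:
        async with sem:
            return await complete(prompt)

    # gather preserves input order, so results line up with prompts.
    return await asyncio.gather(*(guarded(p) for p in prompts))

if __name__ == "__main__":
    results = asyncio.run(bulk_complete([f"prompt {i}" for i in range(8)]))
    print(results[0])
```

Real clients layer rate limiting (requests per minute) and disk caching on top of this same structure; the semaphore shown here handles only concurrency.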

Tags: LLM-integration · API-client · data-generation · bulk-translation · AI-application-development
Badges: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 23
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Feb 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wangyuxinwhy/lmclient"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
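The same endpoint can be called from Python instead of curl. A minimal sketch, assuming only the URL shape shown above; the fields of the returned JSON are not documented here, so the fetch is left commented.

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

url = quality_url("wangyuxinwhy", "lmclient")
# data = json.load(urlopen(url))  # fetch the JSON payload when online
```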