FAQs
What are the differences between the Multi-LLM Gateway and Amazon Bedrock or Google Vertex AI?
- The Multi-LLM Gateway provides the major LLM services from OpenAI and Anthropic, and exclusively offers Telco-specific LLMs.
- It ensures strong backward compatibility and reliable LLM service through an integrated interface.
- The platform supports prompt-based deployment and offers tools for A/B testing and prompt performance evaluation.
Why does the Multi-LLM Gateway adopt the OpenAI spec for its integrated interface?
- The OpenAI spec was chosen as the superset spec because it is the most feature-rich among LLM provider specs.
- Because it is also the most widely used spec, developers can migrate at minimal cost (see the sketch below).
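For instance, code already written against the OpenAI SDK can typically be redirected to the gateway by changing only the base URL and credentials. A minimal sketch in Python; the gateway URL and model ID are placeholders, not actual values:

```python
from openai import OpenAI

# Hypothetical gateway endpoint and key; substitute your actual values.
client = OpenAI(
    base_url="https://multi-llm-gateway.example.com/v1",
    api_key="YOUR_GATEWAY_API_KEY",
)

# Existing OpenAI-style calls work unchanged through the integrated interface.
response = client.chat.completions.create(
    model="gpt-4o",  # or an Anthropic / Telco-specific model exposed by the gateway
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```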
How are unsupported parameters from a specific model provider handled?
- Each model has a defined set of supported parameters provided through documentation or the models API.
- Any unsupported parameters passed will likely be ignored. This policy might be revised in the future to throw errors for unsupported parameters instead of simply ignoring them.
- For example, the `top_k` parameter is supported by Anthropic but not by OpenAI. When an OpenAI model is called with `top_k`, the parameter is ignored; it only takes effect with Anthropic models. Conversely, parameters such as `presence_penalty` and `function_call`, which are valid only in OpenAI, are ignored by Anthropic models. The sketch below demonstrates this.
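To illustrate the policy above, the following sketch sends the same OpenAI-spec request to two models through the gateway. The endpoint and model IDs are placeholders; with the OpenAI Python SDK, non-standard parameters such as `top_k` can be passed via `extra_body`:

```python
from openai import OpenAI

# Hypothetical gateway endpoint; substitute your actual values.
client = OpenAI(
    base_url="https://multi-llm-gateway.example.com/v1",
    api_key="YOUR_GATEWAY_API_KEY",
)

messages = [{"role": "user", "content": "Name three primary colors."}]

# top_k takes effect for an Anthropic model...
anthropic_resp = client.chat.completions.create(
    model="claude-3-5-sonnet",    # placeholder model ID
    messages=messages,
    extra_body={"top_k": 40},     # Anthropic-only sampling parameter
)

# ...but is silently ignored when the target is an OpenAI model.
openai_resp = client.chat.completions.create(
    model="gpt-4o",               # placeholder model ID
    messages=messages,
    extra_body={"top_k": 40},     # ignored under the current policy
    presence_penalty=0.5,         # OpenAI-only; would be ignored by Anthropic models
)
```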
Will the support for the OpenAI spec continue?
- No. It should be thought of as a forked version of the OpenAI spec.
- The goal is to adopt the most widely used design as the starting point, reducing initial adoption costs for developers.
- The design may diverge from the OpenAI spec going forward.