Foundations
Large language models (LLMs) generate text responses from input prompts. They are trained on internet-scale datasets of text and other multi-modal data and can be applied to a wide range of tasks.

Types
LLMConfiguration
An LLMConfiguration describes how to generate responses from an LLM. It contains the provider of the LLM, a model which acts as the identifier of the specific model to use, and a params object which contains additional parameters for configuring the LLM.
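As a sketch, the three fields described below can be modeled as a TypeScript interface. The type and field names mirror this page's descriptions; the actual type exported by the library may differ.

```typescript
// Hypothetical sketch of the LLMConfiguration shape, inferred from the
// field descriptions on this page; not the library's canonical definition.
type Provider =
  | "openai"
  | "anthropic"
  | "google"
  | "google-vertex"
  | "azure"
  | "amazon-bedrock"
  | "xai";

interface LLMConfiguration {
  provider: Provider;               // required: which LLM provider to use
  model: string;                    // provider-specific model identifier
  params?: Record<string, unknown>; // optional extra parameters (mirrors the Vercel AI SDK)
}
```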
The provider of the LLM. This is the name of the service that will serve requests to the model. This field is required and must be one of the following values:
"openai""anthropic""google""google-vertex""azure""amazon-bedrock""xai"
The model of the LLM. This is the provider-specific identifier of the model that will be used to generate responses.
The parameters that will be used to configure the LLM. This is a JSON object that can contain any additional generation parameters. For the full list of supported parameters, we mirror those of the Vercel AI SDK.
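Putting the three fields together, a configuration might look like the following. The model name and the parameter values are illustrative only, and the param names (`temperature`, `maxOutputTokens`) are assumed to follow Vercel AI SDK naming rather than taken from this page.

```typescript
// Illustrative LLMConfiguration value. The model identifier and params
// are examples, not defaults shipped by the library.
const config = {
  provider: "openai",        // one of the supported provider names
  model: "gpt-4o",           // provider-specific model identifier (example)
  params: {
    temperature: 0.7,        // assumed Vercel AI SDK-style parameter names
    maxOutputTokens: 1024,
  },
};
```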

