Large language models (LLMs) generate text responses from text input.
An `LLMConfiguration` describes how responses are generated from an LLM. It contains the `provider` of the LLM, a `model` string that identifies the specific model to use, and a `params` object with additional parameters for configuring the LLM. The supported values for `provider` are:
"openai"
"anthropic"
"google"
"google-vertex"
"azure"
"amazon-bedrock"
"xai"