llm_config {LLMR} | R Documentation |
Create LLM Configuration
Description
Creates a configuration object that bundles the provider, model, API key, and any provider-specific parameters for a large language model, for use with call_llm() and related functions.
Usage
llm_config(
provider,
model,
api_key,
troubleshooting = FALSE,
base_url = NULL,
embedding = NULL,
...
)
Arguments
provider |
Provider name (openai, anthropic, groq, together, voyage, gemini, deepseek) |
model |
Model name to use |
api_key |
API key for authentication |
troubleshooting |
Logical; if TRUE, prints every API call in full. USE WITH EXTREME CAUTION, as the output includes your API key. |
base_url |
Optional base URL override |
embedding |
Logical indicating embedding mode: NULL (default, keeps the provider's prior default), TRUE (force embedding mode), FALSE (force generative mode) |
... |
Additional provider-specific parameters |
Value
Configuration object for use with call_llm()
See Also
The main ways to use a config object:

- call_llm for a basic, single API call.
- call_llm_robust for a more reliable single call with retries.
- chat_session for creating an interactive, stateful conversation.
- llm_fn for applying a prompt to a vector or data frame.
- call_llm_par for running large-scale, parallel experiments.
- get_batched_embeddings for generating text embeddings.
Examples
## Not run:
cfg <- llm_config(
  provider = "openai",
  model = "gpt-4o-mini",
  api_key = Sys.getenv("OPENAI_API_KEY"),
  temperature = 0.7,  # provider-specific parameter passed via ...
  max_tokens = 500    # provider-specific parameter passed via ...
)

call_llm(cfg, "Hello!")  # one-shot, bare string
## End(Not run)
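The same constructor can also build an embedding configuration by setting embedding = TRUE; a minimal sketch (the model name is illustrative, and the argument order of get_batched_embeddings is an assumption):

```r
## Not run:
# Force embedding mode; "text-embedding-3-small" is an illustrative model name
emb_cfg <- llm_config(
  provider = "openai",
  model = "text-embedding-3-small",
  api_key = Sys.getenv("OPENAI_API_KEY"),
  embedding = TRUE
)

# Embed a character vector of texts (argument order assumed)
emb <- get_batched_embeddings(c("first text", "second text"), emb_cfg)
## End(Not run)
```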