call_llm_compare {LLMR}    R Documentation
Parallel API calls: Multiple Configs, Fixed Message
Description
Compares different configurations (models, providers, settings) using the same message.
Perfect for benchmarking across different models or providers.
This function requires setting up the parallel environment using setup_llm_parallel.
Usage
call_llm_compare(configs_list, messages, ...)
Arguments
configs_list
    A list of llm_config objects to compare.

messages
    A character vector or a list of message objects (the same messages are sent to every config).

...
    Additional arguments passed on to the underlying parallel execution function.
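For illustration, messages can be supplied either as plain text or as a list of message objects; the role/content structure below is only a sketch of the usual chat format and is not prescribed by this page:

# Plain text prompt, sent unchanged to every config
messages <- "Explain quantum computing"

# Or a list of message objects (assumed role/content structure)
messages <- list(
  list(role = "system", content = "You are a concise tutor."),
  list(role = "user", content = "Explain quantum computing")
)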
Value
A tibble with columns: config_index (metadata), provider, model, all varying model parameters, response_text, raw_response_json, success, error_message.
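As a rough sketch of working with this return value (assuming results holds the tibble returned by call_llm_compare), the success and error_message columns separate completed calls from failed ones:

# Successful calls: compare responses across configs
results[results$success, c("provider", "model", "response_text")]

# Failed calls: inspect what went wrong
results[!results$success, c("model", "error_message")]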
Parallel Workflow
All parallel functions require the future backend to be configured. The recommended workflow is:

1. Call setup_llm_parallel() once at the start of your script.
2. Run one or more parallel experiments (e.g., call_llm_broadcast()).
3. Call reset_llm_parallel() at the end to restore sequential processing.
See Also
setup_llm_parallel, reset_llm_parallel
Examples
## Not run:
# Compare different models
config1 <- llm_config(provider = "openai", model = "gpt-4o-mini",
                      api_key = Sys.getenv("OPENAI_API_KEY"))
config2 <- llm_config(provider = "openai", model = "gpt-4.1-nano",
                      api_key = Sys.getenv("OPENAI_API_KEY"))
configs_list <- list(config1, config2)
messages <- "Explain quantum computing"
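# Configure the future backend for parallel execution (see Parallel Workflow above)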
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_compare(configs_list, messages)
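# Quick look at which model produced which response (columns described under Value)
results[, c("provider", "model", "success", "response_text")]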
reset_llm_parallel(verbose = TRUE)
## End(Not run)